WorldWideScience

Sample records for delaunay triangulation algorithm

  1. A Sweepline Algorithm for Generalized Delaunay Triangulations

    DEFF Research Database (Denmark)

    Skyum, Sven

    We give a deterministic O(n log n) sweepline algorithm to construct the generalized Voronoi diagram for n points in the plane or rather its dual the generalized Delaunay triangulation. The algorithm uses no transformations and it is developed solely from the sweepline paradigm together...
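
    The sweepline construction itself is not reproduced here; as a minimal, hedged illustration of the Delaunay/Voronoi duality the abstract refers to (for the standard, non-generalized case), SciPy can compute both dual structures from one planar point set:

```python
# Minimal sketch (standard Delaunay/Voronoi, not the paper's generalized or
# sweepline construction): SciPy builds both dual structures from one point set.
import numpy as np
from scipy.spatial import Delaunay, Voronoi

rng = np.random.default_rng(0)
pts = rng.random((100, 2))            # 100 random points in the unit square

tri = Delaunay(pts)                   # Delaunay triangulation
vor = Voronoi(pts)                    # its dual Voronoi diagram

print(len(tri.simplices), "triangles,", len(vor.vertices), "Voronoi vertices")
```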

  2. Automated Photogrammetric Image Matching with Sift Algorithm and Delaunay Triangulation

    DEFF Research Database (Denmark)

    Karagiannis, Georgios; Antón Castro, Francesc/François; Mioc, Darka

    2016-01-01

    An algorithm for image matching of multi-sensor and multi-temporal satellite images is developed. The method is based on the SIFT feature detector proposed by Lowe in (Lowe, 1999). First, SIFT feature points are detected independently in two images (reference and sensed image). The features detected ... the Delaunay triangulations of each feature set for each image are computed. The isomorphism of the Delaunay triangulations is determined to guarantee the quality of the image matching. The algorithm is implemented in Matlab and tested on World-View 2, SPOT6 and TerraSAR-X image patches.
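
    A rough sketch of the first steps of such a pipeline, assuming OpenCV (with SIFT available) and SciPy; the image file names are placeholders and the isomorphism test between the two triangulations is not shown:

```python
# Hypothetical sketch: detect SIFT keypoints in one image and triangulate their
# positions; repeating this for the reference and sensed images yields the two
# Delaunay triangulations whose isomorphism is then checked.
import cv2                      # assumes opencv-python >= 4.4 for SIFT_create
import numpy as np
from scipy.spatial import Delaunay

def delaunay_of_sift_keypoints(image_path, n_strongest=200):
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    keypoints = cv2.SIFT_create().detect(img, None)
    keypoints = sorted(keypoints, key=lambda k: -k.response)[:n_strongest]
    return Delaunay(np.array([k.pt for k in keypoints]))

# tri_ref = delaunay_of_sift_keypoints("reference.tif")   # placeholder file names
# tri_sen = delaunay_of_sift_keypoints("sensed.tif")
```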

  3. I/O-Efficient Construction of Constrained Delaunay Triangulations

    DEFF Research Database (Denmark)

    Agarwal, Pankaj Kumar; Arge, Lars; Yi, Ke

    2005-01-01

    In this paper, we designed and implemented an I/O-efficient algorithm for constructing constrained Delaunay triangulations. If the number of constraining segments is smaller than the memory size, our algorithm runs in expected O((N/B) log_{M/B}(N/B)) I/Os for triangulating N points in the plane, where...

  4. Constructing Delaunay triangulations along space-filling curves

    NARCIS (Netherlands)

    Buchin, K.; Fiat, A.; Sanders, P.

    2009-01-01

    Incremental construction con BRIO (biased randomized insertion order) using a space-filling curve order for insertion is a popular algorithm for constructing Delaunay triangulations. So far, it has only been analyzed for the case that a worst-case optimal point location data structure is used, which is often avoided in implementations.
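
    The space-filling-curve ordering at the heart of such constructions can be sketched as follows; this is a generic Z-order (Morton) sort, given only as an assumed stand-in for the insertion order discussed in the paper:

```python
# Sketch: order points along a Z-order (Morton) curve before incremental
# insertion into a Delaunay triangulation; coordinates are quantized to 16 bits.
import numpy as np

def _spread_bits(v):
    # interleave zero bits between the 16 low bits of v
    v &= 0xFFFF
    v = (v | (v << 8)) & 0x00FF00FF
    v = (v | (v << 4)) & 0x0F0F0F0F
    v = (v | (v << 2)) & 0x33333333
    v = (v | (v << 1)) & 0x55555555
    return v

def morton_order(points):
    p = np.asarray(points, dtype=float)
    span = np.ptp(p, axis=0) + 1e-12
    q = ((p - p.min(axis=0)) / span * 0xFFFF).astype(np.uint32)
    keys = [(_spread_bits(int(x)) << 1) | _spread_bits(int(y)) for x, y in q]
    return np.argsort(keys)

pts = np.random.rand(1000, 2)
order = morton_order(pts)   # insert pts[order] one by one into the triangulation
```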

  5. Onomatopoeia characters extraction from comic images using constrained Delaunay triangulation

    Science.gov (United States)

    Liu, Xiangping; Shoji, Kenji; Mori, Hiroshi; Toyama, Fubito

    2014-02-01

    A method for extracting onomatopoeia characters from comic images was developed based on the stroke width feature of characters, since they have a nearly constant stroke width in many cases. An image was segmented with a constrained Delaunay triangulation. Connected component grouping was performed based on the triangles generated by the constrained Delaunay triangulation. Stroke width calculation of the connected components was conducted based on the altitude of the triangles generated by the constrained Delaunay triangulation. The experimental results proved the effectiveness of the proposed method.
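
    The stroke-width estimate described above reduces to the altitude of a triangle over its longest side; a minimal sketch of that calculation (the triangle vertices are hypothetical inputs):

```python
# Altitude of a triangle over its longest side, 2 * area / base, used here as a
# local stroke-width estimate in the spirit of the method described above.
import numpy as np

def stroke_width_estimate(a, b, c):
    a, b, c = (np.asarray(p, float) for p in (a, b, c))
    area = 0.5 * abs((b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0]))
    base = max(np.linalg.norm(b - a), np.linalg.norm(c - b), np.linalg.norm(a - c))
    return 2.0 * area / base

print(stroke_width_estimate((0, 0), (4, 0), (2, 1)))   # -> 1.0
```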

  6. A Novel Model of Conforming Delaunay Triangulation for Sensor Network Configuration

    Directory of Open Access Journals (Sweden)

    Yan Ma

    2015-01-01

    Delaunay refinement is a technique for generating unstructured meshes of triangles for sensor network configuration in engineering practice. A new method for solving the Delaunay triangulation problem is proposed in this paper, called the endpoint triangle's circumcircle model (ETCM). Compared with the original fractional node refinement algorithms, the proposed algorithm achieves good refinement stability at the lowest time cost. Simulations are performed for five aspects: refinement stability, the number of additional nodes, time cost, mesh quality after introducing additional nodes, and the aspect ratio improved by a single additional node. All experimental results show the advantages of the proposed algorithm over the existing algorithms and sufficiently confirm the algorithm analysis.

  7. A Delaunay Triangulation Approach For Segmenting Clumps Of Nuclei

    International Nuclear Information System (INIS)

    Wen, Quan; Chang, Hang; Parvin, Bahram

    2009-01-01

    Cell-based fluorescence imaging assays have the potential to generate massive amounts of data, which require detailed quantitative analysis. Often, as a result of fixation, labeled nuclei overlap and create a clump of cells. However, it is important to quantify phenotypic readout on a cell-by-cell basis. In this paper, we propose a novel method for decomposing clumps of nuclei using high-level geometric constraints that are derived from low-level features of maximum curvature computed along the contour of each clump. Points of maximum curvature are used as vertices for Delaunay triangulation (DT), which provides a set of edge hypotheses for decomposing a clump of nuclei. Each hypothesis is subsequently tested against a constraint satisfaction network for a near-optimum decomposition. The proposed method is compared with other traditional techniques such as the watershed method with/without markers. The experimental results show that our approach can overcome the deficiencies of the traditional methods and is very effective in separating severely touching nuclei.

  8. Fast randomized point location without preprocessing in two- and three-dimensional Delaunay triangulations

    Energy Technology Data Exchange (ETDEWEB)

    Muecke, E.P.; Saias, I.; Zhu, B.

    1996-05-01

    This paper studies the point location problem in Delaunay triangulations without preprocessing and additional storage. The proposed procedure finds the query point simply by walking through the triangulation, after selecting a good starting point by random sampling. The analysis generalizes and extends a recent result for d = 2 dimensions by proving that this procedure takes expected time close to O(n^(1/(d+1))) for point location in Delaunay triangulations of n random points in d = 3 dimensions. Empirical results in both two and three dimensions show that this procedure is efficient in practice.
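
    A hedged sketch of the jump-and-walk idea in 2D, built on SciPy's Delaunay structure rather than the authors' code; the sample size and the straight-walk test are simplifications:

```python
# Sketch of "jump and walk": sample a few points, start at a simplex touching
# the nearest sample, then walk toward the query across shared edges.
import numpy as np
from scipy.spatial import Delaunay

def jump_and_walk(tri, q, n_samples=16):
    pts = tri.points
    rng = np.random.default_rng()
    sample = rng.choice(len(pts), size=min(n_samples, len(pts)), replace=False)
    start_vertex = sample[np.argmin(np.linalg.norm(pts[sample] - q, axis=1))]
    s = tri.vertex_to_simplex[start_vertex]       # a simplex touching that vertex
    while True:
        moved = False
        for k in range(3):                        # edge opposite vertex k
            i, j = tri.simplices[s][(k + 1) % 3], tri.simplices[s][(k + 2) % 3]
            a, b, o = pts[i], pts[j], pts[tri.simplices[s][k]]
            side_q = (b[0]-a[0])*(q[1]-a[1]) - (b[1]-a[1])*(q[0]-a[0])
            side_o = (b[0]-a[0])*(o[1]-a[1]) - (b[1]-a[1])*(o[0]-a[0])
            if side_q * side_o < 0:               # q lies beyond this edge
                s = tri.neighbors[s][k]
                if s == -1:
                    return -1                     # q is outside the convex hull
                moved = True
                break
        if not moved:
            return s                              # simplex containing q

pts = np.random.rand(10000, 2)
tri = Delaunay(pts)
q = np.array([0.5, 0.5])
# the two results are expected to agree for points in general position
print(jump_and_walk(tri, q), tri.find_simplex(q))
```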

  9. Incremental Reconstruction of Urban Environments by Edge-Points Delaunay Triangulation

    OpenAIRE

    Romanoni, Andrea; Matteucci, Matteo

    2016-01-01

    Urban reconstruction from a video captured by a surveying vehicle constitutes a core module of automated mapping. When computational power is a limited resource and a detailed map is not the primary goal, the reconstruction can be performed incrementally, from a monocular video, carving a 3D Delaunay triangulation of sparse points; this allows online incremental mapping for tasks such as traversability analysis or obstacle avoidance. To exploit the sharp edges of the urban landscape, we ...

  10. Visualization research of 3D radiation field based on Delaunay triangulation

    International Nuclear Information System (INIS)

    Xie Changji; Chen Yuqing; Li Shiting; Zhu Bo

    2011-01-01

    Based on the characteristics of the three-dimensional partition, the triangulation of discrete data sets is improved by the method of point-by-point insertion. The discrete radiation field data obtained by theoretical calculation or actual measurement are restructured, and a continuous distribution of the radiation field data is obtained. Finally, the 3D virtual scene of the nuclear facilities is built with VR simulation techniques, and the visualization of the 3D radiation field is achieved by visualization mapping techniques. It is shown that the method combining VR and Delaunay triangulation can greatly improve the quality and efficiency of 3D radiation field visualization. (authors)
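
    The interpolation step (scattered samples to a continuous field) can be sketched with SciPy's Delaunay-based piecewise-linear interpolator; the positions and dose values below are synthetic placeholders:

```python
# Sketch: piecewise-linear interpolation of scattered 3D dose-rate samples via a
# Delaunay tetrahedralization (scipy's LinearNDInterpolator builds one internally).
import numpy as np
from scipy.interpolate import LinearNDInterpolator

rng = np.random.default_rng(1)
sample_xyz = rng.uniform(0, 10, size=(500, 3))     # measured/computed positions (m)
dose_rate = 1.0 / (1.0 + np.linalg.norm(sample_xyz - 5.0, axis=1))  # synthetic field

field = LinearNDInterpolator(sample_xyz, dose_rate)   # Delaunay-based interpolant
print(field([[5.0, 5.0, 5.0], [1.0, 2.0, 3.0]]))      # continuous field estimates
```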

  11. A complete solution of cartographic displacement based on elastic beams model and Delaunay triangulation

    Science.gov (United States)

    Liu, Y.; Guo, Q.; Sun, Y.

    2014-04-01

    In map production and generalization, spatial conflicts inevitably arise, but their detection and resolution still require manual operation. This has become a bottleneck hindering the development of automated cartographic generalization. Displacement is the most useful contextual operator for resolving conflicts arising between two or more map objects. Automated generalization research has reported many approaches to displacement, including sequential approaches and optimization approaches. As an optimization approach based on energy minimization principles, the elastic beams model has been used several times to resolve displacement problems for roads and buildings. However, to realize a complete displacement solution, techniques for conflict detection and spatial context analysis must also be taken into consideration. We therefore propose a complete displacement solution based on the combined use of the elastic beams model and a constrained Delaunay triangulation (CDT). The solution is designed as a cyclic and iterative process containing two phases: a detection phase and a displacement phase. In the detection phase, the CDT of the map is used to detect proximity conflicts, identify spatial relationships and structures, and construct an auxiliary structure, so as to support the displacement phase based on elastic beams. In addition, to improve the displacement algorithm, a method for adaptive parameter setting and a new iterative strategy are put forward. Finally, we implemented our solution on a test map generalization platform and successfully tested it against two hand-generated test datasets of roads and buildings, respectively.

  12. Indoor Trajectory Tracking Scheme Based on Delaunay Triangulation and Heuristic Information in Wireless Sensor Networks.

    Science.gov (United States)

    Qin, Junping; Sun, Shiwen; Deng, Qingxu; Liu, Limin; Tian, Yonghong

    2017-06-02

    Object tracking and detection is one of the most significant research areas for wireless sensor networks. Existing indoor trajectory tracking schemes in wireless sensor networks are based on continuous localization and moving object data mining. Indoor trajectory tracking based on the received signal strength indicator (RSSI) has received increased attention because it has low cost and requires no special infrastructure. However, RSSI tracking introduces uncertainty because of the inaccuracies of measurement instruments and the irregularities (instability, multipath, diffraction) of wireless signal transmission in indoor environments. Heuristic information includes some key factors for trajectory tracking procedures. This paper proposes a novel trajectory tracking scheme based on Delaunay triangulation and heuristic information (TTDH). In this scheme, the entire field is divided into a series of triangular regions. The common side of adjacent triangular regions is regarded as a regional boundary. Our scheme detects heuristic information related to a moving object's trajectory, including boundaries and triangular regions. Then, the trajectory is formed by means of a dynamic time-warping position-fingerprint-matching algorithm with heuristic information constraints. Field experiments show that the average error distance of our scheme is less than 1.5 m, and that error does not accumulate among the regions.
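
    A minimal sketch of the dynamic time warping step on assumed RSSI fingerprint sequences; the triangular-region heuristic constraints from the paper are not modelled here:

```python
# Sketch of dynamic time warping (DTW) between an observed RSSI fingerprint
# sequence and a candidate trajectory's fingerprint sequence.
import numpy as np

def dtw_distance(seq_a, seq_b):
    a, b = np.asarray(seq_a, float), np.asarray(seq_b, float)
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])    # fingerprint distance
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Each row is a hypothetical RSSI fingerprint (signal strengths from 3 anchors, dBm).
observed  = [[-40, -62, -70], [-45, -60, -72], [-50, -58, -74]]
candidate = [[-41, -61, -71], [-46, -59, -73], [-49, -57, -75], [-52, -56, -76]]
print(dtw_distance(observed, candidate))
```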

  13. Classification and Filtering of Constrained Delaunay Triangulation for Automated Building Aggregation

    Directory of Open Access Journals (Sweden)

    GUO Peipei

    2016-08-01

    Building aggregation is an important part of research on large-scale map generalization. A triangulation-based approach is proposed from the perspective of shape features, and six measure parameters of triangles in a constrained Delaunay triangulation are defined. First, the six measure parameters are used to determine which triangles are retained and which are erased. Then, the contours of the retained triangles, as bridge areas between buildings, are automatically identified and processed to preserve right angles. The buildings are then aggregated, with right-angle features retained, by merging the bridge areas with the connected buildings. Finally, the approach is verified on actual data. Experimental results show that it is efficient and practical.

  14. Delaunay Triangulation as a New Coverage Measurement Method in Wireless Sensor Network

    Science.gov (United States)

    Chizari, Hassan; Hosseini, Majid; Poston, Timothy; Razak, Shukor Abd; Abdullah, Abdul Hanan

    2011-01-01

    Sensing and communication coverage are among the most important trade-offs in Wireless Sensor Network (WSN) design. A minimum bound of sensing coverage is vital in scheduling, target tracking and redeployment phases, as well as providing communication coverage. Some methods measure the coverage as a percentage value, but detailed information has been missing. Two scenarios with equal coverage percentage may not have the same Quality of Coverage (QoC). In this paper, we propose a new coverage measurement method using Delaunay Triangulation (DT). This can provide the value for all coverage measurement tools. Moreover, it categorizes sensors as ‘fat’, ‘healthy’ or ‘thin’ to show the dense, optimal and scattered areas. It can also yield the largest empty area of sensors in the field. Simulation results show that the proposed DT method can achieve accurate coverage information, and provides many tools to compare QoC between different scenarios. PMID:22163792
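
    One hedged way to reproduce the 'fat/healthy/thin' idea with standard tools is to classify each sensor by the mean area of its incident Delaunay triangles; the thresholds below are arbitrary placeholders, not the paper's definitions:

```python
# Sketch: label sensors by the mean area of their incident Delaunay triangles --
# a large mean area suggests a sparsely covered ("thin") neighbourhood,
# a small mean area a dense ("fat") one.
import numpy as np
from scipy.spatial import Delaunay

def classify_sensors(positions, thin_factor=1.5, fat_factor=0.5):
    pts = np.asarray(positions, float)
    tri = Delaunay(pts)
    a, b, c = (pts[tri.simplices[:, k]] for k in range(3))
    areas = 0.5 * np.abs((b[:, 0]-a[:, 0])*(c[:, 1]-a[:, 1])
                         - (b[:, 1]-a[:, 1])*(c[:, 0]-a[:, 0]))
    mean_area = np.zeros(len(pts))
    for v in range(len(pts)):
        incident = np.any(tri.simplices == v, axis=1)
        mean_area[v] = areas[incident].mean() if incident.any() else np.inf
    overall = areas.mean()
    return np.where(mean_area > thin_factor * overall, "thin",
           np.where(mean_area < fat_factor * overall, "fat", "healthy"))

print(classify_sensors(np.random.rand(60, 2)))
```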

  15. The Extraction of Road Boundary from Crowdsourcing Trajectory Using Constrained Delaunay Triangulation

    Directory of Open Access Journals (Sweden)

    YANG Wei

    2017-02-01

    Extracting road boundaries accurately from crowdsourced trajectory lines is still difficult. Therefore, this study presents a new approach that uses vehicle trajectory lines to extract road boundaries. First, a constrained Delaunay triangulation is constructed within the interpolated track lines to calculate road boundary descriptors using triangle edge lengths and Voronoi cells. A road boundary recognition model is established by integrating the two boundary descriptors. Then, based on seed polygons, a region-growing method is proposed to extract the road boundary. Finally, taxi GPS traces in Beijing were used to verify the validity of the new method, and the results also showed that the method is suitable for GPS traces with disparate densities, complex road structures and different time intervals.

  16. A new approach for categorizing pig lying behaviour based on a Delaunay triangulation method.

    Science.gov (United States)

    Nasirahmadi, A; Hensel, O; Edwards, S A; Sturm, B

    2017-01-01

    Machine vision-based monitoring of pig lying behaviour is a fast and non-intrusive approach that could be used to improve animal health and welfare. Four pens with 22 pigs in each were selected at a commercial pig farm and monitored for 15 days using top view cameras. Three thermal categories were selected relative to room setpoint temperature. An image processing technique based on Delaunay triangulation (DT) was utilized. Different lying patterns (close, normal and far) were defined regarding the perimeter of each DT triangle and the percentages of each lying pattern were obtained in each thermal category. A method using a multilayer perceptron (MLP) neural network, to automatically classify group lying behaviour of pigs into three thermal categories, was developed and tested for its feasibility. The DT features (mean value of perimeters, maximum and minimum length of sides of triangles) were calculated as inputs for the MLP classifier. The network was trained, validated and tested and the results revealed that MLP could classify lying features into the three thermal categories with high overall accuracy (95.6%). The technique indicates that a combination of image processing, MLP classification and mathematical modelling can be used as a precise method for quantifying pig lying behaviour in welfare investigations.
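
    A minimal sketch of the classification stage only, with synthetic feature rows of [mean perimeter, max side, min side] and random thermal-category labels standing in for the real data:

```python
# Sketch of the MLP classification step on synthetic Delaunay-triangle features;
# the real study uses features measured from pig-pen images, not random values.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(loc=[[120, 60, 20]], scale=[[30, 15, 5]], size=(300, 3))  # DT features
y = rng.integers(0, 3, size=300)            # thermal category labels (synthetic)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))   # low here, since labels are random
```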

  17. Image matching for digital close-range stereo photogrammetry based on constraints of Delaunay triangulated network and epipolar-line

    Science.gov (United States)

    Zhang, K.; Sheng, Y. H.; Li, Y. Q.; Han, B.; Liang, Ch.; Sha, W.

    2006-10-01

    In the field of digital photogrammetry and computer vision, the determination of conjugate points in a stereo image pair, referred to as "image matching," is the critical step in realizing automatic surveying and recognition. Traditional matching methods encounter problems in digital close-range stereo photogrammetry because changes of gray-scale or texture are not obvious in close-range stereo images. The main shortcoming of traditional matching methods is that the geometric information of matching points is not fully used, which leads to wrong matching results in regions with poor texture. To fully use the geometric and gray-scale information, a new stereo image matching algorithm is proposed in this paper considering the characteristics of digital close-range photogrammetry. Compared with traditional matching methods, the new algorithm makes three improvements to image matching. Firstly, shape factors, fuzzy mathematics and gray-scale projection are introduced into the design of a composite matching measure. Secondly, the topological connection relations of matching points in the Delaunay triangulated network and the epipolar line are used to decide the matching order and to narrow the search scope for the conjugate point of the matching point. Lastly, the theory of parameter adjustment with constraints is introduced into least squares image matching to carry out subpixel-level matching under the epipolar-line constraint. The new algorithm is applied to actual stereo images of a building taken by a digital close-range photogrammetric system. The experimental results show that the algorithm has a higher matching speed and matching accuracy than a pyramid image matching algorithm based on gray-scale correlation.

  18. The finite body triangulation: algorithms, subgraphs, homogeneity estimation and application.

    Science.gov (United States)

    Carson, Cantwell G; Levine, Jonathan S

    2016-09-01

    The concept of a finite body Dirichlet tessellation has been extended to that of a finite body Delaunay 'triangulation' to provide a more meaningful description of the spatial distribution of nonspherical secondary phase bodies in 2- and 3-dimensional images. A finite body triangulation (FBT) consists of a network of minimum edge-to-edge distances between adjacent objects in a microstructure. From this is also obtained the characteristic object chords formed by the intersection of the object boundary with the finite body tessellation. These two sets of distances form the basis of a parsimonious homogeneity estimation. The characteristics of the spatial distribution are then evaluated with respect to the distances between objects and the distances within them. Quantitative analysis shows that more physically representative distributions can be obtained by selecting subgraphs, such as the relative neighbourhood graph and the minimum spanning tree, from the finite body tessellation. To demonstrate their potential, we apply these methods to 3-dimensional X-ray computed tomographic images of foamed cement and their 2-dimensional cross sections. The Python computer code used to estimate the FBT is made available. Other applications for the algorithm - such as porous media transport and crack-tip propagation - are also discussed. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.

  19. ConnectViz: Accelerated Approach for Brain Structural Connectivity Using Delaunay Triangulation.

    Science.gov (United States)

    Adeshina, A M; Hashim, R

    2016-03-01

    ... nodes and the edges. The framework is very efficient in providing greater interactivity as a way of representing the nodes and the edges intuitively, all achieved at a considerably interactive speed for instantaneous mapping of the datasets' features. Uniquely, the connectomic algorithm performed remarkably fast with normal hardware requirement specifications.

  20. Kinetic and dynamic Delaunay tetrahedralizations in three dimensions

    Science.gov (United States)

    Schaller, Gernot; Meyer-Hermann, Michael

    2004-09-01

    We describe algorithms to implement fully dynamic and kinetic three-dimensional unconstrained Delaunay triangulations, where the time evolution of the triangulation is not only governed by moving vertices but also by a changing number of vertices. We use three-dimensional simplex flip algorithms, a stochastic visibility walk algorithm for point location and in addition, we propose a new simple method of deleting vertices from an existing three-dimensional Delaunay triangulation while maintaining the Delaunay property. As an example, we analyse the performance in various cases of practical relevance. The dual Dirichlet tessellation can be used to solve differential equations on an irregular grid, to define partitions in cell tissue simulations, for collision detection etc.

  1. Delaunay algorithm and principal component analysis for 3D visualization of mitochondrial DNA nucleoids by Biplane FPALM/dSTORM

    Czech Academy of Sciences Publication Activity Database

    Alán, Lukáš; Špaček, Tomáš; Ježek, Petr

    2016-01-01

    Vol. 45, No. 5 (2016), pp. 443-461 ISSN 0175-7571 R&D Projects: GA ČR(CZ) GA13-02033S; GA MŠk(CZ) ED1.1.00/02.0109 Institutional support: RVO:67985823 Keywords: 3D object segmentation * Delaunay algorithm * principal component analysis * 3D super-resolution microscopy * nucleoids * mitochondrial DNA replication Subject RIV: BO - Biophysics Impact factor: 1.472, year: 2016

  2. A methodology for automated cartographic data input, drawing and editing using kinetic Delaunay/Voronoi diagrams

    DEFF Research Database (Denmark)

    Gold, Christopher M.; Mioc, Darka; Anton, François

    2008-01-01

    This chapter presents a methodology for automated cartographic data input, drawing and editing. This methodology is based on kinematic algorithms for point and line Delaunay triangulation and the Voronoi diagram. It allows one to automate some parts of the manual digitization process ... -oriented algorithm for large data sets, and all our algorithms are based on local operations (except for basic point location). Because the deletion of individual points or line segments is a necessary part of the manual editing process, incremental insertion and deletion is used. The original concept used here...

  3. A constrained Delaunay discretization method for adaptively meshing highly discontinuous geological media

    Science.gov (United States)

    Wang, Yang; Ma, Guowei; Ren, Feng; Li, Tuo

    2017-12-01

    A constrained Delaunay discretization method is developed to generate high-quality doubly adaptive meshes of highly discontinuous geological media. Complex features such as three-dimensional discrete fracture networks (DFNs), tunnels, shafts, slopes, boreholes, water curtains, and drainage systems are taken into account in the mesh generation. The constrained Delaunay triangulation method is used to create adaptive triangular elements on planar fractures. Persson's algorithm (Persson, 2005), based on an analogy between triangular elements and spring networks, is enriched to automatically discretize a planar fracture into mesh points with varying density and a smooth quality gradient. The triangulated planar fractures are treated as planar straight-line graphs (PSLGs) to construct a piecewise-linear complex (PLC) for constrained Delaunay tetrahedralization. This guarantees the doubly adaptive characteristic of the resulting mesh: the mesh is adaptive not only along fractures but also in space. The quality of elements is compared with the results from an existing method. It is verified that the present method can generate smoother elements and a better distribution of element aspect ratios. Two numerical simulations are implemented to demonstrate that the present method can be applied to various simulations of complex geological media that contain a large number of discontinuities.

  4. A general and Robust Ray-Casting-Based Algorithm for Triangulating Surfaces at the Nanoscale

    Science.gov (United States)

    Decherchi, Sergio; Rocchia, Walter

    2013-01-01

    We present a general, robust, and efficient ray-casting-based approach to triangulating complex manifold surfaces arising in the nano-bioscience field. This feature is inserted in a more extended framework that: i) builds the molecular surface of nanometric systems according to several existing definitions, ii) can import external meshes, iii) performs accurate surface area estimation, iv) performs volume estimation, cavity detection, and conditional volume filling, and v) can color the points of a grid according to their locations with respect to the given surface. We implemented our methods in the publicly available NanoShaper software suite (www.electrostaticszone.eu). Robustness is achieved using the CGAL library and an ad hoc ray-casting technique. Our approach can deal with any manifold surface (including nonmolecular ones). Those explicitly treated here are the Connolly-Richards (SES), the Skin, and the Gaussian surfaces. Test results indicate that it is robust to rotation, scale, and atom displacement. This last aspect is evidenced by cavity detection of the highly symmetric structure of fullerene, which fails when attempted by MSMS and has problems in EDTSurf. In terms of timings, NanoShaper builds the Skin surface three times faster than the single threaded version in Lindow et al. on a 100,000 atoms protein and triangulates it at least ten times more rapidly than the Kruithof algorithm. NanoShaper was integrated with the DelPhi Poisson-Boltzmann equation solver. Its SES grid coloring outperformed the DelPhi counterpart. To test the viability of our method on large systems, we chose one of the biggest molecular structures in the Protein Data Bank, namely the 1VSZ entry, which corresponds to the human adenovirus (180,000 atoms after Hydrogen addition). We were able to triangulate the corresponding SES and Skin surfaces (6.2 and 7.0 million triangles, respectively, at a scale of 2 grids per Å) on a middle-range workstation. PMID:23577073

  5. A Parallel Non-Overlapping Domain-Decomposition Algorithm for Compressible Fluid Flow Problems on Triangulated Domains

    Science.gov (United States)

    Barth, Timothy J.; Chan, Tony F.; Tang, Wei-Pai

    1998-01-01

    This paper considers an algebraic preconditioning algorithm for hyperbolic-elliptic fluid flow problems. The algorithm is based on a parallel non-overlapping Schur complement domain-decomposition technique for triangulated domains. In the Schur complement technique, the triangulation is first partitioned into a number of non-overlapping subdomains and interfaces. This suggests a reordering of triangulation vertices which separates subdomain and interface solution unknowns. The reordering induces a natural 2 x 2 block partitioning of the discretization matrix. Exact LU factorization of this block system yields a Schur complement matrix which couples subdomains and the interface together. The remaining sections of this paper present a family of approximate techniques for both constructing and applying the Schur complement as a domain-decomposition preconditioner. The approximate Schur complement serves as an algebraic coarse space operator, thus avoiding the known difficulties associated with the direct formation of a coarse space discretization. In developing Schur complement approximations, particular attention has been given to improving sequential and parallel efficiency of implementations without significantly degrading the quality of the preconditioner. A computer code based on these developments has been tested on the IBM SP2 using MPI message passing protocol. A number of 2-D calculations are presented for both scalar advection-diffusion equations as well as the Euler equations governing compressible fluid flow to demonstrate performance of the preconditioning algorithm.

  6. A linear-time algorithm for finding the convex ropes between two vertices of a simple polygon without triangulation

    International Nuclear Information System (INIS)

    Phan Thanh An

    2008-06-01

    The convex rope problem, posed by Peshkin and Sanderson in IEEE J. Robotics Automat, 2 (1986) pp. 53-58, is to find the counterclockwise and clockwise convex ropes starting at the vertex a and ending at the vertex b of a simple polygon, where a is on the boundary of the convex hull of the polygon and b is visible from infinity. In this paper, we present a linear time algorithm for solving this problem without resorting to a linear-time triangulation algorithm and without resorting to a convex hull algorithm for the polygon. The counterclockwise (clockwise, respectively) convex rope consists of two polylines obtained in a basic incremental strategy described in convex hull algorithms for the polylines forming the polygon from a to b. (author)

  7. An improved three-dimension reconstruction method based on guided filter and Delaunay

    Science.gov (United States)

    Liu, Yilin; Su, Xiu; Liang, Haitao; Xu, Huaiyuan; Wang, Yi; Chen, Xiaodong

    2018-01-01

    Binocular stereo vision is becoming a research hotspot in the area of image processing. Based on the traditional adaptive-weight stereo matching algorithm, we build the cost volume by averaging the absolute differences (AD) of the RGB color channels and adding the x-derivative of the grayscale image. Then we use a guided filter in the cost aggregation step and a weighted median filter for post-processing to address the edge problem. In order to obtain locations in real space, we combine the depth information with the camera calibration to project each pixel of the 2D image into a 3D coordinate matrix. We add the concept of projection to a region-growing algorithm for surface reconstruction: all points are projected onto a 2D plane along the normals of the point cloud, and the results are mapped back to 3D space according to the connection relationships among the points in the 2D plane. For the triangulation in the 2D plane, we use the Delaunay algorithm because of its optimal mesh quality. We configured OpenCV and PCL in Visual Studio for testing, and the experimental results show that the proposed algorithm has higher disparity accuracy and can reproduce the details of the real mesh model.
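
    The projection-based meshing step can be sketched as follows for a roughly planar patch; this is a simplification, since the real pipeline projects along point-cloud normals within a region-growing loop:

```python
# Sketch: flatten a nearly planar point cloud onto its best-fit plane (via PCA),
# triangulate in 2D with Delaunay, and reuse the connectivity as a 3D mesh.
import numpy as np
from scipy.spatial import Delaunay

def mesh_by_projection(points_3d):
    P = np.asarray(points_3d, float)
    centered = P - P.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    uv = centered @ vt[:2].T              # coordinates in the best-fit plane
    faces = Delaunay(uv).simplices        # 2D connectivity ...
    return P, faces                       # ... reused as a 3D triangle mesh

pts = np.random.rand(200, 3) * [10, 10, 0.5]    # a nearly flat synthetic cloud
verts, faces = mesh_by_projection(pts)
print(faces.shape)                               # (n_triangles, 3)
```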

  8. Efficient Delaunay Tessellation through K-D Tree Decomposition

    Energy Technology Data Exchange (ETDEWEB)

    Morozov, Dmitriy; Peterka, Tom

    2017-08-21

    Delaunay tessellations are fundamental data structures in computational geometry. They are important in data analysis, where they can represent the geometry of a point set or approximate its density. The algorithms for computing these tessellations at scale perform poorly when the input data is unbalanced. We investigate the use of k-d trees to evenly distribute points among processes and compare two strategies for picking split points between domain regions. Because the resulting point distributions no longer satisfy the assumptions of existing parallel Delaunay algorithms, we develop a new parallel algorithm that adapts to its input and prove its correctness. We evaluate the new algorithm using two late-stage cosmology datasets. The new running times are up to 50 times faster using the k-d tree compared with regular grid decomposition. Moreover, for the unbalanced datasets, decomposing the domain into a k-d tree is up to five times faster than decomposing it into a regular grid.
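
    A simplified sketch of the k-d style decomposition (median split along the widest axis, repeated until the requested number of blocks is reached), ignoring the paper's parallel exchange and ghost regions:

```python
# Sketch: recursively split the point set at the median of the widest coordinate,
# producing roughly equal-sized blocks even for very skewed inputs.
import numpy as np

def kd_partition(points, n_blocks):
    blocks = [np.asarray(points, float)]
    while len(blocks) < n_blocks:
        blocks.sort(key=len, reverse=True)           # split the largest block next
        blk = blocks.pop(0)
        axis = np.argmax(blk.max(axis=0) - blk.min(axis=0))   # widest extent
        order = np.argsort(blk[:, axis])
        mid = len(blk) // 2
        blocks += [blk[order[:mid]], blk[order[mid:]]]
    return blocks

parts = kd_partition(np.random.rand(10000, 3), 8)
print([len(p) for p in parts])    # balanced block sizes
```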

  9. A density based algorithm to detect cavities and holes from planar points

    Science.gov (United States)

    Zhu, Jie; Sun, Yizhong; Pang, Yueyong

    2017-12-01

    Delaunay-based shape reconstruction algorithms are widely used to approximate shapes from planar points. However, these algorithms cannot ensure the optimality of the reconstructed cavity boundaries and hole boundaries. This inadequate reconstruction can be primarily attributed to the lack of an efficient mathematical formulation for the two structures (hole and cavity). In this paper, we develop an efficient algorithm for generating cavities and holes from planar points. The algorithm yields the final boundary based on an iterative removal of Delaunay triangles. Our algorithm is divided into two steps, namely, rough and refined shape reconstruction. The rough shape reconstruction performed by the algorithm is controlled by a relative parameter. Based on the rough result, the refined shape reconstruction mainly aims to detect holes and pure cavities. A cavity or hole is conceptualized as a structure with a low-density region surrounded by a high-density region. With this structure, cavities and holes are characterized by a mathematical formulation called the compactness of a point, formed by the length variation of the edges incident to the point in the Delaunay triangulation. The boundaries of cavities and holes are then found by locating a gradient change in the compactness of the point set. The experimental comparison with other shape reconstruction approaches shows that the proposed algorithm is able to accurately yield the boundaries of cavities and holes for varying point set densities and distributions.
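
    A much-simplified, single-threshold sketch of Delaunay carving is shown below; the paper's method instead removes triangles iteratively using a point-compactness measure, and the edge-length threshold here is an arbitrary placeholder:

```python
# Sketch: drop triangles whose longest edge exceeds a length threshold, so that
# oversized triangles spanning sparse regions (potential cavities/holes) vanish.
import numpy as np
from scipy.spatial import Delaunay

def carve(points, max_edge):
    pts = np.asarray(points, float)
    tri = Delaunay(pts)
    a, b, c = (pts[tri.simplices[:, k]] for k in range(3))
    longest = np.max([np.linalg.norm(b - a, axis=1),
                      np.linalg.norm(c - b, axis=1),
                      np.linalg.norm(a - c, axis=1)], axis=0)
    return tri.simplices[longest <= max_edge]     # retained triangles

pts = np.random.rand(500, 2)
pts = pts[np.linalg.norm(pts - 0.5, axis=1) > 0.2]   # punch a circular hole
kept = carve(pts, max_edge=0.08)
print(len(kept), "triangles kept")
```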

  10. Label triangulation

    International Nuclear Information System (INIS)

    May, R.P.

    1983-01-01

    Label Triangulation (LT) with neutrons allows the investigation of the quaternary structure of biological multicomponent complexes under native conditions. Provided that the complex can be fully separated into and reconstituted from its single - protonated and deuterated - components, small angle neutron scattering (SANS) can give selective information on shapes and pair distances of these components. Following basic geometrical rules, the spatial arrangement of the components can be reconstructed from these data. LT has so far been successfully applied to the small and large ribosomal subunits and the transcriptase of E. coli. (author)

  11. Triangulation positioning system network

    Directory of Open Access Journals (Sweden)

    Sfendourakis Marios

    2017-01-01

    This paper presents ongoing work on localization and positioning through a triangulation procedure for a fixed sensor network (FSN). The FSN has to work as a system. As the triangulation problem becomes highly complicated in cases with large numbers of sensors and transmitters, an adequate grid topology is needed in order to tackle the detection complexity. For that reason a network grid topology is presented, and areas that are problematic and need further analysis are identified. In order to deal with problems of saturation and false triangulations (FTRNs), the network system has to find adequate methods in every sub-area of the area of interest (AOI). Concepts like sensor blindness and overall network blindness are also presented. All these concepts affect the network detection rate and its performance, and they ought to be considered in a way that does not degrade overall network performance. Network performance should be monitored continuously, with the right algorithms and methods. It is also shown that as the number of TRNs and FTRNs increases, the detection complexity (DC) increases. It is hoped that with further research all the characteristics of a triangulation system network for positioning will be obtained and the system will be able to perform autonomously with a high detection rate.

  12. Fast algorithm for Morphological Filters

    International Nuclear Information System (INIS)

    Lou Shan; Jiang Xiangqian; Scott, Paul J

    2011-01-01

    In surface metrology, morphological filters, which evolved from the envelope filtering system (E-system), work well for functional prediction of surface finish in the analysis of surfaces in contact. The naive algorithms are time consuming, especially for areal data, and not generally adopted in real practice. A fast algorithm is proposed based on the alpha shape. The hull obtained by rolling the alpha ball is equivalent to the morphological opening/closing in theory. The algorithm depends on Delaunay triangulation with time complexity O(n log n). In comparison to the naive algorithms, it generates the opening and closing envelopes without combining dilation and erosion. Edge distortion is corrected by reflective padding for open profiles/surfaces. Spikes in the sample data are detected and points interpolated to prevent singularities. The proposed algorithm works well both for morphological profile and areal filters. Examples are presented to demonstrate the validity and superiority in efficiency of this algorithm over the naive algorithm.

  13. Triangulating and guarding realistic polygons

    NARCIS (Netherlands)

    Aloupis, G.; Bose, P.; Dujmovic, V.; Gray, C.M.; Langerman, S.; Speckmann, B.

    2008-01-01

    We propose a new model of realistic input: k-guardable objects. An object is k-guardable if its boundary can be seen by k guards in the interior of the object. In this abstract, we describe a simple algorithm for triangulating k-guardable polygons. Our algorithm, which is easily implementable, takes

  14. Triangulating and guarding realistic polygons

    NARCIS (Netherlands)

    Aloupis, G.; Bose, P.; Dujmovic, V.; Gray, C.M.; Langerman, S.; Speckmann, B.

    2014-01-01

    We propose a new model of realistic input: k-guardable objects. An object is k-guardable if its boundary can be seen by k guards. We show that k-guardable polygons generalize two previously identified classes of realistic input. Following this, we give two simple algorithms for triangulating

  15. Surface Coverage in Wireless Sensor Networks Based on Delaunay Tetrahedralization

    International Nuclear Information System (INIS)

    Ribeiro, M G; Neves, L A; Zafalon, G F D; Valêncio, C; Pinto, A R; Nascimento, M Z

    2015-01-01

    In this work a new method for sensor deployment on 3D surfaces is presented. The method is structured in several steps. The first one discretizes the relief of interest with the Delaunay algorithm. The tetrahedra and related values (spatial coordinates of each vertex and the faces) were input to the construction of the 3D Voronoi diagram. Each circumcenter was calculated as a candidate position for a sensor node; the corresponding circular coverage area was calculated based on a radius r. The r value can be adjusted to simulate different kinds of sensors. The Dijkstra algorithm and a selection method were applied to eliminate candidate positions with overlapping coverage areas or positions beyond the surface of interest. Performance evaluation measures were defined using coverage area and communication as criteria. The results were relevant, since the mean coverage rates achieved on three different surfaces were between 91% and 100%.
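
    For the 2D analogue of the circumcenter step, SciPy exposes the circumcentres of the Delaunay triangles directly as Voronoi vertices; the terrain points and sensing radius below are placeholders:

```python
# Sketch (2D analogue): Voronoi vertices are exactly the circumcentres of the
# Delaunay triangles, each serving as a candidate sensor position of radius r.
import numpy as np
from scipy.spatial import Voronoi

surface_pts = np.random.rand(200, 2) * 100.0   # discretized terrain (placeholder)
vor = Voronoi(surface_pts)
candidates = vor.vertices                      # circumcentres of Delaunay triangles
r = 10.0                                       # assumed sensing radius
print(len(candidates), "candidate sensor positions, coverage radius", r)
```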

  16. A three-dimensional electrostatic particle-in-cell methodology on unstructured Delaunay-Voronoi grids

    International Nuclear Information System (INIS)

    Gatsonis, Nikolaos A.; Spirkin, Anton

    2009-01-01

    The mathematical formulation and computational implementation of a three-dimensional particle-in-cell methodology on unstructured Delaunay-Voronoi tetrahedral grids is presented. The method allows simulation of plasmas in complex domains and incorporates the duality of the Delaunay-Voronoi in all aspects of the particle-in-cell cycle. Charge assignment and field interpolation weighting schemes of zero- and first-order are formulated based on the theory of long-range constraints. Electric potential and fields are derived from a finite-volume formulation of Gauss' law using the Voronoi-Delaunay dual. Boundary conditions and the algorithms for injection, particle loading, particle motion, and particle tracking are implemented for unstructured Delaunay grids. Error and sensitivity analysis examines the effects of particles/cell, grid scaling, and timestep on the numerical heating, the slowing-down time, and the deflection times. The problem of current collection by cylindrical Langmuir probes in collisionless plasmas is used for validation. Numerical results compare favorably with previous numerical and analytical solutions for a wide range of probe radius to Debye length ratios, probe potentials, and electron to ion temperature ratios. The versatility of the methodology is demonstrated with the simulation of a complex plasma microsensor, a directional micro-retarding potential analyzer that includes a low transparency micro-grid.

  17. Pre-processing for Triangulation of Probabilistic Networks

    NARCIS (Netherlands)

    Bodlaender, H.L.; Koster, A.M.C.A.; Eijkhof, F. van den; Gaag, L.C. van der

    2001-01-01

    The currently most efficient algorithm for inference with a probabilistic network builds upon a triangulation of a network's graph. In this paper, we show that pre-processing can help in finding good triangulations for probabilistic networks, that is, triangulations with a minimal maximum

  18. Triangulation Made Easy

    Energy Technology Data Exchange (ETDEWEB)

    Lindstrom, P

    2009-12-23

    We describe a simple and efficient algorithm for two-view triangulation of 3D points from approximate 2D matches based on minimizing the L2 reprojection error. Our iterative algorithm improves on the one by Kanatani et al. by ensuring that in each iteration the epipolar constraint is satisfied. In the case where the two cameras are pointed in the same direction, the method provably converges to an optimal solution in exactly two iterations. For more general camera poses, two iterations are sufficient to achieve convergence to machine precision, which we exploit to devise a fast, non-iterative method. The resulting algorithm amounts to little more than solving a quadratic equation, and involves a fixed, small number of simple matrix-vector operations and no conditional branches. We demonstrate that the method computes solutions that agree to very high precision with those of Hartley and Sturm's original polynomial method, though achieves higher numerical stability and 1-4 orders of magnitude greater speed.
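
    For context only, and not the report's algorithm: the standard linear (DLT) two-view triangulation that such methods refine can be sketched as follows, with toy camera matrices:

```python
# Standard linear (DLT) two-view triangulation: P1, P2 are 3x4 projection
# matrices, x1, x2 matched pixel coordinates. Lindstrom's method instead
# iterates on the epipolar constraint before a final linear solve.
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]               # inhomogeneous 3D point

# Toy cameras: identity pose and a 1-unit baseline along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.2, 0.1, 4.0, 1.0])
x1 = P1 @ X_true; x1 = x1[:2] / x1[2]
x2 = P2 @ X_true; x2 = x2[:2] / x2[2]
print(triangulate_dlt(P1, P2, x1, x2))   # ~ [0.2, 0.1, 4.0]
```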

  19. Three-Dimensional TIN Algorithm for Digital Terrain Modeling

    Institute of Scientific and Technical Information of China (English)

    朱庆; 张叶廷; 李逢春

    2008-01-01

    The problem of taking an unorganized point cloud in 3D space and fitting a polyhedral surface to those points is both important and difficult. Aiming at the increasing applications of fully three-dimensional digital terrain surface modeling, a new algorithm for the automatic generation of a three-dimensional triangulated irregular network from a point cloud is proposed. Based on a local topological consistency test, a combined algorithm of constrained 3D Delaunay triangulation and region growing is extended to ensure topologically correct reconstruction. This paper also introduces an efficient neighboring-triangle location method that makes full use of the surface normal information. Experimental results prove that this algorithm can efficiently obtain a reasonable reconstructed mesh surface with arbitrary topology, where the automatically reconstructed surface has only small topological differences from the true surface. This algorithm has potential applications in virtual environments, computer vision, and so on.

  20. Simulating triangulations. Graphs, manifolds and (quantum) spacetime

    International Nuclear Information System (INIS)

    Krueger, Benedikt

    2016-01-01

    Triangulations, which can intuitively be described as a tessellation of space into simplicial building blocks, are structures that arise in various different branches of physics: They can be used for describing complicated and curved objects in a discretized way, e.g., in foams, gels or porous media, or for discretizing curved boundaries for fluid simulations or dissipative systems. Interpreting triangulations as (maximal planar) graphs makes it possible to use them in graph theory or statistical physics, e.g., as small-world networks, as networks of spins or in biological physics as actin networks. Since one can find an analogue of the Einstein-Hilbert action on triangulations, they can even be used for formulating theories of quantum gravity. Triangulations also have important applications in mathematics, especially in discrete topology. Despite their wide occurrence in different branches of physics and mathematics, there are still some fundamental open questions about triangulations in general. It is a priori unknown how many triangulations there are for a given set of points or a given manifold, or even whether there are exponentially many triangulations or more, a question that relates to a well-defined behavior of certain quantum geometry models. Another major unknown question is whether the elementary steps transforming triangulations into each other, which are used in computer simulations, are ergodic. Using triangulations as a model for spacetime, it is not clear whether there is a meaningful continuum limit that can be identified with the usual and well-tested theory of general relativity. Within this thesis some of these fundamental questions about triangulations are answered by the use of Markov chain Monte Carlo simulations, which are a probabilistic method for calculating statistical expectation values, or more generally a tool for calculating high-dimensional integrals. Additionally, some details about the Wang-Landau algorithm, which is the primary used

  2. Triangulated categories (AM-148)

    CERN Document Server

    Neeman, Amnon

    2014-01-01

    The first two chapters of this book offer a modern, self-contained exposition of the elementary theory of triangulated categories and their quotients. The simple, elegant presentation of these known results makes these chapters eminently suitable as a text for graduate students. The remainder of the book is devoted to new research, providing, among other material, some remarkable improvements on Brown's classical representability theorem. In addition, the author introduces a class of triangulated categories, the "well generated triangulated categories", and studies their properties. This...

  3. Observation, innovation and triangulation

    DEFF Research Database (Denmark)

    Hetmar, Vibeke

    2007-01-01

    Based on experiences from a pilot project in three different classrooms, methodological possibilities and problems are presented and discussed: 1) educational criticism, including the concepts of positions, perspectives and connoisseurship, 2) classroom observations and 3) triangulation as a methodological tool.

  4. Moment analysis of the Delaunay tessellation field estimator

    NARCIS (Netherlands)

    Lieshout, van M.N.M.

    2009-01-01

    The Campbell–Mecke theorem is used to derive explicit expressions for the mean and variance of Schaap and Van de Weygaert’s Delaunay tessellation field estimator. Special attention is paid to Poisson processes.

  5. Efficient triangulation of Poisson-disk sampled point sets

    KAUST Repository

    Guo, Jianwei

    2014-05-06

    In this paper, we present a simple yet efficient algorithm for triangulating a 2D input domain containing a Poisson-disk sampled point set. The proposed algorithm combines a regular grid and a discrete clustering approach to speed up the triangulation. Moreover, our triangulation algorithm is flexible and performs well on more general point sets such as adaptive, non-maximal Poisson-disk sets. The experimental results demonstrate that our algorithm is robust for a wide range of input domains and achieves significant performance improvement compared to the current state-of-the-art approaches. © 2014 Springer-Verlag Berlin Heidelberg.

  6. An overview of the stereo correlation and triangulation formulations used in DICe.

    Energy Technology Data Exchange (ETDEWEB)

    Turner, Daniel Z. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-02-01

    This document provides a detailed overview of the stereo correlation algorithm and triangulation formulation used in the Digital Image Correlation Engine (DICe) to triangulate three dimensional motion in space given the image coordinates and camera calibration parameters.

  7. Dynamical triangulated fermionic surfaces

    International Nuclear Information System (INIS)

    Ambjoern, J.; Varsted, S.

    1990-12-01

    We perform Monte Carlo simulations of randomly triangulated random surfaces which have fermionic world-sheet scalars θ_i associated with each vertex i in addition to the usual bosonic world-sheet scalars χ_i^μ. The fermionic degrees of freedom force the internal metric of the string to be less singular than the internal metric of the pure bosonic string. (orig.)

  8. Triangulation in rewriting

    NARCIS (Netherlands)

    Oostrom, V. van; Zantema, Hans

    2012-01-01

    We introduce a process, dubbed triangulation, turning any rewrite relation into a confluent one. It is more direct than usual completion, in the sense that objects connected by a peak are directly oriented rather than their normal forms. We investigate conditions under which this process preserves

  9. An Adaptive Sweep-Circle Spatial Clustering Algorithm Based on Gestalt

    Directory of Open Access Journals (Sweden)

    Qingming Zhan

    2017-08-01

    An adaptive spatial clustering (ASC) algorithm is proposed in this study, which employs sweep-circle techniques and a dynamic threshold setting based on Gestalt theory to detect spatial clusters. The proposed algorithm can automatically discover clusters in one pass, rather than through the modification of an initial model (for example, a minimal spanning tree, Delaunay triangulation, or Voronoi diagram). It can quickly identify arbitrarily shaped clusters while adapting efficiently to the non-homogeneous density characteristics of spatial data, without the need for prior knowledge or parameters. The proposed algorithm is also well suited to data streaming technology with dynamic characteristics, flowing in the form of spatial clustering in large data sets.

  10. The use of triangulation in qualitative research.

    Science.gov (United States)

    Carter, Nancy; Bryant-Lukosius, Denise; DiCenso, Alba; Blythe, Jennifer; Neville, Alan J

    2014-09-01

    Triangulation refers to the use of multiple methods or data sources in qualitative research to develop a comprehensive understanding of phenomena (Patton, 1999). Triangulation also has been viewed as a qualitative research strategy to test validity through the convergence of information from different sources. Denzin (1978) and Patton (1999) identified four types of triangulation: (a) method triangulation, (b) investigator triangulation, (c) theory triangulation, and (d) data source triangulation. The current article will present the four types of triangulation followed by a discussion of the use of focus groups (FGs) and in-depth individual (IDI) interviews as an example of data source triangulation in qualitative inquiry.

  11. A grand-canonical ensemble of randomly triangulated surfaces

    International Nuclear Information System (INIS)

    Jurkiewicz, J.; Krzywicki, A.; Petersson, B.

    1986-01-01

    An algorithm is presented generating the grand-canonical ensemble of discrete, randomly triangulated Polyakov surfaces. The algorithm is used to calculate the susceptibility exponent, which controls the existence of the continuum limit of the considered model, for the dimensionality of the embedding space ranging from 0 to 20. (orig.)

  12. Invariants of the Dirichlet/Voronoi Tilings of Hyperspheres in Rn and their Dual Delone/Delaunay Graphs

    DEFF Research Database (Denmark)

    Antón Castro, Francesc/François

    2015-01-01

    In this paper, we are addressing the geometric and topological invariants that arise in the exact computation of the Delone (Delaunay) graph and the Dirichlet/Voronoi tiling of N-dimensional hyperspheres using Ritt-Wu's algorithm. Our main contribution is a methodology for automated derivation ... of geometric and topological invariants of the Dirichlet tiling of (N+1)-dimensional hyperspheres and its dual Delone graph from the invariants of the Dirichlet tiling of N-dimensional hyperspheres and its dual Delone graph (starting from N = 3).

  14. Obtaining the Andersen's chart, triangulation algorithm

    DEFF Research Database (Denmark)

    Sabaliauskas, Tomas; Ibsen, Lars Bo

    Andersen’s chart (Andersen & Berre, 1999) is a graphical method of observing cyclic soil response. It allows observing soil response to various stress amplitudes that can lead to liquefaction, excess plastic deformation or stabilizing soil response. The process of obtaining the original chart has...

  15. Triangulation in Friedmann's cosmological model

    International Nuclear Information System (INIS)

    Fagundes, H.V.

    1977-01-01

    In Friedmann's model, physical 3-space has a curvature K = constant. In the cases of greatest interest (K different from 0), triangulation for the measurement of great distances should be based on non-Euclidean geometries: Riemannian (or doubly elliptic) geometry for a closed universe and Bolyai-Lobachevsky (or hyperbolic) geometry for an open universe.

  16. On the stretch factor of convex Delaunay graphs

    Directory of Open Access Journals (Sweden)

    Prosenjit Bose

    2010-06-01

    Let C be a compact and convex set in the plane that contains the origin in its interior, and let S be a finite set of points in the plane. The Delaunay graph DG_C(S) of S is defined to be the dual of the Voronoi diagram of S with respect to the convex distance function defined by C. We prove that DG_C(S) is a t-spanner for S, for some constant t that depends only on the shape of the set C. Thus, for any two points p and q in S, the graph DG_C(S) contains a path between p and q whose Euclidean length is at most t times the Euclidean distance between p and q.
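
    An empirical check of the spanner property for the standard (Euclidean) Delaunay graph, shown as a hedged illustration of what a stretch factor measures; it does not implement the convex distance function behind DG_C:

```python
# Sketch: estimate the stretch factor of the Euclidean Delaunay graph by comparing
# shortest-path distances along Delaunay edges with straight-line distances.
import numpy as np
from scipy.spatial import Delaunay
from scipy.sparse import lil_matrix
from scipy.sparse.csgraph import dijkstra

pts = np.random.rand(300, 2)
tri = Delaunay(pts)
W = lil_matrix((len(pts), len(pts)))
for simplex in tri.simplices:
    for i in range(3):
        u, v = simplex[i], simplex[(i + 1) % 3]
        W[u, v] = W[v, u] = np.linalg.norm(pts[u] - pts[v])

dist = dijkstra(W.tocsr(), directed=False)          # graph distances
eucl = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
mask = eucl > 0
print("max observed stretch:", (dist[mask] / eucl[mask]).max())
```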

  17. Application of Delaunay tessellation for the characterization of solute-rich clusters in atom probe tomography

    International Nuclear Information System (INIS)

    Lefebvre, W.; Philippe, T.; Vurpillot, F.

    2011-01-01

    This work presents an original method for cluster selection in Atom Probe Tomography designed to be applied to large datasets. It is based on the calculation of the Delaunay tessellation generated by the distribution of atoms of a selected element. It requires a single input parameter from the user. Furthermore, no prior knowledge of the material is needed. The sensitivity of the proposed Delaunay cluster selection is demonstrated by its application on simulated APT datasets. A strong advantage of the proposed methodology is that it is reinforced by the availability of an analytical model for the distribution of Delaunay cell circumspheres, which is used to control the accuracy of the cluster selection procedure. Another advantage of the Delaunay cluster selection is the direct calculation of a sharp envelope for each identified cluster or precipitate, which leads to a more appropriate morphology of the objects as they are reconstructed in the APT dataset. -- Research Highlights: → Original method for cluster selection in Atom Probe Tomography. → Delaunay tessellation generated by the distribution of solute atoms. → Direct calculation of a sharp envelope for each identified cluster or precipitate. → Delaunay cluster selection demonstrated by its application on simulated APT datasets.
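
    A minimal sketch of this style of selection (not the authors' exact procedure or parameter values): tessellate the solute positions, keep Delaunay cells with small circumsphere radii, and group cells that share vertices into clusters. The threshold and point data below are placeholders.

```python
import numpy as np
from scipy.spatial import Delaunay
from scipy.sparse import lil_matrix
from scipy.sparse.csgraph import connected_components

def circumradii(points, simplices):
    """Circumsphere radius of each Delaunay tetrahedron."""
    radii = np.empty(len(simplices))
    for k, s in enumerate(simplices):
        p = points[s]
        a = 2.0 * (p[1:] - p[0])                          # 3x3 system for the circumcenter c
        b = np.sum(p[1:] ** 2, axis=1) - np.sum(p[0] ** 2)
        c = np.linalg.solve(a, b)
        radii[k] = np.linalg.norm(c - p[0])
    return radii

solute_xyz = np.random.default_rng(1).random((500, 3))    # placeholder for solute atom positions
tri = Delaunay(solute_xyz)
r = circumradii(solute_xyz, tri.simplices)

r_max = 0.08                                              # the single user parameter (assumed value)
kept = tri.simplices[r < r_max]

adj = lil_matrix((len(solute_xyz), len(solute_xyz)), dtype=np.int8)
for s in kept:                                            # atoms sharing a kept cell are connected
    for i in range(4):
        for j in range(i + 1, 4):
            adj[s[i], s[j]] = adj[s[j], s[i]] = 1
# atoms belonging to no kept cell come out as singleton components
n_clusters, labels = connected_components(adj.tocsr(), directed=False)
```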

  18. The DEEP2 Galaxy Redshift Survey: The Voronoi-Delaunay Method Catalog of Galaxy Groups

    Energy Technology Data Exchange (ETDEWEB)

    Gerke, Brian F.; /UC, Berkeley; Newman, Jeffrey A.; /LBNL, NSD; Davis, Marc; /UC, Berkeley /UC, Berkeley, Astron.Dept.; Marinoni, Christian; /Brera Observ.; Yan, Renbin; Coil, Alison L.; Conroy, Charlie; Cooper, Michael C.; /UC, Berkeley, Astron.Dept.; Faber, S.M.; /Lick Observ.; Finkbeiner, Douglas P.; /Princeton U. Observ.; Guhathakurta, Puragra; /Lick Observ.; Kaiser, Nick; /Hawaii U.; Koo, David C.; Phillips, Andrew C.; /Lick Observ.; Weiner, Benjamin J.; /Maryland U.

    2012-02-14

    We use the first 25% of the DEEP2 Galaxy Redshift Survey spectroscopic data to identify groups and clusters of galaxies in redshift space. The data set contains 8370 galaxies with confirmed redshifts in the range 0.7 ≤ z ≤ 1.4, over one square degree on the sky. Groups are identified using an algorithm (the Voronoi-Delaunay Method) that has been shown to accurately reproduce the statistics of groups in simulated DEEP2-like samples. We optimize this algorithm for the DEEP2 survey by applying it to realistic mock galaxy catalogs and assessing the results using a stringent set of criteria for measuring group-finding success, which we develop and describe in detail here. We find in particular that the group-finder can successfully identify ~78% of real groups and that ~79% of the galaxies that are true members of groups can be identified as such. Conversely, we estimate that ~55% of the groups we find can be definitively identified with real groups and that ~46% of the galaxies we place into groups are interloper field galaxies. Most importantly, we find that it is possible to measure the distribution of groups in redshift and velocity dispersion, n(σ, z), to an accuracy limited by cosmic variance, for dispersions greater than 350 km s⁻¹. We anticipate that such measurements will allow strong constraints to be placed on the equation of state of the dark energy in the future. Finally, we present the first DEEP2 group catalog, which assigns 32% of the galaxies to 899 distinct groups with two or more members, 153 of which have velocity dispersions above 350 km s⁻¹. We provide locations, redshifts and properties for this high-dispersion subsample. This catalog represents the largest sample to date of spectroscopically detected groups at z ~ 1.

  19. Interferometer predictions with triangulated images

    DEFF Research Database (Denmark)

    Brinch, Christian; Dullemond, C. P.

    2014-01-01

    the synthetic model images. To get the correct values of these integrals, the model images must have the right size and resolution. Insufficient care in these choices can lead to wrong results. We present a new general-purpose scheme for the computation of visibilities of radiative transfer images. Our method...... requires a model image that is a list of intensities at arbitrarily placed positions on the image-plane. It creates a triangulated grid from these vertices, and assumes that the intensity inside each triangle of the grid is a linear function. The Fourier integral over each triangle is then evaluated...... with an analytic expression and the complex visibility of the entire image is then the sum of all triangles. The result is a robust Fourier transform that does not suffer from aliasing effects due to grid regularities. The method automatically ensures that all structure contained in the model gets reflected...

  20. Mixed Methods, Triangulation, and Causal Explanation

    Science.gov (United States)

    Howe, Kenneth R.

    2012-01-01

    This article distinguishes a disjunctive conception of mixed methods/triangulation, which brings different methods to bear on different questions, from a conjunctive conception, which brings different methods to bear on the same question. It then examines a more inclusive, holistic conception of mixed methods/triangulation that accommodates…

  1. Hamiltonian Cycles on Random Eulerian Triangulations

    DEFF Research Database (Denmark)

    Guitter, E.; Kristjansen, C.; Nielsen, Jakob Langgaard

    1998-01-01

    . Considering the case n -> 0, this implies that the system of random Eulerian triangulations equipped with Hamiltonian cycles describes a c=-1 matter field coupled to 2D quantum gravity as opposed to the system of usual random triangulations equipped with Hamiltonian cycles which has c=-2. Hence, in this case...

  2. Random discrete Morse theory and a new library of triangulations

    DEFF Research Database (Denmark)

    Benedetti, Bruno; Lutz, Frank Hagen

    2014-01-01

    We introduce random discrete Morse theory as a computational scheme to measure the complexity of a triangulation. The idea is to try to quantify the frequency of discrete Morse matchings with few critical cells. Our measure will depend on the topology of the space, but also on how nicely the space...... is triangulated. The scheme we propose looks for optimal discrete Morse functions with an elementary random heuristic. Despite its naiveté, this approach turns out to be very successful even in the case of huge inputs. In our view, the existing libraries of examples in computational topology are “too easy......” for testing algorithms based on discrete Morse theory. We propose a new library containing more complicated (and thus more meaningful) test examples....

  3. UAV PHOTOGRAMMETRY: BLOCK TRIANGULATION COMPARISONS

    Directory of Open Access Journals (Sweden)

    R. Gini

    2013-08-01

    Full Text Available UAV systems represent a flexible technology able to collect a large amount of high-resolution information, both for metric and interpretation uses. In the frame of experimental tests carried out at Dept. ICA of Politecnico di Milano to validate vector-sensor systems and to assess metric accuracies of images acquired by UAVs, a block of photos taken by a fixed-wing system is triangulated with several software packages. The test field is a rural area included in an Italian Park ("Parco Adda Nord"), useful to study flight and imagery performances on buildings, roads, cultivated and uncultivated vegetation. The UAV SenseFly, equipped with a Canon Ixus 220HS camera, flew autonomously over the area at a height of 130 m, yielding a block of 49 images divided in 5 strips. Sixteen pre-signalized Ground Control Points, surveyed in the area through GPS (NRTK survey), allowed the referencing of the block and accuracy analyses. Approximate values for exterior orientation parameters (positions and attitudes) were recorded by the flight control system. The block was processed with several software packages: Erdas-LPS, EyeDEA (Univ. of Parma), Agisoft Photoscan, Pix4UAV, in an assisted or automatic way. Results comparisons are given in terms of differences among digital surface models, differences in orientation parameters and accuracies, when available. Moreover, image and ground point coordinates obtained by the various software packages were independently used as initial values in a comparative adjustment made by scientific in-house software, which can apply constraints to evaluate the effectiveness of different methods of point extraction and accuracies on ground check points.

  4. Measuring and Controlling Fairness of Triangulations

    KAUST Repository

    Jiang, Caigui; Günther, Felix; Wallner, Johannes; Pottmann, Helmut

    2016-01-01

    of fairness must take new aspects into account. We use concepts from discrete differential geometry (star-shaped Gauss images) to express fairness, and we also demonstrate how fairness can be incorporated into interactive geometric design of triangulated

  5. Recent development of micro-triangulation for magnet fiducialisation

    CERN Document Server

    Vlachakis, Vasileios; Mainaud Durand, Helene; CERN. Geneva. ATS Department

    2016-01-01

    The micro-triangulation method is proposed as an alternative for magnet fiducialisation. The main objective is to measure horizontal and vertical angles to fiducial points and stretched wires, utilising theodolites equipped with cameras. This study aims to develop various methods, algorithms and software tools to enable the data acquisition and processing. In this paper, we present the first test measurement as an attempt to demonstrate the feasibility of the method and to evaluate the accuracy. The preliminary results are very promising, with accuracy always better than 20 μm for the wire position, and of about 40 μm/m for the wire orientation, compared with a coordinate measuring machine.

  6. Method for Optimal Sensor Deployment on 3D Terrains Utilizing a Steady State Genetic Algorithm with a Guided Walk Mutation Operator Based on the Wavelet Transform

    Science.gov (United States)

    Unaldi, Numan; Temel, Samil; Asari, Vijayan K.

    2012-01-01

    One of the most critical issues of Wireless Sensor Networks (WSNs) is the deployment of a limited number of sensors in order to achieve maximum coverage on a terrain. The optimal sensor deployment which enables one to minimize the consumed energy, communication time and manpower for the maintenance of the network has attracted interest with the increased number of studies conducted on the subject in the last decade. Most of the studies in the literature today are proposed for two dimensional (2D) surfaces; however, real world sensor deployments often arise on three dimensional (3D) environments. In this paper, a guided wavelet transform (WT) based deployment strategy (WTDS) for 3D terrains, in which the sensor movements are carried out within the mutation phase of the genetic algorithms (GAs) is proposed. The proposed algorithm aims to maximize the Quality of Coverage (QoC) of a WSN via deploying a limited number of sensors on a 3D surface by utilizing a probabilistic sensing model and the Bresenham's line of sight (LOS) algorithm. In addition, the method followed in this paper is novel to the literature and the performance of the proposed algorithm is compared with the Delaunay Triangulation (DT) method as well as a standard genetic algorithm based method and the results reveal that the proposed method is a more powerful and more successful method for sensor deployment on 3D terrains. PMID:22666078

  7. Nonequilibrium phase transition in directed small-world-Voronoi-Delaunay random lattices

    International Nuclear Information System (INIS)

    Lima, F.W.S.

    2016-01-01

    On directed small-world-Voronoi-Delaunay random lattices in two dimensions with quenched connectivity disorder we study the critical properties of the dynamics evolution of public opinion in social influence networks using a simple spin-like model. The system is treated by applying Monte Carlo simulations. We show that directed links on these random lattices may lead to phase diagram with first- and second-order social phase transitions out of equilibrium. (paper)

  8. Applications of Voronoi and Delaunay Diagrams in the solution of the geodetic boundary value problem

    Directory of Open Access Journals (Sweden)

    C. A. B. Quintero

    Full Text Available Voronoi and Delaunay structures are presented as discretization tools to be used in numerical surface integration aimed at the computation of solutions to geodetic problems, when the integrand is a non-analytical function (e.g., gravity anomaly and height). In the Voronoi approach, the target area is partitioned into polygons which contain the observed points and no interpolation is necessary; only the original data are used. In the Delaunay approach, the observed points are vertices of triangular cells and the value for a cell is interpolated at its barycenter. If the amount and distribution of the observed points are adequate, a gridding operation is not required and the numerical surface integration is carried out point-wise. Even when the amount and distribution of the observed points are not sufficient, the structures of Voronoi and Delaunay can combine grid with observed points in order to preserve the integrity of the original information. Both schemes are applied to the computation of the Stokes integral, the terrain correction, the indirect effect and the gradient of the gravity anomaly, in the area of the State of Rio de Janeiro, Brazil.
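
    A toy sketch of the Voronoi idea above (not the authors' implementation): weight each observed value by the area of its Voronoi polygon and sum, skipping unbounded cells on the hull. The point set, integrand and the neglect of boundary effects are assumptions of this sketch.

```python
import numpy as np
from scipy.spatial import Voronoi

def polygon_area(vertices):
    """Shoelace formula for a polygon given as an (n, 2) vertex array."""
    x, y = vertices[:, 0], vertices[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

rng = np.random.default_rng(2)
pts = rng.random((400, 2))                       # observation points (e.g. gravity stations)
values = np.sin(pts[:, 0]) * pts[:, 1]           # observed, non-analytical integrand values

vor = Voronoi(pts)
integral = 0.0
for i, region_idx in enumerate(vor.point_region):
    region = vor.regions[region_idx]
    if len(region) == 0 or -1 in region:         # skip unbounded cells on the convex hull
        continue
    integral += values[i] * polygon_area(vor.vertices[region])

print(f"point-wise Voronoi estimate of the integral: {integral:.4f}")
```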

  9. Gaussian vector fields on triangulated surfaces

    DEFF Research Database (Denmark)

    Ipsen, John H

    2016-01-01

    proven to be very useful to resolve the complex interplay between in-plane ordering of membranes and membrane conformations. In the present work we have developed a procedure for realistic representations of Gaussian models with in-plane vector degrees of freedom on a triangulated surface. The method...

  10. Altitude, Orthocenter of a Triangle and Triangulation

    Directory of Open Access Journals (Sweden)

    Coghetto Roland

    2016-03-01

    Full Text Available We introduce the altitudes of a triangle (the cevians perpendicular to the opposite sides). Using the generalized Ceva’s Theorem, we prove the existence and uniqueness of the orthocenter of a triangle [7]. Finally, we formalize in Mizar [1] some formulas [2] to calculate distance using triangulation.
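
    For readers who want the coordinate-geometry counterpart of the formalized statement, here is a small numerical sketch (not the Mizar formalization itself) that computes the orthocenter as the intersection of two altitudes:

```python
import numpy as np

def orthocenter(a, b, c):
    """Orthocenter of triangle ABC as the intersection of the altitudes from A and B."""
    a, b, c = (np.asarray(p, dtype=float) for p in (a, b, c))
    # (H - A) . (C - B) = 0  and  (H - B) . (C - A) = 0
    m = np.array([c - b, c - a])
    rhs = np.array([np.dot(a, c - b), np.dot(b, c - a)])
    return np.linalg.solve(m, rhs)

print(orthocenter((0.0, 0.0), (4.0, 0.0), (1.0, 3.0)))   # -> [1. 1.] for this acute triangle
```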

  11. Triangulation applied to Jan H. van Bemmel

    NARCIS (Netherlands)

    Hasman, A.; Bergemann, D.; McCray, A. T.; Talmon, J. L.; Zvárová, J.

    2006-01-01

    OBJECTIVE: To describe the person of Jan H. van Bemmel from different points of view. METHOD: Triangulation. RESULTS AND CONCLUSIONS: Jan H. van Bemmel successfully contributed to research and education in medical informatics. He inspired a lot of people in The Netherlands and internationally

  12. Tradeoffs in Design Research: Development Oriented Triangulation

    NARCIS (Netherlands)

    Koen van Turnhout; Sabine Craenmehr; Robert Holwerda; Mark Menijn; Jan-Pieter Zwart; René Bakker

    2013-01-01

    The Development Oriented Triangulation (DOT) framework in this paper can spark and focus the debate about mixed-method approaches in HCI. The framework can be used to classify HCI methods, create mixed-method designs, and to align research activities in multidisciplinary projects. The framework is

  13. Generation of triangulated random surfaces by the Monte Carlo method in the grand canonical ensemble

    International Nuclear Information System (INIS)

    Zmushko, V.V.; Migdal, A.A.

    1987-01-01

    A model of triangulated random surfaces which is the discrete analog of the Polyakov string is considered. An algorithm is proposed which enables one to study the model by the Monte Carlo method in the grand canonical ensemble. Preliminary results on the determination of the critical index γ are presented

  14. Investigation of point triangulation methods for optimality and performance in Structure from Motion systems

    DEFF Research Database (Denmark)

    Structure from Motion (SFM) systems are composed of cameras and structure in the form of 3D points and other features. Most often the structure components outnumber the cameras by a great margin. It is not uncommon to have a configuration with 3 cameras observing more than 500 3D points...... an overview of existing triangulation methods with emphasis on performance versus optimality, and will suggest a fast triangulation algorithm based on linear constraints. The structure and camera motion estimation in a SFM system is based on the minimization of some norm of the reprojection error between...
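
    The abstract does not spell out the proposed linear-constraint method, so the sketch below only shows the common baseline it would be compared against: homogeneous linear (DLT) triangulation of one 3D point from two views. The camera matrices and point are made up for the example.

```python
import numpy as np

def triangulate_linear(P1, P2, x1, x2):
    """DLT triangulation: P1, P2 are 3x4 projection matrices, x1, x2 are 2D image points."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)                 # homogeneous solution: last right singular vector
    X = vt[-1]
    return X[:3] / X[3]

# toy setup: two cameras, the second shifted by 1 along x, observing the point (0, 0, 5)
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.0, 0.0, 5.0, 1.0])
x1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]
x2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
print(triangulate_linear(P1, P2, x1, x2))       # ~ [0. 0. 5.]
```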

  15. Numerical Validation of the Delaunay Normalization and the Krylov-Bogoliubov-Mitropolsky Method

    Directory of Open Access Journals (Sweden)

    David Ortigosa

    2014-01-01

    Full Text Available A scalable second-order analytical orbit propagator programme based on modern and classical perturbation methods is being developed. As a first step in the validation and verification of part of our orbit propagator programme, we only consider the perturbation produced by zonal harmonic coefficients in the Earth’s gravity potential, so that it is possible to analyze the behaviour of the mathematical expressions involved in Delaunay normalization and the Krylov-Bogoliubov-Mitropolsky method in depth and determine their limits.

  16. Dynamically triangulated surfaces - some analytical results

    International Nuclear Information System (INIS)

    Kostov, I.K.

    1987-01-01

    We give a brief review of the analytical results concerning the model of dynamically triangulated surfaces. We will discuss the possible types of critical behaviour (depending on the dimension D of the embedding space) and the exact solutions obtained for D=0 and D=-2. The latter are important as a check of the Monte Carlo simulations applied to study the model in more physical dimensions. They also give some general insight into its critical properties

  17. Algorithms

    Indian Academy of Sciences (India)

    polynomial) division have been found in Vedic Mathematics which are dated much before Euclid's algorithm. A programming language is used to describe an algorithm for execution on a computer. An algorithm expressed using a programming.

  18. Accuracy enhancement of point triangulation probes for linear displacement measurement

    Science.gov (United States)

    Kim, Kyung-Chan; Kim, Jong-Ahn; Oh, SeBaek; Kim, Soo Hyun; Kwak, Yoon Keun

    2000-03-01

    Point triangulation probes (PTBs) fall into a general category of noncontact height or displacement measurement devices. PTBs are widely used for their simple structure, high resolution, and long operating range. However, there are several factors that must be taken into account in order to obtain high accuracy and reliability: measurement errors from inclinations of the object surface, probe signal fluctuations generated by speckle effects, power variation of the light source, electronic noise, and so on. In this paper, we propose a novel signal processing algorithm, named EASDF (expanded average square difference function), for a newly designed PTB which is composed of an incoherent source (LED), a line scan array detector, a specially selected diffuse reflecting surface, and several optical components. The EASDF, which is a modified correlation function, is able to calculate the displacement between the probe and the object surface effectively even in the presence of inclinations, power fluctuations, and noise.
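
    The exact EASDF is not given in the abstract; the following sketch shows only the plain average square difference function it extends, used to estimate the spot displacement on a simulated line-scan signal. Signal shape, noise level and search range are assumptions.

```python
import numpy as np

def asdf_shift(reference, signal, max_lag):
    """Displacement (in samples) of `signal` relative to `reference` via the ASDF minimum."""
    lags = np.arange(-max_lag, max_lag + 1)
    valid = slice(max_lag, len(signal) - max_lag)          # ignore samples wrapped by np.roll
    scores = [np.mean((np.roll(reference, lag)[valid] - signal[valid]) ** 2) for lag in lags]
    return lags[int(np.argmin(scores))]

x = np.arange(512)
reference = np.exp(-0.5 * ((x - 256) / 8.0) ** 2)          # spot image on the line-scan detector
signal = np.roll(reference, 17) + 0.01 * np.random.default_rng(3).normal(size=512)
print(asdf_shift(reference, signal, max_lag=40))           # ~ 17, the imposed spot displacement
```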

  19. Stereo-tomography in triangulated models

    Science.gov (United States)

    Yang, Kai; Shao, Wei-Dong; Xing, Feng-yuan; Xiong, Kai

    2018-04-01

    Stereo-tomography is a distinctive tomographic method. It is capable of estimating the scatterer position, the local dip of the scatterer and the background velocity simultaneously. Building a geologically consistent velocity model is always appealing for applied and earthquake seismologists. Differing from previous work that incorporates various regularization techniques into the cost function of stereo-tomography, we think extending stereo-tomography to a triangulated model is the most straightforward way to achieve this goal. In this paper, we provide all the Fréchet derivatives of stereo-tomographic data components with respect to model components for a slowness-squared triangulated model (or sloth model) in 2D Cartesian coordinates, based on the ray perturbation theory for interfaces. A sloth model representation is sparser than the conventional B-spline model representation. A sparser model representation leads to a smaller stereo-tomographic (Fréchet) matrix, a higher-accuracy solution when solving the linear equations, a faster convergence rate and a lower requirement on the amount of data. Moreover, a quantitative representation of the interfaces strengthens the relationships among different model components, which makes cross regularizations among these components, such as node coordinates, scatterer coordinates and scattering angles, more straightforward and easier to implement. The sensitivity analysis, the model resolution matrix analysis and a series of synthetic data examples demonstrate the correctness of the Fréchet derivatives, the applicability of the regularization terms and the robustness of stereo-tomography in the triangulated model. This provides a solid theoretical foundation for real applications in the future.

  20. Marginal elasticity of periodic triangulated origami

    Science.gov (United States)

    Chen, Bryan; Sussman, Dan; Lubensky, Tom; Santangelo, Chris

    Origami, the classical art of folding paper, has inspired much recent work on assembling complex 3D structures from planar sheets. Origami, and more generally hinged structures with rigid panels in which all faces are triangles, have special properties due to having a bulk balance of mechanical degrees of freedom and constraints. We study two families of periodic triangulated origami structures, one based on the Miura ori and one based on a kagome-like pattern due to Ron Resch. We point out the consequences of the balance of degrees of freedom and constraints for these ''metamaterial plates'' and show how the elasticity can be tuned by changing the unit cell geometry.

  1. Methodological triangulation in work life research

    DEFF Research Database (Denmark)

    Warring, Niels

    Based on examples from two research projects on preschool teachers' work, the paper will discuss potentials and challenges in methodological triangulation in work life research. Analysis of ethnographically and phenomenologically inspired observations of everyday life in day care centers formed the basis...... for individual interviews and informal talks with employees. The interviews and conversations were based on a critical hermeneutic approach. The analysis of observations and interviews constituted a knowledge base as the project went into its last phase: action research workshops. In the workshops findings from...

  2. Triangulation-based 3D surveying borescope

    Science.gov (United States)

    Pulwer, S.; Steglich, P.; Villringer, C.; Bauer, J.; Burger, M.; Franz, M.; Grieshober, K.; Wirth, F.; Blondeau, J.; Rautenberg, J.; Mouti, S.; Schrader, S.

    2016-04-01

    In this work, a measurement concept based on triangulation was developed for borescopic 3D surveying of surface defects. The integration of such a measurement system into a borescope environment requires excellent space utilization. The triangulation angle, the projected pattern, the numerical apertures of the optical system, and the viewing angle were calculated using partial coherence imaging and geometric optical raytracing methods. Additionally, optical aberrations and defocus were considered by the integration of Zernike polynomial coefficients. The measurement system is able to measure objects with a size of 50 μm in all dimensions with an accuracy of +/- 5 μm. To manage the issue of a low depth of field while using a high-resolution optical system, a wavelength-dependent aperture was integrated. Thereby, we are able to control the depth of field and resolution of the optical system and can use the borescope in measurement mode with high resolution and low depth of field or in inspection mode with low resolution and higher depth of field. First measurements of a demonstrator system are in good agreement with our simulations.

  3. TRIANGULATION OF METHODS OF CAREER EDUCATION

    Directory of Open Access Journals (Sweden)

    Marija Turnsek Mikacic

    2015-09-01

    Full Text Available This paper is an overview of the current research in the field of career education and career planning. The presented results constitute a model based on insight into different theories and empirical studies about career planning as a building block of personal excellence. We established the credibility, transferability and reliability of the research by means of triangulation. As data sources for triangulation we included essays of the participants in the education and questionnaires. Qualitative analysis provided the framework for the construction of the paradigmatic model and the formulation of the final theory. We formulated a questionnaire on the basis of our own experience in the education of individuals. The quantitative analysis, based on the results of the interviews, confirms the following three hypotheses: the individuals who elaborated a personal career plan and acted accordingly changed their attitudes towards their careers and took control over their lives; in addition, they achieved a high level of self-esteem and self-confidence, in tandem with the perception of personal excellence, in contrast to the individuals who did not participate in career education and did not elaborate a career plan. We used the tools of NLP (neuro-linguistic programming) as an additional technique in learning.

  4. Algorithms

    Indian Academy of Sciences (India)

    to as 'divide-and-conquer'. Although there has been a large effort in realizing efficient algorithms, there are not many universally accepted algorithm design paradigms. In this article, we illustrate algorithm design techniques such as balancing, greedy strategy, dynamic programming strategy, and backtracking or traversal of ...

  5. Algorithm that mimics human perceptual grouping of dot patterns

    NARCIS (Netherlands)

    Papari, G.; Petkov, N.; Gregorio, M. D.; Di Maio; Frucci, M.; Musio, C.

    2005-01-01

    We propose an algorithm that groups points similarly to how human observers do. It is simple, totally unsupervised and able to find clusters of complex and not necessarily convex shape. Groups are identified as the connected components of a Reduced Delaunay Graph (RDG) that we define in this paper.
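
    The Reduced Delaunay Graph itself is defined in the paper and not reproduced here; the sketch below shows a commonly used stand-in with the same flavour - Delaunay triangulation of the dots, removal of atypically long edges, and connected components as groups. The median-based cutoff and the toy data are assumptions.

```python
import numpy as np
from scipy.spatial import Delaunay
from scipy.sparse import lil_matrix
from scipy.sparse.csgraph import connected_components

def group_dots(points, factor=2.0):
    tri = Delaunay(points)
    edges = set()
    for s in tri.simplices:
        for i in range(3):
            a, b = sorted((int(s[i]), int(s[(i + 1) % 3])))
            edges.add((a, b))
    lengths = {e: np.linalg.norm(points[e[0]] - points[e[1]]) for e in edges}
    cutoff = factor * np.median(list(lengths.values()))    # assumed edge-reduction rule
    adj = lil_matrix((len(points), len(points)), dtype=np.int8)
    for (a, b), d in lengths.items():
        if d <= cutoff:                                    # keep only "short" Delaunay edges
            adj[a, b] = adj[b, a] = 1
    return connected_components(adj.tocsr(), directed=False)

rng = np.random.default_rng(4)
pts = np.vstack([rng.normal(c, 0.05, (30, 2)) for c in [(0, 0), (1, 0), (0.5, 1)]])
n_groups, labels = group_dots(pts)
print(n_groups)                                            # expected: 3 groups for this toy pattern
```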

  6. Fates Intertwining, Cultures Connection. Impact of Delaunay Couple Research Activity on Gabriel Guevrekian’s Oeuvre

    Directory of Open Access Journals (Sweden)

    Elena V. Zabelina

    2012-08-01

    Full Text Available The article deals with the avant-garde trend of simultaneism, which first emerged in painting and then developed in textile painting, clothes design and cinematography. The article attempts to interpret simultaneism theoretically as a holistic phenomenon and to determine its place in the art of the XX century. Certain displays of the simultaneist trend in painting and textile design have been studied well, but its displays in cinematography and architecture have been studied insufficiently, which prevents interpreting simultaneism as a holistic phenomenon. To study the impact of simultaneism on landscape architecture, the article uses such scientific methods as stylistic-formal analysis and comparative analysis. The article discloses the principles of simultaneism, a trend that originates at the intersection of art and science, and traces the impact of Robert and Sonia Delaunay's research activities on Guevrekian's creative concept. The author introduces for scientific use some sources previously unknown to the domestic study of art.

  7. Los Triángulos de Delaunay como Procesamiento Previo para Extractores Difusos

    Directory of Open Access Journals (Sweden)

    Manuel Ramírez Flores

    2014-01-01

    Full Text Available The biometric information extracted from fingerprints tends to differ between acquisitions, owing to measurement uncertainty and the presence of noise in the samples. This can cause the codewords generated inside a fuzzy extractor to contain more errors than the error-correcting code can handle. As a consequence, fingerprints from the same person may be classified as non-matching during verification, or fingerprints from different individuals may appear too similar. To mitigate these effects and to overcome the difficulties of fingerprint pre-alignment, the use of Delaunay triangles was proposed, which provides local structural stability to the spatial representation of the biometric information. In that proposal, the fingerprint minutiae are used as the vertices of the triangulations, and the network they form is tolerant to distortion, rotation and translation. However, that proposal treats the distribution of fingerprint minutiae as non-degenerate and therefore does not specify the thresholds or criteria needed to form the triangulations, which affects the performance of the fuzzy extractors. On this basis, this article presents the results obtained when testing the construction of Delaunay triangulations on fingerprint images, applying thresholds and geometric criteria and then counting the matching triangles between the resulting structures in order to define the thresholds that maximize those matches.
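
    A small sketch of the pre-processing idea described above (not the cited fuzzy-extractor scheme): Delaunay-triangulate the minutiae, describe each triangle by its sorted side lengths, and count triangles of two acquisitions that agree within a tolerance. The coordinates, tolerance value and omission of degeneracy handling are assumptions.

```python
import numpy as np
from scipy.spatial import Delaunay

def triangle_signatures(minutiae_xy):
    """Sorted side lengths of every Delaunay triangle over the minutiae points."""
    tri = Delaunay(minutiae_xy)
    sigs = []
    for s in tri.simplices:
        p = minutiae_xy[s]
        sigs.append(sorted(np.linalg.norm(p[i] - p[(i + 1) % 3]) for i in range(3)))
    return np.array(sigs)

def count_matching_triangles(sigs_a, sigs_b, tol=2.0):
    """Triangles of A whose three side lengths match some triangle of B within `tol`."""
    return sum(bool(np.any(np.all(np.abs(sigs_b - sa) <= tol, axis=1))) for sa in sigs_a)

rng = np.random.default_rng(5)
enrol = rng.random((40, 2)) * 100                  # placeholder for enrolled minutiae (pixels)
query = enrol + rng.normal(0, 0.5, enrol.shape)    # same finger, noisy re-acquisition
print(count_matching_triangles(triangle_signatures(enrol), triangle_signatures(query)))
```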

  8. Exploring Torus Universes in Causal Dynamical Triangulations

    DEFF Research Database (Denmark)

    Budd, Timothy George; Loll, R.

    2013-01-01

    Motivated by the search for new observables in nonperturbative quantum gravity, we consider Causal Dynamical Triangulations (CDT) in 2+1 dimensions with the spatial topology of a torus. This system is of particular interest, because one can study not only the global scale factor, but also global...... shape variables in the presence of arbitrary quantum fluctuations of the geometry. Our initial investigation focusses on the dynamics of the scale factor and uncovers a qualitatively new behaviour, which leads us to investigate a novel type of boundary conditions for the path integral. Comparing large....... Apart from setting the stage for the analysis of shape dynamics on the torus, the new set-up highlights the role of nontrivial boundaries and topology....

  9. Measuring and Controlling Fairness of Triangulations

    KAUST Repository

    Jiang, Caigui

    2016-09-30

    The fairness of meshes that represent geometric shapes is a topic that has been studied extensively and thoroughly. However, the focus in such considerations often is not on the mesh itself, but rather on the smooth surface approximated by it, and fairness essentially expresses a mesh’s suitability for purposes such as visualization or simulation. This paper focusses on meshes in the architectural context, where vertices, edges, and faces of meshes are often highly visible, and any notion of fairness must take new aspects into account. We use concepts from discrete differential geometry (star-shaped Gauss images) to express fairness, and we also demonstrate how fairness can be incorporated into interactive geometric design of triangulated freeform skins.

  10. Employee-satisfaction: A triangulation approach

    Directory of Open Access Journals (Sweden)

    P. J. Visser

    1997-06-01

    Full Text Available The research on employee satisfaction was conducted in the manufacturing industry. The sample consisted of 543 employees. The methodology can be described as a "triangulation approach", in which a combination of quantitative and qualitative measurements was utilised and the results of both types of measurement were integrated in the study of the construct. The research confirms existing findings that, although the measurement of dimensions such as equitable rewards, working conditions, supportive colleagues, job content, etc. yields results on the level of employee satisfaction, a single question, namely "How satisfied are you with your job?", compares favourably with the general index. The findings also suggest the advantage of complementing the quantitative data with qualitative information obtained from individual interviews. The conclusions confirm the value of a qualitative method in cross-cultural research in an African environment.

  11. Constant-work-space algorithms for geometric problems

    Directory of Open Access Journals (Sweden)

    Tetsuo Asano

    2011-07-01

    Full Text Available Constant-work-space algorithms may use only constantly many cells of storage in addition to their input, which is provided as a read-only array. We show how to construct several geometric structures efficiently in the constant-work-space model. Traditional algorithms process the input into a suitable data structure (like a doubly-connected edge list) that allows efficient traversal of the structure at hand. In the constant-work-space setting, however, we cannot afford to do this. Instead, we provide operations that compute the desired features on the fly by accessing the input with no extra space. The whole geometric structure can be obtained by using these operations to enumerate all the features. Of course, we must pay for the space savings by slower running times. While the standard data structure allows us to implement traversal operations in constant time, our schemes typically take linear time to read the input data in each step. We begin with two simple problems: triangulating a planar point set and finding the trapezoidal decomposition of a simple polygon. In both cases adjacent features can be enumerated in linear time per step, resulting in total quadratic running time to output the whole structure. Actually, we show that the former result carries over to the Delaunay triangulation, and hence the Voronoi diagram. This also means that we can compute the largest empty circle of a planar point set in quadratic time and constant work-space. As another application, we demonstrate how to enumerate the features of an Euclidean minimum spanning tree (EMST) in quadratic time per step, so that the whole EMST can be found in cubic time using constant work-space. Finally, we describe how to compute a shortest geodesic path between two points in a simple polygon. Although the shortest path problem in general graphs is NL-complete (Jakoby and Tantau 2003), this constrained problem can be solved in quadratic time using only constant work-space.

  12. A Combined Approach to Cartographic Displacement for Buildings Based on Skeleton and Improved Elastic Beam Algorithm

    Science.gov (United States)

    Liu, Yuangang; Guo, Qingsheng; Sun, Yageng; Ma, Xiaoya

    2014-01-01

    Scale reduction from source to target maps inevitably leads to conflicts of map symbols in cartography and geographic information systems (GIS). Displacement is one of the most important map generalization operators and it can be used to resolve the problems that arise from conflict among two or more map objects. In this paper, we propose a combined approach based on constraint Delaunay triangulation (CDT) skeleton and improved elastic beam algorithm for automated building displacement. In this approach, map data sets are first partitioned. Then the displacement operation is conducted in each partition as a cyclic and iterative process of conflict detection and resolution. In the iteration, the skeleton of the gap spaces is extracted using CDT. It then serves as an enhanced data model to detect conflicts and construct the proximity graph. Then, the proximity graph is adjusted using local grouping information. Under the action of forces derived from the detected conflicts, the proximity graph is deformed using the improved elastic beam algorithm. In this way, buildings are displaced to find an optimal compromise between related cartographic constraints. To validate this approach, two topographic map data sets (i.e., urban and suburban areas) were tested. The results were reasonable with respect to each constraint when the density of the map was not extremely high. In summary, the improvements include (1) an automated parameter-setting method for elastic beams, (2) explicit enforcement regarding the positional accuracy constraint, added by introducing drag forces, (3) preservation of local building groups through displacement over an adjusted proximity graph, and (4) an iterative strategy that is more likely to resolve the proximity conflicts than the one used in the existing elastic beam algorithm. PMID:25470727

  13. Generation of triangulated random surfaces by means of the Monte Carlo method in the grand canonical ensemble

    International Nuclear Information System (INIS)

    Zmushko, V.V.; Migdal, A.A.

    1987-01-01

    A model of triangulated random surfaces which is the discrete analogue of the Polyakov string is considered in the work. An algorithm is proposed which enables one to study the model by means of the Monte Carlo method in the grand canonical ensemble. Preliminary results are presented on the evaluation of the critical index γ

  14. Degree-regular triangulations of torus and Klein bottle

    Indian Academy of Sciences (India)

    A triangulation of a connected closed surface is called degree-regular if each of its vertices has the same degree. ... In [5], Datta and Nilakantan have classified all the degree-regular triangulations of closed surfaces on at most 11 vertices.

  15. Triangulation, Respondent Validation, and Democratic Participation in Mixed Methods Research

    Science.gov (United States)

    Torrance, Harry

    2012-01-01

    Over the past 10 years or so the "Field" of "Mixed Methods Research" (MMR) has increasingly been exerting itself as something separate, novel, and significant, with some advocates claiming paradigmatic status. Triangulation is an important component of mixed methods designs. Triangulation has its origins in attempts to validate research findings…

  16. Algorithms

    Indian Academy of Sciences (India)

    ticians but also forms the foundation of computer science. Two ... with methods of developing algorithms for solving a variety of problems but ... applications of computers in science and engineer- ... numerical calculus are as important. We will ...

  17. From causal dynamical triangulations to astronomical observations

    Science.gov (United States)

    Mielczarek, Jakub

    2017-09-01

    This letter discusses phenomenological aspects of the dimensional reduction predicted by the Causal Dynamical Triangulations (CDT) approach to quantum gravity. The deformed form of the dispersion relation for fields defined on the CDT space-time is reconstructed. Using the Fermi satellite observations of the GRB 090510 source we find that the energy scale of the dimensional reduction is E* > 0.7 √(4 − d_UV) · 10^10 GeV (at 95% CL), where d_UV is the value of the spectral dimension in the UV limit. By applying the deformed dispersion relation to the cosmological perturbations it is shown that, for a scenario in which the primordial perturbations are formed in the UV region, the scalar power spectrum is P_S ∝ k^(n_S − 1), where n_S − 1 ≈ 3r(d_UV − 2) / [(d_UV − 1)r − 48]. Here, r is the tensor-to-scalar ratio. We find that, within the considered model, the deviation from scale invariance (n_S = 1) predicted from CDT is in contradiction with up-to-date Planck and BICEP2 data.

  18. Strongly minimal triangulations of (S × S )#3 and (S S

    Indian Academy of Sciences (India)

    We show that there are exactly 12 such triangulations up to isomorphism, 10 of which are orientable. Keywords: stacked sphere; tight neighbourly triangulation; minimal triangulation.

  19. Looseness and Independence Number of Triangulations on Closed Surfaces

    Directory of Open Access Journals (Sweden)

    Nakamoto Atsuhiro

    2016-08-01

    Full Text Available The looseness of a triangulation G on a closed surface F², denoted by ξ(G), is defined as the minimum number k such that for any surjection c : V(G) → {1, 2, . . . , k + 3}, there is a face uvw of G with c(u), c(v) and c(w) all distinct. We shall bound ξ(G) for triangulations G on closed surfaces by the independence number of G, denoted by α(G). In particular, for a triangulation G on the sphere, we have

  20. Algorithms

    Indian Academy of Sciences (India)

    algorithm design technique called 'divide-and-conquer'. One of ... Turtle graphics, September. 1996. 5. ... whole list named 'PO' is a pointer to the first element of the list; ..... Program for computing matrices X and Y and placing the result in C *).

  1. Algorithms

    Indian Academy of Sciences (India)

    algorithm that it is implicitly understood that we know how to generate the next natural ..... Explicit comparisons are made in line (1) where maximum and minimum is ... It can be shown that the function T(n) = 3n/2 - 2 is the solution to the above ...

  2. TRIANGULATION OF THE INTERSTELLAR MAGNETIC FIELD

    Energy Technology Data Exchange (ETDEWEB)

    Schwadron, N. A.; Moebius, E. [University of New Hampshire, Durham, NH 03824 (United States); Richardson, J. D. [Massachusetts Institute of Technology, Cambridge, MA 02139 (United States); Burlaga, L. F. [Goddard Space Flight Center, Greenbelt, MD 20771 (United States); McComas, D. J. [Southwest Research Institute, San Antonio, TX 78228 (United States)

    2015-11-01

    Determining the direction of the local interstellar magnetic field (LISMF) is important for understanding the heliosphere’s global structure, the properties of the interstellar medium, and the propagation of cosmic rays in the local galactic medium. Measurements of interstellar neutral atoms by Ulysses for He and by SOHO/SWAN for H provided some of the first observational insights into the LISMF direction. Because secondary neutral H is partially deflected by the interstellar flow in the outer heliosheath and this deflection is influenced by the LISMF, the relative deflection of H versus He provides a plane—the so-called B–V plane in which the LISMF direction should lie. Interstellar Boundary Explorer (IBEX) subsequently discovered a ribbon, the center of which is conjectured to be the LISMF direction. The most recent He velocity measurements from IBEX and those from Ulysses yield a B–V plane with uncertainty limits that contain the centers of the IBEX ribbon at 0.7–2.7 keV. The possibility that Voyager 1 has moved into the outer heliosheath now suggests that Voyager 1's direct observations provide another independent determination of the LISMF. We show that LISMF direction measured by Voyager 1 is >40° off from the IBEX ribbon center and the B–V plane. Taking into account the temporal gradient of the field direction measured by Voyager 1, we extrapolate to a field direction that passes directly through the IBEX ribbon center (0.7–2.7 keV) and the B–V plane, allowing us to triangulate the LISMF direction and estimate the gradient scale size of the magnetic field.

  3. Reconstructing Surface Triangulations by Their Intersection Matrices

    Directory of Open Access Journals (Sweden)

    Arocha Jorge L.

    2015-08-01

    Full Text Available The intersection matrix of a simplicial complex has entries equal to the rank of the intersection of its facets. We prove that this matrix is enough to define, up to isomorphism, a triangulation of a surface.

  4. The ising model on the dynamical triangulated random surface

    International Nuclear Information System (INIS)

    Aleinov, I.D.; Migdal, A.A.; Zmushko, V.V.

    1990-01-01

    The critical properties of the Ising model on a dynamically triangulated random surface embedded in D-dimensional Euclidean space are investigated. The strong coupling expansion method is used. The transition to the thermodynamic limit is performed by means of continued fractions

  5. Aerial Triangulation Close-range Images with Dual Quaternion

    Directory of Open Access Journals (Sweden)

    SHENG Qinghong

    2015-05-01

    Full Text Available A new method for the aerial triangulation of close-range images based on dual quaternions is presented. A dual quaternion is used to represent the screw motion of each beam in space: the real part of the dual quaternion represents the angular elements of all the beams in the close-range network, while the real and dual parts together represent the linear elements. Finally, an aerial triangulation adjustment model based on dual quaternions is established, and the elements of interior and exterior orientation and the object coordinates of the ground points are calculated. Real images and simulated images with large attitude angles are selected to run the aerial triangulation experiments. The experimental results show that the new method for the aerial triangulation of close-range images based on dual quaternions can obtain higher accuracy.

  6. Algorithms

    Indian Academy of Sciences (India)

    will become clear in the next article when we discuss a simple Logo-like programming language. ... Rod B may be used as an auxiliary store. The problem is to find an algorithm which performs this task. ... No disks are moved from A to B using C as auxiliary rod. • move_disk (A, C); the (N0 + 1)th disk is moved from A to C directly ...

  7. A TQFT of Turaev-Viro type on shaped triangulations

    Energy Technology Data Exchange (ETDEWEB)

    Kashaev, Rinat [Geneva Univ. (Switzerland); Luo, Feng [Rutgers Univ., Piscataway, NJ (United States). Dept. of Mathematics; Vartanov, Grigory [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)

    2012-10-15

    A shaped triangulation is a finite triangulation of an oriented pseudo three manifold where each tetrahedron carries dihedral angles of an ideal hyperbolic tetrahedron. To each shaped triangulation, we associate a quantum partition function in the form of an absolutely convergent state integral which is invariant under shaped 3-2 Pachner moves and invariant with respect to shape gauge transformations generated by total dihedral angles around internal edges through the Neumann-Zagier Poisson bracket. Similarly to Turaev-Viro theory, the state variables live on edges of the triangulation but take their values on the whole real axis. The tetrahedral weight functions are composed of three hyperbolic gamma functions in a way that they enjoy a manifest tetrahedral symmetry. We conjecture that for shaped triangulations of closed 3-manifolds, our partition function is twice the absolute value squared of the partition function of the Teichmueller TQFT defined by Andersen and Kashaev. This is similar to the known relationship between the Turaev-Viro and the Witten-Reshetikhin-Turaev invariants of three manifolds. We also discuss interpretations of our construction in terms of three-dimensional supersymmetric field theories related to triangulated three-dimensional manifolds.

  8. A TQFT of Turaev-Viro type on shaped triangulations

    International Nuclear Information System (INIS)

    Kashaev, Rinat; Luo, Feng

    2012-10-01

    A shaped triangulation is a finite triangulation of an oriented pseudo three manifold where each tetrahedron carries dihedral angles of an ideal hyperbolic tetrahedron. To each shaped triangulation, we associate a quantum partition function in the form of an absolutely convergent state integral which is invariant under shaped 3-2 Pachner moves and invariant with respect to shape gauge transformations generated by total dihedral angles around internal edges through the Neumann-Zagier Poisson bracket. Similarly to Turaev-Viro theory, the state variables live on edges of the triangulation but take their values on the whole real axis. The tetrahedral weight functions are composed of three hyperbolic gamma functions in a way that they enjoy a manifest tetrahedral symmetry. We conjecture that for shaped triangulations of closed 3-manifolds, our partition function is twice the absolute value squared of the partition function of the Teichmueller TQFT defined by Andersen and Kashaev. This is similar to the known relationship between the Turaev-Viro and the Witten-Reshetikhin-Turaev invariants of three manifolds. We also discuss interpretations of our construction in terms of three-dimensional supersymmetric field theories related to triangulated three-dimensional manifolds.

  9. SOFTWARE MODULE FOR CONSTRUCTING THE INTERSECTION OF TRIANGULATED SURFACES

    Directory of Open Access Journals (Sweden)

    Vladimir V. Kurgansky

    2018-03-01

    Full Text Available An effective algorithm is proposed for implementing Boolean operations over triangulated surfaces, namely disjunction, conjunction and Boolean difference, together with its software implementation. The idea is as follows. The first step is to determine pairs of intersecting triangles: the intersection of the two surfaces is localized using bounding parallelepipeds and a test of these boxes for intersection. The second step is to construct an intersection line for each pair of triangles: a pair of intersecting triangles is selected, and the segment along which they intersect is constructed. Then, thanks to the data structure introduced, "adjacent" triangles are selected, and among them those that form an intersecting pair; this process continues as long as such triangles can be detected. After that, the triangles involved in the intersection are retriangulated. For each triangle, all the edges along which it intersects triangles from the other surface are known; these edges serve as constraint edges in the constrained triangulation of the given triangle. The third step is to combine all surfaces into one surface: subsurfaces are constructed along the intersection loops found above. Since the intersection line of the surfaces was constructed in sequence, it is possible to specify the direction of each edge. An edge from the intersection line is selected, and the triangle that includes this edge with the same orientation is added to the subsurface under construction. The selected edge is then deleted from the intersection line, and two new edges are added, namely the remaining edges of the added triangle.
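
    A minimal sketch of the first step only (localization with bounding volumes), under the assumption that axis-aligned boxes are used; the exact segment construction and retriangulation stages are not shown.

```python
import numpy as np

def aabb(triangle):
    """Axis-aligned bounding box (min corner, max corner) of a 3x3 vertex array."""
    return triangle.min(axis=0), triangle.max(axis=0)

def candidate_pairs(tris_a, tris_b):
    """Triangle index pairs whose boxes overlap: a superset of the truly intersecting pairs."""
    boxes_a = [aabb(t) for t in tris_a]
    boxes_b = [aabb(t) for t in tris_b]
    pairs = []
    for i, (lo_a, hi_a) in enumerate(boxes_a):     # brute force; a spatial grid or BVH scales better
        for j, (lo_b, hi_b) in enumerate(boxes_b):
            if np.all(lo_a <= hi_b) and np.all(lo_b <= hi_a):
                pairs.append((i, j))
    return pairs

rng = np.random.default_rng(6)
tris_a = rng.random((50, 3, 3))                    # placeholders for two triangulated surfaces
tris_b = rng.random((50, 3, 3))
print(len(candidate_pairs(tris_a, tris_b)), "candidate pairs for exact intersection tests")
```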

  10. THE DEEP2 GALAXY REDSHIFT SURVEY: THE VORONOI-DELAUNAY METHOD CATALOG OF GALAXY GROUPS

    Energy Technology Data Exchange (ETDEWEB)

    Gerke, Brian F. [KIPAC, SLAC National Accelerator Laboratory, 2575 Sand Hill Road, MS 29, Menlo Park, CA 94725 (United States); Newman, Jeffrey A. [Department of Physics and Astronomy, 3941 O' Hara Street, Pittsburgh, PA 15260 (United States); Davis, Marc [Department of Physics and Department of Astronomy, Campbell Hall, University of California-Berkeley, Berkeley, CA 94720 (United States); Coil, Alison L. [Center for Astrophysics and Space Sciences, University of California, San Diego, 9500 Gilman Drive, MC 0424, La Jolla, CA 92093 (United States); Cooper, Michael C. [Center for Galaxy Evolution, Department of Physics and Astronomy, University of California-Irvine, Irvine, CA 92697 (United States); Dutton, Aaron A. [Department of Physics and Astronomy, University of Victoria, Victoria, BC V8P 5C2 (Canada); Faber, S. M.; Guhathakurta, Puragra; Koo, David C.; Phillips, Andrew C. [UCO/Lick Observatory, University of California-Santa Cruz, Santa Cruz, CA 95064 (United States); Konidaris, Nicholas; Lin, Lihwai [Astronomy Department, Caltech 249-17, Pasadena, CA 91125 (United States); Noeske, Kai [Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218 (United States); Rosario, David J. [Max Planck Institute for Extraterrestrial Physics, Giessenbachstr. 1, 85748 Garching bei Muenchen (Germany); Weiner, Benjamin J.; Willmer, Christopher N. A. [Steward Observatory, University of Arizona, 933 North Cherry Avenue, Tucson, AZ 85721 (United States); Yan, Renbin [Department of Astronomy and Astrophysics, University of Toronto, 50 St. George Street, Toronto, ON M5S 3H4 (Canada)

    2012-05-20

    We present a public catalog of galaxy groups constructed from the spectroscopic sample of galaxies in the fourth data release from the Deep Extragalactic Evolutionary Probe 2 (DEEP2) Galaxy Redshift Survey, including the Extended Groth Strip (EGS). The catalog contains 1165 groups with two or more members in the EGS over the redshift range 0 < z < 1.5 and 1295 groups at z > 0.6 in the rest of DEEP2. Twenty-five percent of EGS galaxies and fourteen percent of high-z DEEP2 galaxies are assigned to galaxy groups. The groups were detected using the Voronoi-Delaunay method (VDM) after it has been optimized on mock DEEP2 catalogs following similar methods to those employed in Gerke et al. In the optimization effort, we have taken particular care to ensure that the mock catalogs resemble the data as closely as possible, and we have fine-tuned our methods separately on mocks constructed for the EGS and the rest of DEEP2. We have also probed the effect of the assumed cosmology on our inferred group-finding efficiency by performing our optimization on three different mock catalogs with different background cosmologies, finding large differences in the group-finding success we can achieve for these different mocks. Using the mock catalog whose background cosmology is most consistent with current data, we estimate that the DEEP2 group catalog is 72% complete and 61% pure (74% and 67% for the EGS) and that the group finder correctly classifies 70% of galaxies that truly belong to groups, with an additional 46% of interloper galaxies contaminating the catalog (66% and 43% for the EGS). We also confirm that the VDM catalog reconstructs the abundance of galaxy groups with velocity dispersions above ~300 km s⁻¹ to an accuracy better than the sample variance, and this successful reconstruction is not strongly dependent on cosmology. This makes the DEEP2 group catalog a promising probe of the growth of cosmic structure that can potentially be used for cosmological tests.

  11. THE DEEP2 GALAXY REDSHIFT SURVEY: THE VORONOI-DELAUNAY METHOD CATALOG OF GALAXY GROUPS

    International Nuclear Information System (INIS)

    Gerke, Brian F.; Newman, Jeffrey A.; Davis, Marc; Coil, Alison L.; Cooper, Michael C.; Dutton, Aaron A.; Faber, S. M.; Guhathakurta, Puragra; Koo, David C.; Phillips, Andrew C.; Konidaris, Nicholas; Lin, Lihwai; Noeske, Kai; Rosario, David J.; Weiner, Benjamin J.; Willmer, Christopher N. A.; Yan, Renbin

    2012-01-01

    We present a public catalog of galaxy groups constructed from the spectroscopic sample of galaxies in the fourth data release from the Deep Extragalactic Evolutionary Probe 2 (DEEP2) Galaxy Redshift Survey, including the Extended Groth Strip (EGS). The catalog contains 1165 groups with two or more members in the EGS over the redshift range 0 < z < 1.5 and 1295 groups at z > 0.6 in the rest of DEEP2. Twenty-five percent of EGS galaxies and fourteen percent of high-z DEEP2 galaxies are assigned to galaxy groups. The groups were detected using the Voronoi-Delaunay method (VDM) after it was optimized on mock DEEP2 catalogs following methods similar to those employed in Gerke et al. In the optimization effort, we have taken particular care to ensure that the mock catalogs resemble the data as closely as possible, and we have fine-tuned our methods separately on mocks constructed for the EGS and the rest of DEEP2. We have also probed the effect of the assumed cosmology on our inferred group-finding efficiency by performing our optimization on three different mock catalogs with different background cosmologies, finding large differences in the group-finding success we can achieve for these different mocks. Using the mock catalog whose background cosmology is most consistent with current data, we estimate that the DEEP2 group catalog is 72% complete and 61% pure (74% and 67% for the EGS) and that the group finder correctly classifies 70% of galaxies that truly belong to groups, with an additional 46% of interloper galaxies contaminating the catalog (66% and 43% for the EGS). We also confirm that the VDM catalog reconstructs the abundance of galaxy groups with velocity dispersions above ∼300 km s⁻¹ to an accuracy better than the sample variance, and this successful reconstruction is not strongly dependent on cosmology. This makes the DEEP2 group catalog a promising probe of the growth of cosmic structure that can potentially be used for cosmological tests.

  12. GENUS STATISTICS USING THE DELAUNAY TESSELLATION FIELD ESTIMATION METHOD. I. TESTS WITH THE MILLENNIUM SIMULATION AND THE SDSS DR7

    International Nuclear Information System (INIS)

    Zhang Youcai; Yang Xiaohu; Springel, Volker

    2010-01-01

    We study the topology of cosmic large-scale structure through the genus statistics, using galaxy catalogs generated from the Millennium Simulation and observational data from the latest Sloan Digital Sky Survey Data Release (SDSS DR7). We introduce a new method for constructing galaxy density fields and for measuring the genus statistics of its isodensity surfaces. It is based on a Delaunay tessellation field estimation (DTFE) technique that allows the definition of a piece-wise continuous density field and the exact computation of the topology of its polygonal isodensity contours, without introducing any free numerical parameter. Besides this new approach, we also employ the traditional approaches of smoothing the galaxy distribution with a Gaussian of fixed width, or by adaptively smoothing with a kernel that encloses a constant number of neighboring galaxies. Our results show that the Delaunay-based method extracts the largest amount of topological information. Unlike the traditional approach for genus statistics, it is able to discriminate between the different theoretical galaxy catalogs analyzed here, both in real space and in redshift space, even though they are based on the same underlying simulation model. In particular, the DTFE approach detects with high confidence a discrepancy of one of the semi-analytic models studied here compared with the SDSS data, while the other models are found to be consistent.
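
    As a quick illustration of the quantity being measured: for a single closed, connected triangulated isodensity surface, the genus follows from the Euler characteristic chi = V - E + F via g = (2 - chi)/2. The sketch below is not the authors' code; the tetrahedron test case is purely illustrative.

```python
import numpy as np

def genus_of_closed_surface(faces):
    """Genus of a closed, connected triangulated surface.

    faces : (F, 3) integer array of vertex indices per triangle.
    Uses the Euler characteristic chi = V - E + F and g = (2 - chi) / 2.
    """
    faces = np.asarray(faces)
    verts = np.unique(faces)
    # On a closed surface every undirected edge is shared by exactly two triangles.
    edges = np.vstack([faces[:, [0, 1]], faces[:, [1, 2]], faces[:, [2, 0]]])
    edges = np.unique(np.sort(edges, axis=1), axis=0)
    chi = len(verts) - len(edges) + len(faces)
    return (2 - chi) // 2

# Example: a tetrahedron (topological sphere) has genus 0.
tetra = [[0, 1, 2], [0, 1, 3], [0, 2, 3], [1, 2, 3]]
print(genus_of_closed_surface(tetra))  # -> 0
```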

  13. Introductory review on `Flying Triangulation': a motion-robust optical 3D measurement principle

    Science.gov (United States)

    Ettl, Svenja

    2015-04-01

    'Flying Triangulation' (FlyTri) is a recently developed principle which allows for a motion-robust optical 3D measurement of rough surfaces. It combines a simple sensor with sophisticated algorithms: a single-shot sensor acquires 2D camera images. From each camera image, a 3D profile is generated. The series of 3D profiles generated are aligned to one another by algorithms, without relying on any external tracking device. It delivers real-time feedback of the measurement process which enables an all-around measurement of objects. The principle has great potential for small-space acquisition environments, such as the measurement of the interior of a car, and motion-sensitive measurement tasks, such as the intraoral measurement of teeth. This article gives an overview of the basic ideas and applications of FlyTri. The main challenges and their solutions are discussed. Measurement examples are also given to demonstrate the potential of the measurement principle.

  14. Internet information triangulation: Design theory and prototype evaluation

    NARCIS (Netherlands)

    Wijnhoven, Alphonsus B.J.M.; Brinkhuis, Michel

    2014-01-01

    Many discussions exist regarding the credibility of information on the Internet. Similar discussions happen on the interpretation of social scientific research data, for which information triangulation has been proposed as a useful method. In this article, we explore a design theory—consisting of a

  15. Quantum Computing in Decoherence-Free Subspace Constructed by Triangulation

    OpenAIRE

    Bi, Qiao; Guo, Liu; Ruda, H. E.

    2010-01-01

    A formalism for quantum computing in decoherence-free subspaces is presented. The constructed subspaces are partially triangulated with respect to an index related to the environment. The quantum states in the subspaces are simply projected states governed by a subdynamic kinetic equation. These projected states can be used to perform ideal quantum logical operations without decoherence.

  16. Quantum Computing in Decoherence-Free Subspace Constructed by Triangulation

    Directory of Open Access Journals (Sweden)

    Qiao Bi

    2010-01-01

    Full Text Available A formalism for quantum computing in decoherence-free subspaces is presented. The constructed subspaces are partially triangulated with respect to an index related to the environment. The quantum states in the subspaces are simply projected states governed by a subdynamic kinetic equation. These projected states can be used to perform ideal quantum logical operations without decoherence.

  17. 'Triangulating' AMPATH: Demonstration of a multi-perspective ...

    African Journals Online (AJOL)

    For strategic planning, the Kenyan HIV/AIDS programme AMPATH (Academic Model Providing Access to Healthcare) sought to evaluate its performance in 2006. The method used for this evaluation was termed 'triangulation,' because it used information from three different sources – patients, communities, and programme ...

  18. Path integral measure and triangulation independence in discrete gravity

    Science.gov (United States)

    Dittrich, Bianca; Steinhaus, Sebastian

    2012-02-01

    A path integral measure for gravity should also preserve the fundamental symmetry of general relativity, which is diffeomorphism symmetry. In previous work, we argued that a successful implementation of this symmetry into discrete quantum gravity models would imply discretization independence. We therefore consider the requirement of triangulation independence for the measure in (linearized) Regge calculus, which is a discrete model for quantum gravity, appearing in the semi-classical limit of spin foam models. To this end we develop a technique to evaluate the linearized Regge action associated to Pachner moves in 3D and 4D and show that it has a simple, factorized structure. We succeed in finding a local measure for 3D (linearized) Regge calculus that leads to triangulation independence. This measure factor coincides with the asymptotics of the Ponzano Regge Model, a 3D spin foam model for gravity. We furthermore discuss to which extent one can find a triangulation independent measure for 4D Regge calculus and how such a measure would be related to a quantum model for 4D flat space. To this end, we also determine the dependence of classical Regge calculus on the choice of triangulation in 3D and 4D.

  19. Quantum gravity from simplices: analytical investigations of causal dynamical triangulations

    NARCIS (Netherlands)

    Benedetti, D.

    2007-01-01

    A potentially powerful approach to quantum gravity has been developed over the last few years under the name of Causal Dynamical Triangulations. Although these models can be solved exactly in a variety of ways in the case of pure gravity in (1+1) dimensions,it is difficult to extend any of the

  20. Putting a cap on causality violations in causal dynamical triangulations

    International Nuclear Information System (INIS)

    Ambjoern, Jan; Loll, Renate; Westra, Willem; Zohren, Stefan

    2007-01-01

    The formalism of causal dynamical triangulations (CDT) provides us with a non-perturbatively defined model of quantum gravity, where the sum over histories includes only causal space-time histories. Path integrals of CDT and their continuum limits have been studied in two, three and four dimensions. Here we investigate a generalization of the two-dimensional CDT model, where the causality constraint is partially lifted by introducing branching points with a weight g_s, and demonstrate that the system can be solved analytically in the genus-zero sector. The solution is analytic in a neighborhood around weight g_s = 0 and cannot be analytically continued to g_s = ∞, where the branching is entirely geometric and where one would formally recover standard Euclidean two-dimensional quantum gravity defined via dynamical triangulations or Liouville theory.

  1. Triangulation based inclusion probabilities: a design-unbiased sampling approach

    OpenAIRE

    Fehrmann, Lutz; Gregoire, Timothy; Kleinn, Christoph

    2011-01-01

    A probabilistic sampling approach for design-unbiased estimation of area-related quantitative characteristics of spatially dispersed population units is proposed. The developed field protocol includes a fixed number of 3 units per sampling location and is based on partial triangulations over their natural neighbors to derive the individual inclusion probabilities. The performance of the proposed design is tested in comparison to fixed area sample plots in a simulation with two forest stands. ...

  2. Quantum triangulations. Moduli spaces, strings, and quantum computing

    Energy Technology Data Exchange (ETDEWEB)

    Carfora, Mauro; Marzouli, Annalisa [Univ. degli Studi di Pavia (Italy). Dipt. Fisica Nucleare e Teorica; Istituto Nazionale di Fisica Nucleare e Teorica, Pavia (Italy)

    2012-07-01

    Research on polyhedral manifolds often points to unexpected connections between very distinct aspects of Mathematics and Physics. In particular, triangulated manifolds play quite a distinguished role in such settings as Riemann moduli space theory, strings and quantum gravity, topological quantum field theory, condensed matter physics, and critical phenomena. Not only do they provide a natural discrete analogue to the smooth manifolds on which physical theories are typically formulated, but their appearance is rather often a consequence of an underlying structure which naturally calls into play non-trivial aspects of representation theory, of complex analysis and topology in a way which makes manifest the basic geometric structures of the physical interactions involved. Yet, in most of the existing literature, triangulated manifolds are still merely viewed as a convenient discretization of a given physical theory to make it more amenable to numerical treatment. The motivation for these lecture notes is thus to provide an approachable introduction to this topic, emphasizing the conceptual aspects and probing, through a set of case studies, the connection between triangulated manifolds and quantum physics in depth. This volume addresses applied mathematicians and theoretical physicists working in the field of quantum geometry and its applications. (orig.)

  3. Summations over equilaterally triangulated surfaces and the critical string measure

    International Nuclear Information System (INIS)

    Smit, D.J.; Lawrence Berkeley Lab., CA

    1992-01-01

    We propose a new approach to the summation over dynamically triangulated Riemann surfaces which does not rely on properties of the potential in a matrix model. Instead, we formulate a purely algebraic discretization of the critical string path integral. This is combined with a technique which assigns to each equilateral triangulation of a two-dimensional surface a Riemann surface defined over a certain finite extension of the field of rational numbers, i.e. an arithmetic surface. Thus we establish a new formulation in which the sum over randomly triangulated surfaces defines an invariant measure on the moduli space of arithmetic surfaces. It is shown that, because of this, it is far from obvious that this measure for large genera approximates the measure defined by the continuum theory, i.e. Liouville theory or critical string theory. In low genus this subtlety does not exist. In the case of critical string theory we explicitly compute the volume of the moduli space of arithmetic surfaces in terms of the modular height function and show that for low genus it approximates correctly the continuum measure. We also discuss a continuum limit which bears some resemblance to a double scaling limit in matrix models. (orig.)

  4. A flocking algorithm for multi-agent systems with connectivity preservation under hybrid metric-topological interactions.

    Science.gov (United States)

    He, Chenlong; Feng, Zuren; Ren, Zhigang

    2018-01-01

    In this paper, we propose a connectivity-preserving flocking algorithm for multi-agent systems in which the neighbor set of each agent is determined by the hybrid metric-topological distance so that the interaction topology can be represented as the range-limited Delaunay graph, which combines the properties of the commonly used disk graph and Delaunay graph. As a result, the proposed flocking algorithm has the following advantages over the existing ones. First, range-limited Delaunay graph is sparser than the disk graph so that the information exchange among agents is reduced significantly. Second, some links irrelevant to the connectivity can be dynamically deleted during the evolution of the system. Thus, the proposed flocking algorithm is more flexible than existing algorithms, where links are not allowed to be disconnected once they are created. Finally, the multi-agent system spontaneously generates a regular quasi-lattice formation without imposing the constraint on the ratio of the sensing range of the agent to the desired distance between two adjacent agents. With the interaction topology induced by the hybrid distance, the proposed flocking algorithm can still be implemented in a distributed manner. We prove that the proposed flocking algorithm can steer the multi-agent system to a stable flocking motion, provided the initial interaction topology of multi-agent systems is connected and the hysteresis in link addition is smaller than a derived upper bound. The correctness and effectiveness of the proposed algorithm are verified by extensive numerical simulations, where the flocking algorithms based on the disk and Delaunay graph are compared.
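
    The range-limited Delaunay graph referred to above can be viewed as the intersection of the Delaunay graph with the disk graph: two agents interact only if they share a Delaunay edge whose length does not exceed the sensing range. A minimal sketch of that neighbor-set computation is given below (not the authors' implementation; the sensing range r and the random agent layout are illustrative).

```python
import numpy as np
from scipy.spatial import Delaunay

def range_limited_delaunay_neighbors(points, r):
    """Neighbor sets under the hybrid metric-topological rule:
    keep a Delaunay edge only if its length does not exceed r."""
    points = np.asarray(points, dtype=float)
    tri = Delaunay(points)
    neighbors = {i: set() for i in range(len(points))}
    for simplex in tri.simplices:              # each triangle contributes three edges
        for i, j in ((0, 1), (1, 2), (2, 0)):
            a, b = int(simplex[i]), int(simplex[j])
            if np.linalg.norm(points[a] - points[b]) <= r:
                neighbors[a].add(b)
                neighbors[b].add(a)
    return neighbors

agents = np.random.rand(20, 2) * 10.0          # random planar agent positions
print(range_limited_delaunay_neighbors(agents, r=3.0))
```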

  5. A flocking algorithm for multi-agent systems with connectivity preservation under hybrid metric-topological interactions.

    Directory of Open Access Journals (Sweden)

    Chenlong He

    Full Text Available In this paper, we propose a connectivity-preserving flocking algorithm for multi-agent systems in which the neighbor set of each agent is determined by the hybrid metric-topological distance so that the interaction topology can be represented as the range-limited Delaunay graph, which combines the properties of the commonly used disk graph and Delaunay graph. As a result, the proposed flocking algorithm has the following advantages over the existing ones. First, range-limited Delaunay graph is sparser than the disk graph so that the information exchange among agents is reduced significantly. Second, some links irrelevant to the connectivity can be dynamically deleted during the evolution of the system. Thus, the proposed flocking algorithm is more flexible than existing algorithms, where links are not allowed to be disconnected once they are created. Finally, the multi-agent system spontaneously generates a regular quasi-lattice formation without imposing the constraint on the ratio of the sensing range of the agent to the desired distance between two adjacent agents. With the interaction topology induced by the hybrid distance, the proposed flocking algorithm can still be implemented in a distributed manner. We prove that the proposed flocking algorithm can steer the multi-agent system to a stable flocking motion, provided the initial interaction topology of multi-agent systems is connected and the hysteresis in link addition is smaller than a derived upper bound. The correctness and effectiveness of the proposed algorithm are verified by extensive numerical simulations, where the flocking algorithms based on the disk and Delaunay graph are compared.

  6. Laser triangulation method for measuring the size of parking claw

    Science.gov (United States)

    Liu, Bo; Zhang, Ming; Pang, Ying

    2017-10-01

    With the maturing of measurement technology, 3D profile measurement has developed rapidly. Three-dimensional measurement is widely used in mold manufacturing, industrial inspection, automated processing and manufacturing, and related fields. In many scientific research and industrial production settings it is necessary to convert physical mechanical parts into 3D data models on a computer quickly and accurately. Many methods have been developed to measure contour dimensions; laser triangulation is one of the most widely used.
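
    For reference, the basic relation behind single-point laser triangulation maps the lateral displacement of the imaged laser spot to range. The sketch below assumes a simplified pinhole model with the laser beam parallel to the optical axis at a baseline offset b from the camera, giving z = f·b/x; the numbers are illustrative and this is not the specific sensor geometry of the cited work.

```python
def triangulation_range(baseline_mm, focal_mm, spot_offset_mm):
    """Simplified pinhole laser-triangulation range equation z = f * b / x.

    baseline_mm    : distance between laser emitter and camera center
    focal_mm       : camera focal length
    spot_offset_mm : lateral position of the imaged laser spot on the sensor
    """
    if spot_offset_mm == 0:
        raise ValueError("spot on the optical axis corresponds to infinite range")
    return focal_mm * baseline_mm / spot_offset_mm

# Example with illustrative numbers: 50 mm baseline, 16 mm lens, 0.8 mm spot offset.
print(triangulation_range(50.0, 16.0, 0.8))  # -> 1000.0 mm
```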

  7. Optimizing 3D Triangulations to Recapture Sharp Edges

    DEFF Research Database (Denmark)

    Bærentzen, Jakob Andreas

    2006-01-01

    In this report, a technique for optimizing 3D triangulations is proposed. The method seeks to minimize an energy defined as a sum of energy terms for each edge in a triangle mesh. The main contribution is a novel per edge energy which strikes a balance between penalizing dihedral angle yet allowing...... sharp edges. The energy is minimized using edge swapping, and this can be done either in a greedy fashion or using simulated annealing. The latter is more costly, but effectively avoids local minima. The method has been used on a number of models. Particularly good results have been obtained on digital...
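
    The energy described above is a sum of per-edge terms that penalize the dihedral angle while still permitting sharp creases. The sketch below shows one illustrative per-edge term of that flavor (quadratic in the dihedral deviation from flat, capped at a crease threshold); the functional form and the 60° threshold are assumptions, not the report's actual energy.

```python
import numpy as np

def dihedral_angle(p0, p1, a, b):
    """Dihedral deviation from flat (radians) across the edge (p0, p1)
    shared by triangles (p0, p1, a) and (p1, p0, b)."""
    n1 = np.cross(p1 - p0, a - p0)
    n2 = np.cross(p0 - p1, b - p1)
    n1 /= np.linalg.norm(n1)
    n2 /= np.linalg.norm(n2)
    # 0 for coplanar triangles, pi for a fully folded edge.
    return np.arccos(np.clip(np.dot(n1, n2), -1.0, 1.0))

def edge_energy(p0, p1, a, b, sharp_threshold=np.deg2rad(60.0)):
    """Illustrative per-edge energy: quadratic in the dihedral angle below a
    threshold, flat above it, so genuinely sharp creases are not penalized further."""
    theta = dihedral_angle(np.asarray(p0, float), np.asarray(p1, float),
                           np.asarray(a, float), np.asarray(b, float))
    return min(theta, sharp_threshold) ** 2

# Two triangles folded across the shared edge (0,0,0)-(1,0,0):
print(edge_energy([0, 0, 0], [1, 0, 0], [0.5, 1, 0], [0.5, -1, 0.5]))
```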

  8. Three-Dimensional Reconstruction Optical System Using Shadows Triangulation

    Science.gov (United States)

    Barba, J. Leiner; Vargas, Q. Lorena; Torres, M. Cesar; Mattos, V. Lorenzo

    2008-04-01

    In this work, a three-dimensional reconstruction system is developed using the Shades3D tool of the Matlab® programming language and low-cost materials: a webcam, a stick, a weak structured-lighting system composed of a desk lamp, and an observation plane on which the object is located. The reconstruction is obtained through a triangulation process executed after acquiring a sequence of images of the scene with a shadow projected onto the object; additionally, an image filtering step retains only the part of the scene to be reconstructed. Beforehand, a calibration process is required to determine the camera's internal geometric and optical characteristics (intrinsic parameters) and the 3D position and orientation of the camera frame relative to a chosen world coordinate system (extrinsic parameters). The lamp and the stick produce a shadow that scans the object; in this technique it is not necessary to know the position of the light source, since the triangulation uses the shadow plane produced by the intersection between the stick and the illumination pattern. The webcam captures all images while the shadow scans the object, and the Shades3D tool processes this information using the captured images and calibration parameters. The technique is also evaluated on the reconstruction of parts of the human body and its application to the detection of external abnormalities and the design of prostheses or implants.
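
    The triangulation step in such a system amounts to intersecting the camera ray through each shadow-edge pixel with the current shadow plane. A minimal sketch of that ray-plane intersection is given below, assuming known camera intrinsics K and a shadow plane n·X = d expressed in camera coordinates; all numeric values are illustrative and this is not the Shades3D implementation.

```python
import numpy as np

def backproject_to_plane(pixel, K, plane_normal, plane_d):
    """Intersect the camera ray through `pixel` with the plane n.X = d
    (camera at the origin, plane given in camera coordinates).

    pixel        : (u, v) image coordinates
    K            : 3x3 intrinsic matrix
    plane_normal : (3,) normal vector n of the shadow plane
    plane_d      : scalar d in n.X = d
    Returns the 3D point on the plane hit by the ray.
    """
    uv1 = np.array([pixel[0], pixel[1], 1.0])
    ray = np.linalg.solve(K, uv1)          # direction of the back-projected ray
    t = plane_d / np.dot(plane_normal, ray)
    return t * ray

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])            # illustrative intrinsics
n, d = np.array([0.0, -0.5, 1.0]), 1.2     # illustrative shadow plane
print(backproject_to_plane((350, 260), K, n, d))
```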

  9. Triangulation-based edge measurement using polyview optics

    Science.gov (United States)

    Li, Yinan; Kästner, Markus; Reithmeier, Eduard

    2018-04-01

    Laser triangulation sensors are non-contact measurement devices widely used in industry and research for profile measurements and quantitative inspections. Some technical applications, e.g. edge measurements, usually require either a single sensor combined with a translation stage or a configuration of multiple sensors, in order to cover a measurement range beyond the scope of a single sensor. However, the cost of both configurations is high, due to the additional rotational axis or additional sensor. This paper presents a special measurement system for large curved surfaces based on a single-sensor configuration. Using self-designed polyview optics and a calibration process, the proposed system achieves a field of view (FOV) of over 180° with high measurement accuracy at low cost. The capability of this measurement system is discussed in detail on the basis of experimental data.

  10. Technique Triangulation for Validation in Directed Content Analysis

    Directory of Open Access Journals (Sweden)

    Áine M. Humble PhD

    2009-09-01

    Full Text Available Division of labor in wedding planning varies for first-time marriages, with three types of couples—traditional, transitional, and egalitarian—identified, but nothing is known about wedding planning for remarrying individuals. Using semistructured interviews, the author interviewed 14 couples in which at least one person had remarried and used directed content analysis to investigate the extent to which the aforementioned typology could be transferred to this different context. In this paper she describes how a triangulation of analytic techniques provided validation for couple classifications and also helped with moving beyond “blind spots” in data analysis. Analytic approaches were the constant comparative technique, rank order comparison, and visual representation of coding, using MAXQDA 2007's tool called TextPortraits.

  11. On-Line Metrology with Conoscopic Holography: Beyond Triangulation

    Directory of Open Access Journals (Sweden)

    Ignacio Álvarez

    2009-09-01

    Full Text Available On-line non-contact surface inspection with high precision is still an open problem. Laser triangulation techniques are the most common solution for this kind of systems, but there exist fundamental limitations to their applicability when high precisions, long standoffs or large apertures are needed, and when there are difficult operating conditions. Other methods are, in general, not applicable in hostile environments or inadequate for on-line measurement. In this paper we review the latest research in Conoscopic Holography, an interferometric technique that has been applied successfully in this kind of applications, ranging from submicrometric roughness measurements, to long standoff sensors for surface defect detection in steel at high temperatures.

  12. The chromatic class and the chromatic number of the planar conjugated triangulation

    OpenAIRE

    Malinina, Natalia

    2013-01-01

    This material is dedicated to estimating the chromatic number and the chromatic class of the conjugated triangulation (first conversion) and of the second conversion of the planar triangulation. The paper also introduces some new hypotheses that are equivalent to the Four Color Problem.

  13. A REST Service for Triangulation of Point Sets Using Oriented Matroids

    Directory of Open Access Journals (Sweden)

    José Antonio Valero Medina

    2014-05-01

    Full Text Available This paper describes the implementation of a prototype REST service for triangulation of point sets collected by mobile GPS receivers. The first objective of this paper is to test functionalities of an application, which exploits mobile devices’ capabilities to get data associated with their spatial location. A triangulation of a set of points provides a mechanism through which it is possible to produce an accurate representation of spatial data. Such triangulation may be used for representing surfaces by Triangulated Irregular Networks (TINs, and for decomposing complex two-dimensional spatial objects into simpler geometries. The second objective of this paper is to promote the use of oriented matroids for finding alternative solutions to spatial data processing and analysis tasks. This study focused on the particular case of the calculation of triangulations based on oriented matroids. The prototype described in this paper used a wrapper to integrate and expose several tools previously implemented in C++.

  14. Branches of Triangulated Origami Near the Unfolded State

    Directory of Open Access Journals (Sweden)

    Bryan Gin-ge Chen

    2018-02-01

    Full Text Available Origami structures are characterized by a network of folds and vertices joining unbendable plates. For applications to mechanical design and self-folding structures, it is essential to understand the interplay between the set of folds in the unfolded origami and the possible 3D folded configurations. When deforming a structure that has been folded, one can often linearize the geometric constraints, but the degeneracy of the unfolded state makes a linear approach impossible there. We derive a theory for the second-order infinitesimal rigidity of an initially unfolded triangulated origami structure and use it to study the set of nearly unfolded configurations of origami with four boundary vertices. We find that locally, this set consists of a number of distinct “branches” which intersect at the unfolded state, and that the number of these branches is exponential in the number of vertices. We find numerical and analytical evidence that suggests that the branches are characterized by choosing each internal vertex to either “pop up” or “pop down.” The large number of pathways along which one can fold an initially unfolded origami structure strongly indicates that a generic structure is likely to become trapped in a “misfolded” state. Thus, new techniques for creating self-folding origami are likely necessary; controlling the popping state of the vertices may be one possibility.

  15. Branches of Triangulated Origami Near the Unfolded State

    Science.gov (United States)

    Chen, Bryan Gin-ge; Santangelo, Christian D.

    2018-01-01

    Origami structures are characterized by a network of folds and vertices joining unbendable plates. For applications to mechanical design and self-folding structures, it is essential to understand the interplay between the set of folds in the unfolded origami and the possible 3D folded configurations. When deforming a structure that has been folded, one can often linearize the geometric constraints, but the degeneracy of the unfolded state makes a linear approach impossible there. We derive a theory for the second-order infinitesimal rigidity of an initially unfolded triangulated origami structure and use it to study the set of nearly unfolded configurations of origami with four boundary vertices. We find that locally, this set consists of a number of distinct "branches" which intersect at the unfolded state, and that the number of these branches is exponential in the number of vertices. We find numerical and analytical evidence that suggests that the branches are characterized by choosing each internal vertex to either "pop up" or "pop down." The large number of pathways along which one can fold an initially unfolded origami structure strongly indicates that a generic structure is likely to become trapped in a "misfolded" state. Thus, new techniques for creating self-folding origami are likely necessary; controlling the popping state of the vertices may be one possibility.

  16. Multiomics Data Triangulation for Asthma Candidate Biomarkers and Precision Medicine.

    Science.gov (United States)

    Pecak, Matija; Korošec, Peter; Kunej, Tanja

    2018-06-01

    Asthma is a common complex disorder and has been subject to intensive omics research for disease susceptibility and therapeutic innovation. Candidate biomarkers of asthma and its precision treatment demand that they stand the test of multiomics data triangulation before they can be prioritized for clinical applications. We classified the biomarkers of asthma after a search of the literature and based on whether or not a given biomarker candidate is reported in multiple omics platforms and methodologies, using PubMed and Web of Science, we identified omics studies of asthma conducted on diverse platforms using keywords, such as asthma, genomics, metabolomics, and epigenomics. We extracted data about asthma candidate biomarkers from 73 articles and developed a catalog of 190 potential asthma biomarkers (167 human, 23 animal data), comprising DNA loci, transcripts, proteins, metabolites, epimutations, and noncoding RNAs. The data were sorted according to 13 omics types: genomics, epigenomics, transcriptomics, proteomics, interactomics, metabolomics, ncRNAomics, glycomics, lipidomics, environmental omics, pharmacogenomics, phenomics, and integrative omics. Importantly, we found that 10 candidate biomarkers were apparent in at least two or more omics levels, thus promising potential for further biomarker research and development and precision medicine applications. This multiomics catalog reported herein for the first time contributes to future decision-making on prioritization of biomarkers and validation efforts for precision medicine in asthma. The findings may also facilitate meta-analyses and integrative omics studies in the future.

  17. Simultaneous hierarchical segmentation and vectorization of satellite images through combined data sampling and anisotropic triangulation

    Energy Technology Data Exchange (ETDEWEB)

    Grazzini, Jacopo [Los Alamos National Laboratory]; Prasad, Lakshman [Los Alamos National Laboratory]; Dillard, Scott [PNNL]

    2010-10-21

    The automatic detection, recognition, and segmentation of object classes in remotely sensed images is of crucial importance for scene interpretation and understanding. However, it is a difficult task because of the high variability of satellite data. Indeed, the observed scenes usually exhibit a high degree of complexity, where complexity refers to the large variety of pictorial representations of objects with the same semantic meaning and also to the extensive amount of available details. Therefore, there is still a strong demand for robust techniques for automatic information extraction and interpretation of satellite images. In parallel, there is a growing interest in techniques that can extract vector features directly from such imagery. In this paper, we investigate the problem of automatic hierarchical segmentation and vectorization of multispectral satellite images. We propose a new algorithm composed of the following steps: (i) a non-uniform sampling scheme extracting the most salient pixels in the image, (ii) an anisotropic triangulation constrained by the sampled pixels that takes into account both the strength and the directionality of local structures present in the image, (iii) a polygonal grouping scheme that merges, through techniques based on perceptual information, the obtained segments into a smaller number of higher-level vector objects. Besides its computational efficiency, this approach provides a meaningful polygonal representation for subsequent image analysis and/or interpretation.
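
    A rough sketch of steps (i)-(ii) under strong simplifications is given below: saliency is taken as the Sobel gradient magnitude and the triangulation is a plain scipy Delaunay triangulation of the sampled pixels, not the anisotropic, structure-constrained construction used in the paper; the random test image is a placeholder for a satellite band.

```python
import numpy as np
from scipy.spatial import Delaunay
from scipy.ndimage import sobel

def sample_and_triangulate(image, n_samples=2000):
    """Pick the most salient pixels by gradient magnitude and triangulate them.

    Simplified stand-in for steps (i)-(ii): saliency here is the Sobel gradient
    magnitude and the triangulation is plain Delaunay, not the anisotropic,
    structure-aware construction of the paper.
    """
    img = image.astype(float)
    saliency = np.hypot(sobel(img, axis=1), sobel(img, axis=0))
    flat = np.argsort(saliency.ravel())[-n_samples:]          # strongest responses
    rows, cols = np.unravel_index(flat, image.shape)
    points = np.column_stack([cols, rows])
    return points, Delaunay(points)

image = np.random.rand(128, 128)           # placeholder for a satellite band
pts, tri = sample_and_triangulate(image, n_samples=500)
print(len(tri.simplices), "triangles over", len(pts), "sampled pixels")
```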

  18. Euclidean Dynamical Triangulation revisited: is the phase transition really 1st order?

    International Nuclear Information System (INIS)

    Rindlisbacher, Tobias; Forcrand, Philippe de

    2015-01-01

    The transition between the two phases of 4D Euclidean Dynamical Triangulation (http://dx.doi.org/10.1016/0370-2693(92)90709-D) was long believed to be of second order until in 1996 first order behavior was found for sufficiently large systems (http://dx.doi.org/10.1016/0550-3213(96)00214-3, http://dx.doi.org/10.1016/S0370-2693(96)01277-4). However, one may wonder if this finding was affected by the numerical methods used: to control volume fluctuations, in both studies (http://dx.doi.org/10.1016/0550-3213(96)00214-3, http://dx.doi.org/10.1016/S0370-2693(96)01277-4) an artificial harmonic potential was added to the action and in (http://dx.doi.org/10.1016/S0370-2693(96)01277-4) measurements were taken after a fixed number of accepted instead of attempted moves which introduces an additional error. Finally the simulations suffer from strong critical slowing down which may have been underestimated. In the present work, we address the above weaknesses: we allow the volume to fluctuate freely within a fixed interval; we take measurements after a fixed number of attempted moves; and we overcome critical slowing down by using an optimized parallel tempering algorithm (http://dx.doi.org/10.1088/1742-5468/2010/01/P01020). With these improved methods, on systems of size up to N_4=64k 4-simplices, we confirm that the phase transition is first order. In addition, we discuss a local criterion to decide whether parts of a triangulation are in the elongated or crumpled state and describe a new correspondence between EDT and the balls in boxes model. The latter gives rise to a modified partition function with an additional, third coupling. Finally, we propose and motivate a class of modified path-integral measures that might remove the metastability of the Markov chain and turn the phase transition into second order.

  19. Flattening of the electrocardiographic T-wave is a sign of proarrhythmic risk and a reflection of action potential triangulation

    DEFF Research Database (Denmark)

    Bhuiyan, Tanveer Ahmed; Graff, Claus; Kanters, J.K.

    2013-01-01

    Drug-induced triangulation of the cardiac action potential is associated with increased risk of arrhythmic events. It has been suggested that triangulation causes a flattening of the electrocardiographic T-wave but the relationship between triangulation, T-wave flattening and onset of arrhythmia ...

  20. The Triangulation Algorithmic: A Transformative Function for Designing and Deploying Effective Educational Technology Assessment Instruments

    Science.gov (United States)

    Osler, James Edward

    2013-01-01

    This paper discusses the implementation of the Tri-Squared Test as an advanced statistical measure used to verify and validate the research outcomes of Educational Technology software. A mathematical and epistemological rationale is provided for the transformative process of qualitative data into quantitative outcomes through the Tri-Squared Test…

  1. A study on the effect of different image centres on stereo triangulation accuracy

    CSIR Research Space (South Africa)

    De Villiers, J

    2015-11-01

    Full Text Available This paper evaluates the effect of mixing the distortion centre, principal point and arithmetic image centre on the distortion correction, focal length determination and resulting real-world stereo vision triangulation. A robotic arm is used...

  2. Ising model of a randomly triangulated random surface as a definition of fermionic string theory

    International Nuclear Information System (INIS)

    Bershadsky, M.A.; Migdal, A.A.

    1986-01-01

    Fermionic degrees of freedom are added to randomly triangulated planar random surfaces. It is shown that the Ising model on a fixed graph is equivalent to a certain Majorana fermion theory on the dual graph. (orig.)

  3. Exact computation of the Voronoi Diagram of spheres in 3D, its topology and its geometric invariants

    DEFF Research Database (Denmark)

    Anton, François; Mioc, Darka; Santos, Marcelo

    2011-01-01

    In this paper, we are addressing the exact computation of the Delaunay graph (or quasi-triangulation) and the Voronoi diagram of spheres using Wu’s algorithm. Our main contribution is first a methodology for automated derivation of invariants of the Delaunay empty circumcircle predicate for spheres...... and the Voronoi vertex of four spheres, then the application of this methodology to get all geometrical invariants that intervene in this problem and the exact computation of the Delaunay graph and the Voronoi diagram of spheres. To the best of our knowledge, there does not exist a comprehensive treatment...... of the exact computation with geometrical invariants of the Delaunay graph and the Voronoi diagram of spheres. Starting from the system of equations defining the zero-dimensional algebraic set of the problem, we are following Wu’s algorithm to transform the initial system into an equivalent Wu characteristic...

  4. Drug repurposing by integrated literature mining and drug–gene–disease triangulation

    DEFF Research Database (Denmark)

    Sun, Peng; Guo, Jiong; Winnenburg, Rainer

    2017-01-01

    recent developments in computational drug repositioning and introduce the utilized data sources. Afterwards, we introduce a new data fusion model based on n-cluster editing as a novel multi-source triangulation strategy, which was further combined with semantic literature mining. Our evaluation suggests...... that utilizing drug–gene–disease triangulation coupled to sophisticated text analysis is a robust approach for identifying new drug candidates for repurposing....

  5. Mesh Generation via Local Bisection Refinement of Triangulated Grids

    Science.gov (United States)

    2015-06-01

    This report provides a comprehensive implementation of an unstructured mesh generation method... their behaviour is critically linked to Maubach’s method and the data structures N and T. The top-level mesh refinement algorithm is also presented.

  6. Grey signal processing and data reconstruction in the non-diffracting beam triangulation measurement system

    Science.gov (United States)

    Meng, Hao; Wang, Zhongyu; Fu, Jihua

    2008-12-01

    The non-diffracting beam triangulation measurement system possesses the advantages of longer measurement range, higher theoretical measurement accuracy and higher resolution over the traditional laser triangulation measurement system. Unfortunately the measurement accuracy of the system is greatly degraded due to the speckle noise, the CCD photoelectric noise and the background light noise in practical applications. Hence, some effective signal processing methods must be applied to improve the measurement accuracy. In this paper a novel effective method for removing the noises in the non-diffracting beam triangulation measurement system is proposed. In the method the grey system theory is used to process and reconstruct the measurement signal. Through implementing the grey dynamic filtering based on the dynamic GM(1,1), the noises can be effectively removed from the primary measurement data and the measurement accuracy of the system can be improved as a result.
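
    The GM(1,1) model underlying the grey dynamic filtering mentioned above fits an exponential response to the accumulated (cumulative-sum) sequence and differences it back to obtain a smoothed signal. The sketch below implements that standard GM(1,1) fit on a noisy toy sequence; it illustrates the model only and is not the paper's full processing chain.

```python
import numpy as np

def gm11_fit(x0):
    """Fit a GM(1,1) grey model to the positive sequence x0 and return the
    smoothed/reconstructed sequence."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                                  # accumulated generating operation
    z1 = 0.5 * (x1[:-1] + x1[1:])                       # mean generating sequence
    B = np.column_stack([-z1, np.ones_like(z1)])
    Y = x0[1:]
    (a, b), *_ = np.linalg.lstsq(B, Y, rcond=None)      # a: development coeff., b: grey input
    k = np.arange(len(x0))
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a   # time response of the accumulated series
    x0_hat = np.concatenate([[x0[0]], np.diff(x1_hat)]) # inverse accumulated generation
    return x0_hat

noisy = 10.0 + 0.5 * np.arange(20) + np.random.normal(0, 0.2, 20)
print(gm11_fit(noisy))
```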

  7. Triangulation and the importance of establishing valid methods for food safety culture evaluation.

    Science.gov (United States)

    Jespersen, Lone; Wallace, Carol A

    2017-10-01

    The research evaluates maturity of food safety culture in five multi-national food companies using method triangulation, specifically self-assessment scale, performance documents, and semi-structured interviews. Weaknesses associated with each individual method are known but there are few studies in food safety where a method triangulation approach is used for both data collection and data analysis. Significantly, this research shows that individual results taken in isolation can lead to wrong conclusions, resulting in potentially failing tactics and wasted investments. However, by applying method triangulation and reviewing results from a range of culture measurement tools it is possible to better direct investments and interventions. The findings add to the food safety culture paradigm beyond a single evaluation of food safety culture using generic culture surveys.

  8. The use of Triangulation in Social Sciences Research : Can qualitative and quantitative methods be combined?

    Directory of Open Access Journals (Sweden)

    Ashatu Hussein

    2015-03-01

    Full Text Available This article refers to a study in Tanzania on fringe benefits, or welfare via the work contract, in which we work both quantitatively and qualitatively. My focus is on the vital issue of combining methods or methodologies. There have been mixed views on the use of triangulation in research. Some authors argue that triangulation serves only to widen and deepen understanding of the phenomenon under study, while others argue that triangulation is actually used to increase study accuracy, in which case triangulation is one of the validity measures. Triangulation is defined as the use of multiple methods, mainly qualitative and quantitative, in studying the same phenomenon for the purpose of increasing study credibility. This implies that triangulation is the combination of two or more methodological approaches, theoretical perspectives, data sources, investigators and analysis methods to study the same phenomenon. However, using both qualitative and quantitative paradigms in the same study has drawn criticism from some researchers, who argue that the two paradigms differ epistemologically and ontologically. Nevertheless, both paradigms are designed to further understanding of a particular subject area of interest, and both have strengths and weaknesses. Thus, when they are combined, there is a great possibility of neutralizing the flaws of one method and strengthening the benefits of the other, yielding better research results. To reap the benefits of the two paradigms and minimize the drawbacks of each, the combination of the two approaches is advocated in this article. The quality of our studies on welfare to combat poverty is crucial, especially when we want our conclusions to matter in practice.

  9. Quantitative evaluation for small surface damage based on iterative difference and triangulation of 3D point cloud

    Science.gov (United States)

    Zhang, Yuyan; Guo, Quanli; Wang, Zhenchun; Yang, Degong

    2018-03-01

    This paper proposes a non-contact, non-destructive evaluation method for the surface damage of high-speed sliding electrical contact rails. The proposed method establishes a model of damage identification and calculation. A laser scanning system is built to obtain the 3D point cloud data of the rail surface. In order to extract the damage region of the rail surface, the 3D point cloud data are processed using iterative difference, nearest-neighbour search and a data registration algorithm. The curvature of the point cloud data in the damage region is mapped to RGB color information, which directly reflects how the curvature varies across the damage region. The extracted damage region is divided into triangular prism elements by triangulation. The volume and mass of a single element are calculated by geometric segmentation. Finally, the total volume and mass of the damage region are obtained by superposition. The proposed method is applied to several typical damage cases and the results are discussed. The experimental results show that the algorithm can identify damage shapes and calculate damage mass with milligram precision, which is useful for evaluating the damage in further research.
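
    The volume computation described above can be illustrated by Delaunay-triangulating the (x, y) footprint of the damage region and treating each triangle as a prism whose height is the mean depth of its three vertices. The sketch below follows that scheme on a toy point cloud; the density value and the assumption that depths are already referenced to the nominal surface are illustrative, and the cited work additionally registers, differences and filters the point cloud first.

```python
import numpy as np
from scipy.spatial import Delaunay

def damage_volume_and_mass(points_xyz, density_g_per_mm3):
    """Approximate damage volume by triangular-prism elements.

    points_xyz : (N, 3) array; x, y are footprint coordinates (mm) and z is the
                 depth of the damaged surface below the nominal surface (mm, z >= 0).
    Each Delaunay triangle of the footprint contributes a prism of volume
    area * mean(z of its three vertices).
    """
    pts = np.asarray(points_xyz, dtype=float)
    tri = Delaunay(pts[:, :2])
    volume = 0.0
    for simplex in tri.simplices:
        a, b, c = pts[simplex]
        # Footprint area of the triangle from the 2D cross product.
        area = 0.5 * abs((b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]))
        volume += area * (a[2] + b[2] + c[2]) / 3.0
    return volume, volume * density_g_per_mm3

# Toy 1 mm x 1 mm patch with depths up to 0.05 mm; 7.85e-3 g/mm^3 ~ steel.
cloud = np.column_stack([np.random.rand(200), np.random.rand(200),
                         0.05 * np.random.rand(200)])
print(damage_volume_and_mass(cloud, density_g_per_mm3=7.85e-3))
```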

  10. Robotic tool positioning process using a multi-line off-axis laser triangulation sensor

    Science.gov (United States)

    Pinto, T. C.; Matos, G.

    2018-03-01

    Proper positioning of a friction stir welding head for pin insertion, driven by a closed chain robot, is important to ensure quality repair of cracks. A multi-line off-axis laser triangulation sensor was designed to be integrated to the robot, allowing relative measurements of the surface to be repaired. This work describes the sensor characteristics, its evaluation and the measurement process for tool positioning to a surface point of interest. The developed process uses a point of interest image and a measured point cloud to define the translation and rotation for tool positioning. Sensor evaluation and tests are described. Keywords: laser triangulation, 3D measurement, tool positioning, robotics.

  11. All roads lead to Rome - New search methods for the optimal triangulation problem

    Czech Academy of Sciences Publication Activity Database

    Ottosen, T. J.; Vomlel, Jiří

    2012-01-01

    Roč. 53, č. 9 (2012), s. 1350-1366 ISSN 0888-613X R&D Projects: GA MŠk 1M0572; GA ČR GEICC/08/E010; GA ČR GA201/09/1891 Grant - others:GA MŠk(CZ) 2C06019 Institutional support: RVO:67985556 Keywords : Bayesian networks * Optimal triangulation * Probabilistic inference * Cliques in a graph Subject RIV: BD - Theory of Information Impact factor: 1.729, year: 2012 http://library.utia.cas.cz/separaty/2012/MTR/vomlel-all roads lead to rome - new search methods for the optimal triangulation problem.pdf

  12. Stereo matching and view interpolation based on image domain triangulation.

    Science.gov (United States)

    Fickel, Guilherme Pinto; Jung, Claudio R; Malzbender, Tom; Samadani, Ramin; Culbertson, Bruce

    2013-09-01

    This paper presents a new approach for stereo matching and view interpolation problems based on triangular tessellations suitable for a linear array of rectified cameras. The domain of the reference image is initially partitioned into triangular regions using edge and scale information, aiming to place vertices along image edges and increase the number of triangles in textured regions. A region-based matching algorithm is then used to find an initial disparity for each triangle, and a refinement stage is applied to change the disparity at the vertices of the triangles, generating a piecewise linear disparity map. A simple post-processing procedure is applied to connect triangles with similar disparities generating a full 3D mesh related to each camera (view), which are used to generate new synthesized views along the linear camera array. With the proposed framework, view interpolation reduces to the trivial task of rendering polygonal meshes, which can be done very fast, particularly when GPUs are employed. Furthermore, the generated views are hole-free, unlike most point-based view interpolation schemes that require some kind of post-processing procedures to fill holes.
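
    The piecewise-linear disparity map described above is, inside each triangle, a barycentric interpolation of the vertex disparities. The sketch below reproduces that idea with scipy's LinearNDInterpolator over a Delaunay triangulation of the vertices, as a generic stand-in for the paper's edge- and scale-driven tessellation; vertex positions and disparities are illustrative.

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator

def piecewise_linear_disparity(vertices_xy, vertex_disparity, width, height):
    """Rasterize a piecewise-linear disparity map from per-vertex disparities.

    vertices_xy      : (V, 2) triangle-vertex positions in the reference image
    vertex_disparity : (V,) disparity assigned to each vertex
    Interpolation is barycentric within each Delaunay triangle of the vertices.
    """
    interp = LinearNDInterpolator(vertices_xy, vertex_disparity, fill_value=0.0)
    xs, ys = np.meshgrid(np.arange(width), np.arange(height))
    return interp(np.column_stack([xs.ravel(), ys.ravel()])).reshape(height, width)

verts = np.array([[0, 0], [63, 0], [0, 63], [63, 63], [32, 32]], dtype=float)
disp = np.array([10.0, 12.0, 11.0, 15.0, 20.0])
print(piecewise_linear_disparity(verts, disp, 64, 64).shape)  # -> (64, 64)
```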

  13. Theoretical triangulation as an approach for revealing the complexity of a classroom discussion

    NARCIS (Netherlands)

    van Drie, J.; Dekker, R.

    2013-01-01

    In this paper we explore the value of theoretical triangulation as a methodological approach for the analysis of classroom interaction. We analyze an excerpt of a whole-class discussion in history from three theoretical perspectives: interactivity of the discourse, conceptual level raising and

  14. An Array of Qualitative Data Analysis Tools: A Call for Data Analysis Triangulation

    Science.gov (United States)

    Leech, Nancy L.; Onwuegbuzie, Anthony J.

    2007-01-01

    One of the most important steps in the qualitative research process is analysis of data. The purpose of this article is to provide elements for understanding multiple types of qualitative data analysis techniques available and the importance of utilizing more than one type of analysis, thus utilizing data analysis triangulation, in order to…

  15. Feminist Approaches to Triangulation: Uncovering Subjugated Knowledge and Fostering Social Change in Mixed Methods Research

    Science.gov (United States)

    Hesse-Biber, Sharlene

    2012-01-01

    This article explores the deployment of triangulation in the service of uncovering subjugated knowledge and promoting social change for women and other oppressed groups. Feminist approaches to mixed methods praxis create a tight link between the research problem and the research design. An analysis of selected case studies of feminist praxis…

  16. Making the Most of Obesity Research: Developing Research and Policy Objectives through Evidence Triangulation

    Science.gov (United States)

    Oliver, Kathryn; Aicken, Catherine; Arai, Lisa

    2013-01-01

    Drawing lessons from research can help policy makers make better decisions. If a large and methodologically varied body of research exists, as with childhood obesity, this is challenging. We present new research and policy objectives for child obesity developed by triangulating user involvement data with a mapping study of interventions aimed at…

  17. The Marginalized "Model" Minority: An Empirical Examination of the Racial Triangulation of Asian Americans

    Science.gov (United States)

    Xu, Jun; Lee, Jennifer C.

    2013-01-01

    In this article, we propose a shift in race research from a one-dimensional hierarchical approach to a multidimensional system of racial stratification. Building upon Claire Kim's (1999) racial triangulation theory, we examine how the American public rates Asians relative to blacks and whites along two dimensions of racial stratification: racial…

  18. Triangulation and Mixed Methods Designs: Data Integration with New Research Technologies

    Science.gov (United States)

    Fielding, Nigel G.

    2012-01-01

    Data integration is a crucial element in mixed methods analysis and conceptualization. It has three principal purposes: illustration, convergent validation (triangulation), and the development of analytic density or "richness." This article discusses such applications in relation to new technologies for social research, looking at three…

  19. The Application of a Multiphase Triangulation Approach to Mixed Methods: The Research of an Aspiring School Principal Development Program

    Science.gov (United States)

    Youngs, Howard; Piggot-Irvine, Eileen

    2012-01-01

    Mixed methods research has emerged as a credible alternative to unitary research approaches. The authors show how a combination of a triangulation convergence model with a triangulation multilevel model was used to research an aspiring school principal development pilot program. The multilevel model is used to show the national and regional levels…

  20. Automated matching of corresponding seed images of three simulator radiographs to allow 3D triangulation of implanted seeds

    Science.gov (United States)

    Altschuler, Martin D.; Kassaee, Alireza

    1997-02-01

    To match corresponding seed images in different radiographs so that the 3D seed locations can be triangulated automatically and without ambiguity requires (at least) three radiographs taken from different perspectives, and an algorithm that finds the proper permutations of the seed-image indices. Matching corresponding images in only two radiographs introduces inherent ambiguities which can be resolved only with the use of non-positional information obtained with intensive human effort. Matching images in three or more radiographs is an `NP (nondeterministic polynomial time)-complete' problem. Although the matching problem is fundamental, current methods for three-radiograph seed-image matching use `local' (seed-by-seed) methods that may lead to incorrect matchings. We describe a permutation-sampling method which not only gives good `global' (full permutation) matches for the NP-complete three-radiograph seed-matching problem, but also determines the reliability of the radiographic data themselves, namely, whether the patient moved in the interval between radiographic perspectives.

  1. Surface meshing with curvature convergence

    KAUST Repository

    Li, Huibin; Zeng, Wei; Morvan, Jean-Marie; Chen, Liming; Gu, Xianfeng David

    2014-01-01

    Surface meshing plays a fundamental role in graphics and visualization. Many geometric processing tasks involve solving geometric PDEs on meshes. The numerical stability, convergence rates and approximation errors are largely determined by the mesh qualities. In practice, Delaunay refinement algorithms offer satisfactory solutions to high quality mesh generations. The theoretical proofs for volume based and surface based Delaunay refinement algorithms have been established, but those for conformal parameterization based ones remain wide open. This work focuses on the curvature measure convergence for the conformal parameterization based Delaunay refinement algorithms. Given a metric surface, the proposed approach triangulates its conformal uniformization domain by the planar Delaunay refinement algorithms, and produces a high quality mesh. We give explicit estimates for the Hausdorff distance, the normal deviation, and the differences in curvature measures between the surface and the mesh. In contrast to the conventional results based on volumetric Delaunay refinement, our stronger estimates are independent of the mesh structure and directly guarantee the convergence of curvature measures. Meanwhile, our result on Gaussian curvature measure is intrinsic to the Riemannian metric and independent of the embedding. In practice, our meshing algorithm is much easier to implement and much more efficient. The experimental results verified our theoretical results and demonstrated the efficiency of the meshing algorithm. © 2014 IEEE.

  2. Surface meshing with curvature convergence

    KAUST Repository

    Li, Huibin

    2014-06-01

    Surface meshing plays a fundamental role in graphics and visualization. Many geometric processing tasks involve solving geometric PDEs on meshes. The numerical stability, convergence rates and approximation errors are largely determined by the mesh qualities. In practice, Delaunay refinement algorithms offer satisfactory solutions to high quality mesh generations. The theoretical proofs for volume based and surface based Delaunay refinement algorithms have been established, but those for conformal parameterization based ones remain wide open. This work focuses on the curvature measure convergence for the conformal parameterization based Delaunay refinement algorithms. Given a metric surface, the proposed approach triangulates its conformal uniformization domain by the planar Delaunay refinement algorithms, and produces a high quality mesh. We give explicit estimates for the Hausdorff distance, the normal deviation, and the differences in curvature measures between the surface and the mesh. In contrast to the conventional results based on volumetric Delaunay refinement, our stronger estimates are independent of the mesh structure and directly guarantee the convergence of curvature measures. Meanwhile, our result on Gaussian curvature measure is intrinsic to the Riemannian metric and independent of the embedding. In practice, our meshing algorithm is much easier to implement and much more efficient. The experimental results verified our theoretical results and demonstrated the efficiency of the meshing algorithm. © 2014 IEEE.
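
    In the discrete setting, the Gaussian curvature measure discussed above is concentrated at vertices as angle deficits (2π minus the sum of the incident triangle angles). The sketch below shows that standard computation and checks Gauss-Bonnet on a tetrahedron; it is illustrative only and not the authors' code.

```python
import numpy as np

def angle_deficits(vertices, faces):
    """Discrete Gaussian curvature per vertex as the angle deficit
    2*pi - sum of incident triangle angles (boundary effects ignored)."""
    vertices = np.asarray(vertices, dtype=float)
    deficits = np.full(len(vertices), 2.0 * np.pi)
    for f in np.asarray(faces):
        for k in range(3):
            p = vertices[f[k]]
            u = vertices[f[(k + 1) % 3]] - p
            v = vertices[f[(k + 2) % 3]] - p
            cos_a = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
            deficits[f[k]] -= np.arccos(np.clip(cos_a, -1.0, 1.0))
    return deficits

# Regular tetrahedron: total deficit should equal 4*pi (Gauss-Bonnet for a sphere).
verts = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]], dtype=float)
faces = [[0, 1, 2], [0, 1, 3], [0, 2, 3], [1, 2, 3]]
print(np.sum(angle_deficits(verts, faces)) / np.pi)  # -> 4.0
```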

  3. Triangulation of the monophasic action potential causes flattening of the electrocardiographic T-wave

    DEFF Research Database (Denmark)

    Bhuiyan, Tanveer Ahmed; Graff, Claus; Thomsen, Morten Bækgaard

    2012-01-01

    It has been proposed that triangulation on the cardiac action potential manifests as a broadened, more flat and notched T-wave on the ECG but to what extent such morphology characteristics are indicative of triangulation is more unclear. In this paper, we have analyzed the morphological changes of the action potential under the effect of the IKr blocker sertindole and associated these changes to concurrent changes in the morphology of electrocardiographic T-waves in dogs. We show that, under the effect of sertindole, the peak changes in the morphology of action potentials occur at time points similar to those observed for the peak changes in T-wave morphology on the ECG. We further show that the association between action potential shape and ECG shape is dose-dependent and most prominent at the time corresponding to phase 3 of the action potential.

  4. Quantum triangulations moduli space, quantum computing, non-linear sigma models and Ricci flow

    CERN Document Server

    Carfora, Mauro

    2017-01-01

    This book discusses key conceptual aspects and explores the connection between triangulated manifolds and quantum physics, using a set of case studies ranging from moduli space theory to quantum computing to provide an accessible introduction to this topic. Research on polyhedral manifolds often reveals unexpected connections between very distinct aspects of mathematics and physics. In particular, triangulated manifolds play an important role in settings such as Riemann moduli space theory, strings and quantum gravity, topological quantum field theory, condensed matter physics, critical phenomena and complex systems. Not only do they provide a natural discrete analogue to the smooth manifolds on which physical theories are typically formulated, but their appearance is also often a consequence of an underlying structure that naturally calls into play non-trivial aspects of representation theory, complex analysis and topology in a way that makes the basic geometric structures of the physical interactions involv...

  5. (2+1)-dimensional quantum gravity as the continuum limit of causal dynamical triangulations

    International Nuclear Information System (INIS)

    Benedetti, D.; Loll, R.; Zamponi, F.

    2007-01-01

    We perform a nonperturbative sum over geometries in a (2+1)-dimensional quantum gravity model given in terms of causal dynamical triangulations. Inspired by the concept of triangulations of product type introduced previously, we impose an additional notion of order on the discrete, causal geometries. This simplifies the combinatorial problem of counting geometries just enough to enable us to calculate the transfer matrix between boundary states labeled by the area of the spatial universe, as well as the corresponding quantum Hamiltonian of the continuum theory. This is the first time in dimension larger than 2 that a Hamiltonian has been derived from such a model by mainly analytical means, and it opens the way for a better understanding of scaling and renormalization issues

  6. Depth measurements of drilled holes in bone by laser triangulation for the field of oral implantology

    Science.gov (United States)

    Quest, D.; Gayer, C.; Hering, P.

    2012-01-01

    Laser osteotomy is one possible method of preparing beds for dental implants in the human jaw. A major problem in using this contactless treatment modality is the lack of haptic feedback to control the depth while drilling the implant bed. A contactless measurement system called laser triangulation is presented as a new procedure to overcome this problem. Together with a tomographic picture the actual position of the laser ablation in the bone can be calculated. Furthermore, the laser response is sufficiently fast as to pose little risk to surrounding sensitive areas such as nerves and blood vessels. In the jaw two different bone structures exist, namely the cancellous bone and the compact bone. Samples of both bone structures were examined with test drillings performed either by laser osteotomy or by a conventional rotating drilling tool. The depth of these holes was measured using laser triangulation. The results and the setup are reported in this study.
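
    For orientation, the depth read-out in a simple triangulation arrangement follows from similar triangles: with a baseline b between the laser axis and the camera lens, a lens of focal length f and the laser spot imaged at lateral offset x on the sensor, the distance is approximately z = f·b/x in a parallel-axis geometry. The sketch below only illustrates this generic relation; the numbers are assumptions and do not come from the study.

        # Generic parallel-axis laser triangulation relation z = f * b / x.
        # All values are illustrative assumptions.
        f = 25e-3      # focal length of the imaging lens [m]
        b = 40e-3      # baseline between laser axis and lens centre [m]
        x = 0.8e-3     # lateral offset of the imaged spot on the sensor [m]

        z = f * b / x
        print(f"estimated distance: {z * 1e3:.1f} mm")   # 1250.0 mm here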

  7. GEOPOSITIONING PRECISION ANALYSIS OF MULTIPLE IMAGE TRIANGULATION USING LRO NAC LUNAR IMAGES

    Directory of Open Access Journals (Sweden)

    K. Di

    2016-06-01

    Full Text Available This paper presents an empirical analysis of the geopositioning precision of multiple image triangulation using Lunar Reconnaissance Orbiter Camera (LROC) Narrow Angle Camera (NAC) images at the Chang’e-3 (CE-3) landing site. Nine LROC NAC images are selected for comparative analysis of geopositioning precision. Rigorous sensor models of the images are established based on collinearity equations with interior and exterior orientation elements retrieved from the corresponding SPICE kernels. Rational polynomial coefficients (RPCs) of each image are derived by least squares fitting using a vast number of virtual control points generated according to the rigorous sensor models. Experiments with different combinations of images are performed for comparison. The results demonstrate that the plane coordinates can achieve a precision of 0.54 m to 2.54 m, with a height precision of 0.71 m to 8.16 m, when only two images are used for three-dimensional triangulation. There is a general trend that the geopositioning precision, especially the height precision, improves as the convergence angle of the two images increases from several degrees to about 50°. However, the image matching precision should also be taken into consideration when choosing image pairs for triangulation. The precision using all 9 images is 0.60 m, 0.50 m and 1.23 m in the along-track, cross-track and height directions, respectively, which is better than for most combinations of two or more images. However, triangulation with fewer, carefully selected images can produce better precision than using all the images.
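
    The core geometric step in multi-image triangulation is the least-squares intersection of image rays in object space. The sketch below shows that step in isolation, with made-up camera centres and ray directions; it does not use the rigorous LROC NAC sensor models or the RPCs discussed above.

        import numpy as np

        def intersect_rays(origins, directions):
            """Least-squares intersection of 3-D rays: point closest to all rays."""
            A = np.zeros((3, 3))
            b = np.zeros(3)
            for c, d in zip(origins, directions):
                d = d / np.linalg.norm(d)
                P = np.eye(3) - np.outer(d, d)   # projector orthogonal to the ray
                A += P
                b += P @ c
            return np.linalg.solve(A, b)

        # Illustrative two-ray example (assumed geometry, not LROC NAC data).
        origins = np.array([[0.0, 0.0, 100.0], [50.0, 0.0, 100.0]])
        target = np.array([20.0, 10.0, 0.0])
        directions = target - origins
        print(intersect_rays(origins, directions))   # ~[20, 10, 0]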

  8. Public health triangulation: approach and application to synthesizing data to understand national and local HIV epidemics

    Directory of Open Access Journals (Sweden)

    Aberle-Grasse John

    2010-07-01

    Full Text Available Abstract Background Public health triangulation is a process for reviewing, synthesising and interpreting secondary data from multiple sources that bear on the same question to make public health decisions. It can be used to understand the dynamics of HIV transmission and to measure the impact of public health programs. While traditional intervention research and meta-analysis would be ideal sources of information for public health decision making, they are infrequently available, and often decisions can be based only on surveillance and survey data. Methods The process involves examination of a wide variety of data sources, including biological, behavioral and program data, and seeks input from stakeholders to formulate meaningful public health questions. Finally and most importantly, it uses the results to inform public health decision-making. There are 12 discrete steps in the triangulation process, which include identification and assessment of key questions, identification of data sources, refining questions, gathering data and reports, assessing the quality of those data and reports, formulating hypotheses to explain trends in the data, corroborating or refining working hypotheses, drawing conclusions, communicating results and recommendations and taking public health action. Results Triangulation can be limited by the quality of the original data, the potential for ecological fallacy and "data dredging", and the reproducibility of results. Conclusions Nonetheless, we believe that public health triangulation allows for the interpretation of data sets that cannot be analyzed using meta-analysis and can be a helpful adjunct to surveillance, to formal public health intervention research and to monitoring and evaluation, which in turn lead to improved national strategic planning and resource allocation.

  9. Spectral triangulation molecular contrast optical coherence tomography with indocyanine green as the contrast agent

    OpenAIRE

    Yang, Changhuei; McGuckin, Laura E. L.; Simon, John D.; Choma, Michael A.; Applegate, Brian E.; Izatt, Joseph A.

    2004-01-01

    We report a new molecular contrast optical coherence tomography (MCOCT) implementation that profiles the contrast agent distribution in a sample by measuring the agent's spectral differential absorption. The method, spectral triangulation MCOCT, can effectively suppress contributions from spectrally dependent scattering from the sample without a priori knowledge of the scattering properties. We demonstrate molecular imaging with this new MCOCT modality by mapping the distribution of indocyani...

  10. Relating covariant and canonical approaches to triangulated models of quantum gravity

    International Nuclear Information System (INIS)

    Arnsdorf, Matthias

    2002-01-01

    In this paper we explore the relation between covariant and canonical approaches to quantum gravity and BF theory. We will focus on the dynamical triangulation and spin-foam models, which have in common that they can be defined in terms of sums over spacetime triangulations. Our aim is to show how we can recover these covariant models from a canonical framework by providing two regularizations of the projector onto the kernel of the Hamiltonian constraint. This link is important for the understanding of the dynamics of quantum gravity. In particular, we will see how in the simplest dynamical triangulation model we can recover the Hamiltonian constraint via our definition of the projector. Our discussion of spin-foam models will show how the elementary spin-network moves in loop quantum gravity, which were originally assumed to describe the Hamiltonian constraint action, are in fact related to the time-evolution generated by the constraint. We also show that the Immirzi parameter is important for the understanding of a continuum limit of the theory

  11. Shared decision-making in medical encounters regarding breast cancer treatment: the contribution of methodological triangulation.

    Science.gov (United States)

    Durif-Bruckert, C; Roux, P; Morelle, M; Mignotte, H; Faure, C; Moumjid-Ferdjaoui, N

    2015-07-01

    The aim of this study on shared decision-making in the doctor-patient encounter about surgical treatment for early-stage breast cancer, conducted in a regional cancer centre in France, was to further the understanding of patient perceptions on shared decision-making. The study used methodological triangulation to collect data (both quantitative and qualitative) about patient preferences in the context of a clinical consultation in which surgeons followed a shared decision-making protocol. Data were analysed from a multi-disciplinary research perspective (social psychology and health economics). The triangulated data collection methods were questionnaires (n = 132), longitudinal interviews (n = 47) and observations of consultations (n = 26). Methodological triangulation revealed levels of divergence and complementarity between qualitative and quantitative results that suggest new perspectives on the three inter-related notions of decision-making, participation and information. Patients' responses revealed important differences between shared decision-making and participation per se. The authors note that subjecting patients to a normative behavioural model of shared decision-making in an era when paradigms of medical authority are shifting may undermine the patient's quest for what he or she believes is a more important right: a guarantee of the best care available. © 2014 John Wiley & Sons Ltd.

  12. Triangulation of written assessments from patients, teachers and students: useful for students and teachers?

    Science.gov (United States)

    Gran, Sarah Frandsen; Braend, Anja Maria; Lindbaek, Morten

    2010-01-01

    Many medical students in general practice clerkships experience lack of observation-based feedback. The StudentPEP project combined written feedback from patients, observing teachers and students. This study analyzes the perceived usefulness of triangulated written feedback. A total of 71 general practitioners and 79 medical students at the University of Oslo completed project evaluation forms after a 6-week clerkship. A principal component analysis was performed to find structures within the questionnaire. Regression analysis was performed regarding students' answers to whether StudentPEP was worthwhile. Free-text answers were analyzed qualitatively. Student and teacher responses were mixed within six subscales, with highest agreement on 'Teachers oral and written feedback' and 'Attitude to patient evaluation'. Fifty-four per cent of the students agreed that the triangulation gave concrete feedback on their weaknesses, and 59% valued the teachers' feedback provided. Two statements regarding the teacher's attitudes towards StudentPEP were significantly associated with the student's perception of worthwhileness. Qualitative analysis showed that patient evaluations were encouraging or distrusted. Some students thought that StudentPEP ensured observation and feedback. The patient evaluations increased the students' awareness of the patient perspective. A majority of the students considered the triangulated written feedback beneficial, although time-consuming. The teacher's attitudes strongly influenced how the students perceived the usefulness of StudentPEP.

  13. Chromatic polynomials of planar triangulations, the Tutte upper bound and chromatic zeros

    International Nuclear Information System (INIS)

    Shrock, Robert; Xu Yan

    2012-01-01

    Tutte proved that if G_pt is a planar triangulation and P(G_pt, q) is its chromatic polynomial, then |P(G_pt, τ + 1)| ⩽ (τ − 1)^(n−5), where τ = (1+√5)/2 and n is the number of vertices in G_pt. Here we study the ratio r(G_pt) = |P(G_pt, τ + 1)|/(τ − 1)^(n−5) for a variety of planar triangulations. We construct infinite recursive families of planar triangulations G_pt,m depending on a parameter m linearly related to n and show that if P(G_pt,m, q) only involves a single power of a polynomial, then r(G_pt,m) approaches zero exponentially fast as n → ∞. We also construct infinite recursive families for which P(G_pt,m, q) is a sum of powers of certain functions and show that for these, r(G_pt,m) may approach a finite nonzero constant as n → ∞. The connection between the Tutte upper bound and the observed chromatic zero(s) near to τ + 1 is investigated. We report the first known graph for which the zero(s) closest to τ + 1 is not real, but instead is a complex-conjugate pair. Finally, we discuss connections with the nonzero ground-state entropy of the Potts antiferromagnet on these families of graphs. (paper)
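
    As an illustrative sanity check (not part of the paper's constructions), the bound and the ratio r can be evaluated for the smallest planar triangulation K4, whose chromatic polynomial is q(q−1)(q−2)(q−3):

        import math

        tau = (1 + math.sqrt(5)) / 2            # golden ratio
        q = tau + 1
        n = 4                                    # K4: smallest planar triangulation

        P = q * (q - 1) * (q - 2) * (q - 3)      # chromatic polynomial of K4 at tau+1
        bound = (tau - 1) ** (n - 5)             # Tutte's upper bound
        r = abs(P) / bound

        print(f"|P(K4, tau+1)| = {abs(P):.6f}")  # 1.000000 (exactly 1)
        print(f"Tutte bound    = {bound:.6f}")   # 1.618034
        print(f"r(K4)          = {r:.6f}")       # 0.618034 <= 1, consistent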

  14. Interprofessional collaboration from nurses and physicians – A triangulation of quantitative and qualitative data

    Science.gov (United States)

    Schärli, Marianne; Müller, Rita; Martin, Jacqueline S; Spichiger, Elisabeth; Spirig, Rebecca

    2017-01-01

    Background: Interprofessional collaboration between nurses and physicians is a recurrent challenge in daily clinical practice. To ameliorate the situation, quantitative or qualitative studies are conducted. However, the results of these studies have often been limited by the methods chosen. Aim: To describe the synthesis of interprofessional collaboration from the nursing perspective by triangulating quantitative and qualitative data. Method: Data triangulation was performed as a sub-project of the interprofessional Sinergia DRG Research program. Initially, quantitative and qualitative data were analyzed separately in a mixed methods design. By means of triangulation a „meta-matrix“ resulted in a four-step process. Results: The „meta-matrix“ displays all relevant quantitative and qualitative results as well as their interrelations on one page. Relevance, influencing factors as well as consequences of interprofessional collaboration for patients, relatives and systems become visible. Conclusion: For the first time, the interprofessional collaboration from the nursing perspective at five Swiss hospitals is shown in a „meta-matrix“. The consequences of insufficient collaboration between nurses and physicians are considerable. This is why it’s necessary to invest in interprofessional concepts. In the „meta-matrix“ the factors which influence the interprofessional collaboration positively or negatively are visible.

  15. Large N Limits in Tensor Models: Towards More Universality Classes of Colored Triangulations in Dimension d≥2

    Science.gov (United States)

    Bonzom, Valentin

    2016-07-01

    We review an approach which aims at studying discrete (pseudo-)manifolds in dimension d≥ 2 and called random tensor models. More specifically, we insist on generalizing the two-dimensional notion of p-angulations to higher dimensions. To do so, we consider families of triangulations built out of simplices with colored faces. Those simplices can be glued to form new building blocks, called bubbles which are pseudo-manifolds with boundaries. Bubbles can in turn be glued together to form triangulations. The main challenge is to classify the triangulations built from a given set of bubbles with respect to their numbers of bubbles and simplices of codimension two. While the colored triangulations which maximize the number of simplices of codimension two at fixed number of simplices are series-parallel objects called melonic triangulations, this is not always true anymore when restricting attention to colored triangulations built from specific bubbles. This opens up the possibility of new universality classes of colored triangulations. We present three existing strategies to find those universality classes. The first two strategies consist in building new bubbles from old ones for which the problem can be solved. The third strategy is a bijection between those colored triangulations and stuffed, edge-colored maps, which are some sort of hypermaps whose hyperedges are replaced with edge-colored maps. We then show that the present approach can lead to enumeration results and identification of universality classes, by working out the example of quartic tensor models. They feature a tree-like phase, a planar phase similar to two-dimensional quantum gravity and a phase transition between them which is interpreted as a proliferation of baby universes. While this work is written in the context of random tensors, it is almost exclusively of combinatorial nature and we hope it is accessible to interested readers who are not familiar with random matrices, tensors and quantum

  16. Non-degenerated Ground States and Low-degenerated Excited States in the Antiferromagnetic Ising Model on Triangulations

    Science.gov (United States)

    Jiménez, Andrea

    2014-02-01

    We study the unexpected asymptotic behavior of the degeneracy of the first few energy levels in the antiferromagnetic Ising model on triangulations of closed Riemann surfaces. There are strong mathematical and physical reasons to expect that the number of ground states (i.e., degeneracy) of the antiferromagnetic Ising model on the triangulations of a fixed closed Riemann surface is exponential in the number of vertices. In the set of plane triangulations, the degeneracy equals the number of perfect matchings of the geometric duals, and thus it is exponential by a recent result of Chudnovsky and Seymour. From the physics point of view, antiferromagnetic triangulations are geometrically frustrated systems, and in such systems exponential degeneracy is predicted. We present results that contradict these predictions. We prove that for each closed Riemann surface S of positive genus, there are sequences of triangulations of S with exactly one ground state. One possible explanation of this phenomenon is that exponential degeneracy would be found in the excited states with energy close to the ground state energy. However, as our second result, we show the existence of a sequence of triangulations of a closed Riemann surface of genus 10 with exactly one ground state such that the degeneracy of each of the 1st, 2nd, 3rd and 4th excited energy levels belongs to O(n), O(n²), O(n³) and O(n⁴), respectively.

  17. The relationships between stressful life events during childhood and differentiation of self and intergenerational triangulation in adulthood.

    Science.gov (United States)

    Peleg, Ora

    2014-12-01

    This study examined the relationships between stressful life events in childhood and differentiation of self and intergenerational triangulation in adulthood. The sample included 217 students (173 females and 44 males) from a college in northern Israel. Participants completed the Hebrew versions of Life Events Checklist (LEC), Differentiation of Self Inventory-Revised (DSI-R) and intergenerational triangulation (INTRI). The main findings were that levels of stressful life events during childhood and adolescence among both genders were positively correlated with the levels of fusion with others and intergenerational triangulation. The levels of positive life events were negatively related to levels of emotional reactivity, emotional cut-off and intergenerational triangulation. Levels of stressful life events in females were positively correlated with emotional reactivity. Intergenerational triangulation was correlated with emotional reactivity, emotional cut-off, fusion with others and I-position. Findings suggest that families that experience higher levels of stressful life events may be at risk for higher levels of intergenerational triangulation and lower levels of differentiation of self. © 2014 International Union of Psychological Science.

  18. Measuring teamwork in primary care: Triangulation of qualitative and quantitative data.

    Science.gov (United States)

    Brown, Judith Belle; Ryan, Bridget L; Thorpe, Cathy; Markle, Emma K R; Hutchison, Brian; Glazier, Richard H

    2015-09-01

    This article describes the triangulation of qualitative dimensions, reflecting high functioning teams, with the results of standardized teamwork measures. The study used a mixed methods design using qualitative and quantitative approaches to assess teamwork in 19 Family Health Teams in Ontario, Canada. This article describes dimensions from the qualitative phase using grounded theory to explore the issues and challenges to teamwork. Two quantitative measures were used in the study, the Team Climate Inventory (TCI) and the Providing Effective Resources and Knowledge (PERK) scale. For the triangulation analysis, the mean scores of these measures were compared with the qualitatively derived ratings for the dimensions. The final sample for the qualitative component was 107 participants. The qualitative analysis identified 9 dimensions related to high team functioning such as common philosophy, scope of practice, conflict resolution, change management, leadership, and team evolution. From these dimensions, teams were categorized numerically as high, moderate, or low functioning. Three hundred seventeen team members completed the survey measures. Mean site scores for the TCI and PERK were 3.87 and 3.88, respectively (of 5). The TCI was associated will all dimensions except for team location, space allocation, and executive director leadership. The PERK was associated with all dimensions except team location. Data triangulation provided qualitative and quantitative evidence of what constitutes teamwork. Leadership was pivotal in forging a common philosophy and encouraging team collaboration. Teams used conflict resolution strategies and adapted to the changes they encountered. These dimensions advanced the team's evolution toward a high functioning team. (c) 2015 APA, all rights reserved).

  19. Lymphoscintigraphy and triangulated body marking for morbidity reduction during sentinel node biopsy in breast cancer.

    Science.gov (United States)

    Krynyckyi, Borys R; Shafir, Michail K; Kim, Suk Chul; Kim, Dong Wook; Travis, Arlene; Moadel, Renee M; Kim, Chun K

    2005-11-08

    Current trends in patient care include the desire for minimizing invasiveness of procedures and interventions. This aim is reflected in the increasing utilization of sentinel lymph node biopsy, which results in a lower level of morbidity in breast cancer staging, in comparison to extensive conventional axillary dissection. Optimized lymphoscintigraphy with triangulated body marking is a clinical option that can further reduce morbidity, more than when a hand-held gamma probe alone is utilized. Unfortunately it is often either overlooked or not fully understood, and thus not utilized. This results in the unnecessary loss of an opportunity to further reduce morbidity. Optimized lymphoscintigraphy and triangulated body marking provide a detailed 3-dimensional map of the number and location of the sentinel nodes, available before the first incision is made. The number and location of the nodes, and their relevance based on the time/sequence of their appearance, can all influence 1) where the incision is made, 2) how extensive the dissection is, and 3) how many nodes are removed. In addition, complex patterns can arise from injections. These include prominent lymphatic channels, pseudo-sentinel nodes, echelon and reverse echelon nodes and even contamination, which are much more difficult to access with the probe alone. With the detailed information provided by optimized lymphoscintigraphy and triangulated body marking, the surgeon can approach the axilla in a more enlightened fashion, in contrast to when the less informed probe-only method is used. This allows for better planning, resulting in the best cosmetic effect and less trauma to the tissues, further reducing morbidity while maintaining adequate sampling of the sentinel node(s).

  20. Scaling analyses of the spectral dimension in 3-dimensional causal dynamical triangulations

    Science.gov (United States)

    Cooperman, Joshua H.

    2018-05-01

    The spectral dimension measures the dimensionality of a space as witnessed by a diffusing random walker. Within the causal dynamical triangulations approach to the quantization of gravity (Ambjørn et al 2000 Phys. Rev. Lett. 85 347, 2001 Nucl. Phys. B 610 347, 1998 Nucl. Phys. B 536 407), the spectral dimension exhibits novel scale-dependent dynamics: reducing towards a value near 2 on sufficiently small scales, matching closely the topological dimension on intermediate scales, and decaying in the presence of positive curvature on sufficiently large scales (Ambjørn et al 2005 Phys. Rev. Lett. 95 171301, Ambjørn et al 2005 Phys. Rev. D 72 064014, Benedetti and Henson 2009 Phys. Rev. D 80 124036, Cooperman 2014 Phys. Rev. D 90 124053, Cooperman et al 2017 Class. Quantum Grav. 34 115008, Coumbe and Jurkiewicz 2015 J. High Energy Phys. JHEP03(2015)151, Kommu 2012 Class. Quantum Grav. 29 105003). I report the first comprehensive scaling analysis of the small-to-intermediate scale spectral dimension for the test case of the causal dynamical triangulations of 3-dimensional Einstein gravity. I find that the spectral dimension scales trivially with the diffusion constant. I find that the spectral dimension is completely finite in the infinite volume limit, and I argue that its maximal value is exactly consistent with the topological dimension of 3 in this limit. I find that the spectral dimension reduces further towards a value near 2 as this case’s bare coupling approaches its phase transition, and I present evidence against the conjecture that the bare coupling simply sets the overall scale of the quantum geometry (Ambjørn et al 2001 Phys. Rev. D 64 044011). On the basis of these findings, I advance a tentative physical explanation for the dynamical reduction of the spectral dimension observed within causal dynamical triangulations: branched polymeric quantum geometry on sufficiently small scales. My analyses should facilitate attempts to employ the spectral
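
    For orientation, the spectral dimension is extracted from the return probability P(σ) of a random walk via d_s(σ) = −2 d ln P(σ)/d ln σ. The toy sketch below estimates it for a flat 2-dimensional lattice, where d_s ≈ 2 is expected; it only illustrates the definition and has nothing to do with the causal dynamical triangulation ensembles analyzed in the paper.

        import numpy as np

        rng = np.random.default_rng(1)
        walkers, steps = 200_000, 200
        moves = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]])

        pos = np.zeros((walkers, 2), dtype=np.int64)
        sigmas, returns = [], []
        for sigma in range(1, steps + 1):
            pos += moves[rng.integers(0, 4, size=walkers)]
            if sigma % 2 == 0:                    # returns only possible at even times
                sigmas.append(sigma)
                returns.append(np.mean(np.all(pos == 0, axis=1)))

        sigmas, returns = np.array(sigmas), np.array(returns)
        mask = returns > 0
        slope = np.polyfit(np.log(sigmas[mask]), np.log(returns[mask]), 1)[0]
        print("estimated spectral dimension d_s ~", -2 * slope)   # expect ~2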

  1. Modification of the laser triangulation method for measuring the thickness of optical layers

    Science.gov (United States)

    Khramov, V. N.; Adamov, A. A.

    2018-04-01

    The problem of determining the thickness of thin films by the method of laser triangulation is considered. An expression is derived relating the film thickness to the distance between the focused beams on the photodetector. The chosen method is applicable to thicknesses in the range [0.1; 1] mm. Two individual light marks could be resolved for a minimum film thickness of 0.23 mm, and with computer processing of the photographs a resolution of 0.10 mm was achieved. The obtained results can be used in ophthalmology for express diagnostics during surgical operations on the corneal layer.

  2. 1:500 Scale Aerial Triangulation Test with Unmanned Airship in Hubei Province

    International Nuclear Information System (INIS)

    Feifei, Xie; Zongjian, Lin; Dezhu, Gui

    2014-01-01

    A new UAVS (Unmanned Aerial Vehicle System) for low-altitude aerial photogrammetry, intended for fine surveying and mapping, is introduced; it comprises the airship platform, a four-combined wide-angle camera sensor system, and the photogrammetry software MAP-AT. Based on a test of this system in Hubei province, covering the working condition of the airship, the quality of the image data and the data processing report, it is demonstrated that this low-altitude aerial photogrammetric system meets the precision requirements of 1:500 scale aerial triangulation. This work provides a possibility for fine surveying and mapping.

  3. First Instances of Generalized Expo-Rational Finite Elements on Triangulations

    Science.gov (United States)

    Dechevsky, Lubomir T.; Zanaty, Peter; Lakså, Arne; Bang, Børre

    2011-12-01

    In this communication we consider a construction of simplicial finite elements on triangulated two-dimensional polygonal domains. This construction is, in some sense, dual to the construction of generalized expo-rational B-splines (GERBS). The main result is the construction of new polynomial simplicial patches of the first several lowest possible total polynomial degrees which exhibit Hermite interpolatory properties. The derivation of these results is based on the theory of piecewise polynomial GERBS called Euler Beta-function B-splines. We also provide 3-dimensional visualization of the graphs of the new polynomial simplicial patches and their control polygons.

  4. Summing Feynman graphs by Monte Carlo: Planar φ3-theory and dynamically triangulated random surfaces

    International Nuclear Information System (INIS)

    Boulatov, D.V.

    1988-01-01

    New combinatorial identities are suggested relating the ratio of (n-1)th and nth orders of (planar) perturbation expansion for any quantity to some average over the ensemble of all planar graphs of the nth order. These identities are used for Monte Carlo calculation of critical exponents γ_str (string susceptibility) in planar φ³-theory and in the dynamically triangulated random surface (DTRS) model near the convergence circle for various dimensions. In the solvable case D=1 the exact critical properties of the theory are reproduced numerically. (orig.)

  5. Development of the delayed-neutron triangulation technique for locating failed fuel in LMFBR

    International Nuclear Information System (INIS)

    Kryter, R.C.

    1975-01-01

    Two major accomplishments of the ORNL delayed neutron triangulation program are (1) an analysis of anticipated detector counting rates and sensitivities to unclad fuel and erosion types of pin failure, and (2) an experimental assessment of the accuracy with which the position of failed fuel can be determined in the FFTF (this was performed in a quarter-scale water mockup of realistic outlet plenum geometry using electrolyte injections and conductivity cells to simulate delayed-neutron precursor releases and detections, respectively). The major results and conclusions from these studies are presented, along with plans for further DNT development work at ORNL for the FFTF and CRBR. (author)

  6. Triangulating laser profilometer as a navigational aid for the blind: optical aspects

    Science.gov (United States)

    Farcy, R.; Denise, B.; Damaschini, R.

    1996-03-01

    We propose a navigational aid approach for the blind that relies on active optical profilometry with real-time electrotactile interfacing on the skin. Here we are concerned with the optical parts of this system. We point out the particular requirements the profilometer must satisfy to meet the needs of blind people. We show experimentally that an adequate compromise is possible, consisting of a compact class I IR laser-diode triangulation profilometer with the following angular resolution, a 20-ms acquisition time per distance measurement, and a 60-degree angular scanning field.

  7. An improved triangulation laser rangefinder using a custom CMOS HDR linear image sensor

    Science.gov (United States)

    Liscombe, Michael

    3-D triangulation laser rangefinders are used in many modern applications, from terrain mapping to biometric identification. Although a wide variety of designs have been proposed, laser speckle noise still provides a fundamental limitation on range accuracy. This work proposes a new triangulation laser rangefinder designed specifically to mitigate the effects of laser speckle noise. The proposed rangefinder uses a precision linear translator to laterally reposition the imaging system (e.g., image sensor and imaging lens). For a given spatial location of the laser spot, capturing N spatially uncorrelated laser spot profiles is shown to improve range accuracy by a factor of √N. This technique has many advantages over past speckle-reduction technologies, such as a fixed system cost and form factor, and the ability to virtually eliminate laser speckle noise. These advantages are made possible through spatial diversity and come at the cost of increased acquisition time. The rangefinder makes use of the ICFYKWG1 linear image sensor, a custom CMOS sensor developed at the Vision Sensor Laboratory (York University). Tests are performed on the image sensor's innovative high dynamic range technology to determine its effects on range accuracy. As expected, experimental results have shown that the sensor provides a trade-off between dynamic range and range accuracy.
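
    The speckle-mitigation argument, that averaging N spatially uncorrelated spot measurements shrinks the range error by roughly √N, can be illustrated with a small Monte Carlo sketch; zero-mean Gaussian noise stands in for speckle, and every number is an assumption rather than a property of the ICFYKWG1 sensor.

        import numpy as np

        rng = np.random.default_rng(2)
        true_range = 1.000                 # metres (assumed)
        sigma_single = 0.5e-3              # single-shot speckle-induced error (assumed)
        trials = 20_000

        for N in (1, 4, 16, 64):
            # Average N spatially uncorrelated spot-position estimates per trial.
            estimates = true_range + sigma_single * rng.standard_normal((trials, N))
            err = np.std(estimates.mean(axis=1))
            print(f"N={N:3d}  range error ~ {err*1e3:.3f} mm  "
                  f"(sqrt(N) prediction {sigma_single/np.sqrt(N)*1e3:.3f} mm)")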

  8. Indirect measurement of molten steel level in tundish based on laser triangulation

    Energy Technology Data Exchange (ETDEWEB)

    Su, Zhiqi; He, Qing, E-mail: heqing@ise.neu.edu.cn; Xie, Zhi [State Key Laboratory of Synthetical Automation for Process Industries, School of Information Science and Engineering, Northeastern University, Shenyang 110819 (China)

    2016-03-15

    For real-time and precise measurement of the molten steel level in the tundish during continuous casting, the slag level and slag thickness are needed. Of these, the problem of slag thickness measurement was solved in our previous work. In this paper, a systematic solution for slag level measurement based on laser triangulation is proposed. In contrast to traditional laser triangulation, several measures are taken to ensure measuring precision and robustness. First, a laser line is adopted for multi-position measurement to overcome the deficiency of a single-point laser range finder caused by the uneven surface of the slag. Second, the key parameters, such as the installation angle and the minimum required laser power, are analyzed and determined based on gray-body radiation theory to fulfill the rigorous requirement of measurement accuracy. Third, two kinds of severe noise in the acquired images, caused by heat radiation and Electro-Magnetic Interference (EMI), are removed via the morphological characteristics of the liquid slag and the color difference between the EMI and the laser signals, respectively. Fourth, as false targets created by stationary slag usually disturb the measurement, valid signals of the slag are distinguished from the false ones to calculate the slag level. The molten steel level is then obtained by subtracting the slag thickness from the slag level. The measuring error of this solution, verified in applications in steel plants, is ±2.5 mm during steady casting and ±3.2 mm at the end of casting.

  9. Indirect measurement of molten steel level in tundish based on laser triangulation

    Science.gov (United States)

    Su, Zhiqi; He, Qing; Xie, Zhi

    2016-03-01

    For real-time and precise measurement of the molten steel level in the tundish during continuous casting, the slag level and slag thickness are needed. Of these, the problem of slag thickness measurement was solved in our previous work. In this paper, a systematic solution for slag level measurement based on laser triangulation is proposed. In contrast to traditional laser triangulation, several measures are taken to ensure measuring precision and robustness. First, a laser line is adopted for multi-position measurement to overcome the deficiency of a single-point laser range finder caused by the uneven surface of the slag. Second, the key parameters, such as the installation angle and the minimum required laser power, are analyzed and determined based on gray-body radiation theory to fulfill the rigorous requirement of measurement accuracy. Third, two kinds of severe noise in the acquired images, caused by heat radiation and Electro-Magnetic Interference (EMI), are removed via the morphological characteristics of the liquid slag and the color difference between the EMI and the laser signals, respectively. Fourth, as false targets created by stationary slag usually disturb the measurement, valid signals of the slag are distinguished from the false ones to calculate the slag level. The molten steel level is then obtained by subtracting the slag thickness from the slag level. The measuring error of this solution, verified in applications in steel plants, is ±2.5 mm during steady casting and ±3.2 mm at the end of casting.

  10. The Family System and Depressive Symptoms during the College Years: Triangulation, Parental Differential Treatment, and Sibling Warmth as Predictors.

    Science.gov (United States)

    Ponappa, Sujata; Bartle-Haring, Suzanne; Holowacz, Eugene; Ferriby, Megan

    2017-01-01

    Guided by Bowen theory, we investigated the relationships between parent-child triangulation, parental differential treatment (PDT), sibling warmth, and individual depressive symptoms in a sample of 77 sibling dyads, aged 18-25 years, recruited through undergraduate classes at a U.S. public University. Results of the actor-partner interdependence models suggested that being triangulated into parental conflict was positively related to both siblings' perception of PDT; however, as one sibling felt triangulated, the other perceived reduced levels of PDT. For both siblings, the perception of higher levels of PDT was related to decreased sibling warmth and higher sibling warmth was associated with fewer depressive symptoms. The implications of these findings for research and the treatment of depression in the college-aged population are discussed. © 2016 American Association for Marriage and Family Therapy.

  11. Enhancing Breast Cancer Recurrence Algorithms Through Selective Use of Medical Record Data.

    Science.gov (United States)

    Kroenke, Candyce H; Chubak, Jessica; Johnson, Lisa; Castillo, Adrienne; Weltzien, Erin; Caan, Bette J

    2016-03-01

    The utility of data-based algorithms in research has been questioned because of errors in identification of cancer recurrences. We adapted previously published breast cancer recurrence algorithms, selectively using medical record (MR) data to improve classification. We evaluated second breast cancer event (SBCE) and recurrence-specific algorithms previously published by Chubak and colleagues in 1535 women from the Life After Cancer Epidemiology (LACE) and 225 women from the Women's Health Initiative cohorts and compared classification statistics to published values. We also sought to improve classification with minimal MR examination. We selected pairs of algorithms-one with high sensitivity/high positive predictive value (PPV) and another with high specificity/high PPV-using MR information to resolve discrepancies between algorithms, properly classifying events based on review; we called this "triangulation." Finally, in LACE, we compared associations between breast cancer survival risk factors and recurrence using MR data, single Chubak algorithms, and triangulation. The SBCE algorithms performed well in identifying SBCE and recurrences. Recurrence-specific algorithms performed more poorly than published except for the high-specificity/high-PPV algorithm, which performed well. The triangulation method (sensitivity = 81.3%, specificity = 99.7%, PPV = 98.1%, NPV = 96.5%) improved recurrence classification over two single algorithms (sensitivity = 57.1%, specificity = 95.5%, PPV = 71.3%, NPV = 91.9%; and sensitivity = 74.6%, specificity = 97.3%, PPV = 84.7%, NPV = 95.1%), with 10.6% MR review. Triangulation performed well in survival risk factor analyses vs analyses using MR-identified recurrences. Use of multiple recurrence algorithms in administrative data, in combination with selective examination of MR data, may improve recurrence data quality and reduce research costs. © The Author 2015. Published by Oxford University Press. All rights reserved. For
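
    The triangulation step described above, in which a high-sensitivity algorithm is paired with a high-specificity algorithm and only discordant cases are sent to medical-record review, can be sketched roughly as follows; the function names and toy data are hypothetical and do not reproduce the published Chubak algorithms.

        def triangulate(high_sens_calls, high_spec_calls, chart_review):
            """Combine two recurrence classifiers; review only discordant cases.

            high_sens_calls / high_spec_calls: dict patient_id -> bool (recurrence?)
            chart_review: callable patient_id -> bool, used only on disagreements
            """
            final, n_reviewed = {}, 0
            for pid, sens_call in high_sens_calls.items():
                spec_call = high_spec_calls[pid]
                if sens_call == spec_call:          # concordant: accept automatically
                    final[pid] = sens_call
                else:                               # discordant: resolve by MR review
                    final[pid] = chart_review(pid)
                    n_reviewed += 1
            return final, n_reviewed

        # Toy usage with hypothetical data.
        sens = {"p1": True, "p2": False, "p3": True}
        spec = {"p1": True, "p2": False, "p3": False}
        final, reviewed = triangulate(sens, spec, chart_review=lambda pid: False)
        print(final, f"({reviewed} charts reviewed)")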

  12. Algorithming the Algorithm

    DEFF Research Database (Denmark)

    Mahnke, Martina; Uprichard, Emma

    2014-01-01

    Imagine sailing across the ocean. The sun is shining, vastness all around you. And suddenly [BOOM] you’ve hit an invisible wall. Welcome to the Truman Show! Ever since Eli Pariser published his thoughts on a potential filter bubble, this movie scenario seems to have become reality, just with slight changes: it’s not the ocean, it’s the internet we’re talking about, and it’s not a TV show producer, but algorithms that constitute a sort of invisible wall. Building on this assumption, most research is trying to ‘tame the algorithmic tiger’. While this is a valuable and often inspiring approach, we

  13. Detection of Water Contamination Events Using Fluorescence Spectroscopy and Alternating Trilinear Decomposition Algorithm

    Directory of Open Access Journals (Sweden)

    Jie Yu

    2017-01-01

    Full Text Available The method based on conventional index and UV-vision has been widely applied in the field of water quality abnormality detection. This paper presents a qualitative analysis approach to detect the water contamination events with unknown pollutants. Fluorescence spectra were used as water quality monitoring tools, and the detection method of unknown contaminants in water based on alternating trilinear decomposition (ATLD is proposed to analyze the excitation and emission spectra of the samples. The Delaunay triangulation interpolation method was used to make the pretreatment of three-dimensional fluorescence spectra data, in order to estimate the effect of Rayleigh and Raman scattering; ATLD model was applied to establish the model of normal water sample, and the residual matrix was obtained by subtracting the measured matrix from the model matrix; the residual sum of squares obtained from the residual matrix and threshold was used to make qualitative discrimination of test samples and distinguish drinking water samples and organic pollutant samples. The results of the study indicate that ATLD modeling with three-dimensional fluorescence spectra can provide a tool for detecting unknown organic pollutants in water qualitatively. The method based on fluorescence spectra can be complementary to the method based on conventional index and UV-vision.
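
    The final decision rule of the scheme, comparing the residual sum of squares between a measured excitation-emission matrix (EEM) and its reconstruction from a model of normal water against a threshold, can be sketched as below; a rank-1 outer product stands in for the ATLD model, and the mean-plus-three-standard-deviations threshold is an assumption for illustration only.

        import numpy as np

        def residual_ssq(eem, model_reconstruction):
            """Residual sum of squares between a measured EEM and its model fit."""
            return float(np.sum((eem - model_reconstruction) ** 2))

        def is_contaminated(eem, model_reconstruction, threshold):
            return residual_ssq(eem, model_reconstruction) > threshold

        # Toy example: rank-1 "normal water" profile standing in for the ATLD fit.
        rng = np.random.default_rng(3)
        ex = np.exp(-0.5 * ((np.arange(40) - 15) / 5.0) ** 2)   # excitation profile
        em = np.exp(-0.5 * ((np.arange(60) - 30) / 8.0) ** 2)   # emission profile
        model = np.outer(ex, em)

        normal = model + 0.01 * rng.standard_normal(model.shape)
        polluted = model + 0.3 * np.outer(np.roll(ex, 10), np.roll(em, 15))

        # Threshold from repeated normal samples (assumed rule: mean + 3*std of RSS).
        rss_normal = [residual_ssq(model + 0.01 * rng.standard_normal(model.shape), model)
                      for _ in range(50)]
        thr = np.mean(rss_normal) + 3 * np.std(rss_normal)

        print(is_contaminated(normal, model, thr), is_contaminated(polluted, model, thr))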

  14. Two Strategies for Qualitative Content Analysis: An Intramethod Approach to Triangulation.

    Science.gov (United States)

    Renz, Susan M; Carrington, Jane M; Badger, Terry A

    2018-04-01

    The overarching aim of qualitative research is to gain an understanding of certain social phenomena. Qualitative research involves the studied use and collection of empirical materials, all to describe moments and meanings in individuals' lives. Data derived from these various materials require a form of analysis of the content, focusing on written or spoken language as communication, to provide context and understanding of the message. Qualitative research often involves the collection of data through extensive interviews, note taking, and tape recording. These methods are time- and labor-intensive. With the advances in computerized text analysis software, the practice of combining methods to analyze qualitative data can assist the researcher in making large data sets more manageable and enhance the trustworthiness of the results. This article will describe a novel process of combining two methods of qualitative data analysis, or Intramethod triangulation, as a means to provide a deeper analysis of text.

  15. Triangulation and Gender Perspectives in ‘Falling Man’ by Don DeLillo

    Directory of Open Access Journals (Sweden)

    Noemi Abe

    2011-09-01

    Susannah Radstone argues that the rhetorical response to 9/11 by the Bush administration is based on the opposition of two father figures: "the 'chastened' but powerful 'good' patriarchal father" vs. "the 'bad' archaic father". She explains: "In this Manichean fantasy can be glimpsed the continuing battle between competing versions of masculinity" (2002: 459), one that leaves women on the margins. The battle of the fathers in Bush’s rhetoric is counterposed in Falling Man by a battle between two men that stands for an unaccomplished fatherhood. Furthermore, the dualistic vision engendered by post-9/11 rhetoric and reflected in the novel should be evaluated in a trilateral dimension, given that at its core lies a triangulation built upon three stereotypical representations: the white middle-class man; the Arab terrorist; and a composite character in the middle, the woman, who shifts from ally, to victim, to a plausible supporter of the enemy.

  16. Exploring Forms of Triangulation to Facilitate Collaborative Research Practice: Reflections From a Multidisciplinary Research Group

    Directory of Open Access Journals (Sweden)

    Tarja Tiainen

    2006-10-01

    Full Text Available This article contains critical reflections of a multidisciplinary research group studying the human and technological dynamics around some newly offered electronic services in a specific rural area of Finland. For their research, the group adopted ethnography. On facing the challenges of doing ethnographic research in a multidisciplinary setting, the group evolved its own breed of research practice based on multiple forms of triangulation. This implied the use of multiple data sources, methods, theories, and researchers, in different combinations. One of the outcomes of the work is a model for collaborative research. It highlights, among others, the importance of creating a climate for collaboration within the research group and following a process of individual and collaborative writing to achieve the potential benefits of such research. The article also identifies a set of remaining challenges relevant to collaborative research.

  17. Zur Rekonstruktion einer Typologie jugendlichen Medienhandelns gemäß dem Leitbild der Triangulation [On the reconstruction of a typology of adolescent media practices according to the guiding principle of triangulation]

    Directory of Open Access Journals (Sweden)

    Klaus Peter Treumann

    2017-09-01

    Full Text Available The results presented below were produced within the DFG-funded research project „Eine Untersuchung zum Mediennutzungsverhalten 12- bis 20-Jähriger und zur Entwicklung von Medienkompetenz im Jugendalter“ (a study of the media usage behaviour of 12- to 20-year-olds and the development of media literacy in adolescence), which is jointly directed by Klaus Peter Treumann, Uwe Sander and Dorothee Meister. The research project investigates adolescents’ media practices with regard to both new and old media. On the one hand, we ask about the manifestations of media literacy in its various dimensions; on the other, we concentrate on developing an empirically grounded typology of adolescent media practices. Methodologically, the study follows the guiding principle of triangulation and combines qualitative and quantitative approaches to the research field in the form of group discussions, guideline-based individual interviews and a representative survey.

  18. Thermal Entanglement and Critical Behavior of Magnetic Properties on a Triangulated Kagomé Lattice

    Directory of Open Access Journals (Sweden)

    N. Ananikian

    2011-01-01

    Full Text Available The equilibrium magnetic and entanglement properties in a spin-1/2 Ising-Heisenberg model on a triangulated Kagomé lattice are analyzed by means of the effective field for the Gibbs-Bogoliubov inequality. The calculation is reduced to decoupled individual (cluster) trimers due to the separable character of the Ising-type exchange interactions between the Heisenberg trimers. The concurrence in terms of the three-qubit isotropic Heisenberg model in the effective Ising field in the absence of a magnetic field is non-zero. The magnetic and entanglement properties exhibit common (plateau, peak) features driven by a magnetic field and (antiferromagnetic) exchange interaction. The (quantum) entangled and non-entangled phases can be exploited as a useful tool for signalling the quantum phase transitions and crossovers at finite temperatures. The critical temperature of order-disorder coincides with the threshold temperature of thermal entanglement.

  19. Determination of Shift/Bias in Digital Aerial Triangulation of UAV Imagery Sequences

    Science.gov (United States)

    Wierzbicki, Damian

    2017-12-01

    Currently, UAV photogrammetry is characterized by largely automated and efficient data processing. Imaging from low altitudes is increasingly important in applications such as city mapping, corridor mapping, road and pipeline inspection, or mapping of large areas, e.g. forests. Additionally, high-resolution video imagery (HD and larger) is increasingly used for low-altitude acquisition; on the one hand it delivers a wealth of detail on the characteristics of ground surface features, and on the other hand it presents new challenges for data processing. Therefore, the determination of the elements of external orientation plays a substantial role in detailed Digital Terrain Model and artefact-free orthophoto generation. In parallel, research is conducted on the quality of images acquired from UAVs and on the quality of products such as orthophotos. Despite the rapid development of UAV photogrammetry, it is still necessary to perform Automatic Aerial Triangulation (AAT) on the basis of GPS/INS observations and ground control points. During a low-altitude photogrammetric flight, the approximate elements of external orientation registered by the UAV are burdened with shift/bias errors. In this article, methods for determining the shift/bias error are presented. In the process of digital aerial triangulation, two solutions are applied. In the first method, the shift/bias error is determined together with the drift/bias error, the elements of external orientation and the coordinates of ground control points. In the second method, the shift/bias error is determined together with the elements of external orientation and the coordinates of ground control points, with the drift/bias error set equal to 0. When the two methods are compared, the difference in the shift/bias error is more than ±0.01 m for all terrain coordinates XYZ.

  20. Fixed-topology Lorentzian triangulations: Quantum Regge Calculus in the Lorentzian domain

    Science.gov (United States)

    Tate, Kyle; Visser, Matt

    2011-11-01

    A key insight used in developing the theory of Causal Dynamical Triangulations (CDTs) is to use the causal (or light-cone) structure of Lorentzian manifolds to restrict the class of geometries appearing in the Quantum Gravity (QG) path integral. By exploiting this structure the models developed in CDTs differ from the analogous models developed in the Euclidean domain, models of (Euclidean) Dynamical Triangulations (DT), and the corresponding Lorentzian results are in many ways more "physical". In this paper we use this insight to formulate a Lorentzian signature model that is analogous to the Quantum Regge Calculus (QRC) approach to Euclidean Quantum Gravity. We exploit another crucial fact about the structure of Lorentzian manifolds, namely that certain simplices are not constrained by the triangle inequalities present in Euclidean signature. We show that this model is not related to QRC by a naive Wick rotation; this serves as another demonstration that the sum over Lorentzian geometries is not simply related to the sum over Euclidean geometries. By removing the triangle inequality constraints, there is more freedom to perform analytical calculations, and in addition numerical simulations are more computationally efficient. We first formulate the model in 1 + 1 dimensions, and derive scaling relations for the pure gravity path integral on the torus using two different measures. It appears relatively easy to generate "large" universes, both in spatial and temporal extent. In addition, loop-to-loop amplitudes are discussed, and a transfer matrix is derived. We then also discuss the model in higher dimensions.

  1. A simplicial algorithm for testing the integral properties of polytopes : A revision

    NARCIS (Netherlands)

    Yang, Z.F.

    1994-01-01

    Given an arbitrary polytope P in the n-dimensional Euclidean space Rⁿ, the question is to determine whether P contains an integral point or not. We propose a simplicial algorithm to answer this question based on a specific integer labeling rule and a specific triangulation of Rⁿ. Starting from an

  2. Multi-region unstructured volume segmentation using tetrahedron filling

    Energy Technology Data Exchange (ETDEWEB)

    Williams, Sean Jamerson [Los Alamos National Laboratory]; Dillard, Scott E [Los Alamos National Laboratory]; Thoma, Dan J [MDI, INSTITUTES]; Hlawitschka, Mario [UC DAVIS]; Hamann, Bernd [UC DAVIS]

    2010-01-01

    Segmentation is one of the most common operations in image processing, and while there are several solutions already present in the literature, they each have their own benefits and drawbacks that make them well-suited for some types of data and not for others. We focus on the problem of breaking an image into multiple regions in a single segmentation pass, while supporting both voxel and scattered point data. To solve this problem, we begin with a set of potential boundary points and use a Delaunay triangulation to complete the boundaries. We use heuristic- and interaction-driven Voronoi clustering to find reasonable groupings of tetrahedra. Apart from the computation of the Delaunay triangulation, our algorithm has linear time complexity with respect to the number of tetrahedra.
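
    A drastically simplified 2-D sketch of the overall idea, triangulating candidate boundary points with a Delaunay triangulation and then clustering the resulting simplices around seed locations, is given below; the nearest-seed assignment of triangle centroids is a crude stand-in for the heuristic- and interaction-driven Voronoi clustering described above, and the points and seeds are made up.

        import numpy as np
        from scipy.spatial import Delaunay

        rng = np.random.default_rng(4)
        points = rng.uniform(0, 1, size=(400, 2))     # candidate boundary points
        tri = Delaunay(points)

        # Hypothetical region seeds (one per desired region).
        seeds = np.array([[0.25, 0.25], [0.75, 0.75], [0.25, 0.75]])

        # Assign each triangle to the seed nearest its centroid
        # (a crude stand-in for Voronoi clustering of simplices).
        centroids = points[tri.simplices].mean(axis=1)
        dists = np.linalg.norm(centroids[:, None, :] - seeds[None, :, :], axis=2)
        labels = np.argmin(dists, axis=1)

        for r in range(len(seeds)):
            print(f"region {r}: {np.sum(labels == r)} triangles")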

  3. Spectral triangulation: a 3D method for locating single-walled carbon nanotubes in vivo

    Science.gov (United States)

    Lin, Ching-Wei; Bachilo, Sergei M.; Vu, Michael; Beckingham, Kathleen M.; Bruce Weisman, R.

    2016-05-01

    Nanomaterials with luminescence in the short-wave infrared (SWIR) region are of special interest for biological research and medical diagnostics because of favorable tissue transparency and low autofluorescence backgrounds in that region. Single-walled carbon nanotubes (SWCNTs) show well-known sharp SWIR spectral signatures and therefore have potential for noninvasive detection and imaging of cancer tumours, when linked to selective targeting agents such as antibodies. However, such applications face the challenge of sensitively detecting and localizing the source of SWIR emission from inside tissues. A new method, called spectral triangulation, is presented for three dimensional (3D) localization using sparse optical measurements made at the specimen surface. Structurally unsorted SWCNT samples emitting over a range of wavelengths are excited inside tissue phantoms by an LED matrix. The resulting SWIR emission is sampled at points on the surface by a scanning fibre optic probe leading to an InGaAs spectrometer or a spectrally filtered InGaAs avalanche photodiode detector. Because of water absorption, attenuation of the SWCNT fluorescence in tissues is strongly wavelength-dependent. We therefore gauge the SWCNT-probe distance by analysing differential changes in the measured SWCNT emission spectra. SWCNT fluorescence can be clearly detected through at least 20 mm of tissue phantom, and the 3D locations of embedded SWCNT test samples are found with sub-millimeter accuracy at depths up to 10 mm. Our method can also distinguish and locate two embedded SWCNT sources at distinct positions.
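
    The depth gauge behind spectral triangulation is essentially Beer-Lambert attenuation read at two emission bands: if I(λ) = I0 S(λ) exp(−μ(λ) d), the two-band ratio gives d = ln(I1/I2) / (μ2 − μ1) once the source ratio is known. The coefficients in the sketch below are rough illustrative magnitudes for water-dominated SWIR attenuation, not calibration values from the paper.

        import numpy as np

        # Assumed effective attenuation coefficients at two SWIR bands [1/mm]
        # (illustrative magnitudes only, dominated by water absorption).
        mu1, mu2 = 0.10, 0.35

        def depth_from_ratio(I1, I2, source_ratio=1.0):
            """Depth from a measured two-band intensity ratio, Beer-Lambert model."""
            return np.log((I1 / I2) / source_ratio) / (mu2 - mu1)

        # Forward-simulate a source at 6 mm depth, then invert.
        d_true, I0 = 6.0, 1.0
        I1 = I0 * np.exp(-mu1 * d_true)
        I2 = I0 * np.exp(-mu2 * d_true)
        print("recovered depth [mm]:", depth_from_ratio(I1, I2))   # ~6.0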

  4. Sound algorithms

    OpenAIRE

    De Götzen , Amalia; Mion , Luca; Tache , Olivier

    2007-01-01

    We call sound algorithms the categories of algorithms that deal with digital sound signals. Sound algorithms appeared in the very infancy of computing. Sound algorithms present strong specificities that are the consequence of two dual considerations: the properties of the digital sound signal itself and its uses, and the properties of auditory perception.

  5. Genetic algorithms

    Science.gov (United States)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithms concepts are introduced, genetic algorithm applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.

  6. Data governance requirements for distributed clinical research networks: triangulating perspectives of diverse stakeholders.

    Science.gov (United States)

    Kim, Katherine K; Browe, Dennis K; Logan, Holly C; Holm, Roberta; Hack, Lori; Ohno-Machado, Lucila

    2014-01-01

    There is currently limited information on best practices for the development of governance requirements for distributed research networks (DRNs), an emerging model that promotes clinical data reuse and improves timeliness of comparative effectiveness research. Much of the existing information is based on a single type of stakeholder such as researchers or administrators. This paper reports on a triangulated approach to developing DRN data governance requirements based on a combination of policy analysis with experts, interviews with institutional leaders, and patient focus groups. This approach is illustrated with an example from the Scalable National Network for Effectiveness Research, which resulted in 91 requirements. These requirements were analyzed against the Fair Information Practice Principles (FIPPs) and Health Insurance Portability and Accountability Act (HIPAA) protected versus non-protected health information. The requirements addressed all FIPPs, showing how a DRN's technical infrastructure is able to fulfill HIPAA regulations, protect privacy, and provide a trustworthy platform for research.

  7. Bloch Modes and Evanescent Modes of Photonic Crystals: Weak Form Solutions Based on Accurate Interface Triangulation

    Directory of Open Access Journals (Sweden)

    Matthias Saba

    2015-01-01

    Full Text Available We propose a new approach to calculate the complex photonic band structure, both purely dispersive and evanescent Bloch modes of a finite range, of arbitrary three-dimensional photonic crystals. Our method, based on a well-established plane wave expansion and the weak form solution of Maxwell’s equations, computes the Fourier components of periodic structures composed of distinct homogeneous material domains from a triangulated mesh representation of the inter-material interfaces; this allows substantially more accurate representations of the geometry of complex photonic crystals than the conventional representation by a cubic voxel grid. Our method works for general two-phase composite materials, consisting of bi-anisotropic materials with tensor-valued dielectric and magnetic permittivities ε and μ and coupling matrices ς. We demonstrate for the Bragg mirror and a simple cubic crystal closely related to the Kelvin foam that relatively small numbers of Fourier components are sufficient to yield good convergence of the eigenvalues, making this method viable, despite its computational complexity. As an application, we use the single gyroid crystal to demonstrate that the consideration of both conventional and evanescent Bloch modes is necessary to predict the key features of the reflectance spectrum by analysis of the band structure, in particular for light incident along the cubic [111] direction.

  8. Solving the Einstein constraint equations on multi-block triangulations using finite element methods

    Energy Technology Data Exchange (ETDEWEB)

    Korobkin, Oleg; Pazos, Enrique [Department of Physics and Astronomy, Louisiana State University, Baton Rouge, LA 70803 (United States); Aksoylu, Burak [Center for Computation and Technology, Louisiana State University, Baton Rouge, LA 70803 (United States); Holst, Michael [Department of Mathematics, University of California at San Diego 9500 Gilman Drive La Jolla, CA 92093-0112 (United States); Tiglio, Manuel [Department of Physics, University of Maryland, College Park, MD 20742 (United States)

    2009-07-21

    In order to generate initial data for nonlinear relativistic simulations, one needs to solve the Einstein constraints, which can be cast into a coupled set of nonlinear elliptic equations. Here we present an approach for solving these equations on three-dimensional multi-block domains using finite element methods. We illustrate our approach on a simple example of Brill wave initial data, with the constraints reducing to a single linear elliptic equation for the conformal factor psi. We use quadratic Lagrange elements on semi-structured simplicial meshes, obtained by triangulation of multi-block grids. In the case of uniform refinement the scheme is superconvergent at most mesh vertices, due to local symmetry of the finite element basis with respect to local spatial inversions. We show that in the superconvergent case subsequent unstructured mesh refinements do not improve the quality of our initial data. As proof of concept that this approach is feasible for generating multi-block initial data in three dimensions, after constructing the initial data we evolve them in time using a high-order finite-differencing multi-block approach and extract the gravitational waves from the numerical solution.

  9. Solving the Einstein constraint equations on multi-block triangulations using finite element methods

    International Nuclear Information System (INIS)

    Korobkin, Oleg; Pazos, Enrique; Aksoylu, Burak; Holst, Michael; Tiglio, Manuel

    2009-01-01

    In order to generate initial data for nonlinear relativistic simulations, one needs to solve the Einstein constraints, which can be cast into a coupled set of nonlinear elliptic equations. Here we present an approach for solving these equations on three-dimensional multi-block domains using finite element methods. We illustrate our approach on a simple example of Brill wave initial data, with the constraints reducing to a single linear elliptic equation for the conformal factor ψ. We use quadratic Lagrange elements on semi-structured simplicial meshes, obtained by triangulation of multi-block grids. In the case of uniform refinement the scheme is superconvergent at most mesh vertices, due to local symmetry of the finite element basis with respect to local spatial inversions. We show that in the superconvergent case subsequent unstructured mesh refinements do not improve the quality of our initial data. As proof of concept that this approach is feasible for generating multi-block initial data in three dimensions, after constructing the initial data we evolve them in time using a high-order finite-differencing multi-block approach and extract the gravitational waves from the numerical solution.

  10. Simulations of four-dimensional simplicial quantum gravity as dynamical triangulation

    International Nuclear Information System (INIS)

    Agishtein, M.E.; Migdal, A.A.

    1992-01-01

    In this paper, Four-Dimensional Simplicial Quantum Gravity is simulated using the dynamical triangulation approach. The authors studied simplicial manifolds of spherical topology and found the critical line for the cosmological constant as a function of the gravitational one, separating the phases of open and closed Universe. When the bare cosmological constant approaches this line from above, the four-volume grows: the authors reached about 5 × 10^4 simplexes, which proved to be sufficient for the statistical limit of infinite volume. However, for the genuine continuum theory of gravity, the parameters of the lattice model should be further adjusted to reach the second order phase transition point, where the correlation length grows to infinity. The authors varied the gravitational constant, and they found the first order phase transition, similar to the one found in the three-dimensional model, except in 4D the fluctuations are rather large at the transition point, so that this is close to the second order phase transition. The average curvature in cutoff units is large and positive in one phase (gravity), and small and negative in another (antigravity). The authors studied the fractal geometry of both phases, using the heavy particle propagator to define the geodesic map, as well as with the old approach using the shortest lattice paths.

  11. Triangulation of Qualitative Methods for the Exploration of Activity Systems in Ergonomics

    Directory of Open Access Journals (Sweden)

    Monika Hackel

    2008-08-01

    Full Text Available Research concerning ergonomic issues in interdisciplinary projects often raises several very specific questions depending on project objectives. To answer these questions the application of research methods should be thoroughly considered, regarding both the expenditure and the options within the scope of the given resources. The project AQUIMO develops an adaptable modelling tool for mechatronical engineering and creates a related qualification program. The task of social scientific research within this project is to identify requirements viewed from the perspective of the subsequent users. This formative evaluation is based on the approach of "developmental work research" as set forth by ENGESTRÖM and, thus, is a form of "action research". This paper discusses the triangulation of several qualitative methods addressing the examination of difficulties in interdisciplinary collaboration in mechatronical engineering. After a description of both background and analytic approach within the project AQUIMO, the methods are briefly described concerning their advantages and critical points. Their application within the research project AQUIMO is explained from an activity theoretical perspective. URN: urn:nbn:de:0114-fqs0803158

  12. Barriers to energy efficiency in shipping: A triangulated approach to investigate the principal agent problem

    International Nuclear Information System (INIS)

    Rehmatulla, Nishatabbas; Smith, Tristan

    2015-01-01

    Energy efficiency is a key policy strategy to meet some of the challenges being faced today and to plan for a sustainable future. Numerous empirical studies in various sectors suggest that there are cost-effective measures that are available but not always implemented due to existence of barriers to energy efficiency. Several cost-effective energy efficient options (technologies for new and existing ships and operations) have also been identified for improving energy efficiency of ships. This paper is one of the first to empirically investigate barriers to energy efficiency in the shipping industry using a novel framework and multidisciplinary methods to gauge implementation of cost-effective measures, perception on barriers and observations of barriers. It draws on findings of a survey conducted of shipping companies, content analysis of shipping contracts and analysis of energy efficiency data. Initial results from these methods suggest the existence of the principal agent problem and other market failures and barriers that have also been suggested in other sectors and industries. Given this finding, policies to improve implementation of energy efficiency in shipping need to be carefully considered to improve their efficacy and avoid unintended consequences. -- Highlights: •We provide the first analysis of the principal agent problem in shipping. •We develop a framework that incorporates methodological triangulation. •Our results show the extent to which this barrier is observed and perceived. •The presence of the barrier has implications on the policy most suited to shipping

  13. Restrictions on Measurement of Roughness of Textile Fabrics by Laser Triangulation: A Phenomenological Approach

    International Nuclear Information System (INIS)

    Berberi, Pellumb; Tabaku, Burhan

    2010-01-01

    Laser triangulation is one of the methods used for contactless measurement of the roughness of textile fabrics. The method is based on measuring the distance between the sensor and the object by imaging the light scattered from the surface. However, experimental results, especially for high values of roughness, show a strong dependence on the duration of exposure to laser pulses. Use of very short exposure times and long exposure times causes the appearance, on the surface of the scanned textile, of pixels with Active peak heights. The number of Active peaks increases with decrease of exposure time down to 0.1 ms, and increases with increase of exposure time up to 100 ms. Appearance of Active peaks leads to a nonrealistic increase of the roughness of the surface both for short exposure times and long exposure times, reaching a minimum somewhere in the region of medium exposure times, 1 to 2 ms. The above effect suggests a careful analysis of experimental data and also becomes an important restriction on the method. In this paper we attempt a phenomenological approach to the mechanisms leading to these effects. We suppose that the effect is related both to the scattering properties of the scanned surface and to the physical parameters of CCD sensors. The first factor becomes more important in the region of long exposure times, while the second factor becomes more important in the region of short exposure times.

  14. Algorithmic cryptanalysis

    CERN Document Server

    Joux, Antoine

    2009-01-01

    Illustrating the power of algorithms, Algorithmic Cryptanalysis describes algorithmic methods with cryptographically relevant examples. Focusing on both private- and public-key cryptographic algorithms, it presents each algorithm either as a textual description, in pseudo-code, or in a C code program.Divided into three parts, the book begins with a short introduction to cryptography and a background chapter on elementary number theory and algebra. It then moves on to algorithms, with each chapter in this section dedicated to a single topic and often illustrated with simple cryptographic applic

  15. Algorithmic mathematics

    CERN Document Server

    Hougardy, Stefan

    2016-01-01

    Algorithms play an increasingly important role in nearly all fields of mathematics. This book allows readers to develop basic mathematical abilities, in particular those concerning the design and analysis of algorithms as well as their implementation. It presents not only fundamental algorithms like the sieve of Eratosthenes, the Euclidean algorithm, sorting algorithms, algorithms on graphs, and Gaussian elimination, but also discusses elementary data structures, basic graph theory, and numerical questions. In addition, it provides an introduction to programming and demonstrates in detail how to implement algorithms in C++. This textbook is suitable for students who are new to the subject and covers a basic mathematical lecture course, complementing traditional courses on analysis and linear algebra. Both authors have given this "Algorithmic Mathematics" course at the University of Bonn several times in recent years.

  16. RECONSTRUCTION, QUANTIFICATION, AND VISUALIZATION OF FOREST CANOPY BASED ON 3D TRIANGULATIONS OF AIRBORNE LASER SCANNING POINT DATA

    Directory of Open Access Journals (Sweden)

    J. Vauhkonen

    2015-03-01

    Full Text Available Reconstruction of three-dimensional (3D) forest canopy is described and quantified using airborne laser scanning (ALS) data with densities of 0.6–0.8 points m⁻² and field measurements aggregated at resolutions of 400–900 m². The reconstruction was based on computational geometry, topological connectivity, and numerical optimization. More precisely, triangulations and their filtrations, i.e. ordered sets of simplices belonging to the triangulations, based on the point data were analyzed. Triangulating the ALS point data corresponds to subdividing the underlying space of the points into weighted simplicial complexes with weights quantifying the (empty) space delimited by the points. Reconstructing the canopy volume populated by biomass will thus likely require filtering to exclude that volume from canopy voids. The approaches applied for this purpose were (i) to optimize the degree of filtration with respect to the field measurements, and (ii) to predict this degree by means of analyzing the persistent homology of the obtained triangulations, which is applied for the first time for vegetation point clouds. When derived from optimized filtrations, the total tetrahedral volume had a high degree of determination (R²) with the stem volume considered, both alone (R² = 0.65) and together with other predictors (R² = 0.78). When derived by analyzing the topological persistence of the point data and without any field input, the R² were lower, but the predictions still showed a correlation with the field-measured stem volumes. Finally, producing realistic visualizations of a forested landscape using the persistent homology approach is demonstrated.
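
    A hedged sketch of the filtration step, using SciPy and synthetic points: the 3-D Delaunay triangulation of the point cloud is filtered so that only "small" tetrahedra are kept, and their volumes are summed as a canopy-volume proxy. The longest-edge weight and the threshold below are illustrative stand-ins for the paper's weighting and its optimized (or homology-predicted) degree of filtration.

```python
# Illustrative filtered-Delaunay volume estimate over a synthetic point cloud.
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(1)
points = rng.random((500, 3)) * [20.0, 20.0, 15.0]   # synthetic ALS-like point cloud [m]

tri = Delaunay(points)
tets = points[tri.simplices]                          # (n, 4, 3)

def tet_volume(t):
    a, b, c, d = t
    return abs(np.dot(np.cross(b - a, c - a), d - a)) / 6.0

def max_edge(t):
    pairs = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
    return max(np.linalg.norm(t[i] - t[j]) for i, j in pairs)

alpha = 3.0                                            # assumed filtration threshold [m]
kept = [tet_volume(t) for t in tets if max_edge(t) <= alpha]
print(f"retained {len(kept)} of {len(tets)} tetrahedra, volume ~ {sum(kept):.1f} m^3")
```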

  17. Qualitative to quantitative: linked trajectory of method triangulation in a study on HIV/AIDS in Goa, India.

    Science.gov (United States)

    Bailey, Ajay; Hutter, Inge

    2008-10-01

    With 3.1 million people estimated to be living with HIV/AIDS in India and 39.5 million people globally, the epidemic has posed academics the challenge of identifying behaviours and their underlying beliefs in the effort to reduce the risk of HIV transmission. The Health Belief Model (HBM) is frequently used to identify risk behaviours and adherence behaviour in the field of HIV/AIDS. Risk behaviour studies that apply HBM have been largely quantitative and use of qualitative methodology is rare. The marriage of qualitative and quantitative methods has never been easy. The challenge is in triangulating the methods. Method triangulation has been largely used to combine insights from the qualitative and quantitative methods but not to link both the methods. In this paper we suggest a linked trajectory of method triangulation (LTMT). The linked trajectory aims to first gather individual level information through in-depth interviews and then to present the information as vignettes in focus group discussions. We thus validate information obtained from in-depth interviews and gather emic concepts that arise from the interaction. We thus capture both the interpretation and the interaction angles of the qualitative method. Further, using the qualitative information gained, a survey is designed. In doing so, the survey questions are grounded and contextualized. We employed this linked trajectory of method triangulation in a study on the risk assessment of HIV/AIDS among migrant and mobile men. Fieldwork was carried out in Goa, India. Data come from two waves of studies, first an explorative qualitative study (2003), second a larger study (2004-2005), including in-depth interviews (25), focus group discussions (21) and a survey (n=1259). By employing the qualitative to quantitative LTMT we can not only contextualize the existing concepts of the HBM, but also validate new concepts and identify new risk groups.

  18. Improved laser-based triangulation sensor with enhanced range and resolution through adaptive optics-based active beam control.

    Science.gov (United States)

    Reza, Syed Azer; Khwaja, Tariq Shamim; Mazhar, Mohsin Ali; Niazi, Haris Khan; Nawab, Rahma

    2017-07-20

    Various existing target ranging techniques are limited in terms of the dynamic range of operation and measurement resolution. These limitations arise as a result of a particular measurement methodology, the finite processing capability of the hardware components deployed within the sensor module, and the medium through which the target is viewed. Generally, improving the sensor range adversely affects its resolution and vice versa. Often, a distance sensor is designed for an optimal range/resolution setting depending on its intended application. Optical triangulation is broadly classified as a spatial-signal-processing-based ranging technique and measures target distance from the location of the reflected spot on a position sensitive detector (PSD). In most triangulation sensors that use lasers as a light source, beam divergence-which severely affects sensor measurement range-is often ignored in calculations. In this paper, we first discuss in detail the limitations to ranging imposed by beam divergence, which, in effect, sets the sensor dynamic range. Next, we show how the resolution of laser-based triangulation sensors is limited by the interpixel pitch of a finite-sized PSD. In this paper, through the use of tunable focus lenses (TFLs), we propose a novel design of a triangulation-based optical rangefinder that improves both the sensor resolution and its dynamic range through adaptive electronic control of beam propagation parameters. We present the theory and operation of the proposed sensor and clearly demonstrate a range and resolution improvement with the use of TFLs. Experimental results in support of our claims are shown to be in strong agreement with theory.
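
    A minimal sketch of the basic triangulation relation such sensors rely on: the target range follows from the lens focal length, the baseline, and the position of the imaged spot on the PSD, and the resolution is bounded by how far the spot must move (one pixel) to register a change. All numbers below are illustrative; the adaptive TFL beam control proposed in the paper is not modelled.

```python
# Illustrative similar-triangles range calculation for a laser triangulation sensor.
def triangulation_range(baseline_mm: float, focal_mm: float, spot_offset_mm: float) -> float:
    """Range z = f * B / x for a simple triangulation geometry."""
    if spot_offset_mm <= 0:
        raise ValueError("spot offset must be positive and non-zero")
    return focal_mm * baseline_mm / spot_offset_mm

# Resolution limit set by the inter-pixel pitch: the smallest detectable range
# change corresponds to the spot moving by one pixel (here assumed 0.01 mm).
z1 = triangulation_range(baseline_mm=50.0, focal_mm=16.0, spot_offset_mm=0.40)
z2 = triangulation_range(baseline_mm=50.0, focal_mm=16.0, spot_offset_mm=0.40 + 0.01)
print(f"range ~ {z1:.0f} mm, one-pixel range change ~ {abs(z1 - z2):.1f} mm")
```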

  19. Total algorithms

    NARCIS (Netherlands)

    Tel, G.

    We define the notion of total algorithms for networks of processes. A total algorithm enforces that a "decision" is taken by a subset of the processes, and that participation of all processes is required to reach this decision. Total algorithms are an important building block in the design of

  20. Triangulated Proxy Reporting: a technique for improving how communication partners come to know people with severe cognitive impairment.

    Science.gov (United States)

    Lyons, Gordon; De Bortoli, Tania; Arthur-Kelly, Michael

    2017-09-01

    This paper explains and demonstrates the pilot application of Triangulated Proxy Reporting (TPR); a practical technique for enhancing communication around people who have severe cognitive impairment (SCI). An introduction explains SCI and how this impacts on communication; and consequently on quality of care and quality of life. This is followed by an explanation of TPR and its origins in triangulation research techniques. An illustrative vignette explicates its utility and value in a group home for a resident with profound multiple disabilities. The Discussion and Conclusion sections propose the wider application of TPR for different cohorts of people with SCIs, their communication partners and service providers. TPR presents as a practical technique for enhancing communication interactions with people who have SCI. The paper demonstrates the potential of the technique for improving engagement amongst those with profound multiple disabilities, severe acquired brain injury and advanced dementia and their partners in and across different care settings. Implications for Rehabilitation Triangulated Proxy Reporting (TPR) shows potential to improve communications between people with severe cognitive impairments and their communication partners. TPR can lead to improved quality of care and quality of life for people with profound multiple disabilities, very advanced dementia and severe acquired brain injury, who otherwise are very difficult to support. TPR is a relatively simple and inexpensive technique that service providers can incorporate into practice to improve communications between clients with severe cognitive impairments, their carers and other support professionals.

  1. Accurate measurement of surface areas of anatomical structures by computer-assisted triangulation of computed tomography images

    Energy Technology Data Exchange (ETDEWEB)

    Allardice, J.T.; Jacomb-Hood, J.; Abulafi, A.M.; Williams, N.S. (Royal London Hospital (United Kingdom)); Cookson, J.; Dykes, E.; Holman, J. (London Hospital Medical College (United Kingdom))

    1993-05-01

    There is a need for accurate surface area measurement of internal anatomical structures in order to define light dosimetry in adjunctive intraoperative photodynamic therapy (AIOPDT). The authors investigated whether computer-assisted triangulation of serial sections generated by computed tomography (CT) scanning can give an accurate assessment of the surface area of the walls of the true pelvis after anterior resection and before colorectal anastomosis. They show that the technique of paper density tessellation is an acceptable method of measuring the surface areas of phantom objects, with a maximum error of 0.5%, and is used as the gold standard. Computer-assisted triangulation of CT images of standard geometric objects and accurately-constructed pelvic phantoms gives a surface area assessment with a maximum error of 2.5% compared with the gold standard. The CT images of 20 patients' pelves have been analysed by computer-assisted triangulation and this shows the surface area of the walls varies from 143 cm² to 392 cm². (Author).
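
    A minimal sketch of the surface-area computation once a triangulated surface has been built from the serial CT contours: the area is simply the sum of the individual triangle areas. The toy cube mesh below is illustrative only.

```python
# Illustrative triangle-mesh surface area: sum of 0.5 * |(B - A) x (C - A)|.
import numpy as np

def mesh_surface_area(vertices: np.ndarray, faces: np.ndarray) -> float:
    a, b, c = (vertices[faces[:, i]] for i in range(3))
    return 0.5 * np.linalg.norm(np.cross(b - a, c - a), axis=1).sum()

vertices = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0],
                     [0, 0, 1], [1, 0, 1], [1, 1, 1], [0, 1, 1]], dtype=float)
faces = np.array([[0, 1, 2], [0, 2, 3], [4, 6, 5], [4, 7, 6],   # bottom, top
                  [0, 4, 5], [0, 5, 1], [1, 5, 6], [1, 6, 2],   # sides
                  [2, 6, 7], [2, 7, 3], [3, 7, 4], [3, 4, 0]])
print(f"surface area = {mesh_surface_area(vertices, faces):.1f} (expected 6.0 for a unit cube)")
```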

  2. Insights from triangulation of two purchase choice elicitation methods to predict social decision making in healthcare.

    Science.gov (United States)

    Whitty, Jennifer A; Rundle-Thiele, Sharyn R; Scuffham, Paul A

    2012-03-01

    Discrete choice experiments (DCEs) and the Juster scale are accepted methods for the prediction of individual purchase probabilities. Nevertheless, these methods have seldom been applied to a social decision-making context. To gain an overview of social decisions for a decision-making population through data triangulation, these two methods were used to understand purchase probability in a social decision-making context. We report an exploratory social decision-making study of pharmaceutical subsidy in Australia. A DCE and selected Juster scale profiles were presented to current and past members of the Australian Pharmaceutical Benefits Advisory Committee and its Economic Subcommittee. Across 66 observations derived from 11 respondents for 6 different pharmaceutical profiles, there was a small overall median difference of 0.024 in the predicted probability of public subsidy (p = 0.003), with the Juster scale predicting the higher likelihood. While consistency was observed at the extremes of the probability scale, the funding probability differed over the mid-range of profiles. There was larger variability in the DCE than Juster predictions within each individual respondent, suggesting the DCE is better able to discriminate between profiles. However, large variation was observed between individuals in the Juster scale but not DCE predictions. It is important to use multiple methods to obtain a complete picture of the probability of purchase or public subsidy in a social decision-making context until further research can elaborate on our findings. This exploratory analysis supports the suggestion that the mixed logit model, which was used for the DCE analysis, may fail to adequately account for preference heterogeneity in some contexts.

  3. L1 Use in EFL Classes with English-only Policy: Insights from Triangulated Data

    Directory of Open Access Journals (Sweden)

    Seyyed Hatam Tamimi Sa’d

    2015-06-01

    Full Text Available This study examines the role of the use of the L1 in EFL classes from the perspective of EFL learners. The triangulated data were collected using class observations, focus group semi-structured interviews and the learners’ written reports of their perceptions and attitudes in a purpose-designed questionnaire. The participants consisted of sixty male Iranian EFL learners who constituted three classes. The results indicated a strong tendency among the participants toward L1 and its positive effects on language learning; while only a minority of the learners favoured an English-only policy, the majority supported the judicious, limited and occasional use of the L1, particularly on the part of the teacher. The participants mentioned the advantages as well as the disadvantages of the use/non-use of the L1. While the major advantage and the main purpose of L1 use was said to be the clarification and intelligibility of instructions, grammatical and lexical items, the main advantages of avoiding it were stated as being the improvement of speaking and listening skills, maximizing learners’ exposure to English and their becoming accustomed to it. The study concludes that, overall and in line with the majority of the previous research studies, a judicious, occasional and limited use of the L1 is a better approach to take in EFL classes than to include or exclude it totally. In conclusion, a re-examination of the English-only policy and a reconsideration of the role of the L1 are recommended. Finally, the commonly held assumption that L1 is a hindrance and an impediment to the learners’ language learning is challenged.

  4. Hand-held triangulation laser profilometer with audio output for blind people Profilométre laser à triangulation tenu en main avec sortie sonare pour non-voyants

    Science.gov (United States)

    Farcy, R.; Damaschini, R.

    1998-06-01

    We describe a device currently under industrial development which will give to the blind a means of three-dimensional space perception. It consists of a 350 g hand-held triangulating laser telemeter including electronic parts and batteries, with auditory feedback either inside the apparatus or close to the ear. The microprocessor unit converts in real time the distance measured by the telemeter into a musical note. Scanning the space with an adequate movement of the hand produces musical lines corresponding to the profiles of the environment. We discuss the optical configuration of the system relative to our first year of clinical experimentation.

  5. Source parameters for the 1952 Kern County earthquake, California: A joint inversion of leveling and triangulation observations

    OpenAIRE

    Bawden, Gerald W.

    2001-01-01

    Coseismic leveling and triangulation observations are used to determine the faulting geometry and slip distribution of the July 21, 1952, Mw 7.3 Kern County earthquake on the White Wolf fault. A singular value decomposition inversion is used to assess the ability of the geodetic network to resolve slip along a multisegment fault and shows that the network is sufficient to resolve slip along the surface rupture to a depth of 10 km. Below 10 km, the network can only resolve dip slip near the fa...

  6. Angles-Only Navigation: Position and Velocity Solution from Absolute Triangulation

    Science.gov (United States)

    2011-01-01

    In contrast to the Kalman filter approach, the algorithm presented here does not require any previous estimate of position or motion, and is of closed... geocentric position vectors. Using two vectors derived from each such observation (see next section), a solution for a portion of the boat’s track was... f(t)x0 describes the curvature of the path in the direction x0, which, for a geocentric coordinate system and f(t) < 0, will be toward the center of

  7. Description of multiple processes on the basis of triangulation in the velocity space

    International Nuclear Information System (INIS)

    Baldin, A.M.; Baldin, A.A.

    1986-01-01

    A method of the construction of polyhedrons in the relative four-velocity space is suggested which gives a complete description of multiple processes. A method of the consideration of a general case, when the total number of the relative velocity variables exceeds the number of the degrees of freedom, is also given. The account of the particular features of the polyhedrons due to the clusterization in the velocity space, as well as the account of the existence of intermediate asymptotics and the correlation depletion principle, makes it possible to propose an algorithm for processing a much larger bulk of experimental information on multiple processes as compared to the inclusive approach.

  8. Efficient Algorithms for Real-Time GPU Volumetric Cloud Rendering with Enhanced Geometry

    OpenAIRE

    Carlos Jiménez de Parga; Sebastián Rubén Gómez Palomo

    2018-01-01

    This paper presents several new techniques for volumetric cloud rendering using efficient algorithms and data structures based on ray-tracing methods for cumulus generation, achieving an optimum balance between realism and performance. These techniques target applications such as flight simulations, computer games, and educational software, even with conventional graphics hardware. The contours of clouds are defined by implicit mathematical expressions or triangulated structures inside which ...

  9. The structure of chromatic polynomials of planar triangulations and implications for chromatic zeros and asymptotic limiting quantities

    International Nuclear Information System (INIS)

    Shrock, Robert; Xu Yan

    2012-01-01

    We present an analysis of the structure and properties of chromatic polynomials P(G_{pt,m}, q) of one-parameter and multi-parameter families of planar triangulation graphs G_{pt,m}, where the parameter vector m = (m_1, …, m_p) consists of integer parameters. We use these to study the ratio of |P(G_{pt,m}, τ+1)| to the Tutte upper bound (τ − 1)^(n−5), where τ = (1+√5)/2 and n is the number of vertices in G_{pt,m}. In particular, we calculate limiting values of this ratio as n → ∞ for various families of planar triangulations. We also use our calculations to analyze zeros of these chromatic polynomials. We study a large class of families G_{pt,m} with p = 1 and p = 2 and show that these have a structure of the form P(G_{pt,m}, q) = c_{G_pt,1} λ_1^m + c_{G_pt,2} λ_2^m + c_{G_pt,3} λ_3^m for p = 1 (with m = m_1), where λ_1 = q − 2, λ_2 = q − 3, and λ_3 = −1, and P(G_{pt,m}, q) = Σ_{i_1=1..3} Σ_{i_2=1..3} c_{G_pt,i_1 i_2} λ_{i_1}^{m_1} λ_{i_2}^{m_2} for p = 2. We derive properties of the coefficients c_{G_pt,i} and show that P(G_{pt,m}, q) has a real chromatic zero that approaches (1/2)(3+√5) as one or more of the m_i → ∞. The generalization to p ⩾ 3 is given. Further, we present a one-parameter family of planar triangulations with real zeros that approach 3 from below as m → ∞. Implications for the ground-state entropy of the Potts antiferromagnet are discussed. (paper)
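
    A hedged numeric sketch of the quantities discussed: the stated p = 1 structural form is evaluated at q = τ + 1 and |P| is compared with the Tutte upper bound (τ − 1)^(n−5). The coefficients c_i and the vertex-count relation n(m) below are hypothetical placeholders, since both are family-specific in the paper.

```python
# Illustrative evaluation of P(q) = c1*(q-2)^m + c2*(q-3)^m + c3*(-1)^m at q = tau + 1,
# compared with the Tutte bound (tau - 1)^(n - 5). Coefficients and n(m) are hypothetical.
import math

tau = (1.0 + math.sqrt(5.0)) / 2.0
c = (1.0, 1.0, 1.0)          # hypothetical coefficients c_{Gpt,1..3}
q = tau + 1.0

for m in (5, 10, 20, 40):
    n = m + 6                # hypothetical linear growth of vertex count with m
    P = c[0] * (q - 2.0) ** m + c[1] * (q - 3.0) ** m + c[2] * (-1.0) ** m
    tutte_bound = (tau - 1.0) ** (n - 5)
    print(f"m={m:3d}  |P|/bound = {abs(P) / tutte_bound:.4f}")
```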

  10. Discrepancies between qualitative and quantitative evaluation of randomised controlled trial results: achieving clarity through mixed methods triangulation.

    Science.gov (United States)

    Tonkin-Crine, Sarah; Anthierens, Sibyl; Hood, Kerenza; Yardley, Lucy; Cals, Jochen W L; Francis, Nick A; Coenen, Samuel; van der Velden, Alike W; Godycki-Cwirko, Maciek; Llor, Carl; Butler, Chris C; Verheij, Theo J M; Goossens, Herman; Little, Paul

    2016-05-12

    Mixed methods are commonly used in health services research; however, data are not often integrated to explore complementarity of findings. A triangulation protocol is one approach to integrating such data. A retrospective triangulation protocol was carried out on mixed methods data collected as part of a process evaluation of a trial. The multi-country randomised controlled trial found that a web-based training in communication skills (including use of a patient booklet) and the use of a C-reactive protein (CRP) point-of-care test decreased antibiotic prescribing by general practitioners (GPs) for acute cough. The process evaluation investigated GPs' and patients' experiences of taking part in the trial. Three analysts independently compared findings across four data sets: qualitative data collected via semi-structured interviews with (1) 62 patients and (2) 66 GPs and quantitative data collected via questionnaires with (3) 2886 patients and (4) 346 GPs. Pairwise comparisons were made between data sets and were categorised as agreement, partial agreement, dissonance or silence. Three instances of dissonance occurred in 39 independent findings. GPs and patients reported different views on the use of a CRP test. GPs felt that the test was useful in convincing patients to accept a no-antibiotic decision, but patient data suggested that this was unnecessary if a full explanation was given. Whilst qualitative data indicated all patients were generally satisfied with their consultation, quantitative data indicated highest levels of satisfaction for those receiving a detailed explanation from their GP with a booklet giving advice on self-care. Both qualitative and quantitative data sets indicated higher patient enablement for those in the communication groups who had received a booklet. Use of CRP tests does not appear to engage patients or influence illness perceptions and its effect is more centred on changing clinician behaviour. Communication skills and the patient

  11. Tle Triangulation Campaign by Japanese High School Students as a Space Educational Project of the Ssh Consortium Kochi

    Science.gov (United States)

    Yamamoto, Masa-Yuki; Okamoto, Sumito; Miyoshi, Terunori; Takamura, Yuzaburo; Aoshima, Akira; Hinokuchi, Jin

    As one of the space educational projects in Japan, a triangulation observation project of TLE (Transient Luminous Events: sprites, elves, blue-jets, etc.) has been carried out since 2006 in collaboration between 29 Super Science High-schools (SSH) and Kochi University of Technology (KUT). Following the previous success of sprite observations by "Astro High-school" since 2004, the SSH consortium Kochi was established as a national space educational project supported by the Japan Science and Technology Agency (JST). A high-sensitivity CCD camera (Watec, Neptune-100) with a 6 mm F/1.4 C-mount lens (Fujinon) and motion-detective software (UFO-Capture, SonotaCo) were given to each participating team in order to monitor the northern night sky of Japan with almost full coverage. During each school year (from April to March in Japan) since 2006, thousands of TLE images were taken by many student teams, with considerably large numbers of successful triangulations, i.e., (School year, Number of TLE observations, Number of triangulations) are (2006, 43, 3), (2007, 441, 95), (2008, 734, 115), and (2009, 337, 78). Note that the school year in Japan begins on April 1 and ends on March 31. The observation campaign began in December 2006; numbers are as of Feb. 28, 2010. Recently, some high schools started wide-field observations using multiple cameras, and others started VLF observations using handmade loop antennae and amplifiers. Information exchange among the SSH consortium Kochi is frequently communicated with scientific discussion via KUT's mailing lists. Also, interactions with amateur observers in Japan are made through an internet forum of "SonotaCo Network Japan" (http://sonotaco.jp). Not only as an educational project but also as a scientific one, the project is a success. In February 2008, simultaneous observations of Elves were obtained; in November 2009 a giant "graft-shaped" Sprite driven by Jets was clearly imaged with VLF signals. Most recently, observations of Elves

  12. Algorithmic alternatives

    International Nuclear Information System (INIS)

    Creutz, M.

    1987-11-01

    A large variety of Monte Carlo algorithms are being used for lattice gauge simulations. For purely bosonic theories, present approaches are generally adequate; nevertheless, overrelaxation techniques promise savings by a factor of about three in computer time. For fermionic fields the situation is more difficult and less clear. Algorithms which involve an extrapolation to a vanishing step size are all quite closely related. Methods which do not require such an approximation tend to require computer time which grows as the square of the volume of the system. Recent developments combining global accept/reject stages with Langevin or microcanonical updatings promise to reduce this growth to V^{4/3}

  13. Combinatorial algorithms

    CERN Document Server

    Hu, T C

    2002-01-01

    Newly enlarged, updated second edition of a valuable, widely used text presents algorithms for shortest paths, maximum flows, dynamic programming and backtracking. Also discussed are binary trees, heuristic and near optimums, matrix multiplication, and NP-complete problems. 153 black-and-white illus. 23 tables. New to this edition: Chapter 9

  14. Outcomes and impact of HIV prevention, ART and TB programs in Swaziland--early evidence from public health triangulation.

    Science.gov (United States)

    van Schalkwyk, Cari; Mndzebele, Sibongile; Hlophe, Thabo; Garcia Calleja, Jesus Maria; Korenromp, Eline L; Stoneburner, Rand; Pervilhac, Cyril

    2013-01-01

    Swaziland's severe HIV epidemic inspired an early national response since the late 1980s, and regular reporting of program outcomes since the onset of a national antiretroviral treatment (ART) program in 2004. We assessed effectiveness outcomes and mortality trends in relation to ART, HIV testing and counseling (HTC), tuberculosis (TB) and prevention of mother to child transmission (PMTCT). Data triangulated include intervention coverage and outcomes according to program registries (2001-2010), hospital admissions and deaths disaggregated by age and sex (2001-2010) and population mortality estimates from the 1997 and 2007 censuses and the 2007 demographic and health survey. By 2010, ART reached 70% of the estimated number of people living with HIV/AIDS with CD4 […]. Attributing impact to specific interventions (versus natural epidemic dynamics) will require additional data from future household surveys, and improved routine (program, surveillance, and hospital) data at district level.

  15. An evaluation of orthopaedic nurses’ participation in an educational intervention promoting research utilization – A triangulation convergence model

    DEFF Research Database (Denmark)

    Berthelsen, Connie Bøttcher; Hølge-Hazelton, Bibi

    2016-01-01

    Aims and objectives: To describe the orthopaedic nurses' experiences regarding the relevance of an educational intervention and their personal and contextual barriers to participation in the intervention. Background: One of the largest barriers against nurses' research usage in clinical practice is the lack of participation. A previous survey identified 32 orthopaedic nurses as interested in participating in nursing research. An educational intervention was conducted to increase the orthopaedic nurses' research knowledge and competencies. However, only an average of six nurses participated. Design: A triangulation convergence model was applied through a mixed methods design to combine quantitative results and qualitative findings for evaluation. Methods: Data were collected from 2013–2014 from 32 orthopaedic nurses in a Danish regional hospital through a newly developed 21-item questionnaire and two focus...

  16. Proposals for the Operationalisation of the Discourse Theory of Laclau and Mouffe Using a Triangulation of Lexicometrical and Interpretative Methods

    Directory of Open Access Journals (Sweden)

    Georg Glasze

    2007-05-01

    Full Text Available The discourse theory of Ernesto LACLAU and Chantal MOUFFE brings together three elements: the FOUCAULTian notion of discourse, the (post-)MARXist notion of hegemony, and the poststructuralist writings of Jacques DERRIDA and Roland BARTHES. Discourses are regarded as temporary fixations of differential relations. Meaning, i.e. any social "objectivity", is conceptualised as an effect of such a fixation. The discussion on an appropriate operationalisation of such a discourse theory is just beginning. In this paper, it is argued that a triangulation of two linguistic methods is appropriate to reveal temporary fixations: by means of corpus-driven lexicometric procedures as well as by the analysis of narrative patterns, the regularities of the linkage of elements can be analysed (for example, in diachronic comparisons). The example of a geographic research project shows how, in so doing, the historically contingent constitution of an international community and "world region" can be analysed. URN: urn:nbn:de:0114-fqs0702143

  17. Health, utilisation of health services, 'core' information, and reasons for non-participation: a triangulation study amongst non-respondents.

    Science.gov (United States)

    Näslindh-Ylispangar, Anita; Sihvonen, Marja; Kekki, Pertti

    2008-11-01

    To explore health, use of health services, 'core' information and reasons for non-participation amongst males. Gender may provide an explanation for non-participation in the healthcare system. A growing body of research suggests that males are less likely than females to seek help from health professionals for their problems. The current research had its beginnings with the low response rate in a prior voluntary survey and health examination for Finnish males born in 1961. Data triangulation among 28 non-respondent middle-aged males in Helsinki was used. The methods involved structured and in-depth interviews and health measurements to explore the views of these males concerning their health-related behaviours and use of health services. Non-respondent males seldom used healthcare services. Despite clinical risk factors (e.g. obesity and blood pressure) and various symptoms, males perceived their health status as good. Work was widely experienced as excessively demanding, causing insomnia and other stress symptoms. Males expressed sensitive messages when a session was ending and when the participant was close to the door and leaving the room. This 'core' information included major causes of concern, anxiety, fears and loneliness. This triangulation study showed that by using an in-depth interview as one research strategy, more sensitive 'feminist' expressions of health and ill-health were obtained from the men. The results emphasise a male's self-perception of his masculinity that may have relevance to the health experience of the male population. Nurses and physicians need to pay special attention to the requirements of gender-specific healthcare to be most effective in the delivery of healthcare to males.

  18. Triangulating case-finding tools for patient safety surveillance: a cross-sectional case study of puncture/laceration.

    Science.gov (United States)

    Taylor, Jennifer A; Gerwin, Daniel; Morlock, Laura; Miller, Marlene R

    2011-12-01

    To evaluate the need for triangulating case-finding tools in patient safety surveillance. This study applied four case-finding tools to error-associated patient safety events to identify and characterise the spectrum of events captured by these tools, using puncture or laceration as an example for in-depth analysis. Retrospective hospital discharge data were collected for calendar year 2005 (n=48,418) from a large, urban medical centre in the USA. The study design was cross-sectional and used data linkage to identify the cases captured by each of four case-finding tools. Three case-finding tools (International Classification of Diseases external (E) and nature (N) of injury codes, Patient Safety Indicators (PSI)) were applied to the administrative discharge data to identify potential patient safety events. The fourth tool was Patient Safety Net, a web-based voluntary patient safety event reporting system. The degree of mutual exclusion among detection methods was substantial. For example, when linking puncture or laceration on unique identifiers, out of 447 potential events, 118 were identical between PSI and E-codes, 152 were identical between N-codes and E-codes and 188 were identical between PSI and N-codes. Only 100 events that were identified by PSI, E-codes and N-codes were identical. Triangulation of multiple tools through data linkage captures potential patient safety events most comprehensively. Existing detection tools target patient safety domains differently, and consequently capture different occurrences, necessitating the integration of data from a combination of tools to fully estimate the total burden.

  19. Autodriver algorithm

    Directory of Open Access Journals (Sweden)

    Anna Bourmistrova

    2011-02-01

    Full Text Available The autodriver algorithm is an intelligent method to eliminate the need of steering by a driver on a well-defined road. The proposed method performs best on a four-wheel steering (4WS) vehicle, though it is also applicable to two-wheel-steering (TWS) vehicles. The algorithm is based on coinciding the actual vehicle center of rotation and road center of curvature, by adjusting the kinematic center of rotation. The road center of curvature is assumed prior information for a given road, while the dynamic center of rotation is the output of dynamic equations of motion of the vehicle using steering angle and velocity measurements as inputs. We use the kinematic condition of steering to set the steering angles in such a way that the kinematic center of rotation of the vehicle sits at a desired point. At low speeds the ideal and actual paths of the vehicle are very close. With increase of forward speed the road and tire characteristics, along with the motion dynamics of the vehicle, cause the vehicle to turn about time-varying points. By adjusting the steering angles, our algorithm controls the dynamic turning center of the vehicle so that it coincides with the road curvature center, hence keeping the vehicle on a given road autonomously. The position and orientation errors are used as feedback signals in a closed loop control to adjust the steering angles. The application of the presented autodriver algorithm demonstrates reliable performance under different driving conditions.
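
    A hedged kinematic sketch of the central idea: each wheel is steered so that its velocity is perpendicular to the line joining it to the desired centre of rotation, which places the vehicle's kinematic centre of rotation at the road's centre of curvature. Wheel positions and the road centre below are illustrative, and the dynamic feedback corrections described in the abstract are not modelled.

```python
# Illustrative kinematic steering: steer each wheel normal to its line to the ICR.
import math

# Wheel positions in the vehicle frame (x forward, y to the left), in metres.
wheels = {"front": (1.2, 0.0), "rear": (-1.3, 0.0)}   # assumed 4WS bicycle-style layout

def steering_angles(icr_x: float, icr_y: float) -> dict:
    """Steering angle of each wheel so its velocity is perpendicular to the
    line joining the wheel to the desired instantaneous centre of rotation."""
    return {name: math.atan2(wx - icr_x, icr_y - wy) for name, (wx, wy) in wheels.items()}

# Road centre of curvature assumed 20 m to the left of the vehicle origin (left turn).
angles = steering_angles(icr_x=0.0, icr_y=20.0)
for name, delta in angles.items():
    print(f"{name} steering angle ~ {math.degrees(delta):+.2f} deg")
```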

  20. Numerical convergence of discrete exterior calculus on arbitrary surface meshes

    KAUST Repository

    Mohamed, Mamdouh S.

    2018-02-13

    Discrete exterior calculus (DEC) is a structure-preserving numerical framework for the solution of partial differential equations, particularly suitable for simplicial meshes. A longstanding and widespread assumption has been that DEC requires special (Delaunay) triangulations, which complicated the mesh generation process, especially for curved surfaces. This paper presents numerical evidence demonstrating that this restriction is unnecessary. Convergence experiments are carried out for various physical problems using both Delaunay and non-Delaunay triangulations. A signed diagonal definition for the key DEC operator (the Hodge star) is adopted. The errors converge as expected for all considered meshes and experiments. This relieves the DEC paradigm from an unnecessary triangulation limitation.

  1. Algorithmic Self

    DEFF Research Database (Denmark)

    Markham, Annette

    This paper takes an actor network theory approach to explore some of the ways that algorithms co-construct identity and relational meaning in contemporary use of social media. Based on intensive interviews with participants as well as activity logging and data tracking, the author presents a richly layered set of accounts to help build our understanding of how individuals relate to their devices, search systems, and social network sites. This work extends critical analyses of the power of algorithms in implicating the social self by offering narrative accounts from multiple perspectives. It also contributes an innovative method for blending actor network theory with symbolic interaction to grapple with the complexity of everyday sensemaking practices within networked global information flows.

  2. Quadtree of TIN: a new algorithm of dynamic LOD

    Science.gov (United States)

    Zhang, Junfeng; Fei, Lifan; Chen, Zhen

    2009-10-01

    Currently, real-time visualization of large-scale digital elevation models mainly employs either the regular GRID structure based on quadtrees or triangle simplification methods based on triangulated irregular networks (TIN). Compared with GRID, TIN is a more refined means of expressing the terrain surface, but its data structure is complex and view-dependent level-of-detail (LOD) representation is difficult to realize quickly. GRID is a simple way to realize terrain LOD, but it produces a higher triangle count. A new algorithm that takes full advantage of the merits of both methods is presented in this paper. The algorithm combines TIN with a quadtree structure to realize view-dependent LOD control over irregular sampling point sets, and it controls the level of detail through viewpoint distance and the geometric error of the terrain. Experiments indicate that this approach can generate an efficient quadtree triangulation hierarchy over any irregular sampling point set and achieve dynamic multi-resolution rendering of large-scale terrain in real time.
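
    A hedged sketch of a view-dependent refinement test of the kind described: a quadtree node over the irregular point set is split while its geometric error, weighted by the distance to the viewpoint, exceeds a tolerance, and leaf nodes contribute their TIN patches to the rendered mesh. Error values, node sizes and the threshold are illustrative only.

```python
# Illustrative view-dependent LOD selection over a quadtree of TIN patches.
from dataclasses import dataclass, field
from typing import List
import math

@dataclass
class QuadNode:
    cx: float
    cy: float
    size: float
    geometric_error: float                      # e.g. max deviation of points from this node's TIN patch
    children: List["QuadNode"] = field(default_factory=list)

def needs_refinement(node: QuadNode, viewer, tolerance: float) -> bool:
    dist = math.dist((node.cx, node.cy, 0.0), viewer)
    return node.geometric_error / max(dist, 1e-6) > tolerance

def select_lod(node: QuadNode, viewer, tolerance: float, out: list):
    if node.children and needs_refinement(node, viewer, tolerance):
        for child in node.children:
            select_lod(child, viewer, tolerance, out)
    else:
        out.append(node)                         # render this node's triangulation

root = QuadNode(0, 0, 1024, geometric_error=8.0,
                children=[QuadNode(dx, dy, 512, geometric_error=2.0)
                          for dx in (-256, 256) for dy in (-256, 256)])
selected = []
select_lod(root, viewer=(0.0, 0.0, 300.0), tolerance=0.01, out=selected)
print(f"{len(selected)} node(s) selected for rendering")
```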

  3. Generalized neurofuzzy network modeling algorithms using Bézier-Bernstein polynomial functions and additive decomposition.

    Science.gov (United States)

    Hong, X; Harris, C J

    2000-01-01

    This paper introduces a new neurofuzzy model construction algorithm for nonlinear dynamic systems based upon basis functions that are Bézier-Bernstein polynomial functions. The approach is generalized in that it copes with n-dimensional inputs by utilising an additive decomposition construction to overcome the curse of dimensionality associated with high n. This new construction algorithm also introduces univariate Bézier-Bernstein polynomial functions for the completeness of the generalized procedure. Like B-spline expansion based neurofuzzy systems, Bézier-Bernstein polynomial function based neurofuzzy networks hold desirable properties such as nonnegativity of the basis functions, unity of support, and interpretability of the basis functions as fuzzy membership functions, moreover with the additional advantages of structural parsimony and Delaunay input space partition, essentially overcoming the curse of dimensionality associated with conventional fuzzy and RBF networks. This new modeling network is based on the additive decomposition approach together with two separate basis function formation approaches for both univariate and bivariate Bézier-Bernstein polynomial functions used in model construction. The overall network weights are then learnt using conventional least squares methods. Numerical examples are included to demonstrate the effectiveness of this new data-based modeling approach.
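
    A minimal sketch of the univariate Bernstein (Bézier-Bernstein) basis of degree d on [0, 1], illustrating the properties the abstract relies on for fuzzy interpretability: non-negativity and summation to one at every input. The degree and sample points are illustrative.

```python
# Illustrative Bernstein basis: B_{k,d}(x) = C(d, k) * x^k * (1 - x)^(d - k).
from math import comb

def bernstein_basis(d: int, x: float) -> list:
    return [comb(d, k) * x**k * (1.0 - x)**(d - k) for k in range(d + 1)]

d = 3
for x in (0.0, 0.25, 0.5, 0.75, 1.0):
    basis = bernstein_basis(d, x)
    assert all(b >= 0.0 for b in basis)               # non-negativity
    assert abs(sum(basis) - 1.0) < 1e-12              # partition of unity
    print(f"x={x:.2f}: " + ", ".join(f"{b:.3f}" for b in basis))
```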

  4. Parallel algorithms

    CERN Document Server

    Casanova, Henri; Robert, Yves

    2008-01-01

    ""…The authors of the present book, who have extensive credentials in both research and instruction in the area of parallelism, present a sound, principled treatment of parallel algorithms. … This book is very well written and extremely well designed from an instructional point of view. … The authors have created an instructive and fascinating text. The book will serve researchers as well as instructors who need a solid, readable text for a course on parallelism in computing. Indeed, for anyone who wants an understandable text from which to acquire a current, rigorous, and broad vi

  5. Algorithm 865

    DEFF Research Database (Denmark)

    Gustavson, Fred G.; Reid, John K.; Wasniewski, Jerzy

    2007-01-01

    We present subroutines for the Cholesky factorization of a positive-definite symmetric matrix and for solving corresponding sets of linear equations. They exploit cache memory by using the block hybrid format proposed by the authors in a companion article. The matrix is packed into n(n + 1)/2 real variables, and the speed is usually better than that of the LAPACK algorithm that uses full storage (n² variables). Included are subroutines for rearranging a matrix whose upper or lower-triangular part is packed by columns to this format and for the inverse rearrangement. Also included is a kernel
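
    A hedged sketch of conventional column-packed lower-triangular storage, the n(n + 1)/2-element layout that the rearrangement subroutines convert from and to. This illustrates standard packed indexing only, not the authors' block hybrid format.

```python
# Illustrative column-packed lower-triangular storage and its index mapping.
import numpy as np

def pack_lower(a: np.ndarray) -> np.ndarray:
    """Pack the lower triangle of a symmetric matrix column by column."""
    n = a.shape[0]
    return np.concatenate([a[j:, j] for j in range(n)])

def packed_index(i: int, j: int, n: int) -> int:
    """Position of element (i, j), with i >= j, in the column-packed array."""
    return j * n - j * (j + 1) // 2 + i

n = 4
a = np.arange(1.0, n * n + 1.0).reshape(n, n)
a = (a + a.T) / 2.0                                   # make it symmetric
ap = pack_lower(a)
assert ap.size == n * (n + 1) // 2
assert ap[packed_index(3, 1, n)] == a[3, 1]
print("packed:", ap)
```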

  6. Optical profilometer using laser based conical triangulation for inspection of inner geometry of corroded pipes in cylindrical coordinates

    Science.gov (United States)

    Buschinelli, Pedro D. V.; Melo, João. Ricardo C.; Albertazzi, Armando; Santos, João. M. C.; Camerini, Claudio S.

    2013-04-01

    An axis-symmetrical optical laser triangulation system was developed by the authors to measure the inner geometry of long pipes used in the oil industry. It has a special optical configuration able to acquire shape information on the inner geometry of a pipe section from a single image frame. A collimated laser beam is pointed at the tip of a 45° conical mirror. The laser light is reflected in such a way that a radial light sheet is formed, which intercepts the inner surface and forms a bright laser line on a section of the inspected pipe. A camera acquires the image of the laser line through a wide-angle lens. An odometer-based triggering system is used to trigger the camera to acquire a set of equally spaced images at high speed while the device is moved along the pipe's axis. Image processing is done in real time (between image acquisitions) thanks to the use of parallel computing technology. The measured geometry is analyzed to identify corrosion damage. The measured geometry and results are graphically presented using virtual reality techniques and devices such as 3D glasses and head-mounted displays. The paper describes the measurement principles, calibration strategies and laboratory evaluation of the developed device, as well as a practical example of a corroded pipe used in an industrial gas production plant.

  7. A triangulation approach to the identification of acute sector nurses' training needs for formal nurse practitioner status.

    Science.gov (United States)

    Hicks, C; Hennessy, D

    1998-01-01

    The current confusion surrounding the definition and role function of the nurse practitioner (NP) has created a situation in which advanced clinical practice is delivered in a variety of ways and at many levels. Not surprisingly, this has led to difficulties in regulating educational provision for NPs. This study reports a survey of the perceptions of the role definitions and training needs of all nurses working at advanced clinical levels within an acute sector Trust. Although this concept is not a novel one in advanced nursing practice, the procedure adopted differed from previous studies in two fundamental ways: firstly, a unique training needs assessment instrument was used which, because of its validity and opacity, was capable of yielding a highly reliable database comprising a prioritized profile of real training needs, as opposed to the standard wish-list typically elicited. Secondly, it did not rely simply on the self-reported needs of the nurse sample, but also included the perceptions of the sample's immediate medical and managerial colleagues. In this way, a triangulation paradigm was adopted. The results indicated that, overall, there was high agreement between the nurses and their managers regarding both the definition of the NP role and the essential training requirements, with somewhat different opinions being offered by the medical staff. When the raw scores were standardized to correct for response bias, the data provided an operational definition of the role of the NP and a prioritized profile of training needs for nurses who wished to train to this level.

  8. Stakeholder management in the local government decision-making area: evidences from a triangulation study with the English local government

    Directory of Open Access Journals (Sweden)

    Ricardo Corrêa Gomes

    2006-01-01

    Full Text Available The stakeholder theory has been on the management agenda for about thirty years, and reservations about its acceptance as a comprehensive theory still remain. It was introduced as a managerial issue by the Labour Party in 1997, aiming to make public management more inclusive. This article aims to contribute to the stakeholder theory by adding descriptive issues to its theoretical basis. The findings are derived from an inductive investigation carried out with English Local Authorities, which will most likely be reproduced in other contexts. Data collection and analysis are based on a data triangulation method that involves case studies, validation interviews and analysis of documents. The investigation proposes a model for representing the nature of the relationships between stakeholders and the decision-making process of such organizations. The decision-making of local government organizations is in fact a stakeholder-based process in which stakeholders are empowered to exert influence due to their power over, and interest in, the organization's operations and outcomes.

  9. A range-based predictive localization algorithm for WSID networks

    Science.gov (United States)

    Liu, Yuan; Chen, Junjie; Li, Gang

    2017-11-01

    Most studies on localization algorithms are conducted on sensor networks with densely distributed nodes. However, non-localizable problems are prone to occur in networks with sparsely distributed sensor nodes. To solve this problem, a range-based predictive localization algorithm (RPLA) is proposed in this paper for wireless sensor networks syncretized with RFID (WSID networks). A Gaussian mixture model is established to predict the trajectory of a mobile target. Then, the received signal strength indication is used to reduce the residence area of the target location, based on the approximate point-in-triangulation test (APIT) algorithm. In addition, collaborative localization schemes are introduced to locate the target in non-localizable situations. Simulation results verify that the RPLA achieves accurate localization for networks with sparsely distributed sensor nodes; its localization accuracy is 48.7% higher than that of the APIT algorithm, 16.8% higher than that of the single-Gaussian-model-based algorithm and 10.5% higher than that of the Kalman-filtering-based algorithm.
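
    The geometric core of the approximate point-in-triangulation test is deciding whether a point lies inside a triangle formed by three anchor nodes. A minimal barycentric version of that sub-step is sketched below in Python; it is only an illustration, since the real APIT test infers containment from neighbour signal-strength comparisons rather than from known coordinates, and the function name is ours.

        import numpy as np

        def point_in_triangle(p, a, b, c):
            # Barycentric coordinates of p with respect to triangle (a, b, c).
            p, a, b, c = (np.asarray(v, float) for v in (p, a, b, c))
            v0, v1, v2 = b - a, c - a, p - a
            d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
            d20, d21 = v2 @ v0, v2 @ v1
            denom = d00 * d11 - d01 * d01
            v = (d11 * d20 - d01 * d21) / denom
            w = (d00 * d21 - d01 * d20) / denom
            # Inside (or on the boundary) when both coordinates and their sum are admissible.
            return v >= 0.0 and w >= 0.0 and v + w <= 1.0

        print(point_in_triangle((0.2, 0.2), (0.0, 0.0), (1.0, 0.0), (0.0, 1.0)))   # True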

  10. Visualization of 2-D and 3-D fields from its value in a finite number of points

    International Nuclear Information System (INIS)

    Dari, E.A.; Venere, M.J.

    1990-01-01

    This work describes a method for the visualization of two- and three-dimensional fields, given their values at a finite number of points. These data can originate from experimental measurements, numerical results, or any other source. For the field interpolation, the space is divided into simplices (triangles or tetrahedra), using the Watson algorithm to obtain the Delaunay triangulation. Inside each simplex, linear interpolation is assumed. The visualization is accomplished by means of finite element post-processors, capable of handling unstructured meshes, which were also developed by the authors. (Author) [es
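
    A compact way to reproduce the interpolation scheme described above is sketched below in Python: triangulate the sample points and interpolate linearly inside each simplex. Note that SciPy's Delaunay builds the triangulation with Qhull rather than the Watson algorithm used in the paper, and the synthetic field is purely illustrative.

        import numpy as np
        from scipy.spatial import Delaunay
        from scipy.interpolate import LinearNDInterpolator

        # Scattered 2-D sample points and field values (stand-ins for measured data).
        rng = np.random.default_rng(0)
        pts = rng.random((200, 2))
        vals = np.sin(4.0 * pts[:, 0]) * np.cos(4.0 * pts[:, 1])

        tri = Delaunay(pts)                        # simplicial decomposition of the point set
        field = LinearNDInterpolator(tri, vals)    # linear interpolation inside each triangle

        print(field(0.5, 0.5))                     # interpolated field value at a query point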

  11. Options for a health system researcher to choose in Meta Review (MR approaches-Meta Narrative (MN and Meta Triangulation (MT

    Directory of Open Access Journals (Sweden)

    Sanjeev Davey

    2015-01-01

    Full Text Available Two new approaches to systematic reviewing are ready for adoption in health system research: the meta-narrative review (MNR), which a health researcher can use for topics that are conceptualized and studied differently by different types of researchers for policy decisions, and the meta-triangulation review (MTR), carried out to build theory for studying multifaceted phenomena characterized by expansive and contested research domains. A critical look at which meta-review approach is better, meta-narrative or meta-triangulation, can therefore give new insights to a health system researcher. A systematic review of the two key phrases "meta-narrative review" and "meta-triangulation review" in health system research was carried out in key search engines such as PubMed, the Cochrane Library, BioMed Central and Google Scholar, covering the last 20 years up to 21st March 2014. Studies from both the developed and the developing world were included, in any form and scope, to draw the final conclusions; unpublished thesis data were not included. The meta-narrative review is a type of systematic review that can be used for a wide range of topics and questions involving judgments and inferences in public health. The meta-triangulation review, on the other hand, is a three-phased qualitative meta-analysis process that can be used to explore variations in the assumptions of alternative paradigms, gain insights into multiple paradigms at one point in time, and address emerging themes and the resulting theories.

  12. Assessment of behavioral changes associated with oral meloxicam administration at time of dehorning in calves using a remote triangulation device and accelerometers

    Directory of Open Access Journals (Sweden)

    Theurer Miles E

    2012-04-01

    Full Text Available Background Dehorning is common in the cattle industry, and there is a need for research evaluating pain mitigation techniques. The objective of this study was to determine the effects of oral meloxicam, a non-steroidal anti-inflammatory, on cattle behavior post-dehorning by monitoring the percent of time spent standing, walking, and lying in specific locations within the pen using accelerometers and a remote triangulation device. Twelve calves approximately ten weeks of age were randomized into 2 treatment groups (meloxicam or control) in a complete block design by body weight. Six calves were orally administered 0.5 mg/kg meloxicam at the time of dehorning and six calves served as negative controls. All calves were dehorned using thermocautery and behavior of each calf was continuously monitored for 7 days after dehorning using accelerometers and a remote triangulation device. Accelerometers monitored lying behavior and the remote triangulation device was used to monitor each calf’s movement within the pen. Results Analysis of behavioral data revealed significant interactions between treatment (meloxicam vs. control) and the number of days post dehorning. Calves that received meloxicam spent more time at the grain bunk on trial days 2 and 6 post-dehorning; spent more time lying down on days 1, 2, 3, and 4; and less time at the hay feeder on days 0 and 1 compared to the control group. Meloxicam calves tended to walk more at the beginning and end of the trial compared to the control group. By day 5, the meloxicam and control groups exhibited similar behaviors. Conclusions The noted behavioral changes provide evidence of differences associated with meloxicam administration. More studies need to be performed to evaluate the relationship of behavior monitoring and post-operative pain. To our knowledge this is the first published report demonstrating behavioral changes following dehorning using a remote triangulation device in conjunction

  13. Algorithmic chemistry

    Energy Technology Data Exchange (ETDEWEB)

    Fontana, W.

    1990-12-13

    In this paper complex adaptive systems are defined by a self-referential loop in which objects encode functions that act back on these objects. A model for this loop is presented. It uses a simple recursive formal language, derived from the lambda-calculus, to provide a semantics that maps character strings into functions that manipulate symbols on strings. The interaction between two functions, or algorithms, is defined naturally within the language through function composition, and results in the production of a new function. An iterated map acting on sets of functions and a corresponding graph representation are defined. Their properties are useful to discuss the behavior of a fixed size ensemble of randomly interacting functions. This "function gas", or "Turing gas", is studied under various conditions, and evolves cooperative interaction patterns of considerable intricacy. These patterns adapt under the influence of perturbations consisting of the addition of new random functions to the system. Different organizations emerge depending on the availability of self-replicators.

  14. Outcomes and Impact of HIV Prevention, ART and TB Programs in Swaziland – Early Evidence from Public Health Triangulation

    Science.gov (United States)

    van Schalkwyk, Cari; Mndzebele, Sibongile; Hlophe, Thabo; Garcia Calleja, Jesus Maria; Korenromp, Eline L.; Stoneburner, Rand; Pervilhac, Cyril

    2013-01-01

    Introduction Swaziland’s severe HIV epidemic inspired an early national response since the late 1980s, and regular reporting of program outcomes since the onset of a national antiretroviral treatment (ART) program in 2004. We assessed effectiveness outcomes and mortality trends in relation to ART, HIV testing and counseling (HTC), tuberculosis (TB) and prevention of mother to child transmission (PMTCT). Methods Data triangulated include intervention coverage and outcomes according to program registries (2001-2010), hospital admissions and deaths disaggregated by age and sex (2001-2010) and population mortality estimates from the 1997 and 2007 censuses and the 2007 demographic and health survey. Results By 2010, ART reached 70% of the estimated number of people living with HIV/AIDS with CD4<350/mm3, with progressively improving patient retention and survival. As of 2010, 88% of health facilities providing antenatal care offered comprehensive PMTCT services. The HTC program recorded a halving in the proportion of adults tested who were HIV-infected; similarly HIV infection rates among HIV-exposed babies halved from 2007 to 2010. Case fatality rates among hospital patients diagnosed with HIV/AIDS started to decrease from 2005–6 in adults and especially in children, contrasting with stable case fatality for other causes including TB. All-cause child in-patient case fatality rates started to decrease from 2005–6. TB case notifications as well as rates of HIV/TB co-infection among notified TB patients continued a steady increase through 2010, while coverage of HIV testing and CPT for co-infected patients increased to above 80%. Conclusion Against a background of high, but stable HIV prevalence and decreasing HIV incidence, we documented early evidence of a mortality decline associated with the expanded national HIV response since 2004. Attribution of impact to specific interventions (versus natural epidemic dynamics) will require additional data from future

  15. Acceptability of Parental Financial Incentives and Quasi-Mandatory Interventions for Preschool Vaccinations: Triangulation of Findings from Three Linked Studies.

    Directory of Open Access Journals (Sweden)

    Jean Adams

    Full Text Available Childhood vaccinations are a core component of public health programmes globally. Recent measles outbreaks in the UK and USA have prompted debates about new ways to increase uptake of childhood vaccinations. Parental financial incentives and quasi-mandatory interventions (e.g. restricting entry to educational settings to fully vaccinated children) have been successfully used to increase uptake of childhood vaccinations in developing countries, but there is limited evidence of effectiveness in developed countries. Even if confirmed to be effective, widespread implementation of these interventions is dependent on acceptability to parents, professionals and other stakeholders. We conducted a systematic review (n = 11 studies included), a qualitative study with parents (n = 91) and relevant professionals (n = 24), and an on-line survey with embedded discrete choice experiment with parents (n = 521) exploring acceptability of parental financial incentives and quasi-mandatory interventions for preschool vaccinations. Here we use Triangulation Protocol to synthesise findings from the three studies. There was a consistent recognition that incentives and quasi-mandatory interventions could be effective, particularly in more disadvantaged groups. Universal incentives were consistently preferred to targeted ones, but relative preferences for quasi-mandatory interventions and universal incentives varied between studies. The qualitative work revealed a consistent belief that financial incentives were not considered an appropriate motivation for vaccinating children. The costs of financial incentive interventions appeared particularly salient and there were consistent concerns in the qualitative work that incentives did not represent the best use of resources for promoting preschool vaccinations. Various suggestions for improving delivery of the current UK vaccination programme as an alternative to incentives and quasi-mandates were made. Parental financial

  16. Target-type probability combining algorithms for multisensor tracking

    Science.gov (United States)

    Wigren, Torbjorn

    2001-08-01

    Algorithms for the handling of target type information in an operational multi-sensor tracking system are presented. The paper discusses recursive target type estimation, computation of crosses from passive data (strobe track triangulation), as well as the computation of the quality of the crosses for deghosting purposes. The focus is on Bayesian algorithms that operate in the discrete target type probability space, and on the approximations introduced for computational complexity reduction. The centralized algorithms are able to fuse discrete data from a variety of sensors and information sources, including IFF equipment, ESMs, IRSTs as well as flight envelopes estimated from track data. All algorithms are asynchronous and can be tuned to handle clutter, erroneous associations as well as missed and erroneous detections. A key to obtaining this ability is the inclusion of data forgetting by a procedure for propagation of target type probability states between measurement time instances. Other important properties of the algorithms are their abilities to handle ambiguous data and scenarios. The above aspects are illustrated in a simulation study. The simulation setup includes 46 air targets of 6 different types that are tracked by 5 airborne sensor platforms using ESMs and IRSTs as data sources.
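
    As an illustration of the "cross" computation from passive data, the Python sketch below intersects two bearing lines from two sensor positions in the plane by solving a small least-squares system. This is a generic strobe-triangulation formula, not the paper's exact implementation, and bearings are measured here from the x-axis purely by convention.

        import numpy as np

        def bearing_cross(p1, theta1, p2, theta2):
            # Unit direction of each bearing line.
            d1 = np.array([np.cos(theta1), np.sin(theta1)])
            d2 = np.array([np.cos(theta2), np.sin(theta2)])
            # Solve p1 + t1*d1 = p2 + t2*d2 for the two range parameters.
            A = np.column_stack((d1, -d2))
            b = np.asarray(p2, float) - np.asarray(p1, float)
            t, *_ = np.linalg.lstsq(A, b, rcond=None)
            return np.asarray(p1, float) + t[0] * d1

        # Two sensors observing the same target located at (5, 5).
        print(bearing_cross((0.0, 0.0), np.arctan2(5, 5), (10.0, 0.0), np.arctan2(5, -5)))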

  17. AUTOMATIC MESH GENERATION OF 3-D GEOMETRIC MODELS

    Institute of Scientific and Technical Information of China (English)

    刘剑飞

    2003-01-01

    In this paper the presentation of the ball-packing method is reviewed, and a scheme to generate meshes for complex 3-D geometric models is given, which consists of 4 steps: (1) create nodes in the 3-D model by the ball-packing method, (2) connect the nodes to generate the mesh by 3-D Delaunay triangulation, (3) retrieve the boundary of the model after the Delaunay triangulation, (4) improve the mesh.

  18. Pseudo-deterministic Algorithms

    OpenAIRE

    Goldwasser , Shafi

    2012-01-01

    In this talk we describe a new type of probabilistic algorithm which we call Bellagio Algorithms: a randomized algorithm which is guaranteed to run in expected polynomial time, and to produce a correct and unique solution with high probability. These algorithms are pseudo-deterministic: they can not be distinguished from deterministic algorithms in polynomial time by a probabilistic polynomial time observer with black box access to the algorithm. We show a necessary an...

  19. Potentiation of E-4031-induced torsade de pointes by HMR1556 or ATX-II is not predicted by action potential short-term variability or triangulation.

    Science.gov (United States)

    Michael, G; Dempster, J; Kane, K A; Coker, S J

    2007-12-01

    Torsade de pointes (TdP) can be induced by a reduction in cardiac repolarizing capacity. The aim of this study was to assess whether IKs blockade or enhancement of INa could potentiate TdP induced by IKr blockade and to investigate whether short-term variability (STV) or triangulation of action potentials preceded TdP. Experiments were performed in open-chest, pentobarbital-anaesthetized, alpha 1-adrenoceptor-stimulated, male New Zealand White rabbits, which received three consecutive i.v. infusions of either the IKr blocker E-4031 (1, 3 and 10 nmol kg(-1) min(-1)), the IKs blocker HMR1556 (25, 75 and 250 nmol kg(-1) min(-1)) or E-4031 and HMR1556 combined. In a second study rabbits received either the same doses of E-4031, the INa enhancer, ATX-II (0.4, 1.2 and 4.0 nmol kg(-1)) or both of these drugs. ECGs and epicardial monophasic action potentials were recorded. HMR1556 alone did not cause TdP but increased E-4031-induced TdP from 25 to 80%. ATX-II alone caused TdP in 38% of rabbits, as did E-4031; 75% of rabbits receiving both drugs had TdP. QT intervals were prolonged by all drugs but the extent of QT prolongation was not related to the occurrence of TdP. No changes in STV were detected and triangulation was only increased after TdP occurred. Giving modulators of ion channels in combination substantially increased TdP but, in this model, neither STV nor triangulation of action potentials could predict TdP.

  20. The Study Related to the Execution of a Triangulation Network in the Dump of Rovinari Pit, in Order to be Restored to the Economic Circuit

    Directory of Open Access Journals (Sweden)

    George Popescu

    2016-11-01

    Full Text Available The lignite extraction within the mining perimeter in Rovinari is carried out through open-pit mining works, using large equipment for the excavation, transport and storage of the mined material. These surfaces are currently being set up in the area of level two of the dump, in the west and north-west part of the Rovinari pit. In order to carry out the set-up works and to follow up the stability of the pit levels, it is necessary to maintain the triangulation network.

  1. Hamiltonian Algorithm Sound Synthesis

    OpenAIRE

    大矢, 健一

    2013-01-01

    Hamiltonian Algorithm (HA) is an algorithm for searching for solutions to optimization problems. This paper introduces a sound synthesis technique using the Hamiltonian Algorithm and shows a simple example. "Hamiltonian Algorithm Sound Synthesis" uses the phase transition effect in HA. Because of this transition effect, totally new waveforms are produced.

  2. Progressive geometric algorithms

    NARCIS (Netherlands)

    Alewijnse, S.P.A.; Bagautdinov, T.M.; de Berg, M.T.; Bouts, Q.W.; ten Brink, Alex P.; Buchin, K.A.; Westenberg, M.A.

    2015-01-01

    Progressive algorithms are algorithms that, on the way to computing a complete solution to the problem at hand, output intermediate solutions that approximate the complete solution increasingly well. We present a framework for analyzing such algorithms, and develop efficient progressive algorithms

  3. Progressive geometric algorithms

    NARCIS (Netherlands)

    Alewijnse, S.P.A.; Bagautdinov, T.M.; Berg, de M.T.; Bouts, Q.W.; Brink, ten A.P.; Buchin, K.; Westenberg, M.A.

    2014-01-01

    Progressive algorithms are algorithms that, on the way to computing a complete solution to the problem at hand, output intermediate solutions that approximate the complete solution increasingly well. We present a framework for analyzing such algorithms, and develop efficient progressive algorithms

  4. The Algorithmic Imaginary

    DEFF Research Database (Denmark)

    Bucher, Taina

    2017-01-01

    the notion of the algorithmic imaginary. It is argued that the algorithmic imaginary – ways of thinking about what algorithms are, what they should be and how they function – is not just productive of different moods and sensations but plays a generative role in moulding the Facebook algorithm itself...... of algorithms affect people's use of these platforms, if at all? To help answer these questions, this article examines people's personal stories about the Facebook algorithm through tweets and interviews with 25 ordinary users. To understand the spaces where people and algorithms meet, this article develops...

  5. The BR eigenvalue algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Geist, G.A. [Oak Ridge National Lab., TN (United States). Computer Science and Mathematics Div.; Howell, G.W. [Florida Inst. of Tech., Melbourne, FL (United States). Dept. of Applied Mathematics; Watkins, D.S. [Washington State Univ., Pullman, WA (United States). Dept. of Pure and Applied Mathematics

    1997-11-01

    The BR algorithm, a new method for calculating the eigenvalues of an upper Hessenberg matrix, is introduced. It is a bulge-chasing algorithm like the QR algorithm, but, unlike the QR algorithm, it is well adapted to computing the eigenvalues of the narrowband, nearly tridiagonal matrices generated by the look-ahead Lanczos process. This paper describes the BR algorithm and gives numerical evidence that it works well in conjunction with the Lanczos process. On the biggest problems run so far, the BR algorithm beats the QR algorithm by a factor of 30--60 in computing time and a factor of over 100 in matrix storage space.

  6. Algorithmically specialized parallel computers

    CERN Document Server

    Snyder, Lawrence; Gannon, Dennis B

    1985-01-01

    Algorithmically Specialized Parallel Computers focuses on the concept and characteristics of an algorithmically specialized computer.This book discusses the algorithmically specialized computers, algorithmic specialization using VLSI, and innovative architectures. The architectures and algorithms for digital signal, speech, and image processing and specialized architectures for numerical computations are also elaborated. Other topics include the model for analyzing generalized inter-processor, pipelined architecture for search tree maintenance, and specialized computer organization for raster

  7. Quantum Computation and Algorithms

    International Nuclear Information System (INIS)

    Biham, O.; Biron, D.; Biham, E.; Grassi, M.; Lidar, D.A.

    1999-01-01

    It is now firmly established that quantum algorithms provide a substantial speedup over classical algorithms for a variety of problems, including the factorization of large numbers and the search for a marked element in an unsorted database. In this talk I will review the principles of quantum algorithms, the basic quantum gates and their operation. The combination of superposition and interference, that makes these algorithms efficient, will be discussed. In particular, Grover's search algorithm will be presented as an example. I will show that the time evolution of the amplitudes in Grover's algorithm can be found exactly using recursion equations, for any initial amplitude distribution
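
    As a concrete illustration of the amplitude recursion mentioned for Grover's algorithm, the Python sketch below iterates the standard two-amplitude update (oracle sign flip followed by inversion about the mean) for M marked items out of N. It is a textbook formulation offered for illustration, starting from the uniform superposition, not a reproduction of the talk's derivation.

        import math

        def grover_amplitudes(N, M, steps):
            # a: amplitude of each marked state, b: amplitude of each unmarked state.
            a = b = 1.0 / math.sqrt(N)
            for _ in range(steps):
                a, b = ((N - 2 * M) / N) * a + (2 * (N - M) / N) * b, \
                       ((N - 2 * M) / N) * b - (2 * M / N) * a
            return a, b

        N, M = 1024, 1
        k = round(math.pi / 4.0 * math.sqrt(N / M))    # near-optimal iteration count
        a, b = grover_amplitudes(N, M, k)
        print(M * a**2)                                # success probability, close to 1
        print(M * a**2 + (N - M) * b**2)               # total probability stays 1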

  8. A computational geometry approach to pore network construction for granular packings

    Science.gov (United States)

    van der Linden, Joost H.; Sufian, Adnan; Narsilio, Guillermo A.; Russell, Adrian R.; Tordesillas, Antoinette

    2018-03-01

    Pore network construction provides the ability to characterize and study the pore space of inhomogeneous and geometrically complex granular media in a range of scientific and engineering applications. Various approaches to the construction have been proposed; however, subtle implementation details are frequently omitted, open access to source code is limited, and few studies compare multiple algorithms in the context of a specific application. This study presents, in detail, a new pore network construction algorithm, and provides a comprehensive comparison with two other, well-established Delaunay triangulation-based pore network construction methods. Source code is provided to encourage further development. The proposed algorithm avoids the expensive non-linear optimization procedure in existing Delaunay approaches, and is robust in the presence of polydispersity. Algorithms are compared in terms of structural, geometrical and advanced connectivity parameters, focusing on the application of fluid flow characteristics. Sensitivity of the various networks to permeability is assessed through network (Stokes) simulations and finite-element (Navier-Stokes) simulations. Results highlight strong dependencies of pore volume, pore connectivity, throat geometry and fluid conductance on the degree of tetrahedra merging and the specific characteristics of the throats targeted by the merging algorithm. The paper concludes with practical recommendations on the applicability of the three investigated algorithms.
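
    A minimal Python sketch of the Delaunay-based starting point that such constructions share is given below: each tetrahedron of the triangulation of the particle centres is treated as a candidate pore body and each shared face between adjacent tetrahedra as a candidate throat. The packing is synthetic, and the sketch omits the tetrahedra merging, inscribed-radius and optimization steps that distinguish the algorithms compared in the paper.

        import numpy as np
        from scipy.spatial import Delaunay

        # Synthetic particle centres standing in for a real packing.
        rng = np.random.default_rng(1)
        centres = rng.random((500, 3))

        tet = Delaunay(centres)            # each tetrahedron is a candidate pore body
        throats = set()
        for i, nbrs in enumerate(tet.neighbors):
            for j in nbrs:
                if j != -1:                # shared face between adjacent tetrahedra -> throat
                    throats.add(tuple(sorted((i, j))))

        print(len(tet.simplices), "candidate pore bodies,", len(throats), "candidate throats")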

  9. Research on the Perforating Algorithm Based on STL Files

    Science.gov (United States)

    Yuchuan, Han; Xianfeng, Zhu; Yunrui, Bai; Zhiwen, Wu

    2018-04-01

    In the process of making medical personalized external fixation braces, the 3D data file should be perforated to increase air permeability and reduce weight. In this paper, a perforating algorithm for 3D STL files is proposed, which can cut holes, hollow out characters and engrave decorative patterns in STL files. The perforating process is composed of three steps. Firstly, make the imaginary space surface intersect with the STL model, and reconstruct triangles at the intersection. Secondly, delete the triangular facets inside the space surface, so that a hole is cut in the STL model. Thirdly, triangulate the inner surface of the hole, and thus complete the perforation. Choosing simple space-surface equations, such as those of a cylinder or a rectangular prism, as perforating equations produces round or rectangular holes; by combining different holes, lettering, decorative patterns and other perforated results can be obtained. Finally, holes were perforated in an external fixation brace and an individual pen container using the algorithm, and the expected results were achieved, demonstrating that the algorithm is feasible.
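
    The first two steps can be approximated crudely by classifying facets against the perforating surface. The Python sketch below drops every triangle whose three vertices lie inside an infinite cylinder, leaving out the rim reconstruction and inner-wall triangulation handled by the paper's remaining steps; triangles are plain vertex arrays here rather than an STL file, and the function name is ours.

        import numpy as np

        def drop_facets_in_cylinder(triangles, axis_point, axis_dir, radius):
            # triangles: array of shape (n, 3, 3) holding the three vertices of each facet.
            tris = np.asarray(triangles, float)
            p0 = np.asarray(axis_point, float)
            d = np.asarray(axis_dir, float)
            d = d / np.linalg.norm(d)
            v = tris - p0                                    # vertex vectors relative to the axis point
            along = v @ d                                    # axial component of each vertex
            radial = np.linalg.norm(v - along[..., None] * d, axis=-1)
            inside = (radial < radius).all(axis=1)           # facet lies fully inside the cylinder
            return tris[~inside]

        # Example: two facets, one of which sits inside a cylinder of radius 0.5 around the z-axis.
        tris = np.array([[[0.1, 0.0, 0], [0.0, 0.1, 0], [0.1, 0.1, 1]],
                         [[2.0, 0.0, 0], [2.1, 0.0, 0], [2.0, 0.1, 1]]])
        print(len(drop_facets_in_cylinder(tris, (0, 0, 0), (0, 0, 1), 0.5)))   # -> 1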

  10. Evolutionary computation applied to the reconstruction of 3-D surface topography in the SEM.

    Science.gov (United States)

    Kodama, Tetsuji; Li, Xiaoyuan; Nakahira, Kenji; Ito, Dai

    2005-10-01

    A genetic algorithm has been applied to the line profile reconstruction from the signals of the standard secondary electron (SE) and/or backscattered electron detectors in a scanning electron microscope. This method solves the topographical surface reconstruction problem as one of combinatorial optimization. To extend this optimization approach for three-dimensional (3-D) surface topography, this paper considers the use of a string coding where a 3-D surface topography is represented by a set of coordinates of vertices. We introduce the Delaunay triangulation, which attains the minimum roughness for any set of height data to capture the fundamental features of the surface being probed by an electron beam. With this coding, the strings are processed with a class of hybrid optimization algorithms that combine genetic algorithms and simulated annealing algorithms. Experimental results on SE images are presented.

  11. Fermion cluster algorithms

    International Nuclear Information System (INIS)

    Chandrasekharan, Shailesh

    2000-01-01

    Cluster algorithms have been recently used to eliminate sign problems that plague Monte-Carlo methods in a variety of systems. In particular such algorithms can also be used to solve sign problems associated with the permutation of fermion world lines. This solution leads to the possibility of designing fermion cluster algorithms in certain cases. Using the example of free non-relativistic fermions we discuss the ideas underlying the algorithm

  12. Autonomous Star Tracker Algorithms

    DEFF Research Database (Denmark)

    Betto, Maurizio; Jørgensen, John Leif; Kilsgaard, Søren

    1998-01-01

    Proposal, in response to an ESA R.f.P., to design algorithms for autonomous star tracker operations. The proposal also included the development of a star tracker breadboard to test the algorithms' performances.

  13. A verified LLL algorithm

    NARCIS (Netherlands)

    Divasón, Jose; Joosten, Sebastiaan; Thiemann, René; Yamada, Akihisa

    2018-01-01

    The Lenstra-Lenstra-Lovász basis reduction algorithm, also known as LLL algorithm, is an algorithm to find a basis with short, nearly orthogonal vectors of an integer lattice. Thereby, it can also be seen as an approximation to solve the shortest vector problem (SVP), which is an NP-hard problem,

  14. Nature-inspired optimization algorithms

    CERN Document Server

    Yang, Xin-She

    2014-01-01

    Nature-Inspired Optimization Algorithms provides a systematic introduction to all major nature-inspired algorithms for optimization. The book's unified approach, balancing algorithm introduction, theoretical background and practical implementation, complements extensive literature with well-chosen case studies to illustrate how these algorithms work. Topics include particle swarm optimization, ant and bee algorithms, simulated annealing, cuckoo search, firefly algorithm, bat algorithm, flower algorithm, harmony search, algorithm analysis, constraint handling, hybrid methods, parameter tuning

  15. VISUALIZATION OF PAGERANK ALGORITHM

    OpenAIRE

    Perhaj, Ervin

    2013-01-01

    The goal of the thesis is to develop a web application that helps users understand the functioning of the PageRank algorithm. The thesis consists of two parts. First we develop an algorithm to calculate the PageRank values of web pages. The input of the algorithm is a list of web pages and the links between them. The user enters the list through the web interface. From these data the algorithm calculates a PageRank value for each page. The algorithm repeats the process, until the difference of PageRank va...
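
    For readers who want to see the computation the thesis visualizes, a standard power-iteration formulation of PageRank over a small link list is sketched below in Python; the damping factor, the dangling-page handling and the function name are common textbook choices and not necessarily those of the thesis.

        def pagerank(links, damping=0.85, iters=50):
            # links: dict mapping each page to the list of pages it links to.
            pages = list(links)
            n = len(pages)
            rank = {p: 1.0 / n for p in pages}
            for _ in range(iters):
                new = {p: (1.0 - damping) / n for p in pages}
                for p, outs in links.items():
                    if outs:
                        share = damping * rank[p] / len(outs)
                        for q in outs:
                            new[q] += share
                    else:                                   # dangling page: spread its rank evenly
                        for q in pages:
                            new[q] += damping * rank[p] / n
                rank = new
            return rank

        print(pagerank({"A": ["B", "C"], "B": ["C"], "C": ["A"]}))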

  16. Parallel sorting algorithms

    CERN Document Server

    Akl, Selim G

    1985-01-01

    Parallel Sorting Algorithms explains how to use parallel algorithms to sort a sequence of items on a variety of parallel computers. The book reviews the sorting problem, the parallel models of computation, parallel algorithms, and the lower bounds on the parallel sorting problems. The text also presents twenty different algorithms, such as linear arrays, mesh-connected computers, cube-connected computers. Another example where algorithm can be applied is on the shared-memory SIMD (single instruction stream multiple data stream) computers in which the whole sequence to be sorted can fit in the

  17. Effect of DEM resolution on rainfall-triggered landslide modeling within a triangulated network-based model. A case study in the Luquillo Forest, Puerto Rico

    Science.gov (United States)

    Arnone, E.; Dialynas, Y. G.; Noto, L. V.; Bras, R. L.

    2013-12-01

    Catchment slope distribution is one of the topographic characteristics that significantly control rainfall-triggered landslide modeling, in both direct and indirect ways. Slope directly determines the soil volume associated with instability. Indirectly slope also affects the subsurface lateral redistribution of soil moisture across the basin, which in turn determines the water pore pressure conditions that impact slope stability. In this study, we investigate the influence of DEM resolution on slope stability and the slope stability analysis by using a distributed eco-hydrological and landslide model, the tRIBS-VEGGIE (Triangulated Irregular Network (TIN)-based Real-time Integrated Basin Simulator - VEGetation Generator for Interactive Evolution). The model implements a triangulated irregular network to describe the topography, and it is capable of evaluating vegetation dynamics and predicting shallow landslides triggered by rainfall. The impact of DEM resolution on the landslide prediction was studied using five TINs derived from five grid DEMs at different resolutions, i.e. 10, 20, 30, 50 and 70 m respectively. The analysis was carried out on the Mameyes Basin, located in the Luquillo Experimental Forest in Puerto Rico, where previous landslide analyses have been carried out. Results showed that the use of the irregular mesh reduced the loss of accuracy in the derived slope distribution when coarser resolutions were used. The impact of the different resolutions on soil moisture patterns was important only when the lateral redistribution was considerable, depending on hydrological properties and rainfall forcing. In some cases, the use of different DEM resolutions did not significantly affect tRIBS-VEGGIE landslide output, in terms of landslide locations, and values of slope and soil moisture at failure.

  18. Modified Clipped LMS Algorithm

    Directory of Open Access Journals (Sweden)

    Lotfizad Mojtaba

    2005-01-01

    Full Text Available A new algorithm is proposed for updating the weights of an adaptive filter. The proposed algorithm is a modification of an existing method, namely, the clipped LMS, and uses a three-level quantization scheme that involves the threshold clipping of the input signals in the filter weight update formula. Mathematical analysis shows the convergence of the filter weights to the optimum Wiener filter weights. Also, it can be proved that the proposed modified clipped LMS (MCLMS algorithm has better tracking than the LMS algorithm. In addition, this algorithm has reduced computational complexity relative to the unmodified one. By using a suitable threshold, it is possible to increase the tracking capability of the MCLMS algorithm compared to the LMS algorithm, but this causes slower convergence. Computer simulations confirm the mathematical analysis presented.
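
    To make the three-level idea concrete, the Python sketch below implements an LMS-style adaptive filter whose weight update uses a threshold-clipped (-1/0/+1) version of the input regressor. It follows the general description above rather than the paper's exact MCLMS formulation, and the step size, threshold and channel are arbitrary illustration values.

        import numpy as np

        def quantize3(u, threshold):
            # Three-level clipping of the input: -1, 0 or +1.
            return np.where(np.abs(u) <= threshold, 0.0, np.sign(u))

        def clipped_lms(x, d, n_taps=8, mu=0.01, threshold=0.1):
            w = np.zeros(n_taps)
            y = np.zeros(len(x))
            for n in range(n_taps - 1, len(x)):
                u = x[n - n_taps + 1:n + 1][::-1]           # x[n], x[n-1], ..., most recent first
                y[n] = w @ u
                e = d[n] - y[n]                             # estimation error
                w = w + mu * e * quantize3(u, threshold)    # clipped weight update
            return w, y

        # Identify a short FIR channel from noisy data; the weights approach the channel taps.
        rng = np.random.default_rng(0)
        x = rng.standard_normal(5000)
        h = np.array([0.6, -0.3, 0.1, 0.05, 0.0, 0.0, 0.0, 0.0])
        d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
        w, _ = clipped_lms(x, d)
        print(np.round(w, 2))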

  19. Semioptimal practicable algorithmic cooling

    International Nuclear Information System (INIS)

    Elias, Yuval; Mor, Tal; Weinstein, Yossi

    2011-01-01

    Algorithmic cooling (AC) of spins applies entropy manipulation algorithms in open spin systems in order to cool spins far beyond Shannon's entropy bound. Algorithmic cooling of nuclear spins was demonstrated experimentally and may contribute to nuclear magnetic resonance spectroscopy. Several cooling algorithms were suggested in recent years, including practicable algorithmic cooling (PAC) and exhaustive AC. Practicable algorithms have simple implementations, yet their level of cooling is far from optimal; exhaustive algorithms, on the other hand, cool much better, and some even reach (asymptotically) an optimal level of cooling, but they are not practicable. We introduce here semioptimal practicable AC (SOPAC), wherein a few cycles (typically two to six) are performed at each recursive level. Two classes of SOPAC algorithms are proposed and analyzed. Both attain cooling levels significantly better than PAC and are much more efficient than the exhaustive algorithms. These algorithms are shown to bridge the gap between PAC and exhaustive AC. In addition, we calculated the number of spins required by SOPAC in order to purify qubits for quantum computation. As few as 12 and 7 spins are required (in an ideal scenario) to yield a mildly pure spin (60% polarized) from initial polarizations of 1% and 10%, respectively. In the latter case, about five more spins are sufficient to produce a highly pure spin (99.99% polarized), which could be relevant for fault-tolerant quantum computing.

  20. Depth Measurement Based on Infrared Coded Structured Light

    Directory of Open Access Journals (Sweden)

    Tong Jia

    2014-01-01

    Full Text Available Depth measurement is a challenging problem in computer vision research. In this study, we first design a new grid pattern and develop a sequence coding and decoding algorithm to process the pattern. Second, we propose a linear fitting algorithm to derive the linear relationship between the object depth and the pixel shift. Third, we obtain depth information on an object based on this linear relationship. Moreover, 3D reconstruction is implemented based on the Delaunay triangulation algorithm. Finally, we utilize the regularity of the error curves to correct the system errors and improve the measurement accuracy. The experimental results show that the accuracy of depth measurement is related to the step length of the moving object.
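
    A minimal version of the linear-fitting step can be written directly as a least-squares line, as sketched below in Python. The calibration numbers are invented for illustration; a real system would obtain them from targets at known depths.

        import numpy as np

        # Calibration pairs: measured pixel shift of the coded pattern vs. known depth (mm).
        shift_px = np.array([60.0, 80.0, 100.0, 120.0, 140.0])
        depth_mm = np.array([900.0, 800.0, 700.0, 600.0, 500.0])

        a, b = np.polyfit(shift_px, depth_mm, 1)     # least-squares line: depth = a * shift + b

        def depth_from_shift(px):
            return a * px + b

        print(depth_from_shift(110.0))               # depth estimate for a new measured shift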

  1. Introduction to Evolutionary Algorithms

    CERN Document Server

    Yu, Xinjie

    2010-01-01

    Evolutionary algorithms (EAs) are becoming increasingly attractive for researchers from various disciplines, such as operations research, computer science, industrial engineering, electrical engineering, social science, economics, etc. This book presents an insightful, comprehensive, and up-to-date treatment of EAs, such as genetic algorithms, differential evolution, evolution strategy, constraint optimization, multimodal optimization, multiobjective optimization, combinatorial optimization, evolvable hardware, estimation of distribution algorithms, ant colony optimization, particle swarm opti

  2. Recursive forgetting algorithms

    DEFF Research Database (Denmark)

    Parkum, Jens; Poulsen, Niels Kjølstad; Holst, Jan

    1992-01-01

    In the first part of the paper, a general forgetting algorithm is formulated and analysed. It contains most existing forgetting schemes as special cases. Conditions are given ensuring that the basic convergence properties will hold. In the second part of the paper, the results are applied...... to a specific algorithm with selective forgetting. Here, the forgetting is non-uniform in time and space. The theoretical analysis is supported by a simulation example demonstrating the practical performance of this algorithm...

  3. Explaining algorithms using metaphors

    CERN Document Server

    Forišek, Michal

    2013-01-01

    There is a significant difference between designing a new algorithm, proving its correctness, and teaching it to an audience. When teaching algorithms, the teacher's main goal should be to convey the underlying ideas and to help the students form correct mental models related to the algorithm. This process can often be facilitated by using suitable metaphors. This work provides a set of novel metaphors identified and developed as suitable tools for teaching many of the 'classic textbook' algorithms taught in undergraduate courses worldwide. Each chapter provides exercises and didactic notes fo

  4. Algorithms in Algebraic Geometry

    CERN Document Server

    Dickenstein, Alicia; Sommese, Andrew J

    2008-01-01

    In the last decade, there has been a burgeoning of activity in the design and implementation of algorithms for algebraic geometric computation. Some of these algorithms were originally designed for abstract algebraic geometry, but now are of interest for use in applications and some of these algorithms were originally designed for applications, but now are of interest for use in abstract algebraic geometry. The workshop on Algorithms in Algebraic Geometry that was held in the framework of the IMA Annual Program Year in Applications of Algebraic Geometry by the Institute for Mathematics and Its

  5. Shadow algorithms data miner

    CERN Document Server

    Woo, Andrew

    2012-01-01

    Digital shadow generation continues to be an important aspect of visualization and visual effects in film, games, simulations, and scientific applications. This resource offers a thorough picture of the motivations, complexities, and categorized algorithms available to generate digital shadows. From general fundamentals to specific applications, it addresses shadow algorithms and how to manage huge data sets from a shadow perspective. The book also examines the use of shadow algorithms in industrial applications, in terms of what algorithms are used and what software is applicable.

  6. Spectral Decomposition Algorithm (SDA)

    Data.gov (United States)

    National Aeronautics and Space Administration — Spectral Decomposition Algorithm (SDA) is an unsupervised feature extraction technique similar to PCA that was developed to better distinguish spectral features in...

  7. Quick fuzzy backpropagation algorithm.

    Science.gov (United States)

    Nikov, A; Stoeva, S

    2001-03-01

    A modification of the fuzzy backpropagation (FBP) algorithm, called the QuickFBP algorithm, is proposed, in which the computation of the net function is significantly quicker. It is proved that the FBP algorithm is of exponential time complexity, while the QuickFBP algorithm is of polynomial time complexity. Convergence conditions of the QuickFBP, resp. the FBP algorithm are defined and proved for: (1) single output neural networks in case of training patterns with different targets; and (2) multiple output neural networks in case of training patterns with equivalued target vector. They support the automation of the weights training process (quasi-unsupervised learning), establishing the target value(s) depending on the network's input values. In these cases the simulation results confirm the convergence of both algorithms. An example with a large-sized neural network illustrates the significantly greater training speed of the QuickFBP compared with the FBP algorithm. The adaptation of an interactive web system to users on the basis of the QuickFBP algorithm is presented. Since the QuickFBP algorithm ensures quasi-unsupervised learning, this implies its broad applicability in areas such as adaptive and adaptable interactive systems, data mining, and other applications.

  8. Portfolios of quantum algorithms.

    Science.gov (United States)

    Maurer, S M; Hogg, T; Huberman, B A

    2001-12-17

    Quantum computation holds promise for the solution of many intractable problems. However, since many quantum algorithms are stochastic in nature they can find the solution of hard problems only probabilistically. Thus the efficiency of the algorithms has to be characterized by both the expected time to completion and the associated variance. In order to minimize both the running time and its uncertainty, we show that portfolios of quantum algorithms analogous to those of finance can outperform single algorithms when applied to the NP-complete problems such as 3-satisfiability.

  9. Algorithm 426 : Merge sort algorithm [M1

    NARCIS (Netherlands)

    Bron, C.

    1972-01-01

    Sorting by means of a two-way merge has a reputation of requiring a clerically complicated and cumbersome program. This ALGOL 60 procedure demonstrates that, using recursion, an elegant and efficient algorithm can be designed, the correctness of which is easily proved [2]. Sorting n objects gives

  10. Composite Differential Search Algorithm

    Directory of Open Access Journals (Sweden)

    Bo Liu

    2014-01-01

    Full Text Available Differential search algorithm (DS) is a relatively new evolutionary algorithm inspired by the Brownian-like random-walk movement which is used by an organism to migrate. It has been verified to be more effective than ABC, JDE, JADE, SADE, EPSDE, GSA, PSO2011, and CMA-ES. In this paper, we propose four improved solution search algorithms, namely “DS/rand/1,” “DS/rand/2,” “DS/current to rand/1,” and “DS/current to rand/2” to search the new space and enhance the convergence rate for the global optimization problem. In order to verify the performance of different solution search methods, 23 benchmark functions are employed. Experimental results indicate that the proposed algorithm performs better than, or at least comparably to, the original algorithm when considering the quality of the solution obtained. However, these schemes still cannot achieve the best solution for all functions. In order to further enhance the convergence rate and the diversity of the algorithm, a composite differential search algorithm (CDS) is proposed in this paper. This new algorithm combines three of the proposed search schemes, “DS/rand/1,” “DS/rand/2,” and “DS/current to rand/1,” with three control parameters using a random method to generate the offspring. Experimental results show that CDS has a faster convergence rate and better search ability based on the 23 benchmark functions.

  11. Algorithms and Their Explanations

    NARCIS (Netherlands)

    Benini, M.; Gobbo, F.; Beckmann, A.; Csuhaj-Varjú, E.; Meer, K.

    2014-01-01

    By analysing the explanation of the classical heapsort algorithm via the method of levels of abstraction mainly due to Floridi, we give a concrete and precise example of how to deal with algorithmic knowledge. To do so, we introduce a concept already implicit in the method, the ‘gradient of

  12. Finite lattice extrapolation algorithms

    International Nuclear Information System (INIS)

    Henkel, M.; Schuetz, G.

    1987-08-01

    Two algorithms for sequence extrapolation, due to von den Broeck and Schwartz and Bulirsch and Stoer are reviewed and critically compared. Applications to three states and six states quantum chains and to the (2+1)D Ising model show that the algorithm of Bulirsch and Stoer is superior, in particular if only very few finite lattice data are available. (orig.)

  13. Recursive automatic classification algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Bauman, E V; Dorofeyuk, A A

    1982-03-01

    A variational statement of the automatic classification problem is given. The dependence of the form of the optimal partition surface on the form of the classification objective functional is investigated. A recursive algorithm is proposed for maximising a functional of reasonably general form. The convergence problem is analysed in connection with the proposed algorithm. 8 references.

  14. Graph Colouring Algorithms

    DEFF Research Database (Denmark)

    Husfeldt, Thore

    2015-01-01

    This chapter presents an introduction to graph colouring algorithms. The focus is on vertex-colouring algorithms that work for general classes of graphs with worst-case performance guarantees in a sequential model of computation. The presentation aims to demonstrate the breadth of available...

  15. 8. Algorithm Design Techniques

    Indian Academy of Sciences (India)

    Algorithms - Algorithm Design Techniques. R K Shyamasundar. Series Article, Resonance – Journal of Science Education, Volume 2, Issue 8. Author affiliation: Computer Science Group, Tata Institute of Fundamental Research, Homi Bhabha Road, Mumbai 400 005, India.

  16. Geometric approximation algorithms

    CERN Document Server

    Har-Peled, Sariel

    2011-01-01

    Exact algorithms for dealing with geometric objects are complicated, hard to implement in practice, and slow. Over the last 20 years a theory of geometric approximation algorithms has emerged. These algorithms tend to be simple, fast, and more robust than their exact counterparts. This book is the first to cover geometric approximation algorithms in detail. In addition, more traditional computational geometry techniques that are widely used in developing such algorithms, like sampling, linear programming, etc., are also surveyed. Other topics covered include approximate nearest-neighbor search, shape approximation, coresets, dimension reduction, and embeddings. The topics covered are relatively independent and are supplemented by exercises. Close to 200 color figures are included in the text to illustrate proofs and ideas.

  17. Group leaders optimization algorithm

    Science.gov (United States)

    Daskin, Anmer; Kais, Sabre

    2011-03-01

    We present a new global optimization algorithm in which the influence of the leaders in social groups is used as an inspiration for the evolutionary technique, which is designed into a group architecture. To demonstrate the efficiency of the method, a standard suite of single- and multi-dimensional optimization functions, along with the energies and the geometric structures of Lennard-Jones clusters, is given, as well as the application of the algorithm to quantum circuit design problems. We show that, as an improvement over previous methods, the algorithm scales as N^2.5 for Lennard-Jones clusters of N particles. In addition, an efficient circuit design is shown for a two-qubit Grover search algorithm, a quantum algorithm providing quadratic speedup over its classical counterpart.

  18. Fast geometric algorithms

    International Nuclear Information System (INIS)

    Noga, M.T.

    1984-01-01

    This thesis addresses a number of important problems that fall within the framework of the new discipline of Computational Geometry. The list of topics covered includes sorting and selection, convex hull algorithms, the L1 hull, determination of the minimum encasing rectangle of a set of points, the Euclidean and L1 diameter of a set of points, the metric traveling salesman problem, and finding the superrange of star-shaped and monotone polygons. The main theme of all the work was to develop a set of very fast state-of-the-art algorithms that supersede any rivals in terms of speed and ease of implementation. In some cases existing algorithms were refined; for others new techniques were developed that add to the present database of fast adaptive geometric algorithms. What emerges is a collection of techniques that is successful at merging modern tools developed in analysis of algorithms with those of classical geometry
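
    As a flavour of the kind of fast, easy-to-implement routine the thesis aims at, here is Andrew's monotone chain convex hull in Python, an O(n log n) method after sorting. It is a standard algorithm given for illustration and not necessarily the variant developed in the thesis.

        def convex_hull(points):
            # Sort lexicographically and remove duplicates.
            pts = sorted(set(map(tuple, points)))
            if len(pts) <= 2:
                return pts

            def cross(o, a, b):
                # z-component of (a - o) x (b - o); > 0 means a left turn.
                return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

            lower, upper = [], []
            for p in pts:
                while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
                    lower.pop()
                lower.append(p)
            for p in reversed(pts):
                while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
                    upper.pop()
                upper.append(p)
            return lower[:-1] + upper[:-1]       # counter-clockwise hull, endpoints not repeated

        print(convex_hull([(0, 0), (1, 0), (1, 1), (0, 1), (0.5, 0.5)]))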

  19. Totally parallel multilevel algorithms

    Science.gov (United States)

    Frederickson, Paul O.

    1988-01-01

    Four totally parallel algorithms for the solution of a sparse linear system have common characteristics which become quite apparent when they are implemented on a highly parallel hypercube such as the CM2. These four algorithms are Parallel Superconvergent Multigrid (PSMG) of Frederickson and McBryan, Robust Multigrid (RMG) of Hackbusch, the FFT-based Spectral Algorithm, and Parallel Cyclic Reduction. In fact, all four can be formulated as particular cases of the same totally parallel multilevel algorithm, which is referred to as TPMA. In certain cases the spectral radius of TPMA is zero, and it is recognized to be a direct algorithm. In many other cases the spectral radius, although not zero, is small enough that a single iteration per timestep keeps the local error within the required tolerance.

  20. Governance by algorithms

    Directory of Open Access Journals (Sweden)

    Francesca Musiani

    2013-08-01

    Full Text Available Algorithms are increasingly often cited as one of the fundamental shaping devices of our daily, immersed-in-information existence. Their importance is acknowledged, their performance scrutinised in numerous contexts. Yet, a lot of what constitutes 'algorithms' beyond their broad definition as “encoded procedures for transforming input data into a desired output, based on specified calculations” (Gillespie, 2013) is often taken for granted. This article seeks to contribute to the discussion about 'what algorithms do' and in which ways they are artefacts of governance, providing two examples drawing from the internet and ICT realm: search engine queries and e-commerce websites’ recommendations to customers. The question of the relationship between algorithms and rules is likely to occupy an increasingly central role in the study and the practice of internet governance, in terms of both institutions’ regulation of algorithms, and algorithms’ regulation of our society.

  1. Where genetic algorithms excel.

    Science.gov (United States)

    Baum, E B; Boneh, D; Garrett, C

    2001-01-01

    We analyze the performance of a genetic algorithm (GA) we call Culling, and a variety of other algorithms, on a problem we refer to as the Additive Search Problem (ASP). We show that the problem of learning the Ising perceptron is reducible to a noisy version of ASP. Noisy ASP is the first problem we are aware of where a genetic-type algorithm bests all known competitors. We generalize ASP to k-ASP to study whether GAs will achieve "implicit parallelism" in a problem with many more schemata. GAs fail to achieve this implicit parallelism, but we describe an algorithm we call Explicitly Parallel Search that succeeds. We also compute the optimal culling point for selective breeding, which turns out to be independent of the fitness function or the population distribution. We also analyze a mean field theoretic algorithm performing similarly to Culling on many problems. These results provide insight into when and how GAs can beat competing methods.

  2. Network-Oblivious Algorithms

    DEFF Research Database (Denmark)

    Bilardi, Gianfranco; Pietracaprina, Andrea; Pucci, Geppino

    2016-01-01

    A framework is proposed for the design and analysis of network-oblivious algorithms, namely algorithms that can run unchanged, yet efficiently, on a variety of machines characterized by different degrees of parallelism and communication capabilities. The framework prescribes that a network......-oblivious algorithm be specified on a parallel model of computation where the only parameter is the problem’s input size, and then evaluated on a model with two parameters, capturing parallelism granularity and communication latency. It is shown that for a wide class of network-oblivious algorithms, optimality...... of cache hierarchies, to the realm of parallel computation. Its effectiveness is illustrated by providing optimal network-oblivious algorithms for a number of key problems. Some limitations of the oblivious approach are also discussed....

  3. Phase Center Interpolation Algorithm for Airborne GPS through the Kalman Filter

    Directory of Open Access Journals (Sweden)

    Edson A. Mitishita

    2005-12-01

    Full Text Available The aerial triangulation is a fundamental step in any photogrammetric project. The surveying of traditional control points, depending on the region to be mapped, still has a high cost. The distribution of control points in the block, and their positional quality, directly influence the precision of the aerial triangulation processing. The airborne GPS technique has cost reduction and quality improvement of the ground control as its key objectives in modern photogrammetric projects. Nowadays, in Brazil, the largest photogrammetric companies are acquiring airborne GPS systems, but those systems usually present operational difficulties due to the need for skilled human resources, because of the high technology involved. Within the airborne GPS technique, one of the fundamental steps is the interpolation of the position of the phase center of the GPS antenna at the instant of each photo shot. Traditionally, low-degree polynomials are used, but recent studies show that the accuracy of those polynomials is reduced in turbulent flights, which are quite common, mainly in large-scale flights. The objective of this paper is to present a solution to that problem, through an algorithm based on the Kalman filter, which takes into account the dynamic aspect of the problem. At the end of the paper, the results of a comparison between experiments done with the proposed methodology and a common linear interpolator are shown. These results show a significant accuracy gain over linear interpolation when the Kalman filter is used.
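
    To make the interpolation idea concrete, the following sketch runs a constant-velocity Kalman filter over one coordinate of a sequence of GPS fixes and then predicts the antenna position at an exposure instant. The state model, noise levels and sample data are illustrative assumptions, not the configuration used in the paper.

        import numpy as np

        def kalman_interpolate(times, positions, t_query, q=0.5, r=0.05):
            # Constant-velocity Kalman filter over GPS fixes (one coordinate axis).
            # q: assumed process-noise spectral density, r: assumed measurement noise std.
            x = np.array([positions[0], 0.0])            # state: [position, velocity]
            P = np.eye(2) * 10.0                         # initial state covariance
            H = np.array([[1.0, 0.0]])                   # we observe position only
            t_prev = times[0]
            for t, z in zip(times, positions):
                dt = t - t_prev
                F = np.array([[1.0, dt], [0.0, 1.0]])    # constant-velocity transition
                Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
                x = F @ x                                # predict
                P = F @ P @ F.T + Q
                S = H @ P @ H.T + r**2                   # update with the GPS fix
                K = P @ H.T / S
                x = x + (K * (z - H @ x)).ravel()
                P = (np.eye(2) - K @ H) @ P
                t_prev = t
            dt = t_query - t_prev                        # predict forward to the exposure time
            return (np.array([[1.0, dt], [0.0, 1.0]]) @ x)[0]

        # example: 1 Hz fixes, exposure at t = 3.4 s
        t = np.arange(0.0, 6.0)
        pos = 100.0 + 4.8 * t + 0.05 * np.random.randn(t.size)
        print(kalman_interpolate(t, pos, 3.4))           # close to 100 + 4.8 * 3.4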

  4. Algorithms in Singular

    Directory of Open Access Journals (Sweden)

    Hans Schonemann

    1996-12-01

    Full Text Available Some algorithms for singularity theory and algebraic geometry. The use of Grobner basis computations for treating systems of polynomial equations has become an important tool in many areas. This paper introduces the concept of standard bases (a generalization of Grobner bases) and their application to some problems from algebraic geometry. The examples are presented as SINGULAR commands. A general introduction to Grobner bases can be found in the textbook [CLO], an introduction to syzygies in [E] and [St1]. SINGULAR is a computer algebra system for computing information about singularities, for use in algebraic geometry. The basic algorithms in SINGULAR are several variants of a general standard basis algorithm for general monomial orderings (see [GG]). This includes well-orderings (Buchberger algorithm [B1], [B2]) and tangent cone orderings (Mora algorithm [M1], [MPT]) as special cases: it is able to work with non-homogeneous and homogeneous input and also to compute in the localization of the polynomial ring at 0. Recent versions include algorithms to factorize polynomials and a factorizing Grobner basis algorithm. For a complete description of SINGULAR see [Si].
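
    As a rough, language-neutral illustration of what a standard (Grobner) basis computation provides, the sketch below uses Python's SymPy rather than SINGULAR, so the syntax is only analogous to the SINGULAR commands mentioned above and the polynomials are arbitrary examples.

        from sympy import symbols, groebner

        x, y, z = symbols('x y z')
        # Ideal generated by two polynomials; a Groebner basis makes ideal-membership
        # and elimination questions algorithmic.
        G = groebner([x**2 + y**2 + z**2 - 1, x*y - z], x, y, z, order='lex')
        print(G)                      # the computed basis
        print(G.reduce(x**3)[1])      # normal form of x**3 modulo the ideal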

  5. Applications of the Integrated High-Performance CMOS Image Sensor to Range Finders — from Optical Triangulation to the Automotive Field

    Directory of Open Access Journals (Sweden)

    Joe-Air Jiang

    2008-03-01

    Full Text Available With their significant features, the applications of complementary metal-oxide-semiconductor (CMOS) image sensors cover a very extensive range, from industrial automation to traffic applications such as aiming systems, blind guidance, active/passive range finders, etc. In this paper CMOS image sensor-based active and passive range finders are presented. The measurement scheme of the proposed active/passive range finders is based on a simple triangulation method. The designed range finders chiefly consist of a CMOS image sensor and some light sources such as lasers or LEDs. The implementation cost of our range finders is quite low. Image processing software to adjust the exposure time (ET) of the CMOS image sensor to enhance the performance of triangulation-based range finders was also developed. An extensive series of experiments were conducted to evaluate the performance of the designed range finders. From the experimental results, the distance measurement resolutions achieved by the active range finder and the passive range finder can be better than 0.6% and 0.25% within the measurement ranges of 1 to 8 m and 5 to 45 m, respectively. Feasibility tests on applications of the developed CMOS image sensor-based range finders to the automotive field were also conducted. The experimental results demonstrated that our range finders are well-suited for distance measurements in this field.
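
    The "simple triangulation method" mentioned above can be illustrated, under the simplifying assumption of a laser beam parallel to the camera's optical axis, by the sketch below; the focal length, baseline and image offsets are made-up numbers, not parameters of the system described in the paper.

        def triangulation_depth(offset_mm, focal_mm, baseline_mm):
            # Laser beam parallel to the optical axis, displaced sideways by the baseline:
            # a spot at depth z images at offset x = f * b / z, so z = f * b / x.
            return focal_mm * baseline_mm / offset_mm

        f, b = 8.0, 100.0                      # focal length and baseline in mm (assumed)
        for x in (0.8, 0.4, 0.2):              # measured image offsets in mm
            z = triangulation_depth(x, f, b)
            dz = z**2 / (f * b) * 0.005        # depth error for a 5 um offset error grows with z**2
            print(f"offset {x} mm -> depth {z:.0f} mm (+/- {dz:.1f} mm)")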

  6. A New Modified Firefly Algorithm

    Directory of Open Access Journals (Sweden)

    Medha Gupta

    2016-07-01

    Full Text Available Nature-inspired meta-heuristic algorithms study the emergent collective intelligence of groups of simple agents. The Firefly Algorithm is one such new swarm-based meta-heuristic algorithm, inspired by the flashing behavior of fireflies. The algorithm was first proposed in 2008 and since then has been successfully used for solving various optimization problems. In this work, we propose a new modified version of the Firefly algorithm (MoFA), and its performance is later compared with the standard firefly algorithm along with various other meta-heuristic algorithms. Numerical studies and results demonstrate that the proposed algorithm is superior to existing algorithms.

  7. Magnet sorting algorithms

    International Nuclear Information System (INIS)

    Dinev, D.

    1996-01-01

    Several new algorithms for sorting of dipole and/or quadrupole magnets in synchrotrons and storage rings are described. The algorithms make use of a combinatorial approach to the problem and belong to the class of random search algorithms. They use an appropriate metrization of the state space. The phase-space distortion (smear) is used as a goal function. Computational experiments for the case of the JINR-Dubna superconducting heavy ion synchrotron NUCLOTRON have shown a significant reduction of the phase-space distortion after the magnet sorting. (orig.)

  8. Algorithms for parallel computers

    International Nuclear Information System (INIS)

    Churchhouse, R.F.

    1985-01-01

    Until relatively recently almost all the algorithms for use on computers had been designed on the (usually unstated) assumption that they were to be run on single-processor, serial machines. With the introduction of vector processors, array processors and interconnected systems of mainframes, minis and micros, however, various forms of parallelism have become available. The advantage of parallelism is that it offers increased overall processing speed, but it also raises some fundamental questions, including: (i) Which, if any, of the existing 'serial' algorithms can be adapted for use in the parallel mode? (ii) How close to optimal can such adapted algorithms be and, where relevant, what are the convergence criteria? (iii) How can we design new algorithms specifically for parallel systems? (iv) For multi-processor systems, how can we handle the software aspects of the interprocessor communications? Aspects of these questions, illustrated by examples, are considered in these lectures. (orig.)

  9. Fluid structure coupling algorithm

    International Nuclear Information System (INIS)

    McMaster, W.H.; Gong, E.Y.; Landram, C.S.; Quinones, D.F.

    1980-01-01

    A fluid-structure-interaction algorithm has been developed and incorporated into the two-dimensional code PELE-IC. This code combines an Eulerian incompressible fluid algorithm with a Lagrangian finite element shell algorithm and incorporates the treatment of complex free surfaces. The fluid structure and coupling algorithms have been verified by the calculation of solved problems from the literature and from air and steam blowdown experiments. The code has been used to calculate loads and structural response from air blowdown and the oscillatory condensation of steam bubbles in water suppression pools typical of boiling water reactors. The techniques developed have been extended to three dimensions and implemented in the computer code PELE-3D

  10. Algorithmic phase diagrams

    Science.gov (United States)

    Hockney, Roger

    1987-01-01

    Algorithmic phase diagrams are a neat and compact representation of the results of comparing the execution time of several algorithms for the solution of the same problem. As an example, recent results of Gannon and Van Rosendale on the solution of multiple tridiagonal systems of equations are shown in the form of such diagrams. The act of preparing these diagrams has revealed an unexpectedly complex relationship between the best algorithm and the number and size of the tridiagonal systems, which was not evident from the algebraic formulae in the original paper. Even so, for a particular computer, one diagram suffices to predict the best algorithm for all problems that are likely to be encountered, the prediction being read directly from the diagram without complex calculation.

  11. Diagnostic Algorithm Benchmarking

    Science.gov (United States)

    Poll, Scott

    2011-01-01

    A poster for the NASA Aviation Safety Program Annual Technical Meeting. It describes empirical benchmarking on diagnostic algorithms using data from the ADAPT Electrical Power System testbed and a diagnostic software framework.

  12. Inclusive Flavour Tagging Algorithm

    International Nuclear Information System (INIS)

    Likhomanenko, Tatiana; Derkach, Denis; Rogozhnikov, Alex

    2016-01-01

    Identifying the production flavour of neutral B mesons is one of the most important components needed in the study of time-dependent CP violation. The harsh environment of the Large Hadron Collider makes it particularly hard to succeed in this task. We present an inclusive flavour-tagging algorithm as an upgrade of the algorithms currently used by the LHCb experiment. Specifically, a probabilistic model which efficiently combines information from reconstructed vertices and tracks using machine learning is proposed. The algorithm does not use information about the underlying physics process. It reduces the dependence on the performance of lower-level identification capacities and thus increases the overall performance. The proposed inclusive flavour-tagging algorithm is applicable to tag the flavour of B mesons in any proton-proton experiment. (paper)

  13. Unsupervised learning algorithms

    CERN Document Server

    Aydin, Kemal

    2016-01-01

    This book summarizes the state-of-the-art in unsupervised learning. The contributors discuss how with the proliferation of massive amounts of unlabeled data, unsupervised learning algorithms, which can automatically discover interesting and useful patterns in such data, have gained popularity among researchers and practitioners. The authors outline how these algorithms have found numerous applications including pattern recognition, market basket analysis, web mining, social network analysis, information retrieval, recommender systems, market research, intrusion detection, and fraud detection. They present how the difficulty of developing theoretically sound approaches that are amenable to objective evaluation has resulted in the proposal of numerous unsupervised learning algorithms over the past half-century. The intended audience includes researchers and practitioners who are increasingly using unsupervised learning algorithms to analyze their data. Topics of interest include anomaly detection, clustering,...

  14. Vector Network Coding Algorithms

    OpenAIRE

    Ebrahimi, Javad; Fragouli, Christina

    2010-01-01

    We develop new algebraic algorithms for scalar and vector network coding. In vector network coding, the source multicasts information by transmitting vectors of length L, while intermediate nodes process and combine their incoming packets by multiplying them with L x L coding matrices that play a similar role as coding coefficients in scalar coding. Our algorithms for scalar network coding jointly optimize the employed field size while selecting the coding coefficients. Similarly, for vector coding, our algori...

  15. Optimization algorithms and applications

    CERN Document Server

    Arora, Rajesh Kumar

    2015-01-01

    Choose the Correct Solution Method for Your Optimization Problem. Optimization: Algorithms and Applications presents a variety of solution techniques for optimization problems, emphasizing concepts rather than rigorous mathematical details and proofs. The book covers both gradient and stochastic methods as solution techniques for unconstrained and constrained optimization problems. It discusses the conjugate gradient method, Broyden-Fletcher-Goldfarb-Shanno algorithm, Powell method, penalty function, augmented Lagrange multiplier method, sequential quadratic programming, method of feasible direc

  16. From Genetics to Genetic Algorithms

    Indian Academy of Sciences (India)

    Genetic algorithms (GAs) are computational optimisation schemes with an ... The algorithms solve optimisation problems ..... Genetic Algorithms in Search, Optimisation and Machine. Learning, Addison-Wesley Publishing Company, Inc. 1989.

  17. Algorithmic Principles of Mathematical Programming

    NARCIS (Netherlands)

    Faigle, Ulrich; Kern, Walter; Still, Georg

    2002-01-01

    Algorithmic Principles of Mathematical Programming investigates the mathematical structures and principles underlying the design of efficient algorithms for optimization problems. Recent advances in algorithmic theory have shown that the traditionally separate areas of discrete optimization, linear

  18. RFID Location Algorithm

    Directory of Open Access Journals (Sweden)

    Wang Zi Min

    2016-01-01

    Full Text Available With the development of social services and rising living standards, there is an urgent need for positioning technology that can adapt to complex situations. In recent years, RFID technology has found a wide range of applications in all aspects of life and production, such as logistics tracking, car alarms and security. Using RFID technology for localization is a new direction pursued by various research institutions and scholars. RFID positioning offers system stability, small errors and low cost; its location algorithms are the focus of this study. This article analyzes RFID positioning methods and algorithms at several levels. First, several common basic RFID methods are introduced; secondly, location methods with higher accuracy are discussed; finally, the LANDMARC algorithm is described. From this it can be seen that advanced and efficient algorithms play an important role in increasing RFID positioning accuracy. Finally, RFID location algorithms are summarized, their deficiencies are pointed out, and requirements for follow-up study are put forward, with a vision of better RFID positioning technology in the future.
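
    Because the LANDMARC algorithm is named above, the sketch below shows its core step under simplifying assumptions: the tracked tag's RSSI vector is compared with reference tags at known positions and the k nearest references are combined with distance-based weights. The reader layout and RSSI values are invented for illustration.

        import numpy as np

        def landmarc_locate(tag_rss, ref_rss, ref_pos, k=3):
            # LANDMARC-style location: Euclidean distance in RSSI space to each
            # reference tag, then a weighted average of the k nearest references.
            e = np.linalg.norm(ref_rss - tag_rss, axis=1)
            nearest = np.argsort(e)[:k]
            w = 1.0 / (e[nearest] ** 2 + 1e-9)           # closer references weigh more
            w /= w.sum()
            return w @ ref_pos[nearest]

        # toy layout: 4 readers, 5 reference tags on a 2 m grid (made-up RSSI values)
        ref_pos = np.array([[0, 0], [2, 0], [0, 2], [2, 2], [1, 1]], dtype=float)
        ref_rss = np.array([[-40, -55, -52, -60], [-55, -41, -61, -52],
                            [-52, -60, -40, -54], [-61, -52, -55, -42],
                            [-48, -49, -50, -49]], dtype=float)
        tag_rss = np.array([-47, -50, -49, -50], dtype=float)
        print(landmarc_locate(tag_rss, ref_rss, ref_pos))   # estimate near the grid centre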

  19. Modified Firefly Algorithm

    Directory of Open Access Journals (Sweden)

    Surafel Luleseged Tilahun

    2012-01-01

    Full Text Available The firefly algorithm is one of the new metaheuristic algorithms for optimization problems. The algorithm is inspired by the flashing behavior of fireflies. In the algorithm, randomly generated solutions are considered as fireflies, and brightness is assigned depending on their performance on the objective function. One of the rules used to construct the algorithm is that a firefly will be attracted to a brighter firefly, and if there is no brighter firefly, it will move randomly. In this paper we modify this random movement of the brightest firefly by generating random directions in order to determine the direction in which the brightness increases. If such a direction is not generated, it will remain in its current position. Furthermore, the assignment of attractiveness is modified in such a way that the effect of the objective function is magnified. The simulation results show that the modified firefly algorithm performs better than the standard one in finding the best solution with smaller CPU time.
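
    For context, the sketch below implements the standard firefly update rule that the paper modifies (attraction towards every brighter firefly plus a small random step); the modifications described above are not reproduced, and all parameter values are arbitrary choices.

        import numpy as np

        def firefly_minimize(f, dim=2, n=20, iters=100, alpha=0.2, beta0=1.0, gamma=1.0, seed=0):
            # Standard firefly update (minimization): each firefly moves towards every
            # brighter one, with attractiveness decaying with squared distance.
            rng = np.random.default_rng(seed)
            x = rng.uniform(-5, 5, size=(n, dim))
            for _ in range(iters):
                fit = np.array([f(p) for p in x])
                for i in range(n):
                    for j in range(n):
                        if fit[j] < fit[i]:                      # firefly j is brighter
                            r2 = np.sum((x[i] - x[j]) ** 2)
                            beta = beta0 * np.exp(-gamma * r2)   # attractiveness
                            x[i] += beta * (x[j] - x[i]) + alpha * (rng.random(dim) - 0.5)
                            fit[i] = f(x[i])
                alpha *= 0.98                                    # slowly shrink the random step
            best = min(range(n), key=lambda i: f(x[i]))
            return x[best], f(x[best])

        print(firefly_minimize(lambda p: np.sum(p ** 2)))        # sphere function, optimum at 0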

  20. Improved multivariate polynomial factoring algorithm

    International Nuclear Information System (INIS)

    Wang, P.S.

    1978-01-01

    A new algorithm for factoring multivariate polynomials over the integers based on an algorithm by Wang and Rothschild is described. The new algorithm has improved strategies for dealing with the known problems of the original algorithm, namely, the leading coefficient problem, the bad-zero problem and the occurrence of extraneous factors. It has an algorithm for correctly predetermining leading coefficients of the factors. A new and efficient p-adic algorithm named EEZ is described. Basically it is a linearly convergent variable-by-variable parallel construction. The improved algorithm is generally faster and requires less storage than the original algorithm. Machine examples with comparative timing are included

  1. Phase extraction based on iterative algorithm using five-frame crossed fringes in phase measuring deflectometry

    Science.gov (United States)

    Jin, Chengying; Li, Dahai; Kewei, E.; Li, Mengyang; Chen, Pengyu; Wang, Ruiyang; Xiong, Zhao

    2018-06-01

    In phase measuring deflectometry, two orthogonal sinusoidal fringe patterns are separately projected on the test surface and the distorted fringes reflected by the surface are recorded, each with a sequential phase shift. Then the two components of the local surface gradients are obtained by triangulation. It usually involves some complicated and time-consuming procedures (fringe projection in the orthogonal directions). In addition, the digital light devices (e.g. LCD screen and CCD camera) are not error free. There are quantization errors for each pixel of both LCD and CCD. Therefore, to avoid the complex process and improve the reliability of the phase distribution, a phase extraction algorithm with five-frame crossed fringes is presented in this paper. It is based on a least-squares iterative process. Using the proposed algorithm, phase distributions and phase shift amounts in two orthogonal directions can be simultaneously and successfully determined through an iterative procedure. Both a numerical simulation and a preliminary experiment are conducted to verify the validity and performance of this algorithm. Experimental results obtained by our method are shown, and comparisons between our experimental results and those obtained by the traditional 16-step phase-shifting algorithm and between our experimental results and those measured by the Fizeau interferometer are made.
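
    To make the least-squares step concrete, the sketch below recovers the phase from phase-shifted fringe frames when the shift amounts are known; the iterative algorithm in the record above additionally re-estimates the shifts themselves, which this simplified sketch does not attempt. The fringe data are synthetic.

        import numpy as np

        def phase_from_shifted_frames(frames, shifts):
            # frames: (K, H, W) intensities I_k = a + b*cos(phi + delta_k); shifts: known delta_k.
            # Per-pixel least squares for [a, b*cos(phi), -b*sin(phi)], then phi = atan2.
            K = len(shifts)
            A = np.column_stack([np.ones(K), np.cos(shifts), np.sin(shifts)])
            I = frames.reshape(K, -1)
            coeff, *_ = np.linalg.lstsq(A, I, rcond=None)
            phi = np.arctan2(-coeff[2], coeff[1])
            return phi.reshape(frames.shape[1:])

        # synthetic test: five frames with equally spaced known shifts
        H = W = 64
        yy, xx = np.mgrid[0:H, 0:W]
        true_phi = 0.05 * xx + 0.02 * yy
        shifts = np.linspace(0, 2 * np.pi, 5, endpoint=False)
        frames = np.stack([100 + 50 * np.cos(true_phi + d) for d in shifts])
        phi = phase_from_shifted_frames(frames, shifts)
        print(np.max(np.abs(np.angle(np.exp(1j * (phi - true_phi))))))   # wrap-aware error, ~0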

  2. Vision-based algorithms for high-accuracy measurements in an industrial bakery

    Science.gov (United States)

    Heleno, Paulo; Davies, Roger; Correia, Bento A. B.; Dinis, Joao

    2002-02-01

    This paper describes the machine vision algorithms developed for VIP3D, a measuring system used in an industrial bakery to monitor the dimensions and weight of loaves of bread (baguettes). The length and perimeter of more than 70 different varieties of baguette are measured with 1-mm accuracy, quickly, reliably and automatically. VIP3D uses a laser triangulation technique to measure the perimeter. The shape of the loaves is approximately cylindrical and the perimeter is defined as the convex hull of a cross-section perpendicular to the baguette axis at mid-length. A camera, mounted obliquely to the measuring plane, captures an image of a laser line projected onto the upper surface of the baguette. Three cameras are used to measure the baguette length, a solution adopted in order to minimize perspective-induced measurement errors. The paper describes in detail the machine vision algorithms developed to perform segmentation of the laser line and subsequent calculation of the perimeter of the baguette. The algorithms used to segment and measure the position of the ends of the baguette, to sub-pixel accuracy, are also described, as are the algorithms used to calibrate the measuring system and compensate for camera-induced image distortion.
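
    One step described above, taking the perimeter as the convex hull of a measured cross-section, can be sketched as follows; the profile points are synthetic and the laser-line segmentation and calibration stages of VIP3D are not reproduced.

        import numpy as np
        from scipy.spatial import ConvexHull

        def profile_perimeter(points):
            # Perimeter of the convex hull of (x, z) samples of the cross-section profile.
            hull = ConvexHull(points)
            v = points[hull.vertices]                    # hull vertices in counter-clockwise order
            return np.sum(np.linalg.norm(np.roll(v, -1, axis=0) - v, axis=1))

        # toy cross-section: half-elliptical upper surface plus the flat support
        theta = np.linspace(0, np.pi, 50)
        upper = np.column_stack([40 * np.cos(theta), 25 * np.sin(theta)])
        base = np.column_stack([np.linspace(-40, 40, 20), np.zeros(20)])
        print(profile_perimeter(np.vstack([upper, base])))   # perimeter in mm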

  3. Can Source Triangulation Be Used to Overcome Limitations of Self-Assessments? Assessing Educational Needs and Professional Competence of Pharmacists Practicing in Qatar.

    Science.gov (United States)

    Kheir, Nadir; Al-Ismail, Muna Said; Al-Nakeeb, Reem

    2017-01-01

    Continuing professional development activities should be designed to meet the identified personal goals of the learner. This article aims to explore the self-perceived competency levels and the professional educational needs of pharmacists in Qatar and to compare these with observations of pharmacy students undergoing experiential training in pharmacies (students) and pharmacy academics, directors, and managers (managers). Three questionnaires were developed and administered to practicing pharmacists, undergraduate pharmacy students who have performed structured experiential training rotations in multiple pharmacy outlets in Qatar, and pharmacy managers. The questionnaires used items extracted from the National Association of Pharmacy Regulatory Authorities (NAPRA) Professional competencies for Canadian pharmacists at entry to practice and measured self- and observed pharmacists' competency and satisfaction with competency level. Training and educational needs were similar between the pharmacists and observers, although there was a trend for pharmacists to choose more fact-intensive topics compared with observers, whose preferences were toward practice areas. There was no association between the competency level of pharmacists as perceived by observers and as self-assessed by pharmacists (P ≤ .05). Pharmacists' self-assessed competency level was consistently higher than that reported by students (P ≤ .05). The results suggest that the use of traditional triangulation might not be sufficient to articulate the professional needs and competencies of practicing pharmacists as part of a strategy to build continuing professional development programs. Pharmacists might have a limited ability to accurately self-assess, and observer assessments might be significantly different from self-assessments, which presents a dilemma as to which assessment is closer to reality. The processes currently used to evaluate competence may need to be enhanced through the use of well

  4. Retrieval Algorithms for Road Surface Modelling Using Laser-Based Mobile Mapping

    Directory of Open Access Journals (Sweden)

    Antero Kukko

    2008-09-01

    Full Text Available Automated processing of the data provided by a laser-based mobile mapping system will be a necessity due to the huge amount of data produced. In the future, vehicle-based laser scanning, here called mobile mapping, should see considerable use for road environment modelling. Since the geometry of the scanning and point density is different from airborne laser scanning, new algorithms are needed for information extraction. In this paper, we propose automatic methods for classifying the road marking and kerbstone points and modelling the road surface as a triangulated irregular network. On the basis of experimental tests, the mean classification accuracies obtained using the automatic method for lines, zebra crossings and kerbstones were 80.6%, 92.3% and 79.7%, respectively.
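
    The road-surface model mentioned above is a triangulated irregular network (TIN); a minimal sketch of building such a TIN with a 2-D Delaunay triangulation and querying surface heights is shown below, using synthetic points rather than classified mobile-mapping data.

        import numpy as np
        from scipy.spatial import Delaunay

        rng = np.random.default_rng(1)
        xy = rng.uniform(0, 10, size=(500, 2))                 # planimetric coordinates (m)
        z = 0.02 * xy[:, 0] + 0.05 * np.sin(xy[:, 1])          # synthetic road-surface heights

        tin = Delaunay(xy)                                     # 2-D Delaunay triangulation

        def tin_height(p):
            # Barycentric interpolation inside the containing triangle (None outside the TIN).
            p = np.asarray(p, dtype=float)
            s = int(tin.find_simplex(p))
            if s == -1:
                return None
            T = tin.transform[s]                               # affine map to barycentric coords
            b = T[:2] @ (p - T[2])
            bary = np.append(b, 1 - b.sum())
            return float(bary @ z[tin.simplices[s]])

        print(tin_height([5.0, 5.0]))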

  5. A Parallel Butterfly Algorithm

    KAUST Repository

    Poulson, Jack; Demanet, Laurent; Maxwell, Nicholas; Ying, Lexing

    2014-01-01

    The butterfly algorithm is a fast algorithm which approximately evaluates a discrete analogue of the integral transform (Equation Presented.) at large numbers of target points when the kernel, K(x, y), is approximately low-rank when restricted to subdomains satisfying a certain simple geometric condition. In d dimensions with O(N^d) quasi-uniformly distributed source and target points, when each appropriate submatrix of K is approximately rank-r, the running time of the algorithm is at most O(r^2 N^d log N). A parallelization of the butterfly algorithm is introduced which, assuming a message latency of α and per-process inverse bandwidth of β, executes in at most (Equation Presented.) time using p processes. This parallel algorithm was then instantiated in the form of the open-source DistButterfly library for the special case where K(x, y) = exp(iΦ(x, y)), where Φ(x, y) is a black-box, sufficiently smooth, real-valued phase function. Experiments on Blue Gene/Q demonstrate impressive strong-scaling results for important classes of phase functions. Using quasi-uniform sources, hyperbolic Radon transforms and an analogue of a three-dimensional generalized Radon transform were observed to strong-scale from 1 node/16 cores up to 1024 nodes/16,384 cores with greater than 90% and 82% efficiency, respectively. © 2014 Society for Industrial and Applied Mathematics.

  7. Agency and Algorithms

    Directory of Open Access Journals (Sweden)

    Hanns Holger Rutz

    2016-11-01

    Full Text Available Although the concept of algorithms has been established a long time ago, their current topicality indicates a shift in the discourse. Classical definitions based on logic seem to be inadequate to describe their aesthetic capabilities. New approaches stress their involvement in material practices as well as their incompleteness. Algorithmic aesthetics can no longer be tied to the static analysis of programs, but must take into account the dynamic and experimental nature of coding practices. It is suggested that the aesthetic objects thus produced articulate something that could be called algorithmicity or the space of algorithmic agency. This is the space or the medium – following Luhmann’s form/medium distinction – where human and machine undergo mutual incursions. In the resulting coupled “extimate” writing process, human initiative and algorithmic speculation cannot be clearly divided out any longer. An observation is attempted of defining aspects of such a medium by drawing a trajectory across a number of sound pieces. The operation of exchange between form and medium I call reconfiguration and it is indicated by this trajectory. 

  8. Algebraic dynamics algorithm: Numerical comparison with Runge-Kutta algorithm and symplectic geometric algorithm

    Institute of Scientific and Technical Information of China (English)

    WANG ShunJin; ZHANG Hua

    2007-01-01

    Based on the exact analytical solution of ordinary differential equations, a truncation of the Taylor series of the exact solution to the Nth order leads to the Nth order algebraic dynamics algorithm. A detailed numerical comparison is presented with Runge-Kutta algorithm and symplectic geometric algorithm for 12 test models. The results show that the algebraic dynamics algorithm can better preserve both geometrical and dynamical fidelity of a dynamical system at a controllable precision, and it can solve the problem of algorithm-induced dissipation for the Runge-Kutta algorithm and the problem of algorithm-induced phase shift for the symplectic geometric algorithm.

  10. Detection of algorithmic trading

    Science.gov (United States)

    Bogoev, Dimitar; Karam, Arzé

    2017-10-01

    We develop a new approach to reflect the behavior of algorithmic traders. Specifically, we provide an analytical and tractable way to infer patterns of quote volatility and price momentum consistent with different types of strategies employed by algorithmic traders, and we propose two ratios to quantify these patterns. Quote volatility ratio is based on the rate of oscillation of the best ask and best bid quotes over an extremely short period of time; whereas price momentum ratio is based on identifying patterns of rapid upward or downward movement in prices. The two ratios are evaluated across several asset classes. We further run a two-stage Artificial Neural Network experiment on the quote volatility ratio; the first stage is used to detect the quote volatility patterns resulting from algorithmic activity, while the second is used to validate the quality of signal detection provided by our measure.

  11. Handbook of Memetic Algorithms

    CERN Document Server

    Cotta, Carlos; Moscato, Pablo

    2012-01-01

    Memetic Algorithms (MAs) are computational intelligence structures combining multiple and various operators in order to address optimization problems. The combination and interaction amongst operators evolves and promotes the diffusion of the most successful units and generates an algorithmic behavior which can handle complex objective functions and hard fitness landscapes. “Handbook of Memetic Algorithms” organizes, in a structured way, all the most important results in the field of MAs from their earliest definition until now. A broad review including various algorithmic solutions as well as successful applications is included in this book. Each class of optimization problems, such as constrained optimization, multi-objective optimization, continuous vs combinatorial problems, uncertainties, is analysed separately and, for each problem, memetic recipes for tackling the difficulties are given with some successful examples. Although this book contains chapters written by multiple authors, ...

  12. Algorithms in invariant theory

    CERN Document Server

    Sturmfels, Bernd

    2008-01-01

    J. Kung and G.-C. Rota, in their 1984 paper, write: "Like the Arabian phoenix rising out of its ashes, the theory of invariants, pronounced dead at the turn of the century, is once again at the forefront of mathematics". The book of Sturmfels is both an easy-to-read textbook for invariant theory and a challenging research monograph that introduces a new approach to the algorithmic side of invariant theory. The Groebner bases method is the main tool by which the central problems in invariant theory become amenable to algorithmic solutions. Students will find the book an easy introduction to this "classical and new" area of mathematics. Researchers in mathematics, symbolic computation, and computer science will get access to a wealth of research ideas, hints for applications, outlines and details of algorithms, worked out examples, and research problems.

  13. The Retina Algorithm

    CERN Multimedia

    CERN. Geneva; PUNZI, Giovanni

    2015-01-01

    Charged particle reconstruction is one of the most demanding computational tasks found in HEP, and it becomes increasingly important to perform it in real time. We envision that HEP would greatly benefit from achieving a long-term goal of making track reconstruction happen transparently as part of the detector readout ("detector-embedded tracking"). We describe here a track-reconstruction approach based on a massively parallel pattern-recognition algorithm, inspired by studies of the processing of visual images by the brain as it happens in nature ('RETINA algorithm'). It turns out that high-quality tracking in large HEP detectors is possible with very small latencies, when this algorithm is implemented in specialized processors, based on current state-of-the-art, high-speed/high-bandwidth digital devices.

  14. A Hybrid DV-Hop Algorithm Using RSSI for Localization in Large-Scale Wireless Sensor Networks.

    Science.gov (United States)

    Cheikhrouhou, Omar; M Bhatti, Ghulam; Alroobaea, Roobaea

    2018-05-08

    With the increasing realization of the Internet-of-Things (IoT) and rapid proliferation of wireless sensor networks (WSN), estimating the location of wireless sensor nodes is emerging as an important issue. Traditional ranging based localization algorithms use triangulation for estimating the physical location of only those wireless nodes that are within one-hop distance from the anchor nodes. Multi-hop localization algorithms, on the other hand, aim at localizing the wireless nodes that can physically be residing at multiple hops away from anchor nodes. These latter algorithms have attracted a growing interest from research community due to the smaller number of required anchor nodes. One such algorithm, known as DV-Hop (Distance Vector Hop), has gained popularity due to its simplicity and lower cost. However, DV-Hop suffers from reduced accuracy due to the fact that it exploits only the network topology (i.e., number of hops to anchors) rather than the distances between pairs of nodes. In this paper, we propose an enhanced DV-Hop localization algorithm that also uses the RSSI values associated with links between one-hop neighbors. Moreover, we exploit already localized nodes by promoting them to become additional anchor nodes. Our simulations have shown that the proposed algorithm significantly outperforms the original DV-Hop localization algorithm and two of its recently published variants, namely RSSI Auxiliary Ranging and the Selective 3-Anchor DV-hop algorithm. More precisely, in some scenarios, the proposed algorithm improves the localization accuracy by almost 95%, 90% and 70% as compared to the basic DV-Hop, Selective 3-Anchor, and RSSI DV-Hop algorithms, respectively.
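
    The basic DV-Hop scheme that the paper improves on can be sketched as follows: anchors flood hop counts, each anchor converts its true inter-anchor distances into an average hop size, and every other node multilaterates from hop-count-derived distances. The deployment below is synthetic and the proposed RSSI enhancement is not included.

        import numpy as np
        from collections import deque

        def bfs_hops(adj, src, n):
            # Minimum hop count from src to every node (the flooding step).
            hops = np.full(n, np.inf)
            hops[src] = 0
            q = deque([src])
            while q:
                u = q.popleft()
                for v in adj[u]:
                    if hops[v] == np.inf:
                        hops[v] = hops[u] + 1
                        q.append(v)
            return hops

        def dv_hop(positions, anchors, radius):
            n = len(positions)
            d = np.linalg.norm(positions[:, None] - positions[None, :], axis=2)
            adj = [np.nonzero((d[i] <= radius) & (np.arange(n) != i))[0] for i in range(n)]
            hops = np.array([bfs_hops(adj, a, n) for a in anchors])      # (n_anchors, n)
            # each anchor's average hop size from the true inter-anchor distances
            hop_size = d[np.ix_(anchors, anchors)].sum(axis=1) / hops[:, anchors].sum(axis=1)
            estimates = {}
            for u in range(n):
                if u in anchors or not np.isfinite(hops[:, u]).all():
                    continue
                dist = hops[:, u] * hop_size[np.argmin(hops[:, u])]      # nearest anchor's hop size
                P = positions[anchors]                                   # linearised multilateration
                A = 2 * (P[:-1] - P[-1])
                b = (dist[-1] ** 2 - dist[:-1] ** 2
                     + np.sum(P[:-1] ** 2, axis=1) - np.sum(P[-1] ** 2))
                estimates[u], *_ = np.linalg.lstsq(A, b, rcond=None)
            return estimates

        rng = np.random.default_rng(3)
        pos = rng.uniform(0, 100, size=(60, 2))
        est = dv_hop(pos, anchors=[0, 1, 2, 3], radius=25.0)
        print(np.mean([np.linalg.norm(est[u] - pos[u]) for u in est]))   # mean error (m)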

  15. Named Entity Linking Algorithm

    Directory of Open Access Journals (Sweden)

    M. F. Panteleev

    2017-01-01

    Full Text Available In tasks of processing text in natural language, Named Entity Linking (NEL) is the task of identifying an entity found in the text and linking it with an entity in a knowledge base (for example, DBpedia). Currently, there is a diversity of approaches to solve this problem, but two main classes can be identified: graph-based approaches and machine learning-based ones. An algorithm based on graph and machine-learning approaches is proposed, following the stated assumptions about the interrelations of named entities in a sentence and in general. In the case of graph-based approaches, it is necessary to identify an optimal set of related entities according to some metric that characterizes the distance between these entities in a graph built on a knowledge base. Due to limitations in processing power, solving this task directly is impossible, so a modification is proposed. Based on machine learning algorithms alone, an independent solution cannot be built due to the small volumes of training datasets relevant to the NEL task. However, their use can contribute to improving the quality of the algorithm. An adaptation of the Latent Dirichlet Allocation model is proposed in order to obtain a measure of the compatibility of attributes of various entities encountered in one context. The efficiency of the proposed algorithm was experimentally tested. A test dataset was independently generated, and on its basis the performance of the proposed algorithm was compared with the open-source product DBpedia Spotlight, which solves the NEL problem. The mockup based on the proposed algorithm showed a low speed as compared to DBpedia Spotlight; however, the fact that it showed higher accuracy indicates the prospects for work in this direction. The main directions of development are proposed in order to increase the accuracy of the system and its productivity.

  16. Law and Order in Algorithmics

    NARCIS (Netherlands)

    Fokkinga, M.M.

    1992-01-01

    An algorithm is the input-output effect of a computer program; mathematically, the notion of algorithm comes close to the notion of function. Just as arithmetic is the theory and practice of calculating with numbers, so is ALGORITHMICS the theory and practice of calculating with algorithms. Just as

  17. A cluster algorithm for graphs

    NARCIS (Netherlands)

    S. van Dongen

    2000-01-01

    A cluster algorithm for graphs called the Markov Cluster algorithm (MCL algorithm) is introduced. The algorithm provides basically an interface to an algebraic process defined on stochastic matrices, called the MCL process. The graphs may be both weighted (with nonnegative weight)
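
    A minimal sketch of the MCL process, alternating expansion (matrix powering) with inflation (element-wise powering followed by column rescaling) on a column-stochastic matrix, is given below; the parameter values and the toy graph are arbitrary.

        import numpy as np

        def mcl(adjacency, expansion=2, inflation=2.0, iters=50):
            # Markov Cluster process: expansion spreads flow along random walks,
            # inflation strengthens strong flows and weakens weak ones.
            M = np.array(adjacency, dtype=float) + np.eye(len(adjacency))   # add self-loops
            M /= M.sum(axis=0, keepdims=True)                               # column-stochastic
            for _ in range(iters):
                M = np.linalg.matrix_power(M, expansion)
                M = M ** inflation
                M /= M.sum(axis=0, keepdims=True)
            # rows with a nonzero diagonal are attractors; each defines one cluster
            clusters = {tuple(np.nonzero(M[i] > 1e-6)[0])
                        for i in np.nonzero(np.diag(M) > 1e-6)[0]}
            return sorted(clusters)

        # two triangles joined by a single edge -> two clusters
        A = [[0, 1, 1, 0, 0, 0], [1, 0, 1, 0, 0, 0], [1, 1, 0, 1, 0, 0],
             [0, 0, 1, 0, 1, 1], [0, 0, 0, 1, 0, 1], [0, 0, 0, 1, 1, 0]]
        print(mcl(A))          # typically [(0, 1, 2), (3, 4, 5)]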

  18. Algorithms for Reinforcement Learning

    CERN Document Server

    Szepesvari, Csaba

    2010-01-01

    Reinforcement learning is a learning paradigm concerned with learning to control a system so as to maximize a numerical performance measure that expresses a long-term objective. What distinguishes reinforcement learning from supervised learning is that only partial feedback is given to the learner about the learner's predictions. Further, the predictions may have long term effects through influencing the future state of the controlled system. Thus, time plays a special role. The goal in reinforcement learning is to develop efficient learning algorithms, as well as to understand the algorithms'

  19. Animation of planning algorithms

    OpenAIRE

    Sun, Fan

    2014-01-01

    Planning is the process of creating a sequence of steps/actions that will satisfy a goal of a problem. The partial order planning (POP) algorithm is one Artificial Intelligence approach to problem planning. While studying the G52PAS module, I found that it is difficult for students to understand this planning algorithm by just reading its pseudo code and doing some written exercises. Students cannot see clearly how each step actually works and might miss some steps because of their confusion. ...

  20. Secondary Vertex Finder Algorithm

    CERN Document Server

    Heer, Sebastian; The ATLAS collaboration

    2017-01-01

    If a jet originates from a b-quark, a b-hadron is formed during the fragmentation process. In its dominant decay modes, the b-hadron decays into a c-hadron via the electroweak interaction. Both b- and c-hadrons have lifetimes long enough, to travel a few millimetres before decaying. Thus displaced vertices from b- and subsequent c-hadron decays provide a strong signature for a b-jet. Reconstructing these secondary vertices (SV) and their properties is the aim of this algorithm. The performance of this algorithm is studied with tt̄ events, requiring at least one lepton, simulated at 13 TeV.

  1. Parallel Algorithms and Patterns

    Energy Technology Data Exchange (ETDEWEB)

    Robey, Robert W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-06-16

    This is a powerpoint presentation on parallel algorithms and patterns. A parallel algorithm is a well-defined, step-by-step computational procedure that emphasizes concurrency to solve a problem. Examples of problems include: Sorting, searching, optimization, matrix operations. A parallel pattern is a computational step in a sequence of independent, potentially concurrent operations that occurs in diverse scenarios with some frequency. Examples are: Reductions, prefix scans, ghost cell updates. We only touch on parallel patterns in this presentation. It really deserves its own detailed discussion which Gabe Rockefeller would like to develop.
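
    As one concrete instance of the parallel patterns listed above, the sketch below writes an inclusive prefix scan in the Hillis-Steele form, whose log2(n) sweeps could each run as one fully parallel step; here the sweeps are executed sequentially with NumPy purely for illustration.

        import numpy as np

        def hillis_steele_scan(a):
            # Inclusive prefix scan as log2(n) data-parallel sweeps: in sweep s,
            # every element adds the element 2**s positions to its left.
            x = np.array(a, dtype=float)
            shift = 1
            while shift < len(x):
                x[shift:] = x[shift:] + x[:-shift]
                shift *= 2
            return x

        print(hillis_steele_scan([3, 1, 7, 0, 4, 1, 6, 3]))   # -> running (cumulative) sums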

  2. Randomized Filtering Algorithms

    DEFF Research Database (Denmark)

    Katriel, Irit; Van Hentenryck, Pascal

    2008-01-01

    of AllDifferent and its generalization, the Global Cardinality Constraint. The first delayed filtering scheme is a Monte Carlo algorithm: its running time is superior, in the worst case, to that of enforcing arc consistency after every domain event, while its filtering effectiveness is analyzed...... in the expected sense. The second scheme is a Las Vegas algorithm using filtering triggers: its effectiveness is the same as enforcing arc consistency after every domain event, while in the expected case it is faster by a factor of m/n, where n and m are, respectively, the number of nodes and edges...

  3. New method of three-dimensional reconstruction from two-dimensional MR data sets

    International Nuclear Information System (INIS)

    Wrazidlo, W.; Schneider, S.; Brambs, H.J.; Richter, G.M.; Kauffmann, G.W.; Geiger, B.; Fischer, C.

    1989-01-01

    In medical diagnosis and therapy, cross-sectional images are obtained by means of US, CT, or MR imaging. The authors propose a new solution to the problem of constructing a shape over a set of cross-sectional contours from two-dimensional (2D) MR data sets. The authors' method reduces the problem of constructing a shape over the cross sections to one of constructing a sequence of partial shapes, each of them connecting two cross sections lying on adjacent planes. The solution makes use of the Delaunay triangulation, which is isomorphic in that specific situation. The authors compute this Delaunay triangulation. Shape reconstruction is then achieved, section by section, by pruning Delaunay triangulations

  4. On Using Particle Finite Element for Hydrodynamics Problems Solving

    Directory of Open Access Journals (Sweden)

    E. V. Davidova

    2015-01-01

    Full Text Available The aim of the present research is to develop software for the Particle Finite Element Method (PFEM) and to verify it on the model problem of viscous incompressible flow simulation in a square cavity. The Lagrangian description of the medium motion is used: the nodes of the finite element mesh move together with the fluid, which allows us to consider them as particles of the medium. Mesh cells deform during the time-stepping procedure, so it is necessary to reconstruct the mesh to preserve the stability of the finite element numerical procedure. The meshing algorithm allows us to obtain a mesh that satisfies the Delaunay criterion; it is called “the possible triangles method”. This algorithm is based on the well-known Fortune method of constructing the Voronoi diagram for a certain set of points in the plane. A graphical representation of the possible triangles method is shown. It is convenient to use a generalization of the Delaunay triangulation in order to construct meshes with polygonal cells in the case of multiple nodes lying close to the same circle. The viscous incompressible fluid flow is described by the Navier-Stokes equations and the mass conservation equation with certain initial and boundary conditions. A fractional steps method, which allows us to avoid non-physical oscillations of the pressure, provides the time-stepping procedure. Using the finite element discretization and the Bubnov-Galerkin method allows us to carry out spatial discretization. For the calculation of form functions of the finite element mesh with polygonal cells, ...

  5. An Ordering Linear Unification Algorithm

    Institute of Scientific and Technical Information of China (English)

    胡运发

    1989-01-01

    In this paper, we present an ordering linear unification algorithm (OLU). A new idea on substitution of the binding terms is introduced to the algorithm, which is able to overcome some drawbacks of other algorithms, e.g., the MM algorithm [1] and the RG1 and RG2 algorithms [2]. In particular, if we use directed cyclic graphs, the algorithm need not check the binding order, so the OLU algorithm can also be applied to the infinite tree data structure, and higher efficiency can be expected. The paper focuses upon the discussion of the OLU algorithm and a partial order structure with respect to the unification algorithm. This algorithm has been implemented in the GKD-PROLOG/VAX 780 interpreting system. Experimental results have shown that the algorithm is very simple and efficient.

  6. New Optimization Algorithms in Physics

    CERN Document Server

    Hartmann, Alexander K

    2004-01-01

    Many physicists are not aware of the fact that they can solve their problems by applying optimization algorithms. Since the number of such algorithms is steadily increasing, many new algorithms have not been presented comprehensively until now. This presentation of recently developed algorithms applied in physics, including demonstrations of how they work and related results, aims to encourage their application, and as such the algorithms selected cover concepts and methods from statistical physics to optimization problems emerging in theoretical computer science.

  7. A propositional CONEstrip algorithm

    NARCIS (Netherlands)

    E. Quaeghebeur (Erik); A. Laurent; O. Strauss; B. Bouchon-Meunier; R.R. Yager (Ronald)

    2014-01-01

    textabstractWe present a variant of the CONEstrip algorithm for checking whether the origin lies in a finitely generated convex cone that can be open, closed, or neither. This variant is designed to deal efficiently with problems where the rays defining the cone are specified as linear combinations

  8. Modular Regularization Algorithms

    DEFF Research Database (Denmark)

    Jacobsen, Michael

    2004-01-01

    The class of linear ill-posed problems is introduced along with a range of standard numerical tools and basic concepts from linear algebra, statistics and optimization. Known algorithms for solving linear inverse ill-posed problems are analyzed to determine how they can be decomposed into indepen...

  9. Efficient graph algorithms

    Indian Academy of Sciences (India)

    Shortest path problems. Road network on cities and we want to navigate between cities. ..... The rest of the talk... Computing connectivities between all pairs of vertices: a good algorithm with respect to both space and time to compute the exact solution. ...

  10. The Copenhagen Triage Algorithm

    DEFF Research Database (Denmark)

    Hasselbalch, Rasmus Bo; Plesner, Louis Lind; Pries-Heje, Mia

    2016-01-01

    is non-inferior to an existing triage model in a prospective randomized trial. METHODS: The Copenhagen Triage Algorithm (CTA) study is a prospective two-center, cluster-randomized, cross-over, non-inferiority trial comparing CTA to the Danish Emergency Process Triage (DEPT). We include patients ≥16 years...

  11. de Casteljau's Algorithm Revisited

    DEFF Research Database (Denmark)

    Gravesen, Jens

    1998-01-01

    It is demonstrated how all the basic properties of Bezier curves can be derived swiftly and efficiently without any reference to the Bernstein polynomials and essentially with only geometric arguments. This is achieved by viewing one step in de Casteljau's algorithm as an operator (the de Casteljau...
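
    The operator view mentioned above acts by repeated linear interpolation of the control polygon; a minimal sketch of evaluating a Bezier curve this way is given below, with an arbitrary set of control points.

        def de_casteljau(control_points, t):
            # Evaluate a Bezier curve at parameter t by repeatedly interpolating
            # adjacent points of the control polygon (works in any dimension).
            pts = [tuple(p) for p in control_points]
            while len(pts) > 1:
                pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
                       for p, q in zip(pts, pts[1:])]
            return pts[0]

        ctrl = [(0.0, 0.0), (1.0, 2.0), (2.0, 0.0)]    # a quadratic Bezier curve
        print(de_casteljau(ctrl, 0.5))                 # -> (1.0, 1.0), the curve's midpoint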

  12. Algorithms in ambient intelligence

    NARCIS (Netherlands)

    Aarts, E.H.L.; Korst, J.H.M.; Verhaegh, W.F.J.; Weber, W.; Rabaey, J.M.; Aarts, E.

    2005-01-01

    We briefly review the concept of ambient intelligence and discuss its relation with the domain of intelligent algorithms. By means of four examples of ambient intelligent systems, we argue that new computing methods and quantification measures are needed to bridge the gap between the class of

  13. General Algorithm (High level)

    Indian Academy of Sciences (India)

    General Algorithm (High level). Iteratively. Use Tightness Property to remove points of P1,..,Pi. Use random sampling to get a Random Sample (of enough points) from the next largest cluster, Pi+1. Use the Random Sampling Procedure to approximate ci+1 using the ...

  14. Comprehensive eye evaluation algorithm

    Science.gov (United States)

    Agurto, C.; Nemeth, S.; Zamora, G.; Vahtel, M.; Soliz, P.; Barriga, S.

    2016-03-01

    In recent years, several research groups have developed automatic algorithms to detect diabetic retinopathy (DR) in individuals with diabetes (DM), using digital retinal images. Studies have indicated that diabetics have 1.5 times the annual risk of developing primary open angle glaucoma (POAG) as do people without DM. Moreover, DM patients have 1.8 times the risk for age-related macular degeneration (AMD). Although numerous investigators are developing automatic DR detection algorithms, there have been few successful efforts to create an automatic algorithm that can detect other ocular diseases, such as POAG and AMD. Consequently, our aim in the current study was to develop a comprehensive eye evaluation algorithm that not only detects DR in retinal images, but also automatically identifies glaucoma suspects and AMD by integrating other personal medical information with the retinal features. The proposed system is fully automatic and provides the likelihood of each of the three eye diseases. The system was evaluated in two datasets of 104 and 88 diabetic cases. For each eye, we used two non-mydriatic digital color fundus photographs (macula and optic disc centered) and, when available, information about age, duration of diabetes, cataracts, hypertension, gender, and laboratory data. Our results show that the combination of multimodal features can increase the AUC by up to 5%, 7%, and 8% in the detection of AMD, DR, and glaucoma respectively. Marked improvement was achieved when laboratory results were combined with retinal image features.

  15. Enhanced sampling algorithms.

    Science.gov (United States)

    Mitsutake, Ayori; Mori, Yoshiharu; Okamoto, Yuko

    2013-01-01

    In biomolecular systems (especially all-atom models) with many degrees of freedom such as proteins and nucleic acids, there exist an astronomically large number of local-minimum-energy states. Conventional simulations in the canonical ensemble are of little use, because they tend to get trapped in states of these energy local minima. Enhanced conformational sampling techniques are thus in great demand. A simulation in generalized ensemble performs a random walk in potential energy space and can overcome this difficulty. From only one simulation run, one can obtain canonical-ensemble averages of physical quantities as functions of temperature by the single-histogram and/or multiple-histogram reweighting techniques. In this article we review uses of the generalized-ensemble algorithms in biomolecular systems. Three well-known methods, namely, multicanonical algorithm, simulated tempering, and replica-exchange method, are described first. Both Monte Carlo and molecular dynamics versions of the algorithms are given. We then present various extensions of these three generalized-ensemble algorithms. The effectiveness of the methods is tested with short peptide and protein systems.
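
    Of the three methods named above, the replica-exchange method is the simplest to sketch: independent Metropolis walkers run at different temperatures and periodically attempt to swap configurations with acceptance probability min(1, exp[(beta_i - beta_j)(E_i - E_j)]). The double-well potential, temperature ladder and step sizes below are arbitrary illustrative choices.

        import numpy as np

        rng = np.random.default_rng(0)

        def energy(x):
            return (x ** 2 - 1.0) ** 2                 # double well, barrier of height 1 at x = 0

        temps = np.array([0.05, 0.1, 0.2, 0.4, 0.8])   # temperature ladder
        betas = 1.0 / temps
        x = np.full(len(temps), -1.0)                  # one replica (walker) per temperature
        right_well = 0

        for step in range(1, 20001):
            # ordinary Metropolis move within each replica
            prop = x + 0.3 * rng.standard_normal(len(x))
            accept = rng.random(len(x)) < np.exp(-betas * (energy(prop) - energy(x)))
            x = np.where(accept, prop, x)
            # replica-exchange step: try to swap a random neighbouring temperature pair
            if step % 10 == 0:
                i = rng.integers(len(temps) - 1)
                delta = (betas[i] - betas[i + 1]) * (energy(x[i]) - energy(x[i + 1]))
                if rng.random() < np.exp(min(0.0, delta)):
                    x[i], x[i + 1] = x[i + 1], x[i]
            right_well += x[0] > 0

        # without exchanges the coldest replica would stay trapped near x = -1;
        # with them it samples both wells (the fraction approaches ~0.5 for long runs)
        print(f"coldest replica in the right-hand well {right_well / 20000:.0%} of the time")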

  16. Algorithm Theory - SWAT 2006

    DEFF Research Database (Denmark)

    This book constitutes the refereed proceedings of the 10th Scandinavian Workshop on Algorithm Theory, SWAT 2006, held in Riga, Latvia, in July 2006. The 36 revised full papers presented together with 3 invited papers were carefully reviewed and selected from 154 submissions. The papers address all...

  17. Optimal Quadratic Programming Algorithms

    CERN Document Server

    Dostal, Zdenek

    2009-01-01

    Quadratic programming (QP) is one technique that allows for the optimization of a quadratic function in several variables in the presence of linear constraints. This title presents various algorithms for solving large QP problems. It is suitable as an introductory text on quadratic programming for graduate students and researchers

  18. Benchmarking monthly homogenization algorithms

    Science.gov (United States)

    Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratianni, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.

    2011-08-01

    The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random break-type inhomogeneities were added to the simulated datasets modeled as a Poisson process with normally distributed breakpoint sizes. To approximate real world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added. Participants provided 25 separate homogenized contributions as part of the blind study as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data

  19. Python algorithms mastering basic algorithms in the Python language

    CERN Document Server

    Hetland, Magnus Lie

    2014-01-01

    Python Algorithms, Second Edition explains the Python approach to algorithm analysis and design. Written by Magnus Lie Hetland, author of Beginning Python, this book is sharply focused on classical algorithms, but it also gives a solid understanding of fundamental algorithmic problem-solving techniques. The book deals with some of the most important and challenging areas of programming and computer science in a highly readable manner. It covers both algorithmic theory and programming practice, demonstrating how theory is reflected in real Python programs. Well-known algorithms and data struc

  20. The Algorithm for Algorithms: An Evolutionary Algorithm Based on Automatic Designing of Genetic Operators

    Directory of Open Access Journals (Sweden)

    Dazhi Jiang

    2015-01-01

    Full Text Available At present there is a wide range of evolutionary algorithms available to researchers and practitioners. Despite the great diversity of these algorithms, virtually all of the algorithms share one feature: they have been manually designed. A fundamental question is “are there any algorithms that can design evolutionary algorithms automatically?” A more complete definition of the question is “can a computer construct an algorithm which will generate algorithms according to the requirements of a problem?” In this paper, a novel evolutionary algorithm based on automatic designing of genetic operators is presented to address these questions. The resulting algorithm not only explores solutions in the problem space like most traditional evolutionary algorithms do, but also automatically generates genetic operators in the operator space. In order to verify the performance of the proposed algorithm, comprehensive experiments on 23 well-known benchmark optimization problems are conducted. The results show that the proposed algorithm can outperform the standard differential evolution algorithm in terms of convergence speed and solution accuracy, which shows that the algorithm designed automatically by computers can compete with the algorithms designed by human beings.

  1. Reactive Collision Avoidance Algorithm

    Science.gov (United States)

    Scharf, Daniel; Acikmese, Behcet; Ploen, Scott; Hadaegh, Fred

    2010-01-01

    The reactive collision avoidance (RCA) algorithm allows a spacecraft to find a fuel-optimal trajectory for avoiding an arbitrary number of colliding spacecraft in real time while accounting for acceleration limits. In addition to spacecraft, the technology can be used for vehicles that can accelerate in any direction, such as helicopters and submersibles. In contrast to existing, passive algorithms that simultaneously design trajectories for a cluster of vehicles working to achieve a common goal, RCA is implemented onboard spacecraft only when an imminent collision is detected, and then plans a collision avoidance maneuver for only that host vehicle, thus preventing a collision in an off-nominal situation for which passive algorithms cannot. An example scenario for such a situation might be when a spacecraft in the cluster is approaching another one, but enters safe mode and begins to drift. Functionally, the RCA detects colliding spacecraft, plans an evasion trajectory by solving the Evasion Trajectory Problem (ETP), and then recovers after the collision is avoided. A direct optimization approach was used to develop the algorithm so it can run in real time. In this innovation, a parameterized class of avoidance trajectories is specified, and then the optimal trajectory is found by searching over the parameters. The class of trajectories is selected as bang-off-bang as motivated by optimal control theory. That is, an avoiding spacecraft first applies full acceleration in a constant direction, then coasts, and finally applies full acceleration to stop. The parameter optimization problem can be solved offline and stored as a look-up table of values. Using a look-up table allows the algorithm to run in real time. Given a colliding spacecraft, the properties of the collision geometry serve as indices of the look-up table that gives the optimal trajectory. For multiple colliding spacecraft, the set of trajectories that avoid all spacecraft is rapidly searched on
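    The bang-off-bang trajectory class described above can be illustrated with a one-dimensional, rest-to-rest sketch: full acceleration, coast, then full deceleration to stop. The setting, the function below and the numbers in the usage line are simplifications for illustration only; the actual ETP searches over such parameters (direction and burn/coast durations) via the onboard look-up table.

```python
def bang_off_bang(t, a_max, t_burn, t_coast):
    """Evaluate a 1-D bang-off-bang profile starting from rest:
    accelerate at +a_max for t_burn, coast for t_coast, then decelerate at
    -a_max for t_burn to stop. Returns (position, velocity) at time t."""
    t1, t2, t3 = t_burn, t_burn + t_coast, 2 * t_burn + t_coast
    if t <= t1:                                    # full acceleration
        return 0.5 * a_max * t ** 2, a_max * t
    x1, v1 = 0.5 * a_max * t1 ** 2, a_max * t1
    if t <= t2:                                    # coast at constant velocity
        return x1 + v1 * (t - t1), v1
    dt = min(t, t3) - t2                           # full deceleration back to rest
    return x1 + v1 * t_coast + v1 * dt - 0.5 * a_max * dt ** 2, v1 - a_max * dt

# Displacement of a 2 s burn / 10 s coast / 2 s burn profile at 0.1 m/s^2:
print(bang_off_bang(14.0, a_max=0.1, t_burn=2.0, t_coast=10.0))  # (2.4 m, 0.0 m/s)
```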

  2. Partitional clustering algorithms

    CERN Document Server

    2015-01-01

    This book summarizes the state-of-the-art in partitional clustering. Clustering, the unsupervised classification of patterns into groups, is one of the most important tasks in exploratory data analysis. Primary goals of clustering include gaining insight into, classifying, and compressing data. Clustering has a long and rich history that spans a variety of scientific disciplines including anthropology, biology, medicine, psychology, statistics, mathematics, engineering, and computer science. As a result, numerous clustering algorithms have been proposed since the early 1950s. Among these algorithms, partitional (nonhierarchical) ones have found many applications, especially in engineering and computer science. This book provides coverage of consensus clustering, constrained clustering, large scale and/or high dimensional clustering, cluster validity, cluster visualization, and applications of clustering. Examines clustering as it applies to large and/or high-dimensional data sets commonly encountered in reali...

  3. Treatment Algorithm for Ameloblastoma

    Directory of Open Access Journals (Sweden)

    Madhumati Singh

    2014-01-01

    Full Text Available Ameloblastoma is the second most common benign odontogenic tumour (Shafer et al. 2006 which constitutes 1–3% of all cysts and tumours of jaw, with locally aggressive behaviour, high recurrence rate, and a malignant potential (Chaine et al. 2009. Various treatment algorithms for ameloblastoma have been reported; however, a universally accepted approach remains unsettled and controversial (Chaine et al. 2009. The treatment algorithm to be chosen depends on size (Escande et al. 2009 and Sampson and Pogrel 1999, anatomical location (Feinberg and Steinberg 1996, histologic variant (Philipsen and Reichart 1998, and anatomical involvement (Jackson et al. 1996. In this paper various such treatment modalities which include enucleation and peripheral osteotomy, partial maxillectomy, segmental resection and reconstruction done with fibula graft, and radical resection and reconstruction done with rib graft and their recurrence rate are reviewed with study of five cases.

  4. An Algorithmic Diversity Diet?

    DEFF Research Database (Denmark)

    Sørensen, Jannick Kirk; Schmidt, Jan-Hinrik

    2016-01-01

    With the growing influence of personalized algorithmic recommender systems on the exposure of media content to users, the relevance of discussing the diversity of recommendations increases, particularly as far as public service media (PSM) is concerned. An imagined implementation of a diversity...... diet system however triggers not only the classic discussion of the reach – distinctiveness balance for PSM, but also shows that ‘diversity’ is understood very differently in algorithmic recommender system communities than it is editorially and politically in the context of PSM. The design...... of a diversity diet system generates questions not just about editorial power, personal freedom and techno-paternalism, but also about the embedded politics of recommender systems as well as the human skills affiliated with PSM editorial work and the nature of PSM content....

  5. DAL Algorithms and Python

    CERN Document Server

    Aydemir, Bahar

    2017-01-01

    The Trigger and Data Acquisition (TDAQ) system of the ATLAS detector at the Large Hadron Collider (LHC) at CERN is composed of a large number of distributed hardware and software components. The TDAQ system consists of about 3000 computers and more than 25000 applications which, in a coordinated manner, provide the data-taking functionality of the overall system. There are a number of online services required to configure, monitor and control the ATLAS data taking. In particular, the configuration service is used to provide the configuration of the above components. The configuration of the ATLAS data acquisition system is stored in an XML-based object database named OKS. The DAL (Data Access Library) allows C++, Java and Python clients to access its information in a distributed environment. Some information has a quite complicated structure, so its extraction requires writing special algorithms. The algorithms are available in the C++ programming language and have been partially reimplemented in Java. The goal of the projec...

  6. Genetic algorithm essentials

    CERN Document Server

    Kramer, Oliver

    2017-01-01

    This book introduces readers to genetic algorithms (GAs) with an emphasis on making the concepts, algorithms, and applications discussed as easy to understand as possible. Further, it avoids a great deal of formalisms and thus opens the subject to a broader audience in comparison to manuscripts overloaded by notations and equations. The book is divided into three parts, the first of which provides an introduction to GAs, starting with basic concepts like evolutionary operators and continuing with an overview of strategies for tuning and controlling parameters. In turn, the second part focuses on solution space variants like multimodal, constrained, and multi-objective solution spaces. Lastly, the third part briefly introduces theoretical tools for GAs, the intersections and hybridizations with machine learning, and highlights selected promising applications.

  7. Boosting foundations and algorithms

    CERN Document Server

    Schapire, Robert E

    2012-01-01

    Boosting is an approach to machine learning based on the idea of creating a highly accurate predictor by combining many weak and inaccurate "rules of thumb." A remarkably rich theory has evolved around boosting, with connections to a range of topics, including statistics, game theory, convex optimization, and information geometry. Boosting algorithms have also enjoyed practical success in such fields as biology, vision, and speech processing. At various times in its history, boosting has been perceived as mysterious, controversial, even paradoxical.

  8. Stochastic split determinant algorithms

    International Nuclear Information System (INIS)

    Horvatha, Ivan

    2000-01-01

    I propose a large class of stochastic Markov processes associated with probability distributions analogous to that of lattice gauge theory with dynamical fermions. The construction incorporates the idea of approximate spectral split of the determinant through local loop action, and the idea of treating the infrared part of the split through explicit diagonalizations. I suggest that exact algorithms of practical relevance might be based on Markov processes so constructed

  9. Quantum gate decomposition algorithms.

    Energy Technology Data Exchange (ETDEWEB)

    Slepoy, Alexander

    2006-07-01

    Quantum computing algorithms can be conveniently expressed in the format of quantum logic circuits. Such circuits consist of sequential coupled operations, termed "quantum gates", operating on the quantum analogs of bits, called qubits. We review a recently proposed method [1] for constructing general "quantum gates" operating on n qubits, as composed of a sequence of generic elementary "gates".

  10. KAM Tori Construction Algorithms

    Science.gov (United States)

    Wiesel, W.

    In this paper we evaluate and compare two algorithms for the calculation of KAM tori in Hamiltonian systems. The direct fitting of a torus Fourier series to a numerically integrated trajectory is the first method, while an accelerated finite Fourier transform is the second method. The finite Fourier transform, with Hanning window functions, is by far superior in both computational loading and numerical accuracy. Some thoughts on applications of KAM tori are offered.

  11. Irregular Applications: Architectures & Algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Feo, John T.; Villa, Oreste; Tumeo, Antonino; Secchi, Simone

    2012-02-06

    Irregular applications are characterized by irregular data structures, control and communication patterns. Novel irregular high performance applications which deal with large data sets have recently appeared. Unfortunately, current high performance systems and software infrastructures execute irregular algorithms poorly. Only coordinated efforts by end users, area specialists and computer scientists that consider both the architecture and the software stack may be able to provide solutions to the challenges of modern irregular applications.

  12. Large scale tracking algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, Ross L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Love, Joshua Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Melgaard, David Kennett [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Karelitz, David B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Pitts, Todd Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Zollweg, Joshua David [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Anderson, Dylan Z. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Nandy, Prabal [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Whitlow, Gary L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bender, Daniel A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Byrne, Raymond Harry [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-01-01

    Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately, will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.

  13. NEUTRON ALGORITHM VERIFICATION TESTING

    International Nuclear Information System (INIS)

    COWGILL, M.; MOSBY, W.; ARGONNE NATIONAL LABORATORY-WEST

    2000-01-01

    Active well coincidence counter assays have been performed on uranium metal highly enriched in 235 U. The data obtained in the present program, together with highly enriched uranium (HEU) metal data obtained in other programs, have been analyzed using two approaches, the standard approach and an alternative approach developed at BNL. Analysis of the data with the standard approach revealed that the form of the relationship between the measured reals and the 235 U mass varied, being sometimes linear and sometimes a second-order polynomial. In contrast, application of the BNL algorithm, which takes into consideration the totals, consistently yielded linear relationships between the totals-corrected reals and the 235 U mass. The constants in these linear relationships varied with geometric configuration and level of enrichment. This indicates that, when the BNL algorithm is used, calibration curves can be established with fewer data points and with more certainty than if a standard algorithm is used. However, this potential advantage has only been established for assays of HEU metal. In addition, the method is sensitive to the stability of natural background in the measurement facility

  14. Convex hull ranking algorithm for multi-objective evolutionary algorithms

    NARCIS (Netherlands)

    Davoodi Monfrared, M.; Mohades, A.; Rezaei, J.

    2012-01-01

    Due to many applications of multi-objective evolutionary algorithms in real world optimization problems, several studies have been done to improve these algorithms in recent years. Since most multi-objective evolutionary algorithms are based on the non-dominated principle, and their complexity

  15. Refueling Stop Activity Detection and Gas Station Extraction Using Crowdsourcing Vehicle Trajectory Data

    Directory of Open Access Journals (Sweden)

    YANG Wei

    2017-07-01

    Full Text Available In view of the deficiencies of current methods for surveying gas stations, an approach is proposed to extract gas stations from vehicle traces. Firstly, the spatio-temporal characteristics of individual and collective refueling behavior in trajectories are analyzed from the aspects of movement features and geometric patterns. Secondly, based on a Stop/Move model, a velocity-sequence linear clustering algorithm is proposed to extract refueling stop tracks. Finally, methods including Delaunay triangulation, Fourier shape recognition and semantic constraints are used to identify and extract gas stations. An experiment using 7 days of taxi GPS traces in Beijing verified the novel method. In the experiment, 482 gas stations were extracted and the correctness rate reached 93.1%.
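    A minimal sketch of the stop-detection step in the spirit of the Stop/Move model described above: candidate refueling stops are taken as low-speed runs that last long enough. The speed and duration thresholds and the function interface are assumptions for illustration, not the paper's velocity-sequence linear clustering algorithm.

```python
import numpy as np

def detect_stops(times_s, speeds_mps, v_max=1.0, min_duration_s=120):
    """Return (start, end) index pairs of candidate stop segments.

    A stop is a maximal run of samples with speed below v_max that lasts
    at least min_duration_s. Thresholds are illustrative, not the paper's.
    """
    stops, start = [], None
    for i, v in enumerate(speeds_mps):
        if v < v_max and start is None:
            start = i
        elif v >= v_max and start is not None:
            if times_s[i - 1] - times_s[start] >= min_duration_s:
                stops.append((start, i - 1))
            start = None
    if start is not None and times_s[-1] - times_s[start] >= min_duration_s:
        stops.append((start, len(speeds_mps) - 1))
    return stops

t = np.arange(0, 600, 10)                       # one sample every 10 s
v = np.where((t > 200) & (t < 400), 0.2, 8.0)   # a low-speed episode in the middle
print(detect_stops(t, v))
```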

  16. Perceptually stable regions for arbitrary polygons.

    Science.gov (United States)

    Rocha, J

    2003-01-01

    Zou and Yan have recently developed a skeletonization algorithm of digital shapes based on a regularity/singularity analysis; they use the polygon whose vertices are the boundary pixels of the image to compute a constrained Delaunay triangulation (CDT) in order to find local symmetries and stable regions. Their method has produced good results but it is slow since its complexity depends on the number of contour pixels. This paper presents an extension of their technique to handle arbitrary polygons, not only polygons of short edges. Consequently, not only can we achieve results as good as theirs for digital images, but we can also compute skeletons of polygons of any number of edges. Since we can handle polygonal approximations of figures, the skeletons are more resilient to noise and faster to process.

  17. Foundations of genetic algorithms 1991

    CERN Document Server

    1991-01-01

    Foundations of Genetic Algorithms 1991 (FOGA 1) discusses the theoretical foundations of genetic algorithms (GA) and classifier systems.This book compiles research papers on selection and convergence, coding and representation, problem hardness, deception, classifier system design, variation and recombination, parallelization, and population divergence. Other topics include the non-uniform Walsh-schema transform; spurious correlations and premature convergence in genetic algorithms; and variable default hierarchy separation in a classifier system. The grammar-based genetic algorithm; condition

  18. THE APPROACHING TRAIN DETECTION ALGORITHM

    OpenAIRE

    S. V. Bibikov

    2015-01-01

    The paper deals with a detection algorithm for rail vibroacoustic waves caused by an approaching train against a background of increased noise. The urgency of developing a train detection algorithm in view of increased rail noise, when railway lines are close to roads or road intersections, is justified. The algorithm is based on a method for detecting weak signals in a noisy environment. The final expression for the information statistic is adjusted. We present the results of algorithm research and t...

  19. Combinatorial optimization algorithms and complexity

    CERN Document Server

    Papadimitriou, Christos H

    1998-01-01

    This clearly written, mathematically rigorous text includes a novel algorithmic exposition of the simplex method and also discusses the Soviet ellipsoid algorithm for linear programming; efficient algorithms for network flow, matching, spanning trees, and matroids; the theory of NP-complete problems; approximation algorithms, local search heuristics for NP-complete problems, more. All chapters are supplemented by thought-provoking problems. A useful work for graduate-level students with backgrounds in computer science, operations research, and electrical engineering.

  20. Essential algorithms a practical approach to computer algorithms

    CERN Document Server

    Stephens, Rod

    2013-01-01

    A friendly and accessible introduction to the most useful algorithms Computer algorithms are the basic recipes for programming. Professional programmers need to know how to use algorithms to solve difficult programming problems. Written in simple, intuitive English, this book describes how and when to use the most practical classic algorithms, and even how to create new algorithms to meet future needs. The book also includes a collection of questions that can help readers prepare for a programming job interview. Reveals methods for manipulating common data structures s

  1. Efficient GPS Position Determination Algorithms

    National Research Council Canada - National Science Library

    Nguyen, Thao Q

    2007-01-01

    ... differential GPS algorithm for a network of users. The stand-alone user GPS algorithm is a direct, closed-form, and efficient new position determination algorithm that exploits the closed-form solution of the GPS trilateration equations and works...

  2. Algorithmic approach to diagram techniques

    International Nuclear Information System (INIS)

    Ponticopoulos, L.

    1980-10-01

    An algorithmic approach to diagram techniques of elementary particles is proposed. The definition and axiomatics of the theory of algorithms are presented, followed by the list of instructions of an algorithm formalizing the construction of graphs and the assignment of mathematical objects to them. (T.A.)

  3. Selfish Gene Algorithm Vs Genetic Algorithm: A Review

    Science.gov (United States)

    Ariff, Norharyati Md; Khalid, Noor Elaiza Abdul; Hashim, Rathiah; Noor, Noorhayati Mohamed

    2016-11-01

    Evolutionary algorithms are among the algorithms inspired by nature. Within little more than a decade, hundreds of papers have reported successful applications of EAs. This paper considers the Selfish Gene Algorithm (SFGA), one of the latest evolutionary algorithms (EAs), inspired by the Selfish Gene Theory, an interpretation of Darwinian ideas by the biologist Richard Dawkins in 1989. Following a brief introduction to the SFGA, the chronology of its evolution is presented. The purpose of this paper is to present an overview of the concepts of the SFGA as well as its opportunities and challenges. Accordingly, the history of and steps involved in the algorithm are discussed, and its different applications, together with an analysis of these applications, are evaluated.

  4. Honing process optimization algorithms

    Science.gov (United States)

    Kadyrov, Ramil R.; Charikov, Pavel N.; Pryanichnikova, Valeria V.

    2018-03-01

    This article considers the relevance of honing processes for creating high-quality mechanical engineering products. The features of the honing process are revealed and such important concepts as the task for optimization of honing operations, the optimal structure of the honing working cycles, stepped and stepless honing cycles, simulation of processing and its purpose are emphasized. It is noted that the reliability of the mathematical model determines the quality parameters of the honing process control. An algorithm for continuous control of the honing process is proposed. The process model reliably describes the machining of a workpiece in a sufficiently wide area and can be used to operate the CNC machine CC743.

  5. Opposite Degree Algorithm and Its Applications

    Directory of Open Access Journals (Sweden)

    Xiao-Guang Yue

    2015-12-01

    Full Text Available The Opposite Degree (OD) algorithm is an intelligent algorithm proposed by Yue Xiaoguang et al. The opposite degree algorithm is mainly based on the concept of opposite degree, combined with ideas from neural network design, genetic algorithms, and clustering analysis. The OD algorithm is divided into two sub-algorithms, namely the opposite degree - numerical computation (OD-NC) algorithm and the opposite degree - classification computation (OD-CC) algorithm.

  6. Recognition algorithms in knot theory

    International Nuclear Information System (INIS)

    Dynnikov, I A

    2003-01-01

    In this paper the problem of constructing algorithms for comparing knots and links is discussed. A survey of existing approaches and basic results in this area is given. In particular, diverse combinatorial methods for representing links are discussed, the Haken algorithm for recognizing a trivial knot (the unknot) and a scheme for constructing a general algorithm (using Haken's ideas) for comparing links are presented, an approach based on representing links by closed braids is described, the known algorithms for solving the word problem and the conjugacy problem for braid groups are described, and the complexity of the algorithms under consideration is discussed. A new method of combinatorial description of knots is given together with a new algorithm (based on this description) for recognizing the unknot by using a procedure for monotone simplification. In the conclusion of the paper several problems are formulated whose solution could help to advance towards the 'algorithmization' of knot theory

  7. Hybrid Cryptosystem Using Tiny Encryption Algorithm and LUC Algorithm

    Science.gov (United States)

    Rachmawati, Dian; Sharif, Amer; Jaysilen; Andri Budiman, Mohammad

    2018-01-01

    Security is a very important issue in data transmission, and there are many methods for making files more secure. One of these methods is cryptography. Cryptography is a method of securing a file by writing hidden code that covers the original file; someone not involved in the cryptography cannot decrypt the hidden code to read the original file. Many methods are used in cryptography; one of them is the hybrid cryptosystem. A hybrid cryptosystem uses a symmetric algorithm to secure the file and an asymmetric algorithm to secure the symmetric algorithm's key. In this research, the TEA algorithm is used as the symmetric algorithm and the LUC algorithm is used as the asymmetric algorithm. The system is tested by encrypting and decrypting a file with the TEA algorithm and using the LUC algorithm to encrypt and decrypt the TEA key. The results show that when the TEA algorithm is used to encrypt the file, the ciphertext consists of characters from the ASCII (American Standard Code for Information Interchange) table written as hexadecimal numbers, and the ciphertext size increases by sixteen bytes for every eight characters added to the plaintext.
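    The TEA block cipher used as the symmetric half of this hybrid scheme is public; a minimal sketch of encrypting and decrypting one 64-bit block with a 128-bit key is shown below. The LUC key-wrapping half and the file and padding handling of the paper are not reproduced, and the interface here is an assumption.

```python
MASK = 0xFFFFFFFF
DELTA = 0x9E3779B9

def tea_encrypt_block(v0, v1, k):
    """Encrypt one 64-bit block (two 32-bit words) with a 128-bit key (four 32-bit words)."""
    s = 0
    for _ in range(32):
        s = (s + DELTA) & MASK
        v0 = (v0 + ((((v1 << 4) & MASK) + k[0]) ^ ((v1 + s) & MASK) ^ ((v1 >> 5) + k[1]))) & MASK
        v1 = (v1 + ((((v0 << 4) & MASK) + k[2]) ^ ((v0 + s) & MASK) ^ ((v0 >> 5) + k[3]))) & MASK
    return v0, v1

def tea_decrypt_block(v0, v1, k):
    """Invert tea_encrypt_block."""
    s = (DELTA * 32) & MASK
    for _ in range(32):
        v1 = (v1 - ((((v0 << 4) & MASK) + k[2]) ^ ((v0 + s) & MASK) ^ ((v0 >> 5) + k[3]))) & MASK
        v0 = (v0 - ((((v1 << 4) & MASK) + k[0]) ^ ((v1 + s) & MASK) ^ ((v1 >> 5) + k[1]))) & MASK
        s = (s - DELTA) & MASK
    return v0, v1

key = (0x01234567, 0x89ABCDEF, 0xFEDCBA98, 0x76543210)
assert tea_decrypt_block(*tea_encrypt_block(0xDEADBEEF, 0xCAFEBABE, key), key) == (0xDEADBEEF, 0xCAFEBABE)
```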

  8. Online Planning Algorithm

    Science.gov (United States)

    Rabideau, Gregg R.; Chien, Steve A.

    2010-01-01

    AVA v2 software selects goals for execution from a set of goals that oversubscribe shared resources. The term goal refers to a science or engineering request to execute a possibly complex command sequence, such as image targets or ground-station downlinks. Developed as an extension to the Virtual Machine Language (VML) execution system, the software enables onboard and remote goal triggering through the use of an embedded, dynamic goal set that can oversubscribe resources. From the set of conflicting goals, a subset must be chosen that maximizes a given quality metric, which in this case is strict priority selection. A goal can never be pre-empted by a lower priority goal, and high-level goals can be added, removed, or updated at any time, and the "best" goals will be selected for execution. The software addresses the issue of re-planning that must be performed in a short time frame by the embedded system where computational resources are constrained. In particular, the algorithm addresses problems with well-defined goal requests without temporal flexibility that oversubscribe available resources. By using a fast, incremental algorithm, goal selection can be postponed in a "just-in-time" fashion allowing requests to be changed or added at the last minute, thereby enabling shorter response times and greater autonomy for the system under control.
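    A minimal sketch of strict-priority goal selection from an oversubscribed goal set, assuming a single shared resource; the goal fields, the single-resource model and the function name are illustrative assumptions, not the AVA/VML interface.

```python
def select_goals(goals, capacity):
    """Greedy strict-priority selection from an oversubscribed goal set.

    goals: list of (priority, resource_demand, goal_id); lower number = higher priority.
    A goal is admitted only if it fits in the remaining capacity, so a
    high-priority goal can never be displaced by a lower-priority one.
    This single-resource model is an illustration, not the flight software.
    """
    selected, remaining = [], capacity
    for priority, demand, goal_id in sorted(goals):
        if demand <= remaining:
            selected.append(goal_id)
            remaining -= demand
    return selected

goals = [(1, 40, "downlink"), (2, 30, "image_A"), (2, 50, "image_B"), (3, 20, "calibration")]
print(select_goals(goals, capacity=80))   # ['downlink', 'image_A']
```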

  9. Algorithmic Relative Complexity

    Directory of Open Access Journals (Sweden)

    Daniele Cerra

    2011-04-01

    Full Text Available Information content and compression are tightly related concepts that can be addressed through both classical and algorithmic information theories, on the basis of Shannon entropy and Kolmogorov complexity, respectively. The definition of several entities in Kolmogorov’s framework relies upon ideas from classical information theory, and these two approaches share many common traits. In this work, we expand the relations between these two frameworks by introducing algorithmic cross-complexity and relative complexity, counterparts of the cross-entropy and relative entropy (or Kullback-Leibler divergence) found in Shannon’s framework. We define the cross-complexity of an object x with respect to another object y as the amount of computational resources needed to specify x in terms of y, and the complexity of x related to y as the compression power which is lost when adopting such a description for x, compared to the shortest representation of x. Properties of analogous quantities in classical information theory hold for these new concepts. As these notions are incomputable, a suitable approximation based upon data compression is derived to enable the application to real data, yielding a divergence measure applicable to any pair of strings. Example applications are outlined, involving authorship attribution and satellite image classification, as well as a comparison to similar established techniques.
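    In practice such incomputable quantities are approximated with a real compressor. The sketch below uses zlib and the well-known normalized compression distance as one such compressor-based divergence; the paper's own estimators of cross-complexity and relative complexity may differ.

```python
import zlib

def C(b: bytes) -> int:
    """Length of b after compression (zlib stands in for an ideal compressor)."""
    return len(zlib.compress(b, 9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance, a compressor-based divergence between strings."""
    cx, cy, cxy = C(x), C(y), C(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

a = b"the quick brown fox jumps over the lazy dog " * 20
b = b"the quick brown fox jumps over the lazy cat " * 20
c = bytes(range(256)) * 4
print(ncd(a, b), ncd(a, c))   # related strings score lower than unrelated ones
```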

  10. Fatigue evaluation algorithms: Review

    Energy Technology Data Exchange (ETDEWEB)

    Passipoularidis, V.A.; Broendsted, P.

    2009-11-15

    A progressive damage fatigue simulator for variable amplitude loads named FADAS is discussed in this work. FADAS (Fatigue Damage Simulator) performs ply by ply stress analysis using classical lamination theory and implements adequate stiffness discount tactics based on the failure criterion of Puck, to model the degradation caused by failure events in ply level. Residual strength is incorporated as fatigue damage accumulation metric. Once the typical fatigue and static properties of the constitutive ply are determined, the performance of an arbitrary lay-up under uniaxial and/or multiaxial load time series can be simulated. The predictions are validated against fatigue life data both from repeated block tests at a single stress ratio as well as against spectral fatigue using the WISPER, WISPERX and NEW WISPER load sequences on a Glass/Epoxy multidirectional laminate typical of a wind turbine rotor blade construction. Two versions of the algorithm, the one using single-step and the other using incremental application of each load cycle (in case of ply failure) are implemented and compared. Simulation results confirm the ability of the algorithm to take into account load sequence effects. In general, FADAS performs well in predicting life under both spectral and block loading fatigue. (author)

  11. Optimal Fungal Space Searching Algorithms.

    Science.gov (United States)

    Asenova, Elitsa; Lin, Hsin-Yu; Fu, Eileen; Nicolau, Dan V; Nicolau, Dan V

    2016-10-01

    Previous experiments have shown that fungi use an efficient natural algorithm for searching the space available for their growth in micro-confined networks, e.g., mazes. This natural "master" algorithm, which comprises two "slave" sub-algorithms, i.e., collision-induced branching and directional memory, has been shown to be more efficient than alternatives, with one, or the other, or both sub-algorithms turned off. In contrast, the present contribution compares the performance of the fungal natural algorithm against several standard artificial homologues. It was found that the space-searching fungal algorithm consistently outperforms uninformed algorithms, such as Depth-First-Search (DFS). Furthermore, while the natural algorithm is inferior to informed ones, such as A*, this under-performance does not importantly increase with the increase of the size of the maze. These findings suggest that a systematic effort of harvesting the natural space searching algorithms used by microorganisms is warranted and possibly overdue. These natural algorithms, if efficient, can be reverse-engineered for graph and tree search strategies.

  12. STAR Algorithm Integration Team - Facilitating operational algorithm development

    Science.gov (United States)

    Mikles, V. J.

    2015-12-01

    The NOAA/NESDIS Center for Satellite Research and Applications (STAR) provides technical support of the Joint Polar Satellite System (JPSS) algorithm development and integration tasks. Utilizing data from the S-NPP satellite, JPSS generates over thirty Environmental Data Records (EDRs) and Intermediate Products (IPs) spanning atmospheric, ocean, cryosphere, and land weather disciplines. The Algorithm Integration Team (AIT) brings technical expertise and support to product algorithms, specifically in testing and validating science algorithms in a pre-operational environment. The AIT verifies that new and updated algorithms function in the development environment, enforces established software development standards, and ensures that delivered packages are functional and complete. AIT facilitates the development of new JPSS-1 algorithms by implementing a review approach based on the Enterprise Product Lifecycle (EPL) process. Building on relationships established during the S-NPP algorithm development process and coordinating directly with science algorithm developers, the AIT has implemented structured reviews with self-contained document suites. The process has supported algorithm improvements for products such as ozone, active fire, vegetation index, and temperature and moisture profiles.

  13. Algorithm aversion: people erroneously avoid algorithms after seeing them err.

    Science.gov (United States)

    Dietvorst, Berkeley J; Simmons, Joseph P; Massey, Cade

    2015-02-01

    Research shows that evidence-based algorithms more accurately predict the future than do human forecasters. Yet when forecasters are deciding whether to use a human forecaster or a statistical algorithm, they often choose the human forecaster. This phenomenon, which we call algorithm aversion, is costly, and it is important to understand its causes. We show that people are especially averse to algorithmic forecasters after seeing them perform, even when they see them outperform a human forecaster. This is because people more quickly lose confidence in algorithmic than human forecasters after seeing them make the same mistake. In 5 studies, participants either saw an algorithm make forecasts, a human make forecasts, both, or neither. They then decided whether to tie their incentives to the future predictions of the algorithm or the human. Participants who saw the algorithm perform were less confident in it, and less likely to choose it over an inferior human forecaster. This was true even among those who saw the algorithm outperform the human.

  14. The Texas Medication Algorithm Project (TMAP) schizophrenia algorithms.

    Science.gov (United States)

    Miller, A L; Chiles, J A; Chiles, J K; Crismon, M L; Rush, A J; Shon, S P

    1999-10-01

    In the Texas Medication Algorithm Project (TMAP), detailed guidelines for medication management of schizophrenia and related disorders, bipolar disorders, and major depressive disorders have been developed and implemented. This article describes the algorithms developed for medication treatment of schizophrenia and related disorders. The guidelines recommend a sequence of medications and discuss dosing, duration, and switch-over tactics. They also specify response criteria at each stage of the algorithm for both positive and negative symptoms. The rationale and evidence for each aspect of the algorithms are presented.

  15. Algorithmic Reflections on Choreography

    Directory of Open Access Journals (Sweden)

    Pablo Ventura

    2016-11-01

    Full Text Available In 1996, Pablo Ventura turned his attention to the choreography software Life Forms to find out whether the then-revolutionary new tool could lead to new possibilities of expression in contemporary dance. During the next 2 decades, he devised choreographic techniques and custom software to create dance works that highlight the operational logic of computers, accompanied by computer-generated dance and media elements. This article provides a firsthand account of how Ventura’s engagement with algorithmic concepts guided and transformed his choreographic practice. The text describes the methods that were developed to create computer-aided dance choreographies. Furthermore, the text illustrates how choreography techniques can be applied to correlate formal and aesthetic aspects of movement, music, and video. Finally, the text emphasizes how Ventura’s interest in the wider conceptual context has led him to explore with choreographic means fundamental issues concerning the characteristics of humans and machines and their increasingly profound interdependencies.

  16. Multisensor data fusion algorithm development

    Energy Technology Data Exchange (ETDEWEB)

    Yocky, D.A.; Chadwick, M.D.; Goudy, S.P.; Johnson, D.K.

    1995-12-01

    This report presents a two-year LDRD research effort into multisensor data fusion. We approached the problem by addressing the available types of data, preprocessing that data, and developing fusion algorithms using that data. The report reflects these three distinct areas. First, the possible data sets for fusion are identified. Second, automated registration techniques for imagery data are analyzed. Third, two fusion techniques are presented. The first fusion algorithm is based on the two-dimensional discrete wavelet transform. Using test images, the wavelet algorithm is compared against intensity modulation and intensity-hue-saturation image fusion algorithms that are available in commercial software. The wavelet approach outperforms the other two fusion techniques by preserving spectral/spatial information more precisely. The wavelet fusion algorithm was also applied to Landsat Thematic Mapper and SPOT panchromatic imagery data. The second algorithm is based on a linear-regression technique. We analyzed the technique using the same Landsat and SPOT data.
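    A sketch of a simple single-level DWT fusion rule in the spirit of the wavelet-based algorithm described above: approximation coefficients are averaged and the larger-magnitude detail coefficients are kept. The fusion rule, the wavelet choice and the use of the pywt package are assumptions for illustration, not the report's exact implementation.

```python
import numpy as np
import pywt

def wavelet_fuse(img_a: np.ndarray, img_b: np.ndarray, wavelet: str = "db2") -> np.ndarray:
    """Single-level 2-D DWT fusion: average the approximation coefficients and
    take the larger-magnitude detail coefficient at each position."""
    cA1, details1 = pywt.dwt2(img_a.astype(float), wavelet)
    cA2, details2 = pywt.dwt2(img_b.astype(float), wavelet)
    fused_cA = (cA1 + cA2) / 2.0
    fused_details = tuple(np.where(np.abs(d1) >= np.abs(d2), d1, d2)
                          for d1, d2 in zip(details1, details2))
    return pywt.idwt2((fused_cA, fused_details), wavelet)

a = np.random.default_rng(0).random((128, 128))
b = np.random.default_rng(1).random((128, 128))
fused = wavelet_fuse(a, b)
```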

  17. Mao-Gilles Stabilization Algorithm

    OpenAIRE

    Jérôme Gilles

    2013-01-01

    Originally, the Mao-Gilles stabilization algorithm was designed to compensate the non-rigid deformations due to atmospheric turbulence. Given a sequence of frames affected by atmospheric turbulence, the algorithm uses a variational model combining optical flow and regularization to characterize the static observed scene. The optimization problem is solved by Bregman Iteration and the operator splitting method. The algorithm is simple, efficient, and can be easily generalized for different sce...

  18. Mao-Gilles Stabilization Algorithm

    Directory of Open Access Journals (Sweden)

    Jérôme Gilles

    2013-07-01

    Full Text Available Originally, the Mao-Gilles stabilization algorithm was designed to compensate the non-rigid deformations due to atmospheric turbulence. Given a sequence of frames affected by atmospheric turbulence, the algorithm uses a variational model combining optical flow and regularization to characterize the static observed scene. The optimization problem is solved by Bregman Iteration and the operator splitting method. The algorithm is simple, efficient, and can be easily generalized for different scenarios involving non-rigid deformations.

  19. One improved LSB steganography algorithm

    Science.gov (United States)

    Song, Bing; Zhang, Zhi-hong

    2013-03-01

    Information hidden in a digital image with the LSB algorithm is easily detected with high accuracy by X2 and RS steganalysis. We started by selecting the locations where information is embedded and modifying the embedding method; combined with a sub-affine transformation and a matrix coding method, the LSB algorithm was improved and a new LSB algorithm is proposed. Experimental results show that the improved algorithm can resist X2 and RS steganalysis effectively.
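    The abstract does not spell out the improved algorithm, so the sketch below shows only the plain LSB-substitution baseline that the paper improves on; the embedding-location selection, sub-affine transformation and matrix coding of the paper are not reproduced here.

```python
import numpy as np

def lsb_embed(pixels: np.ndarray, bits) -> np.ndarray:
    """Plain LSB substitution: write one message bit into the least significant
    bit of each pixel, in raster order (the baseline scheme only)."""
    flat = pixels.flatten()
    assert len(bits) <= flat.size
    for i, bit in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | bit
    return flat.reshape(pixels.shape)

def lsb_extract(pixels: np.ndarray, n_bits: int):
    """Read back the first n_bits least significant bits in raster order."""
    return [int(v) & 1 for v in pixels.flatten()[:n_bits]]

cover = np.random.default_rng(0).integers(0, 256, size=(8, 8), dtype=np.uint8)
message = [1, 0, 1, 1, 0, 0, 1, 0]
stego = lsb_embed(cover, message)
assert lsb_extract(stego, len(message)) == message
```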

  20. Unsupervised Classification Using Immune Algorithm

    OpenAIRE

    Al-Muallim, M. T.; El-Kouatly, R.

    2012-01-01

    Unsupervised classification algorithm based on clonal selection principle named Unsupervised Clonal Selection Classification (UCSC) is proposed in this paper. The new proposed algorithm is data driven and self-adaptive, it adjusts its parameters to the data to make the classification operation as fast as possible. The performance of UCSC is evaluated by comparing it with the well known K-means algorithm using several artificial and real-life data sets. The experiments show that the proposed U...

  1. Graph Algorithm Animation with Grrr

    OpenAIRE

    Rodgers, Peter; Vidal, Natalia

    2000-01-01

    We discuss geometric positioning, highlighting of visited nodes and user defined highlighting that form the algorithm animation facilities in the Grrr graph rewriting programming language. The main purpose of animation was initially for the debugging and profiling of Grrr code, but recently it has been extended for the purpose of teaching algorithms to undergraduate students. The animation is restricted to graph based algorithms such as graph drawing, list manipulation or more traditional gra...

  2. Algorithms over partially ordered sets

    DEFF Research Database (Denmark)

    Baer, Robert M.; Østerby, Ole

    1969-01-01

    in partially ordered sets, answer the combinatorial question of how many maximal chains might exist in a partially ordered set withn elements, and we give an algorithm for enumerating all maximal chains. We give (in § 3) algorithms which decide whether a partially ordered set is a (lower or upper) semi......-lattice, and whether a lattice has distributive, modular, and Boolean properties. Finally (in § 4) we give Algol realizations of the various algorithms....

  3. An overview of smart grid routing algorithms

    Science.gov (United States)

    Wang, Junsheng; OU, Qinghai; Shen, Haijuan

    2017-08-01

    This paper summarizes typical routing algorithms in the smart grid by analyzing the communication services and communication requirements of the intelligent grid. Two kinds of typical routing algorithms, among them clustering routing algorithms, are analyzed, and their advantages, disadvantages, and applicability are discussed.

  4. Algorithmic complexity of quantum capacity

    Science.gov (United States)

    Oskouei, Samad Khabbazi; Mancini, Stefano

    2018-04-01

    We analyze the notion of quantum capacity from the perspective of algorithmic (descriptive) complexity. To this end, we resort to the concept of semi-computability in order to describe quantum states and quantum channel maps. We introduce algorithmic entropies (like algorithmic quantum coherent information) and derive relevant properties for them. Then we show that quantum capacity based on semi-computable concept equals the entropy rate of algorithmic coherent information, which in turn equals the standard quantum capacity. Thanks to this, we finally prove that the quantum capacity, for a given semi-computable channel, is limit computable.

  5. Machine Learning an algorithmic perspective

    CERN Document Server

    Marsland, Stephen

    2009-01-01

    Traditional books on machine learning can be divided into two groups - those aimed at advanced undergraduates or early postgraduates with reasonable mathematical knowledge and those that are primers on how to code algorithms. The field is ready for a text that not only demonstrates how to use the algorithms that make up machine learning methods, but also provides the background needed to understand how and why these algorithms work. Machine Learning: An Algorithmic Perspective is that text.Theory Backed up by Practical ExamplesThe book covers neural networks, graphical models, reinforcement le

  6. DNABIT Compress - Genome compression algorithm.

    Science.gov (United States)

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-22

    Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress", for DNA sequences, based on a novel algorithm of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences for larger genomes. Significantly better compression results show that the "DNABIT Compress" algorithm is the best among the other compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves the running time of all previous DNA compression programs. Assigning binary bits (unique BIT CODE) to (Exact Repeats, Reverse Repeats) fragments of a DNA sequence is also a unique concept introduced in this algorithm for the first time in DNA compression. The proposed new algorithm achieves a compression ratio as low as 1.58 bits/base, whereas the existing best methods could not achieve a ratio below 1.72 bits/base.
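    To make the idea of bit-level coding of bases concrete, the sketch below packs a DNA string at a fixed 2 bits per base. DNABIT Compress itself assigns special bit codes to exact-repeat and reverse-repeat fragments, which this illustration does not attempt.

```python
CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}
BASE = {v: k for k, v in CODE.items()}

def pack(seq: str) -> bytes:
    """Pack a DNA string at 2 bits per base (fixed-length codes only)."""
    out, buf, nbits = bytearray(), 0, 0
    for ch in seq:
        buf = (buf << 2) | CODE[ch]
        nbits += 2
        if nbits == 8:
            out.append(buf)
            buf, nbits = 0, 0
    if nbits:
        out.append(buf << (8 - nbits))     # pad the final byte with zero bits
    return bytes(out)

def unpack(data: bytes, n_bases: int) -> str:
    """Recover the first n_bases bases from a packed byte string."""
    bases = []
    for byte in data:
        for shift in (6, 4, 2, 0):
            bases.append(BASE[(byte >> shift) & 0b11])
    return "".join(bases[:n_bases])

s = "ACGTACGTGGTTAA"
assert unpack(pack(s), len(s)) == s
```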

  7. Diversity-Guided Evolutionary Algorithms

    DEFF Research Database (Denmark)

    Ursem, Rasmus Kjær

    2002-01-01

    Population diversity is undoubtedly a key issue in the performance of evolutionary algorithms. A common hypothesis is that high diversity is important to avoid premature convergence and to escape local optima. Various diversity measures have been used to analyze algorithms, but so far few...... algorithms have used a measure to guide the search. The diversity-guided evolutionary algorithm (DGEA) uses the well-known distance-to-average-point measure to alternate between phases of exploration (mutation) and phases of exploitation (recombination and selection). The DGEA showed remarkable results....
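    The distance-to-average-point measure can be sketched as follows; normalizing by the length of the search-space diagonal is a common choice and is assumed here, as are the threshold values mentioned in the closing comment.

```python
import numpy as np

def diversity(population: np.ndarray, lower: np.ndarray, upper: np.ndarray) -> float:
    """Distance-to-average-point diversity of a real-valued population.

    Mean Euclidean distance of the individuals to the population centroid,
    normalized by the length of the search-space diagonal (a common choice).
    """
    centroid = population.mean(axis=0)
    diagonal = np.linalg.norm(upper - lower)
    return np.mean(np.linalg.norm(population - centroid, axis=1)) / diagonal

pop = np.random.default_rng(0).uniform(-5, 5, size=(50, 10))
d = diversity(pop, lower=np.full(10, -5.0), upper=np.full(10, 5.0))
# In a DGEA-style loop one would switch to exploration (mutation) when d falls
# below a low threshold and back to exploitation when it rises above a high one.
```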

  8. FRAMEWORK FOR COMPARING SEGMENTATION ALGORITHMS

    Directory of Open Access Journals (Sweden)

    G. Sithole

    2015-05-01

    Full Text Available The notion of a ‘Best’ segmentation does not exist. A segmentation algorithm is chosen based on the features it yields, the properties of the segments (point sets) it generates, and the complexity of its algorithm. The segmentation is then assessed based on a variety of metrics such as homogeneity, heterogeneity, fragmentation, etc. Even after an algorithm is chosen, its performance is still uncertain because the landscape/scenarios represented in a point cloud have a strong influence on the eventual segmentation. Thus selecting an appropriate segmentation algorithm is a process of trial and error. Automating the selection of segmentation algorithms and their parameters first requires methods to evaluate segmentations. Three common approaches for evaluating segmentation algorithms are ‘goodness methods’, ‘discrepancy methods’ and ‘benchmarks’. Benchmarks are considered the most comprehensive method of evaluation. In this paper, shortcomings in current benchmark methods are identified and a framework is proposed that permits both a visual and numerical evaluation of segmentations for different algorithms, algorithm parameters and evaluation metrics. The concept of the framework is demonstrated on a real point cloud. Current results are promising and suggest that it can be used to predict the performance of segmentation algorithms.

  9. Efficient Algorithms for Real-Time GPU Volumetric Cloud Rendering with Enhanced Geometry

    Directory of Open Access Journals (Sweden)

    Carlos Jiménez de Parga

    2018-04-01

    Full Text Available This paper presents several new techniques for volumetric cloud rendering using efficient algorithms and data structures based on ray-tracing methods for cumulus generation, achieving an optimum balance between realism and performance. These techniques target applications such as flight simulations, computer games, and educational software, even with conventional graphics hardware. The contours of clouds are defined by implicit mathematical expressions or triangulated structures inside which volumetric rendering is performed. Novel techniques are used to reproduce the asymmetrical nature of clouds and the effects of light-scattering, with low computing costs. The work includes a new method to create randomized fractal clouds using a recursive grammar. The graphical results are comparable to those produced by state-of-the-art, hyper-realistic algorithms. These methods provide real-time performance, and are superior to particle-based systems. These outcomes suggest that our methods offer a good balance between realism and performance, and are suitable for use in the standard graphics industry.

  10. Look-ahead fermion algorithm

    International Nuclear Information System (INIS)

    Grady, M.

    1986-01-01

    I describe a fast fermion algorithm which utilizes pseudofermion fields but appears to have little or no systematic error. Test simulations on two-dimensional gauge theories are described. A possible justification for the algorithm being exact is discussed. 8 refs

  11. Quantum algorithms and learning theory

    NARCIS (Netherlands)

    Arunachalam, S.

    2018-01-01

    This thesis studies strengths and weaknesses of quantum computers. In the first part we present three contributions to quantum algorithms. 1) consider a search space of N elements. One of these elements is "marked" and our goal is to find this. We describe a quantum algorithm to solve this problem

  12. Online co-regularized algorithms

    NARCIS (Netherlands)

    Ruijter, T. de; Tsivtsivadze, E.; Heskes, T.

    2012-01-01

    We propose an online co-regularized learning algorithm for classification and regression tasks. We demonstrate that by sequentially co-regularizing prediction functions on unlabeled data points, our algorithm provides improved performance in comparison to supervised methods on several UCI benchmarks

  13. A fast fractional difference algorithm

    DEFF Research Database (Denmark)

    Jensen, Andreas Noack; Nielsen, Morten Ørregaard

    2014-01-01

    We provide a fast algorithm for calculating the fractional difference of a time series. In standard implementations, the calculation speed (number of arithmetic operations) is of order T², where T is the length of the time series. Our algorithm allows calculation speed of order T log...
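    A sketch of the standard FFT-convolution route to an O(T log T) fractional difference, which illustrates the complexity claim; the authors' exact implementation may differ.

```python
import numpy as np

def frac_diff(x: np.ndarray, d: float) -> np.ndarray:
    """Fractional difference (1-L)^d x via FFT convolution, O(T log T).

    Coefficients follow the usual recursion pi_0 = 1, pi_k = pi_{k-1} * (k-1-d) / k.
    """
    T = len(x)
    pi = np.empty(T)
    pi[0] = 1.0
    for k in range(1, T):
        pi[k] = pi[k - 1] * (k - 1 - d) / k
    n = 1 << (2 * T - 1).bit_length()          # FFT length large enough for linear convolution
    y = np.fft.irfft(np.fft.rfft(pi, n) * np.fft.rfft(x, n), n)[:T]
    return y

x = np.cumsum(np.random.default_rng(0).standard_normal(1000))
dx = frac_diff(x, d=0.4)
```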

  14. A Fast Fractional Difference Algorithm

    DEFF Research Database (Denmark)

    Jensen, Andreas Noack; Nielsen, Morten Ørregaard

    We provide a fast algorithm for calculating the fractional difference of a time series. In standard implementations, the calculation speed (number of arithmetic operations) is of order T², where T is the length of the time series. Our algorithm allows calculation speed of order T log...

  15. A Distributed Spanning Tree Algorithm

    DEFF Research Database (Denmark)

    Johansen, Karl Erik; Jørgensen, Ulla Lundin; Nielsen, Sven Hauge

    We present a distributed algorithm for constructing a spanning tree for connected undirected graphs. Nodes correspond to processors and edges correspond to two-way channels. Each processor has initially a distinct identity and all processors perform the same algorithm. Computation as well...

  16. Algorithms in combinatorial design theory

    CERN Document Server

    Colbourn, CJ

    1985-01-01

    The scope of the volume includes all algorithmic and computational aspects of research on combinatorial designs. Algorithmic aspects include generation, isomorphism and analysis techniques - both heuristic methods used in practice, and the computational complexity of these operations. The scope within design theory includes all aspects of block designs, Latin squares and their variants, pairwise balanced designs and projective planes and related geometries.

  17. Tau reconstruction and identification algorithm

    Indian Academy of Sciences (India)

    CMS has developed sophisticated tau identification algorithms for tau hadronic decay modes. Production of tau lepton decaying to hadrons are studied at 7 TeV centre-of-mass energy with 2011 collision data collected by CMS detector and has been used to measure the performance of tau identification algorithms by ...

  18. Executable Pseudocode for Graph Algorithms

    NARCIS (Netherlands)

    B. Ó Nualláin (Breanndán)

    2015-01-01

    Algorithms are written in pseudocode. However, the implementation of an algorithm in a conventional, imperative programming language can often be scattered over hundreds of lines of code thus obscuring its essence. This can lead to difficulties in understanding or verifying the

  19. Where are the parallel algorithms?

    Science.gov (United States)

    Voigt, R. G.

    1985-01-01

    Four paradigms that can be useful in developing parallel algorithms are discussed. These include computational complexity analysis, changing the order of computation, asynchronous computation, and divide and conquer. Each is illustrated with an example from scientific computation, and it is shown that computational complexity must be used with great care or an inefficient algorithm may be selected.

  20. Algorithms for Decision Tree Construction

    KAUST Repository

    Chikalov, Igor

    2011-01-01

    The study of algorithms for decision tree construction was initiated in 1960s. The first algorithms are based on the separation heuristic [13, 31] that at each step tries dividing the set of objects as evenly as possible. Later Garey and Graham [28

  1. A distributed spanning tree algorithm

    DEFF Research Database (Denmark)

    Johansen, Karl Erik; Jørgensen, Ulla Lundin; Nielsen, Svend Hauge

    1988-01-01

    We present a distributed algorithm for constructing a spanning tree for connected undirected graphs. Nodes correspond to processors and edges correspond to two way channels. Each processor has initially a distinct identity and all processors perform the same algorithm. Computation as well as comm...

  2. Global alignment algorithms implementations | Fatumo ...

    African Journals Online (AJOL)

    In this paper, we implemented the two routes for sequence comparison, that is, the dotplot and the Needleman-Wunsch algorithm for global sequence alignment. Our algorithms were implemented in the Python programming language and were tested on a 1.60 GHz Linux platform with 512 MB of RAM, running SUSE 9.2 and 10.1.

  3. Cascade Error Projection Learning Algorithm

    Science.gov (United States)

    Duong, T. A.; Stubberud, A. R.; Daud, T.

    1995-01-01

    A detailed mathematical analysis is presented for a new learning algorithm termed cascade error projection (CEP) and a general learning framework. This framework can be used to obtain the cascade correlation learning algorithm by choosing a particular set of parameters.

  4. Novel medical image enhancement algorithms

    Science.gov (United States)

    Agaian, Sos; McClendon, Stephen A.

    2010-01-01

    In this paper, we present two novel medical image enhancement algorithms. The first, a global image enhancement algorithm, utilizes an alpha-trimmed mean filter as its backbone to sharpen images. The second algorithm uses a cascaded unsharp masking technique to separate the high frequency components of an image in order for them to be enhanced using a modified adaptive contrast enhancement algorithm. Experimental results from enhancing electron microscopy, radiological, CT scan and MRI scan images, using the MATLAB environment, are then compared to the original images as well as other enhancement methods, such as histogram equalization and two forms of adaptive contrast enhancement. An image processing scheme for electron microscopy images of Purkinje cells will also be implemented and utilized as a comparison tool to evaluate the performance of our algorithm.
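    A minimal sketch of the alpha-trimmed mean filter named as the backbone of the first algorithm; the window size and trimming fraction are illustrative assumptions, and the sharpening pipeline built around the filter is not reproduced.

```python
import numpy as np

def alpha_trimmed_mean(image: np.ndarray, size: int = 3, alpha: float = 0.25) -> np.ndarray:
    """Alpha-trimmed mean filter: at each pixel, sort the size*size neighbourhood,
    discard the alpha fraction of the smallest and largest values, and average the rest."""
    pad = size // 2
    padded = np.pad(image.astype(float), pad, mode="reflect")
    out = np.empty(image.shape, dtype=float)
    trim = int(alpha * size * size / 2)          # values removed from each tail
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            window = np.sort(padded[i:i + size, j:j + size].ravel())
            out[i, j] = window[trim:window.size - trim].mean()
    return out

img = np.random.default_rng(0).integers(0, 256, size=(64, 64)).astype(np.uint8)
smoothed = alpha_trimmed_mean(img)
```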

  5. Elementary functions algorithms and implementation

    CERN Document Server

    Muller, Jean-Michel

    2016-01-01

    This textbook presents the concepts and tools necessary to understand, build, and implement algorithms for computing elementary functions (e.g., logarithms, exponentials, and the trigonometric functions). Both hardware- and software-oriented algorithms are included, along with issues related to accurate floating-point implementation. This third edition has been updated and expanded to incorporate the most recent advances in the field, new elementary function algorithms, and function software. After a preliminary chapter that briefly introduces some fundamental concepts of computer arithmetic, such as floating-point arithmetic and redundant number systems, the text is divided into three main parts. Part I considers the computation of elementary functions using algorithms based on polynomial or rational approximations and using table-based methods; the final chapter in this section deals with basic principles of multiple-precision arithmetic. Part II is devoted to a presentation of “shift-and-add” algorithm...

  6. Streaming Algorithms for Line Simplification

    DEFF Research Database (Denmark)

    Abam, Mohammad; de Berg, Mark; Hachenberger, Peter

    2010-01-01

    this problem in a streaming setting, where we only have a limited amount of storage, so that we cannot store all the points. We analyze the competitive ratio of our algorithms, allowing resource augmentation: we let our algorithm maintain a simplification with 2k (internal) points and compare the error of our...... simplification to the error of the optimal simplification with k points. We obtain the algorithms with O(1) competitive ratio for three cases: convex paths, where the error is measured using the Hausdorff distance (or Fréchet distance), xy-monotone paths, where the error is measured using the Hausdorff distance...... (or Fréchet distance), and general paths, where the error is measured using the Fréchet distance. In the first case the algorithm needs O(k) additional storage, and in the latter two cases the algorithm needs O(k 2) additional storage....

  7. Linear feature detection algorithm for astronomical surveys - I. Algorithm description

    Science.gov (United States)

    Bektešević, Dino; Vinković, Dejan

    2017-11-01

    Computer vision algorithms are powerful tools in astronomical image analyses, especially when automation of object detection and extraction is required. Modern object detection algorithms in astronomy are oriented towards detection of stars and galaxies, ignoring completely the detection of existing linear features. With the emergence of wide-field sky surveys, linear features attract scientific interest as possible trails of fast flybys of near-Earth asteroids and meteors. In this work, we describe a new linear feature detection algorithm designed specifically for implementation in big data astronomy. The algorithm combines a series of algorithmic steps that first remove other objects (stars and galaxies) from the image and then enhance the line to enable more efficient line detection with the Hough algorithm. The rate of false positives is greatly reduced thanks to a step that replaces possible line segments with rectangles and then compares lines fitted to the rectangles with the lines obtained directly from the image. The speed of the algorithm and its applicability in astronomical surveys are also discussed.
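
    The Hough voting step mentioned above can be sketched in a few lines of Python; this simplified accumulator over a boolean edge mask is only a stand-in for the pipeline described in the paper (background removal, line enhancement and the rectangle-based false-positive check are omitted).

      import numpy as np

      def hough_lines(edge_mask, n_theta=180, top_k=5):
          # Vote in (rho, theta) space for every edge pixel; peaks are candidate lines.
          h, w = edge_mask.shape
          diag = int(np.ceil(np.hypot(h, w)))
          thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
          acc = np.zeros((2 * diag, n_theta), dtype=int)
          ys, xs = np.nonzero(edge_mask)
          for t, theta in enumerate(thetas):
              rhos = np.round(xs * np.cos(theta) + ys * np.sin(theta)).astype(int) + diag
              np.add.at(acc, (rhos, np.full_like(rhos, t)), 1)
          flat = np.argsort(acc, axis=None)[::-1][:top_k]
          rows, cols = np.unravel_index(flat, acc.shape)
          return [(int(r) - diag, float(thetas[c])) for r, c in zip(rows, cols)]

      mask = np.zeros((64, 64), dtype=bool)
      mask[np.arange(64), np.arange(64)] = True   # a synthetic diagonal "trail"
      print(hough_lines(mask, top_k=1))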

  8. The Dropout Learning Algorithm

    Science.gov (United States)

    Baldi, Pierre; Sadowski, Peter

    2014-01-01

    Dropout is a recently introduced algorithm for training neural networks by randomly dropping units during training to prevent their co-adaptation. A mathematical analysis of some of the static and dynamic properties of dropout is provided using Bernoulli gating variables, general enough to accommodate dropout on units or connections, and with variable rates. The framework allows a complete analysis of the ensemble averaging properties of dropout in linear networks, which is useful to understand the non-linear case. The ensemble averaging properties of dropout in non-linear logistic networks result from three fundamental equations: (1) the approximation of the expectations of logistic functions by normalized geometric means, for which bounds and estimates are derived; (2) the algebraic equality between normalized geometric means of logistic functions and the logistic of the means, which mathematically characterizes logistic functions; and (3) the linearity of the means with respect to sums, as well as products of independent variables. The results are also extended to other classes of transfer functions, including rectified linear functions. Approximation errors tend to cancel each other and do not accumulate. Dropout can also be connected to stochastic neurons and used to predict firing rates, and to backpropagation by viewing the backward propagation as ensemble averaging in a dropout linear network. Moreover, the convergence properties of dropout can be understood in terms of stochastic gradient descent. Finally, for the regularization properties of dropout, the expectation of the dropout gradient is the gradient of the corresponding approximation ensemble, regularized by an adaptive weight decay term with a propensity for self-consistent variance minimization and sparse representations. PMID:24771879
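
    The Bernoulli gating described above amounts to a one-line masking operation during the forward pass; the sketch below uses the common "inverted" scaling so that the expected activation matches the test-time ensemble average (a convention chosen here for illustration, not necessarily the exact formulation analysed in the paper).

      import numpy as np

      rng = np.random.default_rng(0)

      def dropout(x, p=0.5, train=True):
          # Keep each unit with probability 1 - p; rescale so E[output] = x.
          if not train or p == 0.0:
              return x
          mask = rng.random(x.shape) >= p        # Bernoulli gating variables
          return x * mask / (1.0 - p)

      h = np.tanh(rng.normal(size=(4, 8)))       # some hidden-layer activations
      print(dropout(h).round(2))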

  9. Long-term versus short-term deformation of the meizoseismal area of the 2008 Achaia-Elia (MW 6.4) earthquake in NW Peloponnese, Greece: Evidence from historical triangulation and morphotectonic data

    Science.gov (United States)

    Stiros, Stathis; Moschas, Fanis; Feng, Lujia; Newman, Andrew

    2013-04-01

    The deformation of the meizoseismal area of the 2008 Achaia-Elia (MW 6.4) earthquake in NW Peloponnese, the first significant strike-slip earthquake in continental Greece, was examined on two time scales: 10² years, based on the analysis of high-accuracy historical triangulation data describing shear, and 10⁵-10⁶ years, based on the analysis of the hydrographic network of the area for signs of streams offset by faulting. Our study revealed pre-seismic accumulation of shear strain of the order of 0.2 μrad/year in the study area, consistent with recent GPS evidence, but no signs of significant strike-slip-induced offsets in the hydrographic network. These results confirm the hypothesis that the 2008 fault, which did not reach the surface and was not associated with significant seismic ground deformation, probably because of a surface flysch layer filtering high-strain events, was an immature or a dormant, recently activated fault. This fault, about 150 km long and discordant to the morphotectonic trends of the area, seems first to contain segments which have progressively reactivated in a specific direction in the last 20 years, reminiscent of the North Anatolian Fault, and second to limit a 150 km wide (recent?) shear zone in the internal part of the arc, in a region mostly dominated by thrust faulting and strong destructive earthquakes. Highlights: Deformation of the first main strike-slip fault in continental Greece analyzed. Triangulation data show preseismic shear; the hydrographic network shows no previous faulting. Surface shear deformation only at low strain rates. Immature or reactivated dormant strike-slip fault, with gradual oriented rupturing. Interplay between shear and thrusting along the arc.

  10. A physics-based algorithm for real-time simulation of electrosurgery procedures in minimally invasive surgery.

    Science.gov (United States)

    Lu, Zhonghua; Arikatla, Venkata S; Han, Zhongqing; Allen, Brian F; De, Suvranu

    2014-12-01

    High-frequency electricity is used in the majority of surgical interventions. However, modern computer-based training and simulation systems rely on physically unrealistic models that fail to capture the interplay of the electrical, mechanical and thermal properties of biological tissue. We present a real-time and physically realistic simulation of electrosurgery by modelling the electrical, thermal and mechanical properties as three iteratively solved finite element models. To provide subfinite-element graphical rendering of vaporized tissue, a dual-mesh dynamic triangulation algorithm based on isotherms is proposed. The block compressed row storage (BCRS) structure is shown to be critical in allowing computationally efficient changes in the tissue topology due to vaporization. We have demonstrated our physics-based electrosurgery cutting algorithm through various examples. Our matrix manipulation algorithms designed for topology changes have shown low computational cost. Our simulator offers substantially greater physical fidelity compared to previous simulators that use simple geometry-based heat characterization. Copyright © 2013 John Wiley & Sons, Ltd.

  11. Approximation and geometric modeling with simplex B-splines associated with irregular triangles

    NARCIS (Netherlands)

    Auerbach, S.; Gmelig Meyling, R.H.J.; Neamtu, M.; Neamtu, M.; Schaeben, H.

    1991-01-01

    Bivariate quadratic simplicial B-splines defined by their corresponding set of knots derived from a (suboptimal) constrained Delaunay triangulation of the domain are employed to obtain a C1-smooth surface. The generation of triangle vertices is adjusted to the areal distribution of the data in the

  12. Lagrangian fluid dynamics using the Voronoi-Delaunay mesh

    International Nuclear Information System (INIS)

    Dukowicz, J.K.

    1981-01-01

    A Lagrangian technique for numerical fluid dynamics is described. This technique makes use of the Voronoi mesh to efficiently locate new neighbors, and it uses the dual (Delaunay) triangulation to define computational cells. This removes all topological restrictions and facilitates the solution of problems containing interfaces and multiple materials. To improve computational accuracy a mesh smoothing procedure is employed

  13. Tetrahedral meshing via maximal Poisson-disk sampling

    KAUST Repository

    Guo, Jianwei; Yan, Dongming; Chen, Li; Zhang, Xiaopeng; Deussen, Oliver; Wonka, Peter

    2016-01-01

    -distributed point sets in arbitrary domains. We first perform MPS on the boundary of the input domain, we then sample the interior of the domain, and we finally extract the tetrahedral mesh from the samples by using 3D Delaunay or regular triangulation for uniform

  14. Lines of landscape organisation

    DEFF Research Database (Denmark)

    Løvschal, Mette

    2015-01-01

    This paper offers a landscape analysis of the earliest linear landscape boundaries on Skovbjerg Moraine, Denmark, during the first millennium BC. Using Delaunay triangulation as well as classic distribution analyses, it demonstrates that landscape boundaries articulated already established use-pa...

  15. Improved autonomous star identification algorithm

    International Nuclear Information System (INIS)

    Luo Li-Yan; Xu Lu-Ping; Zhang Hua; Sun Jing-Rong

    2015-01-01

    The log-polar transform (LPT) is introduced into star identification because of its rotation invariance. An improved autonomous star identification algorithm is proposed in this paper to avoid the circular shift of the feature vector and to reduce the time consumed by star identification algorithms using LPT. In the proposed algorithm, the star pattern of the same navigation star remains unchanged when the stellar image is rotated, which reduces the star identification time. The logarithmic values of the plane distances between the navigation star and its neighbor stars are adopted to structure the feature vector of the navigation star, which enhances the robustness of star identification. In addition, the algorithm is designed to find the identification result with fewer comparisons, instead of searching the whole feature database. The simulation results demonstrate that the proposed algorithm can effectively accelerate star identification. Moreover, the recognition rate and robustness of the proposed algorithm are better than those of the LPT algorithm and the modified grid algorithm. (paper)

  16. Portable Health Algorithms Test System

    Science.gov (United States)

    Melcher, Kevin J.; Wong, Edmond; Fulton, Christopher E.; Sowers, Thomas S.; Maul, William A.

    2010-01-01

    A document discusses the Portable Health Algorithms Test (PHALT) System, which has been designed as a means for evolving the maturity and credibility of algorithms developed to assess the health of aerospace systems. Comprising an integrated hardware-software environment, the PHALT system allows systems health management algorithms to be developed in a graphical programming environment, to be tested and refined using system simulation or test data playback, and to be evaluated in a real-time hardware-in-the-loop mode with a live test article. The integrated hardware and software development environment provides a seamless transition from algorithm development to real-time implementation. The portability of the hardware makes it quick and easy to transport between test facilities. This hardware/software architecture is flexible enough to support a variety of diagnostic applications and test hardware, and the GUI-based rapid prototyping capability is sufficient to support development, execution, and testing of custom diagnostic algorithms. The PHALT operating system supports execution of diagnostic algorithms under real-time constraints. PHALT can perform real-time capture and playback of test rig data with the ability to augment/modify the data stream (e.g. inject simulated faults). It performs algorithm testing using a variety of data input sources, including real-time data acquisition, test data playback, and system simulations, and also provides system feedback to evaluate closed-loop diagnostic response and mitigation control.

  17. Quantum algorithm for linear regression

    Science.gov (United States)

    Wang, Guoming

    2017-07-01

    We present a quantum algorithm for fitting a linear regression model to a given data set using the least-squares approach. Differently from previous algorithms which yield a quantum state encoding the optimal parameters, our algorithm outputs these numbers in the classical form. So by running it once, one completely determines the fitted model and then can use it to make predictions on new data at little cost. Moreover, our algorithm works in the standard oracle model, and can handle data sets with nonsparse design matrices. It runs in time poly(log₂(N), d, κ, 1/ɛ), where N is the size of the data set, d is the number of adjustable parameters, κ is the condition number of the design matrix, and ɛ is the desired precision in the output. We also show that the polynomial dependence on d and κ is necessary. Thus, our algorithm cannot be significantly improved. Furthermore, we also give a quantum algorithm that estimates the quality of the least-squares fit (without computing its parameters explicitly). This algorithm runs faster than the one for finding this fit, and can be used to check whether the given data set qualifies for linear regression in the first place.

  18. Array architectures for iterative algorithms

    Science.gov (United States)

    Jagadish, Hosagrahar V.; Rao, Sailesh K.; Kailath, Thomas

    1987-01-01

    Regular mesh-connected arrays are shown to be isomorphic to a class of so-called regular iterative algorithms. For a wide variety of problems it is shown how to obtain appropriate iterative algorithms and then how to translate these algorithms into arrays in a systematic fashion. Several 'systolic' arrays presented in the literature are shown to be specific cases of the variety of architectures that can be derived by the techniques presented here. These include arrays for Fourier Transform, Matrix Multiplication, and Sorting.

  19. An investigation of genetic algorithms

    International Nuclear Information System (INIS)

    Douglas, S.R.

    1995-04-01

    Genetic algorithms mimic biological evolution by natural selection in their search for better individuals within a changing population. They can be used as efficient optimizers. This report discusses the developing field of genetic algorithms. It gives a simple example of the search process and introduces the concept of schema. It also discusses modifications to the basic genetic algorithm that result in species and niche formation, in machine learning and artificial evolution of computer programs, and in the streamlining of human-computer interaction. (author). 3 refs., 1 tab., 2 figs
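
    As a concrete illustration of the search process the report walks through, here is a toy genetic algorithm maximizing the number of 1-bits in a string (the "OneMax" problem); population size, crossover and mutation settings are arbitrary choices for the sketch.

      import random

      random.seed(1)

      def onemax(bits):
          return sum(bits)                       # fitness = number of 1-bits

      def evolve(pop_size=20, length=16, generations=30, p_mut=0.05):
          pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
          for _ in range(generations):
              def pick():                        # binary tournament selection
                  a, b = random.sample(pop, 2)
                  return a if onemax(a) >= onemax(b) else b
              nxt = []
              while len(nxt) < pop_size:
                  p1, p2 = pick(), pick()
                  cut = random.randrange(1, length)                       # one-point crossover
                  child = p1[:cut] + p2[cut:]
                  child = [b ^ (random.random() < p_mut) for b in child]  # bit-flip mutation
                  nxt.append(child)
              pop = nxt
          return max(pop, key=onemax)

      print(evolve())                            # converges to (nearly) all ones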

  20. Instance-specific algorithm configuration

    CERN Document Server

    Malitsky, Yuri

    2014-01-01

    This book presents a modular and expandable technique in the rapidly emerging research area of automatic configuration and selection of the best algorithm for the instance at hand. The author presents the basic model behind ISAC and then details a number of modifications and practical applications. In particular, he addresses automated feature generation, offline algorithm configuration for portfolio generation, algorithm selection, adaptive solvers, online tuning, and parallelization.    The author's related thesis was honorably mentioned (runner-up) for the ACP Dissertation Award in 2014,

  1. Subcubic Control Flow Analysis Algorithms

    DEFF Research Database (Denmark)

    Midtgaard, Jan; Van Horn, David

    We give the first direct subcubic algorithm for performing control flow analysis of higher-order functional programs. Despite the long-held belief that inclusion-based flow analysis could not surpass the "cubic bottleneck," we apply known set compression techniques to obtain an algorithm...... that runs in time O(n^3/log n) on a unit cost random-access memory model machine. Moreover, we refine the initial flow analysis into two more precise analyses incorporating notions of reachability. We give subcubic algorithms for these more precise analyses and relate them to an existing analysis from...

  2. Quantum Computations: Fundamentals and Algorithms

    International Nuclear Information System (INIS)

    Duplij, S.A.; Shapoval, I.I.

    2007-01-01

    Basic concepts of quantum information theory, the principles of quantum computation, and the possibility of creating on this basis a device unique in its computational power and operating principle, named the quantum computer, are considered. The main blocks of quantum logic and schemes for implementing quantum computations are presented, as well as some effective quantum algorithms known today that are intended to realize the advantages of quantum computation over classical computation. Among them, a special place is taken by Shor's algorithm for number factorization and Grover's algorithm for unsorted database search. The phenomenon of decoherence, its influence on quantum computer stability, and methods of quantum error correction are described

  3. Planar graphs theory and algorithms

    CERN Document Server

    Nishizeki, T

    1988-01-01

    Collected in this volume are most of the important theorems and algorithms currently known for planar graphs, together with constructive proofs for the theorems. Many of the algorithms are written in Pidgin PASCAL, and are the best-known ones; the complexities are linear or O(n log n). The first two chapters provide the foundations of graph theoretic notions and algorithmic techniques. The remaining chapters discuss the topics of planarity testing, embedding, drawing, vertex- or edge-coloring, maximum independent set, subgraph listing, planar separator theorem, Hamiltonian cycles, and single- or multicommodity flows. Suitable for a course on algorithms, graph theory, or planar graphs, the volume will also be useful for computer scientists and graph theorists at the research level. An extensive reference section is included.

  4. Optimally stopped variational quantum algorithms

    Science.gov (United States)

    Vinci, Walter; Shabani, Alireza

    2018-04-01

    Quantum processors promise a paradigm shift in high-performance computing which needs to be assessed by accurate benchmarking measures. In this article, we introduce a benchmark for the variational quantum algorithm (VQA), recently proposed as a heuristic algorithm for small-scale quantum processors. In VQA, a classical optimization algorithm guides the processor's quantum dynamics to yield the best solution for a given problem. A complete assessment of the scalability and competitiveness of VQA should take into account both the quality and the time of dynamics optimization. The method of optimal stopping, employed here, provides such an assessment by explicitly including time as a cost factor. Here, we showcase this measure for benchmarking VQA as a solver for some quadratic unconstrained binary optimization. Moreover, we show that a better choice for the cost function of the classical routine can significantly improve the performance of the VQA algorithm and even improve its scaling properties.

  5. Fluid-structure-coupling algorithm

    International Nuclear Information System (INIS)

    McMaster, W.H.; Gong, E.Y.; Landram, C.S.; Quinones, D.F.

    1980-01-01

    A fluid-structure-interaction algorithm has been developed and incorporated into the two-dimensional code PELE-IC. This code combines an Eulerian incompressible fluid algorithm with a Lagrangian finite element shell algorithm and incorporates the treatment of complex free surfaces. The fluid, structure, and coupling algorithms have been verified by the calculation of solved problems from the literature and from air and steam blowdown experiments. The code has been used to calculate loads and structural response from air blowdown and the oscillatory condensation of steam bubbles in water suppression pools typical of boiling water reactors. The techniques developed here have been extended to three dimensions and implemented in the computer code PELE-3D

  6. Recursive Algorithm For Linear Regression

    Science.gov (United States)

    Varanasi, S. V.

    1988-01-01

    Order of model determined easily. Linear-regression algorithm includes recursive equations for coefficients of model of increased order. Algorithm eliminates duplicative calculations, facilitates search for minimum order of linear-regression model fitting set of data satisfactorily.
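
    The Tech Brief recurses over model order; the related and more widely known sample-recursive least-squares update is sketched below to illustrate the general flavour of recursive regression (it is not the order-recursive scheme of the article).

      import numpy as np

      def recursive_least_squares(X, y, init=1e3):
          # Process one observation at a time, updating the coefficient estimate
          # and the inverse "information" matrix P without refitting from scratch.
          n_features = X.shape[1]
          theta = np.zeros(n_features)
          P = init * np.eye(n_features)
          for x, target in zip(X, y):
              k = P @ x / (1.0 + x @ P @ x)            # gain vector
              theta = theta + k * (target - x @ theta)
              P = P - np.outer(k, x @ P)
          return theta

      rng = np.random.default_rng(0)
      X = rng.normal(size=(200, 3))
      y = X @ np.array([2.0, -1.0, 0.5]) + 0.01 * rng.normal(size=200)
      print(recursive_least_squares(X, y))             # approx. [2, -1, 0.5]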

  7. A quantum causal discovery algorithm

    Science.gov (United States)

    Giarmatzi, Christina; Costa, Fabio

    2018-03-01

    Finding a causal model for a set of classical variables is now a well-established task—but what about the quantum equivalent? Even the notion of a quantum causal model is controversial. Here, we present a causal discovery algorithm for quantum systems. The input to the algorithm is a process matrix describing correlations between quantum events. Its output consists of different levels of information about the underlying causal model. Our algorithm determines whether the process is causally ordered by grouping the events into causally ordered non-signaling sets. It detects if all relevant common causes are included in the process, which we label Markovian, or alternatively if some causal relations are mediated through some external memory. For a Markovian process, it outputs a causal model, namely the causal relations and the corresponding mechanisms, represented as quantum states and channels. Our algorithm opens the route to more general quantum causal discovery methods.

  8. Multiagent scheduling models and algorithms

    CERN Document Server

    Agnetis, Alessandro; Gawiejnowicz, Stanisław; Pacciarelli, Dario; Soukhal, Ameur

    2014-01-01

    This book presents multi-agent scheduling models in which subsets of jobs sharing the same resources are evaluated by different criteria. It discusses complexity results, approximation schemes, heuristics and exact algorithms.

  9. Aggregation Algorithms in Heterogeneous Tables

    Directory of Open Access Journals (Sweden)

    Titus Felix FURTUNA

    2006-01-01

    Full Text Available Heterogeneous tables are most often encountered in aggregation problems. A solution to this problem is to standardize these tables of figures. In this paper, we propose some aggregation methods based on hierarchical algorithms.

  10. Designing algorithms using CAD technologies

    Directory of Open Access Journals (Sweden)

    Alin IORDACHE

    2008-01-01

    Full Text Available A representative example of a modular eLearning-platform application, ‘Logical diagrams’, is intended to be a useful learning and testing tool for the beginner programmer, but also for the more experienced one. The problem this application is trying to solve concerns young programmers who forget the fundamentals of this domain, algorithmics. Logical diagrams are a graphic representation of an algorithm, which uses different geometrical figures (parallelograms, rectangles, rhombuses, circles) with particular meanings, called blocks, connected to one another to reveal the flow of the algorithm. The role of this application is to help the user build the diagram for the algorithm and then automatically generate the C code and test it.

  11. A filtered backprojection algorithm with characteristics of the iterative landweber algorithm

    OpenAIRE

    L. Zeng, Gengsheng

    2012-01-01

    Purpose: In order to eventually develop an analytical algorithm with noise characteristics of an iterative algorithm, this technical note develops a window function for the filtered backprojection (FBP) algorithm in tomography that behaves as an iterative Landweber algorithm.
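
    For orientation, the classical Landweber iteration and its SVD filter factors are recalled below (this is standard regularization theory, not the specific FBP window derived in the note); the factor 1 - (1 - λσ_i²)^k is precisely the frequency-dependent "window" behaviour that such an analytical algorithm mimics.

      % Landweber iteration and its spectral filter factors (A = U \Sigma V^T).
      \begin{align*}
        x^{(k+1)} &= x^{(k)} + \lambda A^{\mathsf T}\bigl(b - A x^{(k)}\bigr), \qquad x^{(0)} = 0,\\
        x^{(k)}   &= \sum_i \frac{1 - (1 - \lambda \sigma_i^2)^k}{\sigma_i}\,
                     \bigl(u_i^{\mathsf T} b\bigr)\, v_i .
      \end{align*}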

  12. A retrodictive stochastic simulation algorithm

    International Nuclear Information System (INIS)

    Vaughan, T.G.; Drummond, P.D.; Drummond, A.J.

    2010-01-01

    In this paper we describe a simple method for inferring the initial states of systems evolving stochastically according to master equations, given knowledge of the final states. This is achieved through the use of a retrodictive stochastic simulation algorithm which complements the usual predictive stochastic simulation approach. We demonstrate the utility of this new algorithm by applying it to example problems, including the derivation of likely ancestral states of a gene sequence given a Markovian model of genetic mutation.
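
    For contrast with the retrodictive variant, the usual predictive stochastic simulation algorithm for a single decay reaction A -> 0 is sketched below; the retrodictive inference from final states back to initial states described in the paper is not reproduced here.

      import numpy as np

      rng = np.random.default_rng(0)

      def ssa_decay(n0=100, k=0.1, t_max=50.0):
          # Gillespie's direct method for A -> 0: draw exponential waiting times
          # from the current total propensity and fire one reaction per step.
          t, n, trajectory = 0.0, n0, [(0.0, n0)]
          while n > 0 and t < t_max:
              propensity = k * n
              t += rng.exponential(1.0 / propensity)
              n -= 1
              trajectory.append((t, n))
          return trajectory

      print(ssa_decay()[-3:])   # last few (time, population) pairs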

  13. Autonomous algorithms for image restoration

    OpenAIRE

    Griniasty , Meir

    1994-01-01

    We describe a general theoretical framework for algorithms that adaptively tune all their parameters during the restoration of a noisy image. The adaptation procedure is based on a mean field approach known as "Deterministic Annealing", and is reminiscent of the "Deterministic Boltzmann Machine". The algorithm is less time consuming in comparison with its simulated annealing alternative. We apply the theory to several architectures and compare their performances.

  14. Algorithms and Public Service Media

    OpenAIRE

    Sørensen, Jannick Kirk; Hutchinson, Jonathon

    2018-01-01

    When Public Service Media (PSM) organisations introduce algorithmic recommender systems to suggest media content to users, fundamental values of PSM are challenged. Beyond being confronted with ubiquitous computer ethics problems of causality and transparency, also the identity of PSM as curator and agenda-setter is challenged. The algorithms represents rules for which content to present to whom, and in this sense they may discriminate and bias the exposure of diversity. Furthermore, on a pra...

  15. New algorithms for parallel MRI

    International Nuclear Information System (INIS)

    Anzengruber, S; Ramlau, R; Bauer, F; Leitao, A

    2008-01-01

    Magnetic Resonance Imaging with parallel data acquisition requires algorithms for reconstructing the patient's image from a small number of measured lines of the Fourier domain (k-space). In contrast to well-known algorithms like SENSE and GRAPPA and its flavors, we consider the problem as a non-linear inverse problem. However, in order to avoid cost-intensive derivatives, we will use the Landweber-Kaczmarz iteration and, to improve the overall results, some additional sparsity constraints.

  16. Algorithm for programming function generators

    International Nuclear Information System (INIS)

    Bozoki, E.

    1981-01-01

    The present paper deals with a mathematical problem, encountered when driving a fully programmable μ-processor controlled function generator. An algorithm is presented to approximate a desired function by a set of straight segments in such a way that additional restrictions (hardware imposed) are also satisfied. A computer program which incorporates this algorithm and automatically generates the necessary input for the function generator for a broad class of desired functions is also described

  17. Neutronic rebalance algorithms for SIMMER

    International Nuclear Information System (INIS)

    Soran, P.D.

    1976-05-01

    Four algorithms to solve the two-dimensional neutronic rebalance equations in SIMMER are investigated. Results of the study are presented and indicate that a matrix decomposition technique with a variable convergence criterion is the best solution algorithm in terms of accuracy and calculational speed. Rebalance numerical stability problems are examined. The results of the study can be applied to other neutron transport codes which use discrete ordinates techniques

  18. Euclidean shortest paths exact or approximate algorithms

    CERN Document Server

    Li, Fajie

    2014-01-01

    This book reviews algorithms for the exact or approximate solution of shortest-path problems, with a specific focus on a class of algorithms called rubberband algorithms. The coverage includes mathematical proofs for many of the given statements.

  19. A Global algorithm for linear radiosity

    OpenAIRE

    Sbert Cassasayas, Mateu; Pueyo Sánchez, Xavier

    1993-01-01

    A linear algorithm for radiosity is presented, linear both in time and storage. The new algorithm is based on previous work by the authors and on the well known algorithms for progressive radiosity and Monte Carlo particle transport.

  20. Cascade Error Projection: A New Learning Algorithm

    Science.gov (United States)

    Duong, T. A.; Stubberud, A. R.; Daud, T.; Thakoor, A. P.

    1995-01-01

    A new neural network architecture and a hardware implementable learning algorithm is proposed. The algorithm, called cascade error projection (CEP), handles lack of precision and circuit noise better than existing algorithms.

  1. Multimodal Estimation of Distribution Algorithms.

    Science.gov (United States)

    Yang, Qiang; Chen, Wei-Neng; Li, Yun; Chen, C L Philip; Xu, Xiang-Min; Zhang, Jun

    2016-02-15

    Taking advantage of estimation of distribution algorithms (EDAs) in preserving high diversity, this paper proposes a multimodal EDA. Integrated with clustering strategies for crowding and speciation, two versions of this algorithm are developed, which operate at the niche level. Then these two algorithms are equipped with three distinctive techniques: 1) a dynamic cluster sizing strategy; 2) an alternative utilization of Gaussian and Cauchy distributions to generate offspring; and 3) an adaptive local search. The dynamic cluster sizing affords a potential balance between exploration and exploitation and reduces the sensitivity to the cluster size in the niching methods. Taking advantage of Gaussian and Cauchy distributions, we generate the offspring at the niche level by alternately using these two distributions. Such utilization can also potentially offer a balance between exploration and exploitation. Further, solution accuracy is enhanced through a new local search scheme probabilistically conducted around seeds of niches with probabilities determined self-adaptively according to fitness values of these seeds. Extensive experiments conducted on 20 benchmark multimodal problems confirm that both algorithms can achieve competitive performance compared with several state-of-the-art multimodal algorithms, which is supported by nonparametric tests. In particular, the proposed algorithms are very promising for complex problems with many local optima.
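
    The alternating use of Gaussian and Cauchy sampling at the niche level can be sketched as follows; cluster formation, dynamic sizing and the adaptive local search from the paper are deliberately omitted, so this is only an illustration of the offspring-generation step.

      import numpy as np

      rng = np.random.default_rng(0)

      def niche_offspring(niche, use_gaussian, scale=0.5):
          # Sample new candidates around the niche mean, alternating between a
          # Gaussian (exploitation) and a heavier-tailed Cauchy (exploration).
          mean = niche.mean(axis=0)
          if use_gaussian:
              return mean + rng.normal(scale=scale, size=niche.shape)
          return mean + scale * rng.standard_cauchy(size=niche.shape)

      niche = rng.normal(loc=3.0, scale=0.2, size=(10, 2))   # one niche of solutions
      print(niche_offspring(niche, use_gaussian=False)[:3])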

  2. Efficient RNA structure comparison algorithms.

    Science.gov (United States)

    Arslan, Abdullah N; Anandan, Jithendar; Fry, Eric; Monschke, Keith; Ganneboina, Nitin; Bowerman, Jason

    2017-12-01

    Recently proposed relative addressing-based ([Formula: see text]) RNA secondary structure representation has important features by which an RNA structure database can be stored into a suffix array. A fast substructure search algorithm has been proposed based on binary search on this suffix array. Using this substructure search algorithm, we present a fast algorithm that finds the largest common substructure of given multiple RNA structures in [Formula: see text] format. The multiple RNA structure comparison problem is NP-hard in its general formulation. We introduced a new problem for comparing multiple RNA structures. This problem has more strict similarity definition and objective, and we propose an algorithm that solves this problem efficiently. We also develop another comparison algorithm that iteratively calls this algorithm to locate nonoverlapping large common substructures in compared RNAs. With the new resulting tools, we improved the RNASSAC website (linked from http://faculty.tamuc.edu/aarslan ). This website now also includes two drawing tools: one specialized for preparing RNA substructures that can be used as input by the search tool, and another one for automatically drawing the entire RNA structure from a given structure sequence.

  3. Golden Sine Algorithm: A Novel Math-Inspired Algorithm

    Directory of Open Access Journals (Sweden)

    TANYILDIZI, E.

    2017-05-01

    Full Text Available In this study, the Golden Sine Algorithm (Gold-SA) is presented as a new metaheuristic method for solving optimization problems. Gold-SA has been developed as a new population-based search algorithm. This math-based algorithm is inspired by the sine trigonometric function. In the algorithm, as many random individuals as the number of search agents are created, with a uniform distribution in each dimension. The Gold-SA operator searches for a better solution in each iteration by trying to bring the current position closer to the target value. The solution space is narrowed by the golden section so that only the areas that are expected to give good results are scanned instead of the whole solution space. In the tests performed, it is seen that Gold-SA yields better results than other population-based methods. In addition, Gold-SA has fewer algorithm-dependent parameters and operators than other metaheuristic methods and provides faster convergence, which increases the importance of this new method.

  4. Algorithms as fetish: Faith and possibility in algorithmic work

    Directory of Open Access Journals (Sweden)

    Suzanne L Thomas

    2018-01-01

    Full Text Available Algorithms are powerful because we invest in them the power to do things. With such promise, they can transform the ordinary, say snapshots along a robotic vacuum cleaner’s route, into something much more, such as a clean home. Echoing David Graeber’s revision of fetishism, we argue that this easy slip from technical capabilities to broader claims betrays not the “magic” of algorithms but rather the dynamics of their exchange. Fetishes are not indicators of false thinking, but social contracts in material form. They mediate emerging distributions of power often too nascent, too slippery or too disconcerting to directly acknowledge. Drawing primarily on 2016 ethnographic research with computer vision professionals, we show how faith in what algorithms can do shapes the social encounters and exchanges of their production. By analyzing algorithms through the lens of fetishism, we can see the social and economic investment in some people’s labor over others. We also see everyday opportunities for social creativity and change. We conclude that what is problematic about algorithms is not their fetishization but instead their stabilization into full-fledged gods and demons – the more deserving objects of critique.

  5. Algebraic Algorithm Design and Local Search

    National Research Council Canada - National Science Library

    Graham, Robert

    1996-01-01

    .... Algebraic techniques have been applied successfully to algorithm synthesis by the use of algorithm theories and design tactics, an approach pioneered in the Kestrel Interactive Development System (KIDS...

  6. Algorithmic randomness and physical entropy

    International Nuclear Information System (INIS)

    Zurek, W.H.

    1989-01-01

    Algorithmic randomness provides a rigorous, entropylike measure of disorder of an individual, microscopic, definite state of a physical system. It is defined by the size (in binary digits) of the shortest message specifying the microstate uniquely up to the assumed resolution. Equivalently, algorithmic randomness can be expressed as the number of bits in the smallest program for a universal computer that can reproduce the state in question (for instance, by plotting it with the assumed accuracy). In contrast to the traditional definitions of entropy, algorithmic randomness can be used to measure disorder without any recourse to probabilities. Algorithmic randomness is typically very difficult to calculate exactly but relatively easy to estimate. In large systems, probabilistic ensemble definitions of entropy (e.g., coarse-grained entropy of Gibbs and Boltzmann's entropy H = ln W, as well as Shannon's information-theoretic entropy) provide accurate estimates of the algorithmic entropy of an individual system or its average value for an ensemble. One is thus able to rederive much of thermodynamics and statistical mechanics in a setting very different from the usual. Physical entropy, I suggest, is a sum of (i) the missing information measured by Shannon's formula and (ii) the algorithmic information content (algorithmic randomness) present in the available data about the system. This definition of entropy is essential in describing the operation of thermodynamic engines from the viewpoint of information gathering and using systems. These Maxwell demon-type entities are capable of acquiring and processing information and therefore can "decide" on the basis of the results of their measurements and computations the best strategy for extracting energy from their surroundings. From their internal point of view the outcome of each measurement is definite

  7. Contact-impact algorithms on parallel computers

    International Nuclear Information System (INIS)

    Zhong Zhihua; Nilsson, Larsgunnar

    1994-01-01

    Contact-impact algorithms on parallel computers are discussed within the context of explicit finite element analysis. The algorithms concerned include a contact searching algorithm and an algorithm for contact force calculations. The contact searching algorithm is based on the territory concept of the general HITA algorithm. However, no distinction is made between different contact bodies, or between different contact surfaces. All contact segments from contact boundaries are taken as a single set. Hierarchy territories and contact territories are expanded. A three-dimensional bucket sort algorithm is used to sort contact nodes. The defence node algorithm is used in the calculation of contact forces. Both the contact searching algorithm and the defence node algorithm are implemented on the connection machine CM-200. The performance of the algorithms is examined under different circumstances, and numerical results are presented. ((orig.))

  8. A review on quantum search algorithms

    Science.gov (United States)

    Giri, Pulak Ranjan; Korepin, Vladimir E.

    2017-12-01

    The use of superposition of states in quantum computation, known as quantum parallelism, has a significant advantage in terms of speed over classical computation. It is evident from the early quantum algorithms such as Deutsch's algorithm, the Deutsch-Jozsa algorithm and its variation, the Bernstein-Vazirani algorithm, Simon's algorithm, Shor's algorithms, etc. Quantum parallelism also significantly speeds up the database search algorithm, which is important in computer science because it comes as a subroutine in many important algorithms. Grover's quantum database search achieves the task of finding the target element in an unsorted database in a time quadratically faster than on a classical computer. We review Grover's quantum search algorithms for single and multiple target elements in a database. The partial search algorithm of Grover and Radhakrishnan and its optimization by Korepin, called the GRK algorithm, are also discussed.
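
    A classical state-vector simulation makes the quadratic speed-up of Grover's search concrete: after roughly (π/4)√N iterations of oracle plus inversion-about-the-mean, almost all amplitude sits on the target index. The sketch below is a simulation on ordinary arrays, not a quantum implementation.

      import numpy as np

      def grover(n_items=64, target=17):
          # Uniform superposition, then repeat: sign-flip the target (oracle)
          # and invert every amplitude about the mean (diffusion operator).
          state = np.full(n_items, 1.0 / np.sqrt(n_items))
          iterations = int(np.floor(np.pi / 4 * np.sqrt(n_items)))
          for _ in range(iterations):
              state[target] *= -1.0
              state = 2.0 * state.mean() - state
          return int(np.argmax(state ** 2)), float(np.max(state ** 2))

      print(grover())   # (17, ~0.99): target found with high probability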

  9. Computational geometry algorithms and applications

    CERN Document Server

    de Berg, Mark; Overmars, Mark; Schwarzkopf, Otfried

    1997-01-01

    Computational geometry emerged from the field of algorithms design and analysis in the late 1970s. It has grown into a recognized discipline with its own journals, conferences, and a large community of active researchers. The success of the field as a research discipline can on the one hand be explained from the beauty of the problems studied and the solutions obtained, and, on the other hand, by the many application domains--computer graphics, geographic information systems (GIS), robotics, and others--in which geometric algorithms play a fundamental role. For many geometric problems the early algorithmic solutions were either slow or difficult to understand and implement. In recent years a number of new algorithmic techniques have been developed that improved and simplified many of the previous approaches. In this textbook we have tried to make these modern algorithmic solutions accessible to a large audience. The book has been written as a textbook for a course in computational geometry, but it can ...

  10. The Chandra Source Catalog: Algorithms

    Science.gov (United States)

    McDowell, Jonathan; Evans, I. N.; Primini, F. A.; Glotfelty, K. J.; McCollough, M. L.; Houck, J. C.; Nowak, M. A.; Karovska, M.; Davis, J. E.; Rots, A. H.; Siemiginowska, A. L.; Hain, R.; Evans, J. D.; Anderson, C. S.; Bonaventura, N. R.; Chen, J. C.; Doe, S. M.; Fabbiano, G.; Galle, E. C.; Gibbs, D. G., II; Grier, J. D.; Hall, D. M.; Harbo, P. N.; He, X.; Lauer, J.; Miller, J. B.; Mitschang, A. W.; Morgan, D. L.; Nichols, J. S.; Plummer, D. A.; Refsdal, B. L.; Sundheim, B. A.; Tibbetts, M. S.; van Stone, D. W.; Winkelman, S. L.; Zografou, P.

    2009-09-01

    Creation of the Chandra Source Catalog (CSC) required adjustment of existing pipeline processing, adaptation of existing interactive analysis software for automated use, and development of entirely new algorithms. Data calibration was based on the existing pipeline, but more rigorous data cleaning was applied and the latest calibration data products were used. For source detection, a local background map was created including the effects of ACIS source readout streaks. The existing wavelet source detection algorithm was modified and a set of post-processing scripts used to correct the results. To analyse the source properties we ran the SAO Traceray trace code for each source to generate a model point spread function, allowing us to find encircled energy correction factors and estimate source extent. Further algorithms were developed to characterize the spectral, spatial and temporal properties of the sources and to estimate the confidence intervals on count rates and fluxes. Finally, sources detected in multiple observations were matched, and best estimates of their merged properties derived. In this paper we present an overview of the algorithms used, with more detailed treatment of some of the newly developed algorithms presented in companion papers.

  11. The Surface Extraction from TIN based Search-space Minimization (SETSM) algorithm

    Science.gov (United States)

    Noh, Myoung-Jong; Howat, Ian M.

    2017-07-01

    Digital Elevation Models (DEMs) provide critical information for a wide range of scientific, navigational and engineering activities. Submeter resolution, stereoscopic satellite imagery with high geometric and radiometric quality, and wide spatial coverage are becoming increasingly accessible for generating stereo-photogrammetric DEMs. However, low contrast and repeatedly-textured surfaces, such as snow and glacial ice at high latitudes, and mountainous terrains challenge existing stereo-photogrammetric DEM generation techniques, particularly without a-priori information such as existing seed DEMs or the manual setting of terrain-specific parameters. To utilize these data for fully-automatic DEM extraction at a large scale, we developed the Surface Extraction from TIN-based Search-space Minimization (SETSM) algorithm. SETSM is fully automatic (i.e. no search parameter settings are needed) and uses only the sensor model Rational Polynomial Coefficients (RPCs). SETSM adopts a hierarchical, combined image- and object-space matching strategy utilizing weighted normalized cross-correlation with both original distorted and geometrically corrected images for overcoming ambiguities caused by foreshortening and occlusions. In addition, SETSM optimally minimizes search-spaces to extract optimal matches over problematic terrains by iteratively updating object surfaces within a Triangulated Irregular Network, and utilizes a geometric-constrained blunder and outlier detection in object space. We prove the ability of SETSM to mitigate typical stereo-photogrammetric matching problems over a range of challenging terrains. SETSM is the primary DEM generation software for the US National Science Foundation's ArcticDEM project.

  12. Quantum walks and search algorithms

    CERN Document Server

    Portugal, Renato

    2013-01-01

    This book addresses an interesting area of quantum computation called quantum walks, which play an important role in building quantum algorithms, in particular search algorithms. Quantum walks are the quantum analogue of classical random walks. It is known that quantum computers have great power for searching unsorted databases. This power extends to many kinds of searches, particularly to the problem of finding a specific location in a spatial layout, which can be modeled by a graph. The goal is to find a specific node knowing that the particle uses the edges to jump from one node to the next. This book is self-contained with main topics that include: Grover's algorithm, describing its geometrical interpretation and evolution by means of the spectral decomposition of the evolution operator; analytical solutions of quantum walks on important graphs like the line, cycles, two-dimensional lattices, and hypercubes using Fourier transforms; quantum walks on generic graphs, describing methods to calculate the limiting d...

  13. Gossip algorithms in quantum networks

    International Nuclear Information System (INIS)

    Siomau, Michael

    2017-01-01

    Gossip algorithms is a common term to describe protocols for unreliable information dissemination in natural networks, which are not optimally designed for efficient communication between network entities. We consider the application of gossip algorithms to quantum networks and show that any quantum network can be updated to an optimal configuration with local operations and classical communication. This allows speeding up – in the best case exponentially – the quantum information dissemination. Irrespective of the initial configuration of the quantum network, the update requires at most a polynomial number of local operations and classical communication. - Highlights: • We analyze the performance of gossip algorithms in quantum networks. • Local operations and classical communication (LOCC) can speed the performance up. • The speed-up is exponential in the best case; the number of LOCC is polynomial.

  14. Universal algorithm of time sharing

    International Nuclear Information System (INIS)

    Silin, I.N.; Fedyun'kin, E.D.

    1979-01-01

    A timesharing system algorithm is proposed for a wide class of one- and multiprocessor computer configurations. The dynamic priority is a piecewise-constant function of the channel characteristic and the system time quantum. The interactive job quantum has variable length. A recurrent formula for the characteristic is derived. The concept of the background job is introduced. A background job loads the processor if high-priority jobs are inactive. The background quality function is given on the basis of the statistical data gathered during the timesharing process. The algorithm includes an optimal procedure for replacing jobs in memory. Sharing of the system time in proportion to the external priorities is guaranteed for all sufficiently active computing channels (background too). A fast response is guaranteed for interactive jobs that use little time and memory. External priority control is left to the high-level scheduler. The experience of implementing the algorithm on the BESM-6 computer at JINR is discussed

  15. Algorithms for Decision Tree Construction

    KAUST Repository

    Chikalov, Igor

    2011-01-01

    The study of algorithms for decision tree construction was initiated in the 1960s. The first algorithms are based on the separation heuristic [13, 31] that at each step tries to divide the set of objects as evenly as possible. Later Garey and Graham [28] showed that such an algorithm may construct decision trees whose average depth is arbitrarily far from the minimum. Hyafil and Rivest in [35] proved the NP-hardness of the DT problem, that is, constructing a tree with the minimum average depth for a diagnostic problem over a 2-valued information system and a uniform probability distribution. Cox et al. in [22] showed that for a two-class problem over an information system, even finding the root node attribute for an optimal tree is an NP-hard problem. © Springer-Verlag Berlin Heidelberg 2011.
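
    The separation heuristic mentioned above picks, at each node, the attribute that splits the current object set as evenly as possible; a minimal sketch for binary attributes (with hypothetical data) follows.

      def most_even_split(objects, attributes):
          # Smaller |#true - #false| means a more even split of the object set.
          def imbalance(attr):
              ones = sum(1 for obj in objects if obj[attr])
              return abs(2 * ones - len(objects))
          return min(attributes, key=imbalance)

      objects = [{"a": 1, "b": 0}, {"a": 1, "b": 1}, {"a": 1, "b": 0}, {"a": 0, "b": 1}]
      print(most_even_split(objects, ["a", "b"]))   # -> "b" (2/2 split vs 3/1)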

  16. Scalable algorithms for contact problems

    CERN Document Server

    Dostál, Zdeněk; Sadowská, Marie; Vondrák, Vít

    2016-01-01

    This book presents a comprehensive and self-contained treatment of the authors’ newly developed scalable algorithms for the solutions of multibody contact problems of linear elasticity. The brand new feature of these algorithms is theoretically supported numerical scalability and parallel scalability demonstrated on problems discretized by billions of degrees of freedom. The theory supports solving multibody frictionless contact problems, contact problems with possibly orthotropic Tresca’s friction, and transient contact problems. It covers BEM discretization, jumping coefficients, floating bodies, mortar non-penetration conditions, etc. The exposition is divided into four parts, the first of which reviews appropriate facets of linear algebra, optimization, and analysis. The most important algorithms and optimality results are presented in the third part of the volume. The presentation is complete, including continuous formulation, discretization, decomposition, optimality results, and numerical experimen...

  17. Fault Tolerant External Memory Algorithms

    DEFF Research Database (Denmark)

    Jørgensen, Allan Grønlund; Brodal, Gerth Stølting; Mølhave, Thomas

    2009-01-01

    Algorithms dealing with massive data sets are usually designed for I/O-efficiency, often captured by the I/O model by Aggarwal and Vitter. Another aspect of dealing with massive data is how to deal with memory faults, e.g. captured by the adversary based faulty memory RAM by Finocchi and Italiano....... However, current fault tolerant algorithms do not scale beyond the internal memory. In this paper we investigate for the first time the connection between I/O-efficiency in the I/O model and fault tolerance in the faulty memory RAM, and we assume that both memory and disk are unreliable. We show a lower...... bound on the number of I/Os required for any deterministic dictionary that is resilient to memory faults. We design a static and a dynamic deterministic dictionary with optimal query performance as well as an optimal sorting algorithm and an optimal priority queue. Finally, we consider scenarios where...

  18. Gossip algorithms in quantum networks

    Energy Technology Data Exchange (ETDEWEB)

    Siomau, Michael, E-mail: siomau@nld.ds.mpg.de [Physics Department, Jazan University, P.O. Box 114, 45142 Jazan (Saudi Arabia); Network Dynamics, Max Planck Institute for Dynamics and Self-Organization (MPIDS), 37077 Göttingen (Germany)

    2017-01-23

    Gossip algorithms is a common term to describe protocols for unreliable information dissemination in natural networks, which are not optimally designed for efficient communication between network entities. We consider the application of gossip algorithms to quantum networks and show that any quantum network can be updated to an optimal configuration with local operations and classical communication. This allows speeding up – in the best case exponentially – the quantum information dissemination. Irrespective of the initial configuration of the quantum network, the update requires at most a polynomial number of local operations and classical communication. - Highlights: • We analyze the performance of gossip algorithms in quantum networks. • Local operations and classical communication (LOCC) can speed the performance up. • The speed-up is exponential in the best case; the number of LOCC is polynomial.

  19. Next Generation Suspension Dynamics Algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Schunk, Peter Randall [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Higdon, Jonathon [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Chen, Steven [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2014-12-01

    This research project has the objective to extend the range of application, improve the efficiency and conduct simulations with the Fast Lubrication Dynamics (FLD) algorithm for concentrated particle suspensions in a Newtonian fluid solvent. The research involves a combination of mathematical development, new computational algorithms, and application to processing flows of relevance in materials processing. The mathematical developments clarify the underlying theory, facilitate verification against classic monographs in the field and provide the framework for a novel parallel implementation optimized for an OpenMP shared memory environment. The project considered application to consolidation flows of major interest in high throughput materials processing and identified hitherto unforeseen challenges in the use of FLD in these applications. Extensions to the algorithm have been developed to improve its accuracy in these applications.

  20. Algorithms for Protein Structure Prediction

    DEFF Research Database (Denmark)

    Paluszewski, Martin

    -trace. Here we present three different approaches for reconstruction of C-traces from predictable measures. In our first approach [63, 62], the C-trace is positioned on a lattice and a tabu-search algorithm is applied to find minimum energy structures. The energy function is based on half-sphere-exposure (HSE......) is more robust than standard Monte Carlo search. In the second approach for reconstruction of C-traces, an exact branch and bound algorithm has been developed [67, 65]. The model is discrete and makes use of secondary structure predictions, HSE, CN and radius of gyration. We show how to compute good lower...... bounds for partial structures very fast. Using these lower bounds, we are able to find global minimum structures in a huge conformational space in reasonable time. We show that many of these global minimum structures are of good quality compared to the native structure. Our branch and bound algorithm...

  1. Some nonlinear space decomposition algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Tai, Xue-Cheng; Espedal, M. [Univ. of Bergen (Norway)

    1996-12-31

    Convergence of a space decomposition method is proved for a general convex programming problem. The space decomposition refers to methods that decompose a space into sums of subspaces, which could be a domain decomposition or a multigrid method for partial differential equations. Two algorithms are proposed. Both can be used for linear as well as nonlinear elliptic problems and they reduce to the standard additive and multiplicative Schwarz methods for linear elliptic problems. Two "hybrid" algorithms are also presented. They converge faster than the additive one and have better parallelism than the multiplicative method. Numerical tests with a two level domain decomposition for linear, nonlinear and interface elliptic problems are presented for the proposed algorithms.

  2. A generalization of Takane's algorithm for DEDICOM

    NARCIS (Netherlands)

    Kiers, Henk A.L.; ten Berge, Jos M.F.; Takane, Yoshio; de Leeuw, Jan

    An algorithm is described for fitting the DEDICOM model for the analysis of asymmetric data matrices. This algorithm generalizes an algorithm suggested by Takane in that it uses a damping parameter in the iterative process. Takane's algorithm does not always converge monotonically. Based on the

  3. Seamless Merging of Hypertext and Algorithm Animation

    Science.gov (United States)

    Karavirta, Ville

    2009-01-01

    Online learning material that students use by themselves is one of the typical usages of algorithm animation (AA). Thus, the integration of algorithm animations into hypertext is seen as an important topic today to promote the usage of algorithm animation in teaching. This article presents an algorithm animation viewer implemented purely using…

  4. Empirical tests of the Gradual Learning Algorithm

    NARCIS (Netherlands)

    Boersma, P.; Hayes, B.

    1999-01-01

    The Gradual Learning Algorithm (Boersma 1997) is a constraint ranking algorithm for learning Optimality-theoretic grammars. The purpose of this article is to assess the capabilities of the Gradual Learning Algorithm, particularly in comparison with the Constraint Demotion algorithm of Tesar and
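
    The core loop of the Gradual Learning Algorithm, as it is usually described, can be sketched as stochastic evaluation plus a symmetric update: ranking values receive Gaussian noise at evaluation time, and on an error every constraint that prefers the observed form is promoted by a small plasticity while every constraint that prefers the learner's wrong output is demoted. The two-candidate toy grammar and the parameter values below are illustrative assumptions only.

```python
import random

def ot_winner(candidates, violations, ranking, noise=2.0):
    """Pick the OT winner: evaluate constraints in order of (noisy)
    ranking value and keep the candidates with fewest violations."""
    noisy = {c: r + random.gauss(0.0, noise) for c, r in ranking.items()}
    order = sorted(ranking, key=lambda c: -noisy[c])
    alive = list(candidates)
    for con in order:
        best = min(violations[cand][con] for cand in alive)
        alive = [cand for cand in alive if violations[cand][con] == best]
        if len(alive) == 1:
            break
    return alive[0]

def gla_update(correct, learner, violations, ranking, plasticity=0.1):
    """Symmetric GLA step: promote constraints preferring the observed form,
    demote those preferring the learner's wrong output."""
    if learner == correct:
        return
    for con in ranking:
        if violations[correct][con] < violations[learner][con]:
            ranking[con] += plasticity
        elif violations[correct][con] > violations[learner][con]:
            ranking[con] -= plasticity

# Toy grammar: two candidates, two constraints, observed form is "cand_a".
violations = {"cand_a": {"C1": 1, "C2": 0}, "cand_b": {"C1": 0, "C2": 1}}
ranking = {"C1": 100.0, "C2": 100.0}
for _ in range(1000):
    out = ot_winner(violations.keys(), violations, ranking)
    gla_update("cand_a", out, violations, ranking)
print(ranking)   # C2 should end up ranked above C1
```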

  5. Empirical tests of the Gradual Learning Algorithm

    NARCIS (Netherlands)

    Boersma, P.; Hayes, B.

    2001-01-01

    The Gradual Learning Algorithm (Boersma 1997) is a constraint-ranking algorithm for learning optimality-theoretic grammars. The purpose of this article is to assess the capabilities of the Gradual Learning Algorithm, particularly in comparison with the Constraint Demotion algorithm of Tesar and

  6. A new cluster algorithm for graphs

    NARCIS (Netherlands)

    S. van Dongen

    1998-01-01

    A new cluster algorithm for graphs called the Markov Cluster algorithm (MCL algorithm) is introduced. The graphs may be both weighted (with nonnegative weights) and directed. Let G be such a graph. The MCL algorithm simulates flow in G by first identifying G in a
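
    The flow simulation referred to above alternates two matrix operations, which a short sketch makes concrete. The inflation parameter, the added self-loops, and the attractor-based cluster extraction below are common conventions, not details quoted from the report.

```python
import numpy as np

def mcl(adjacency, expansion=2, inflation=2.0, iters=50):
    """Minimal Markov Cluster sketch: alternate expansion (matrix power,
    i.e. flow spreading) and inflation (elementwise power plus column
    renormalisation, i.e. strengthening strong flows)."""
    M = adjacency + np.eye(len(adjacency))      # self-loops, common in practice
    M = M / M.sum(axis=0)                       # column-stochastic
    for _ in range(iters):
        M = np.linalg.matrix_power(M, expansion)    # expansion
        M = M ** inflation                          # inflation
        M = M / M.sum(axis=0)
    # Rows that keep nonzero mass act as cluster "attractors".
    clusters = [tuple(np.nonzero(row > 1e-6)[0]) for row in M if row.sum() > 1e-6]
    return set(clusters)

# Two triangles joined by a single edge -> two clusters expected.
A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], dtype=float)
print(mcl(A))
```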

  7. A Hybrid Chaotic Quantum Evolutionary Algorithm

    DEFF Research Database (Denmark)

    Cai, Y.; Zhang, M.; Cai, H.

    2010-01-01

    A hybrid chaotic quantum evolutionary algorithm is proposed to reduce the amount of computation, speed up convergence, and restrain premature convergence of the quantum evolutionary algorithm. The proposed algorithm adopts a chaotic initialization method to generate the initial population, which will form a pe...... tests. The presented algorithm is applied to urban traffic signal timing optimization and the results are satisfactory....
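
    One common way such hybrids use chaos is to let a logistic map, rather than a uniform random generator, seed the probability amplitudes of the quantum-inspired individuals. The sketch below shows only that initialization and observation step; the parameter choices are illustrative assumptions, not the operators of the cited paper.

```python
import math, random

def chaotic_population(pop_size, n_bits, x0=0.3):
    """Chaotic initialisation sketch: a logistic map drives the initial
    probability amplitudes of each quantum-inspired individual."""
    population = []
    x = x0
    for _ in range(pop_size):
        angles = []
        for _ in range(n_bits):
            x = 4.0 * x * (1.0 - x)          # logistic map, fully chaotic regime
            angles.append(x * math.pi / 2)   # map [0, 1] to an amplitude angle
        # Each gene is a pair (alpha, beta) with alpha^2 + beta^2 = 1.
        population.append([(math.cos(a), math.sin(a)) for a in angles])
    return population

def observe(individual):
    """Collapse a quantum-inspired individual to a classical bit string."""
    return [1 if random.random() < beta * beta else 0 for _, beta in individual]

pop = chaotic_population(pop_size=10, n_bits=8)
print(observe(pop[0]))
```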

  8. Using Alternative Multiplication Algorithms to "Offload" Cognition

    Science.gov (United States)

    Jazby, Dan; Pearn, Cath

    2015-01-01

    When viewed through a lens of embedded cognition, algorithms may enable aspects of the cognitive work of multi-digit multiplication to be "offloaded" to the environmental structure created by an algorithm. This study analyses four multiplication algorithms by viewing different algorithms as enabling cognitive work to be distributed…

  9. Gossip algorithms in quantum networks

    Science.gov (United States)

    Siomau, Michael

    2017-01-01

    Gossip algorithms are protocols for unreliable information dissemination in natural networks, which are not optimally designed for efficient communication between network entities. We consider the application of gossip algorithms to quantum networks and show that any quantum network can be updated to an optimal configuration with local operations and classical communication. This makes it possible to speed up the quantum information dissemination, in the best case exponentially. Irrespective of the initial configuration of the quantum network, the update requires at most a polynomial number of local operations and classical communication.
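
    For readers unfamiliar with the classical, pre-quantum setting, a push-style gossip protocol can be sketched in a few lines; the ring topology and round limit below are arbitrary choices for illustration.

```python
import random

def push_gossip(neighbours, source, max_rounds=100):
    """Classical push-style gossip sketch: in each round every informed node
    forwards the message to one randomly chosen neighbour.  Returns the
    number of rounds needed until every node is informed."""
    informed = {source}
    for round_no in range(1, max_rounds + 1):
        new = set()
        for node in informed:
            new.add(random.choice(neighbours[node]))
        informed |= new
        if len(informed) == len(neighbours):
            return round_no
    return None

# Ring of 16 nodes: dissemination is slow because the topology is not
# optimised for communication, which is exactly the situation gossip targets.
n = 16
ring = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
print("rounds to inform all nodes:", push_gossip(ring, source=0))
```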

  10. Industrial Applications of Evolutionary Algorithms

    CERN Document Server

    Sanchez, Ernesto; Tonda, Alberto

    2012-01-01

    This book is intended as a reference both for experienced users of evolutionary algorithms and for researchers that are beginning to approach these fascinating optimization techniques. Experienced users will find interesting details of real-world problems, and advice on solving issues related to fitness computation, modeling and setting appropriate parameters to reach optimal solutions. Beginners will find a thorough introduction to evolutionary computation, and a complete presentation of all evolutionary algorithms exploited to solve different problems. The book could fill the gap between the

  11. Parallel algorithms and cluster computing

    CERN Document Server

    Hoffmann, Karl Heinz

    2007-01-01

    This book presents major advances in high performance computing as well as major advances due to high performance computing. It contains a collection of papers in which results achieved in the collaboration of scientists from computer science, mathematics, physics, and mechanical engineering are presented. From the science problems to the mathematical algorithms and on to the effective implementation of these algorithms on massively parallel and cluster computers we present state-of-the-art methods and technology as well as exemplary results in these fields. This book shows that problems which seem superficially distinct become intimately connected on a computational level.

  12. Optimisation combinatoire Theorie et algorithmes

    CERN Document Server

    Korte, Bernhard; Fonlupt, Jean

    2010-01-01

    This book is the French translation of the fourth and latest edition of Combinatorial Optimization: Theory and Algorithms, written by two eminent specialists in the field, Bernhard Korte and Jens Vygen of the University of Bonn in Germany. It emphasizes the theoretical aspects of combinatorial optimization as well as efficient and exact algorithms for solving problems, and in this it differs from the simpler heuristic approaches often described elsewhere. The book contains numerous concise and elegant proofs of difficult results. Intended for students…

  13. Hill climbing algorithms and trivium

    DEFF Research Database (Denmark)

    Borghoff, Julia; Knudsen, Lars Ramkilde; Matusiewicz, Krystian

    2011-01-01

    This paper proposes a new method to solve certain classes of systems of multivariate equations over the binary field and its cryptanalytical applications. We show how heuristic optimization methods such as hill climbing algorithms can be relevant to solving systems of multivariate equations....... A characteristic of equation systems that may be efficiently solvable by means of such algorithms is provided. As an example, we investigate equation systems induced by the problem of recovering the internal state of the stream cipher Trivium. We propose an improved variant of the simulated annealing method...
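
    The general idea, finding an assignment that satisfies a system over GF(2) by treating the number of unsatisfied equations as a cost to be minimised, can be sketched with a plain restart hill climber. The toy equations below stand in for the much larger Trivium-derived systems studied in the paper.

```python
import random

def hill_climb(equations, n_vars, restarts=20, steps=2000):
    """Hill-climbing sketch for a system of equations over GF(2): the cost of
    an assignment is the number of unsatisfied equations, and each step flips
    the single bit that lowers the cost most."""
    def cost(x):
        return sum(0 if eq(x) else 1 for eq in equations)

    best, best_c = None, float("inf")
    for _ in range(restarts):
        x = [random.randint(0, 1) for _ in range(n_vars)]
        for _ in range(steps):
            c = cost(x)
            if c == 0:
                break
            # Evaluate all single-bit flips and take the best one.
            flips = []
            for i in range(n_vars):
                x[i] ^= 1
                flips.append((cost(x), i))
                x[i] ^= 1
            new_c, i = min(flips)
            if new_c >= c:
                break                      # local optimum reached
            x[i] ^= 1
        if cost(x) < best_c:
            best, best_c = x[:], cost(x)
    return best, best_c

# Toy system over GF(2): x0*x1 ^ x2 = 1, x1 ^ x3 = 0, x0 ^ x2 ^ x3 = 1.
eqs = [lambda x: (x[0] & x[1]) ^ x[2] == 1,
       lambda x: x[1] ^ x[3] == 0,
       lambda x: x[0] ^ x[2] ^ x[3] == 1]
print(hill_climb(eqs, 4))
```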

  14. CATEGORIES OF COMPUTER SYSTEMS ALGORITHMS

    Directory of Open Access Journals (Sweden)

    A. V. Poltavskiy

    2015-01-01

    Philosophy, as a frame of reference on the surrounding world and as the first science, is a fundamental basis, the "roots" (R. Descartes), for all branches of scientific knowledge accumulated and applied in all fields of human activity. The theory of algorithms, as one of the fundamental sections of mathematics, is also based on research in gnoseology leading to cognition of a true picture of the world of the human being. From the positions of gnoseology and ontology as fundamental sections of philosophy, modern innovative projects are inconceivable without the development of programs and algorithms.

  15. Synthesis of Greedy Algorithms Using Dominance Relations

    Science.gov (United States)

    Nedunuri, Srinivas; Smith, Douglas R.; Cook, William R.

    2010-01-01

    Greedy algorithms exploit problem structure and constraints to achieve linear-time performance. Yet there is still no completely satisfactory way of constructing greedy algorithms. For example, the Greedy Algorithm of Edmonds depends upon translating a problem into an algebraic structure called a matroid, but the existence of such a translation can be as hard to determine as the existence of a greedy algorithm itself. An alternative characterization of greedy algorithms is in terms of dominance relations, a well-known algorithmic technique used to prune search spaces. We demonstrate a process by which dominance relations can be methodically derived for a number of greedy algorithms, including activity selection, and prefix-free codes. By incorporating our approach into an existing framework for algorithm synthesis, we demonstrate that it could be the basis for an effective engineering method for greedy algorithms. We also compare our approach with other characterizations of greedy algorithms.
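
    For reference, the activity-selection problem mentioned above has the textbook greedy solution of always taking the compatible activity that finishes earliest. The sketch below shows that greedy rule only, not the dominance-relation derivation or the synthesis framework of the paper.

```python
def activity_selection(activities):
    """Classic greedy activity selection: repeatedly pick the compatible
    activity with the earliest finish time."""
    chosen, last_finish = [], float("-inf")
    for start, finish in sorted(activities, key=lambda a: a[1]):
        if start >= last_finish:          # compatible with what we already kept
            chosen.append((start, finish))
            last_finish = finish
    return chosen

print(activity_selection([(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (5, 9), (6, 10), (8, 11)]))
```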

  16. Binar Sort: A Linear Generalized Sorting Algorithm

    OpenAIRE

    Gilreath, William F.

    2008-01-01

    Sorting is a common and ubiquitous activity for computers. It is not surprising that there exists a plethora of sorting algorithms. It is an accepted performance limit that sorting algorithms are linearithmic, or O(N lg N). This linearithmic lower bound stems from the fact that the sorting algorithms use the ordering property of the data: the sorting algorithm uses comparisons based on the ordering property to arrange the data elements from an initial perm...
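
    The details of binar sort are not reproduced here, but the point about the linearithmic bound is easy to illustrate with any non-comparison sort. Counting sort, sketched below, runs in O(N + K) time for integer keys in a known range precisely because it never compares two elements; it is not the binar sort of the cited paper.

```python
def counting_sort(keys, key_range):
    """Counting sort: O(N + K) for integer keys in [0, key_range).  It sidesteps
    the O(N lg N) comparison bound by never comparing elements pairwise."""
    counts = [0] * key_range
    for k in keys:
        counts[k] += 1
    out = []
    for value, count in enumerate(counts):
        out.extend([value] * count)
    return out

print(counting_sort([5, 3, 9, 3, 0, 7, 5], key_range=10))
```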

  17. Some software algorithms for microprocessor ratemeters

    International Nuclear Information System (INIS)

    Savic, Z.

    1991-01-01

    After a review of the basic theoretical ratemeter problem and a general discussion of microprocessor ratemeters, a short insight into their hardware organization is given. Three software algorithms are described: the old quasi-exponential and floating-mean algorithms, and a new weighted moving average algorithm. The equations for the statistical characterization of the new algorithm are given and an intercomparison is made. It is concluded that the new algorithm has statistical advantages over the old ones. (orig.)
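
    As a generic illustration of the kind of software ratemeter being compared, the sketch below implements a plain exponentially weighted moving-average rate estimator. The time constant, sampling interval, and simulated count stream are illustrative assumptions; this is not any of the three algorithms from the paper.

```python
import random

def ewma_ratemeter(event_counts, dt, tau):
    """Generic exponentially weighted rate estimator: each new counting
    interval is blended into the running rate with weight dt/tau, trading
    statistical smoothing against response time."""
    alpha = dt / tau
    rate, rates = 0.0, []
    for counts in event_counts:
        instantaneous = counts / dt          # counts per second in this interval
        rate += alpha * (instantaneous - rate)
        rates.append(rate)
    return rates

# A step change from roughly 100 cps to roughly 1000 cps, smoothed with tau = 1 s.
random.seed(0)
samples = [random.gauss(10, 3) for _ in range(50)] + [random.gauss(100, 10) for _ in range(50)]
counts = [max(0, round(s)) for s in samples]          # counts per 0.1 s interval
print([round(r) for r in ewma_ratemeter(counts, dt=0.1, tau=1.0)][::10])
```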

  18. A survey of parallel multigrid algorithms

    Science.gov (United States)

    Chan, Tony F.; Tuminaro, Ray S.

    1987-01-01

    A typical multigrid algorithm applied to well-behaved linear elliptic partial differential equations (PDEs) is described. Criteria for designing and evaluating parallel algorithms are presented. Before evaluating the performance of some parallel multigrid algorithms, consideration is given to some theoretical complexity results for solving PDEs in parallel and for executing the multigrid algorithm. The effect of mapping and load imbalance on the parallel efficiency of the algorithm is studied.
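
    A compact way to see what the typical multigrid algorithm does is a recursive V-cycle on the 1D Poisson equation: smooth, restrict the residual to a coarser grid, recurse, interpolate the correction back, and smooth again. The damped-Jacobi smoother, injection restriction, and grid sizes below are illustrative choices, not those analysed in the survey.

```python
import numpy as np

def v_cycle(u, f, h, pre=3, post=3):
    """One multigrid V-cycle for -u'' = f with homogeneous Dirichlet data."""
    def smooth(u, f, h, sweeps):                     # damped Jacobi smoother
        for _ in range(sweeps):
            u[1:-1] += 0.67 * (0.5 * (u[:-2] + u[2:] + h * h * f[1:-1]) - u[1:-1])
        return u

    if len(u) <= 3:                                  # coarsest grid: solve exactly
        u[1] = 0.5 * (u[0] + u[2] + h * h * f[1])
        return u
    u = smooth(u, f, h, pre)                         # pre-smoothing
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / h**2   # fine-grid residual
    ec = v_cycle(np.zeros_like(r[::2]), r[::2].copy(), 2 * h)   # coarse correction
    e = np.zeros_like(u)
    e[::2] = ec                                      # prolongation: copy coarse points
    e[1::2] = 0.5 * (ec[:-1] + ec[1:])               # ... and interpolate in between
    return smooth(u + e, f, h, post)                 # post-smoothing

n = 129                                              # 2**7 + 1 grid points on [0, 1]
x = np.linspace(0.0, 1.0, n)
f = np.pi**2 * np.sin(np.pi * x)                     # exact solution is sin(pi x)
u = np.zeros(n)
for _ in range(10):
    u = v_cycle(u, f, 1.0 / (n - 1))
print("max error:", np.abs(u - np.sin(np.pi * x)).max())
```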

  19. Some software algorithms for microprocessor ratemeters

    Energy Technology Data Exchange (ETDEWEB)

    Savic, Z. (Military Technical Inst., Belgrade (Yugoslavia))

    1991-03-15

    After a review of the basic theoretical ratemeter problem and a general discussion of microprocessor ratemeters, a short insight into their hardware organization is given. Three software algorithms are described: the old quasi-exponential and floating-mean algorithms, and a new weighted moving average algorithm. The equations for the statistical characterization of the new algorithm are given and an intercomparison is made. It is concluded that the new algorithm has statistical advantages over the old ones. (orig.).

  20. Rendezvous maneuvers using Genetic Algorithm

    International Nuclear Information System (INIS)

    Dos Santos, Denílson Paulo Souza; De Almeida Prado, Antônio F Bertachini; Teodoro, Anderson Rodrigo Barretto

    2013-01-01

    The present paper has the goal of studying rendezvous orbital maneuvers, that is, orbital transfers in which a spacecraft has to change its orbit to meet another spacecraft travelling in a different orbit. The transfer is accomplished using multi-impulsive control. A genetic algorithm is used to find the transfers that have minimum fuel consumption
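
    A bare-bones genetic algorithm of the kind described can be sketched with a placeholder cost function. A real rendezvous optimisation would evaluate the orbital dynamics of each impulse sequence, whereas the toy fitness below only penalises deviation from a required net velocity change, so every numeric choice here is an illustrative assumption.

```python
import random

def genetic_minimise(fitness, n_genes, bounds, pop_size=60, generations=200,
                     crossover_rate=0.9, mutation_rate=0.1):
    """Plain real-coded genetic algorithm: tournament selection, blend
    crossover, Gaussian mutation, and elitism of the two best individuals."""
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(n_genes)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness)
        new_pop = scored[:2]                               # elitism
        while len(new_pop) < pop_size:
            a, b = (min(random.sample(scored, 3), key=fitness) for _ in range(2))
            if random.random() < crossover_rate:           # blend crossover
                w = random.random()
                child = [w * x + (1 - w) * y for x, y in zip(a, b)]
            else:
                child = a[:]
            child = [g + random.gauss(0, 0.1) if random.random() < mutation_rate else g
                     for g in child]
            new_pop.append([min(max(g, lo), hi) for g in child])
        pop = new_pop
    return min(pop, key=fitness)

# Toy "rendezvous" cost: total delta-v of 4 impulses, plus a penalty unless
# the impulses sum to a required net velocity change of 1.0 km/s.
def fuel_cost(impulses):
    return sum(abs(dv) for dv in impulses) + 10.0 * abs(sum(impulses) - 1.0)

best = genetic_minimise(fuel_cost, n_genes=4, bounds=(-2.0, 2.0))
print([round(dv, 3) for dv in best], round(fuel_cost(best), 3))
```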