WorldWideScience

Sample records for point triangulation methods

  1. Investigation of point triangulation methods for optimality and performance in Structure from Motion systems

    DEFF Research Database (Denmark)

    Structure from Motion (SFM) systems are composed of cameras and structure in the form of 3D points and other features. Most often, the structure components outnumber the cameras by a great margin. It is not uncommon to have a configuration with 3 cameras observing more than 500 3D points...... an overview of existing triangulation methods with emphasis on performance versus optimality, and will suggest a fast triangulation algorithm based on linear constraints. The structure and camera motion estimation in a SFM system is based on the minimization of some norm of the reprojection error between...
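
    As a hedged illustration of this reprojection-error framing (not the paper's own linear-constraint algorithm), the classic direct linear transformation (DLT) triangulates one 3D point from two camera projection matrices, and the reprojection error can then be checked per view; the matrices P1, P2 and the pixel matches are assumed inputs:

    ```python
    import numpy as np

    def triangulate_dlt(P1, P2, x1, x2):
        """Linear (DLT) triangulation of one 3D point.
        P1, P2 : 3x4 camera projection matrices.
        x1, x2 : matched pixel coordinates (u, v) in the two images."""
        A = np.vstack([
            x1[0] * P1[2] - P1[0],
            x1[1] * P1[2] - P1[1],
            x2[0] * P2[2] - P2[0],
            x2[1] * P2[2] - P2[1],
        ])
        _, _, Vt = np.linalg.svd(A)      # minimize ||A X|| over unit X
        X = Vt[-1]
        return X[:3] / X[3]              # dehomogenize

    def reprojection_error(P, X, x):
        """L2 reprojection error of 3D point X against observation x."""
        p = P @ np.append(X, 1.0)
        return np.linalg.norm(p[:2] / p[2] - x)
    ```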

  2. Efficient triangulation of Poisson-disk sampled point sets

    KAUST Repository

    Guo, Jianwei

    2014-05-06

    In this paper, we present a simple yet efficient algorithm for triangulating a 2D input domain containing a Poisson-disk sampled point set. The proposed algorithm combines a regular grid and a discrete clustering approach to speed up the triangulation. Moreover, our triangulation algorithm is flexible and performs well on more general point sets such as adaptive, non-maximal Poisson-disk sets. The experimental results demonstrate that our algorithm is robust for a wide range of input domains and achieves significant performance improvement compared to the current state-of-the-art approaches. © 2014 Springer-Verlag Berlin Heidelberg.
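
    The grid-and-clustering speedup itself is not detailed in this abstract; as a hedged sketch of the ingredients only (not Guo et al.'s algorithm), the following generates a Poisson-disk sample by Bridson-style dart throwing over a background grid whose cell size r/sqrt(2) holds at most one sample, then triangulates the result with SciPy; all parameters are illustrative.

    ```python
    import numpy as np
    from scipy.spatial import Delaunay

    def poisson_disk(width, height, r, k=30, seed=0):
        """Bridson-style dart throwing with a background grid.
        Cell size r/sqrt(2) guarantees at most one sample per cell."""
        rng = np.random.default_rng(seed)
        cell = r / np.sqrt(2)
        gw, gh = int(np.ceil(width / cell)), int(np.ceil(height / cell))
        grid = -np.ones((gw, gh), dtype=int)     # -1 marks an empty cell
        samples, active = [], []

        def fits(p):
            gx, gy = int(p[0] / cell), int(p[1] / cell)
            for i in range(max(gx - 2, 0), min(gx + 3, gw)):
                for j in range(max(gy - 2, 0), min(gy + 3, gh)):
                    s = grid[i, j]
                    if s >= 0 and np.hypot(*(samples[s] - p)) < r:
                        return False
            return True

        def insert(p):
            grid[int(p[0] / cell), int(p[1] / cell)] = len(samples)
            active.append(len(samples))
            samples.append(p)

        insert(rng.uniform([0, 0], [width, height]))
        while active:
            idx = active[rng.integers(len(active))]
            for _ in range(k):                    # k darts around sample idx
                ang = rng.uniform(0, 2 * np.pi)
                q = samples[idx] + rng.uniform(r, 2 * r) * np.array([np.cos(ang), np.sin(ang)])
                if 0 <= q[0] < width and 0 <= q[1] < height and fits(q):
                    insert(q)
                    break
            else:
                active.remove(idx)                # no dart fit: retire this sample
        return np.array(samples)

    pts = poisson_disk(10.0, 10.0, r=0.5)
    tri = Delaunay(pts)                           # triangulate the sampled domain
    ```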

  3. Fast randomized point location without preprocessing in two- and three-dimensional Delaunay triangulations

    Energy Technology Data Exchange (ETDEWEB)

    Muecke, E.P.; Saias, I.; Zhu, B.

    1996-05-01

    This paper studies the point location problem in Delaunay triangulations without preprocessing or additional storage. The proposed procedure locates the query point simply by walking through the triangulation, after selecting a good starting point by random sampling. The analysis generalizes and extends a recent result for d = 2 dimensions by proving that this procedure takes expected time close to O(n^(1/(d+1))) for point location in Delaunay triangulations of n random points in d = 3 dimensions. Empirical results in both two and three dimensions show that this procedure is efficient in practice.
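
    A hedged sketch of this jump-and-walk idea (not the authors' code): pick a few random vertices, start at a triangle incident to the closest one, then walk across neighbouring triangles using 2D orientation tests. SciPy's Delaunay structure is assumed for the mesh; degenerate cases and exact predicates are ignored.

    ```python
    import numpy as np
    from scipy.spatial import Delaunay

    def orient(a, b, c):
        """Twice the signed area of triangle abc (2D orientation test)."""
        return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

    def jump_and_walk(tri, q, rng=np.random.default_rng(0)):
        """Locate the triangle of `tri` containing query point q.
        Jump: sample ~n^(1/3) random vertices, start near the closest.
        Walk: step across any edge that separates q from the triangle."""
        pts = tri.points
        sample = rng.integers(len(pts), size=max(1, round(len(pts) ** (1 / 3))))
        v0 = sample[np.argmin(np.linalg.norm(pts[sample] - q, axis=1))]
        t = int(np.argmax((tri.simplices == v0).any(axis=1)))  # triangle incident to v0
        while True:
            a, b, c = pts[tri.simplices[t]]
            s = np.sign(orient(a, b, c))          # triangle orientation
            for edge, (u, v) in enumerate([(b, c), (c, a), (a, b)]):
                if s * orient(u, v, q) < 0:       # q lies beyond this edge
                    t = tri.neighbors[t][edge]    # neighbor opposite vertex `edge`
                    break
            else:
                return t                          # q inside all three edges
            if t == -1:
                return -1                         # walked out of the convex hull

    pts = np.random.default_rng(1).random((1000, 2))
    tri = Delaunay(pts)
    print(jump_and_walk(tri, np.array([0.5, 0.5])))
    ```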

  4. A REST Service for Triangulation of Point Sets Using Oriented Matroids

    Directory of Open Access Journals (Sweden)

    José Antonio Valero Medina

    2014-05-01

    Full Text Available This paper describes the implementation of a prototype REST service for triangulation of point sets collected by mobile GPS receivers. The first objective of this paper is to test functionalities of an application, which exploits mobile devices’ capabilities to get data associated with their spatial location. A triangulation of a set of points provides a mechanism through which it is possible to produce an accurate representation of spatial data. Such triangulation may be used for representing surfaces by Triangulated Irregular Networks (TINs, and for decomposing complex two-dimensional spatial objects into simpler geometries. The second objective of this paper is to promote the use of oriented matroids for finding alternative solutions to spatial data processing and analysis tasks. This study focused on the particular case of the calculation of triangulations based on oriented matroids. The prototype described in this paper used a wrapper to integrate and expose several tools previously implemented in C++.

  5. Mixed Methods, Triangulation, and Causal Explanation

    Science.gov (United States)

    Howe, Kenneth R.

    2012-01-01

    This article distinguishes a disjunctive conception of mixed methods/triangulation, which brings different methods to bear on different questions, from a conjunctive conception, which brings different methods to bear on the same question. It then examines a more inclusive, holistic conception of mixed methods/triangulation that accommodates…

  6. Triangulation, Respondent Validation, and Democratic Participation in Mixed Methods Research

    Science.gov (United States)

    Torrance, Harry

    2012-01-01

    Over the past 10 years or so the "Field" of "Mixed Methods Research" (MMR) has increasingly been exerting itself as something separate, novel, and significant, with some advocates claiming paradigmatic status. Triangulation is an important component of mixed methods designs. Triangulation has its origins in attempts to validate research findings…

  7. Incremental Reconstruction of Urban Environments by Edge-Points Delaunay Triangulation

    OpenAIRE

    Romanoni, Andrea; Matteucci, Matteo

    2016-01-01

    Urban reconstruction from a video captured by a surveying vehicle constitutes a core module of automated mapping. When computational power is a limited resource and a detailed map is not the primary goal, the reconstruction can be performed incrementally, from a monocular video, by carving a 3D Delaunay triangulation of sparse points; this allows online incremental mapping for tasks such as traversability analysis or obstacle avoidance. To exploit the sharp edges of urban landscape, we ...

  8. RECONSTRUCTION, QUANTIFICATION, AND VISUALIZATION OF FOREST CANOPY BASED ON 3D TRIANGULATIONS OF AIRBORNE LASER SCANNING POINT DATA

    Directory of Open Access Journals (Sweden)

    J. Vauhkonen

    2015-03-01

    Reconstruction of three-dimensional (3D) forest canopy is described and quantified using airborne laser scanning (ALS) data with densities of 0.6–0.8 points m^(-2) and field measurements aggregated at resolutions of 400–900 m^2. The reconstruction was based on computational geometry, topological connectivity, and numerical optimization. More precisely, triangulations and their filtrations, i.e. ordered sets of simplices belonging to the triangulations, based on the point data were analyzed. Triangulating the ALS point data corresponds to subdividing the underlying space of the points into weighted simplicial complexes with weights quantifying the (empty) space delimited by the points. Reconstructing the canopy volume populated by biomass will thus likely require filtering to exclude that volume from canopy voids. The approaches applied for this purpose were (i) to optimize the degree of filtration with respect to the field measurements, and (ii) to predict this degree by analyzing the persistent homology of the obtained triangulations, which is applied for the first time to vegetation point clouds. When derived from optimized filtrations, the total tetrahedral volume had a high degree of determination (R^2) with the stem volume considered, both alone (R^2 = 0.65) and together with other predictors (R^2 = 0.78). When derived by analyzing the topological persistence of the point data and without any field input, the R^2 were lower, but the predictions still showed a correlation with the field-measured stem volumes. Finally, producing realistic visualizations of a forested landscape using the persistent homology approach is demonstrated.

  9. Aerial Triangulation Close-range Images with Dual Quaternion

    Directory of Open Access Journals (Sweden)

    SHENG Qinghong

    2015-05-01

    A new method for the aerial triangulation of close-range images based on dual quaternions is presented. Dual quaternions are used to represent the spiral screw motion of each beam in space: the real part of the dual quaternion represents the angular elements of all the beams in the close-range network, while the real and dual parts together represent the linear elements. Finally, an aerial triangulation adjustment model based on dual quaternions is established, and the elements of interior and exterior orientation and the object coordinates of the ground points are calculated. Real images and large-attitude-angle simulated images are selected for aerial triangulation experiments. The experimental results show that the new method for the aerial triangulation of close-range images based on dual quaternions can obtain higher accuracy.

  10. Simulating triangulations. Graphs, manifolds and (quantum) spacetime

    International Nuclear Information System (INIS)

    Krueger, Benedikt

    2016-01-01

    Triangulations, which can intuitively be described as a tessellation of space into simplicial building blocks, are structures that arise in various branches of physics: They can be used for describing complicated and curved objects in a discretized way, e.g., in foams, gels or porous media, or for discretizing curved boundaries for fluid simulations or dissipative systems. Interpreting triangulations as (maximal planar) graphs makes it possible to use them in graph theory or statistical physics, e.g., as small-world networks, as networks of spins, or in biological physics as actin networks. Since one can find an analogue of the Einstein-Hilbert action on triangulations, they can even be used for formulating theories of quantum gravity. Triangulations also have important applications in mathematics, especially in discrete topology. Despite their wide occurrence in different branches of physics and mathematics, there are still some fundamental open questions about triangulations in general. It is a priori unknown how many triangulations there are for a given set of points or a given manifold, or even whether there are exponentially many triangulations or more, a question that relates to a well-defined behavior of certain quantum geometry models. Another major open question is whether the elementary steps transforming triangulations into each other, which are used in computer simulations, are ergodic. Using triangulations as a model for spacetime, it is not clear whether there is a meaningful continuum limit that can be identified with the usual and well-tested theory of general relativity. Within this thesis some of these fundamental questions about triangulations are answered by the use of Markov chain Monte Carlo simulations, which are a probabilistic method for calculating statistical expectation values, or more generally a tool for calculating high-dimensional integrals. Additionally, some details about the Wang-Landau algorithm, which is the primarily used ...

  11. The use of triangulation in qualitative research.

    Science.gov (United States)

    Carter, Nancy; Bryant-Lukosius, Denise; DiCenso, Alba; Blythe, Jennifer; Neville, Alan J

    2014-09-01

    Triangulation refers to the use of multiple methods or data sources in qualitative research to develop a comprehensive understanding of phenomena (Patton, 1999). Triangulation also has been viewed as a qualitative research strategy to test validity through the convergence of information from different sources. Denzin (1978) and Patton (1999) identified four types of triangulation: (a) method triangulation, (b) investigator triangulation, (c) theory triangulation, and (d) data source triangulation. The current article will present the four types of triangulation followed by a discussion of the use of focus groups (FGs) and in-depth individual (IDI) interviews as an example of data source triangulation in qualitative inquiry.

  12. Triangulation applied to Jan H. van Bemmel

    NARCIS (Netherlands)

    Hasman, A.; Bergemann, D.; McCray, A. T.; Talmon, J. L.; Zvárová, J.

    2006-01-01

    OBJECTIVE: To describe the person of Jan H. van Bemmel from different points of view. METHOD: Triangulation. RESULTS AND CONCLUSIONS: Jan H. van Bemmel has contributed successfully to research and education in medical informatics. He has inspired many people in the Netherlands and internationally.

  13. Triangulation and the importance of establishing valid methods for food safety culture evaluation.

    Science.gov (United States)

    Jespersen, Lone; Wallace, Carol A

    2017-10-01

    The research evaluates maturity of food safety culture in five multi-national food companies using method triangulation, specifically self-assessment scale, performance documents, and semi-structured interviews. Weaknesses associated with each individual method are known but there are few studies in food safety where a method triangulation approach is used for both data collection and data analysis. Significantly, this research shows that individual results taken in isolation can lead to wrong conclusions, resulting in potentially failing tactics and wasted investments. However, by applying method triangulation and reviewing results from a range of culture measurement tools it is possible to better direct investments and interventions. The findings add to the food safety culture paradigm beyond a single evaluation of food safety culture using generic culture surveys. Copyright © 2017. Published by Elsevier Ltd.

  14. Triangulation Made Easy

    Energy Technology Data Exchange (ETDEWEB)

    Lindstrom, P

    2009-12-23

    We describe a simple and efficient algorithm for two-view triangulation of 3D points from approximate 2D matches based on minimizing the L2 reprojection error. Our iterative algorithm improves on the one by Kanatani et al. by ensuring that in each iteration the epipolar constraint is satisfied. In the case where the two cameras are pointed in the same direction, the method provably converges to an optimal solution in exactly two iterations. For more general camera poses, two iterations are sufficient to achieve convergence to machine precision, which we exploit to devise a fast, non-iterative method. The resulting algorithm amounts to little more than solving a quadratic equation, and involves a fixed, small number of simple matrix-vector operations and no conditional branches. We demonstrate that the method computes solutions that agree to very high precision with those of Hartley and Sturm's original polynomial method, while achieving higher numerical stability and 1-4 orders of magnitude greater speed.
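
    Lindstrom's closed-form update is not reproduced here; as a related, hedged illustration, the standard first-order (Sampson) correction moves a noisy match toward the epipolar constraint x2^T F x1 = 0 before a linear triangulation; the fundamental matrix F and the pixel pair are assumed given.

    ```python
    import numpy as np

    def sampson_correct(F, x1, x2):
        """First-order (Sampson) correction of a 2D match toward
        the epipolar constraint x2^T F x1 = 0.
        F : 3x3 fundamental matrix; x1, x2 : pixel coordinates (u, v)."""
        h1 = np.append(x1, 1.0)
        h2 = np.append(x2, 1.0)
        e = h2 @ F @ h1                      # algebraic epipolar residual
        Fx1 = F @ h1
        Ftx2 = F.T @ h2
        # gradient of e with respect to (x1_u, x1_v, x2_u, x2_v)
        J = np.array([Ftx2[0], Ftx2[1], Fx1[0], Fx1[1]])
        d = e / (J @ J) * J                  # first-order Newton-like step
        return x1 - d[:2], x2 - d[2:]
    ```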

  15. Visualization research of 3D radiation field based on Delaunay triangulation

    International Nuclear Information System (INIS)

    Xie Changji; Chen Yuqing; Li Shiting; Zhu Bo

    2011-01-01

    Based on the characteristics of the three-dimensional partition, the triangulation of discrete data sets is improved by the method of point-by-point insertion. The discrete data for the radiation field, obtained by theoretical calculation or actual measurement, are restructured, and a continuous distribution of the radiation field data is obtained. Finally, the 3D virtual scene of the nuclear facility is built with VR simulation techniques, and the visualization of the 3D radiation field is achieved by visualization mapping techniques. It is shown that the method combining VR and Delaunay triangulation can greatly improve the quality and efficiency of 3D radiation field visualization. (authors)
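
    Point-by-point insertion usually means the Bowyer-Watson scheme; a compact, unoptimized sketch (linear scan per insertion, a simple super-triangle heuristic, no special-case handling), not the paper's implementation:

    ```python
    import numpy as np

    def in_circumcircle(a, b, c, p):
        """True if p lies inside the circumcircle of triangle abc
        (orientation-independent in-circle predicate)."""
        A, B, C = (np.asarray(v, float) - p for v in (a, b, c))
        M = np.array([[A[0], A[1], A @ A],
                      [B[0], B[1], B @ B],
                      [C[0], C[1], C @ C]])
        u = np.asarray(b, float) - np.asarray(a, float)
        w = np.asarray(c, float) - np.asarray(a, float)
        sign = u[0] * w[1] - u[1] * w[0]      # >0 for counterclockwise abc
        return np.linalg.det(M) * sign > 0

    def bowyer_watson(points):
        """Delaunay triangulation by point-by-point insertion.
        Returns triangles as index triples into `points`."""
        pts = [np.asarray(p, float) for p in points]
        n = len(pts)
        centre = np.mean(pts, axis=0)         # super-triangle enclosing all points
        r = 3.0 * max(np.linalg.norm(p - centre) for p in pts) + 1.0
        pts += [centre + r * np.array([np.cos(t), np.sin(t)])
                for t in (0.0, 2 * np.pi / 3, 4 * np.pi / 3)]
        tris = [(n, n + 1, n + 2)]
        for i in range(n):
            bad = [t for t in tris
                   if in_circumcircle(pts[t[0]], pts[t[1]], pts[t[2]], pts[i])]
            edges = [tuple(sorted((t[j], t[(j + 1) % 3]))) for t in bad for j in range(3)]
            boundary = [e for e in set(edges) if edges.count(e) == 1]
            tris = [t for t in tris if t not in bad]       # carve the cavity
            tris += [(e[0], e[1], i) for e in boundary]    # re-triangulate it
        return [t for t in tris if all(v < n for v in t)]  # drop super-triangle

    rng = np.random.default_rng(0)
    print(len(bowyer_watson(rng.random((30, 2)))))
    ```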

  16. Quantitative evaluation for small surface damage based on iterative difference and triangulation of 3D point cloud

    Science.gov (United States)

    Zhang, Yuyan; Guo, Quanli; Wang, Zhenchun; Yang, Degong

    2018-03-01

    This paper proposes a non-contact, non-destructive evaluation method for the surface damage of high-speed sliding electrical contact rails. The proposed method establishes a model of damage identification and calculation. A laser scanning system is built to obtain the 3D point cloud data of the rail surface. In order to extract the damage region of the rail surface, the 3D point cloud data are processed using iterative difference, nearest-neighbour search and a data registration algorithm. The curvature of the point cloud data in the damage region is mapped to RGB color information, which directly reflects the trend of the curvature in the damage region. The extracted damage region is divided into triangular prism elements by triangulation. The volume and mass of a single element are calculated by the method of geometric segmentation. Finally, the total volume and mass of the damage region are obtained by the principle of superposition. The proposed method is applied to several typical damage cases and the results are discussed. The experimental results show that the algorithm can identify damage shapes and calculate damage mass with milligram precision, which is useful for evaluating the damage in further research.
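
    A hedged sketch of the superposition step alone (the scanning, registration and segmentation are outside it): each extracted surface triangle plus its per-vertex damage depths is treated as a triangular prism with volume = base area x mean vertex depth; the density value used to convert to mass is an assumed input.

    ```python
    import numpy as np

    def damage_volume_and_mass(tris_xy, depths, density):
        """tris_xy : (T, 3, 2) triangle vertices in the rail surface plane (mm).
        depths  : (T, 3) per-vertex damage depth below the nominal surface (mm).
        density : material density in mg/mm^3 (assumed known).
        Each element is a triangular prism: volume = base area * mean depth."""
        v1 = tris_xy[:, 1] - tris_xy[:, 0]
        v2 = tris_xy[:, 2] - tris_xy[:, 0]
        areas = 0.5 * np.abs(v1[:, 0] * v2[:, 1] - v1[:, 1] * v2[:, 0])
        volumes = areas * depths.mean(axis=1)      # per-prism volumes
        total_volume = volumes.sum()               # superposition over elements
        return total_volume, total_volume * density
    ```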

  17. The use of Triangulation in Social Sciences Research: Can qualitative and quantitative methods be combined?

    Directory of Open Access Journals (Sweden)

    Ashatu Hussein

    2015-03-01

    This article refers to a study in Tanzania on fringe benefits or welfare via the work contract, where we will work both quantitatively and qualitatively. My focus is on the vital issue of combining methods or methodologies. There have been mixed views on the use of triangulation in research. Some authors argue that triangulation is just for increasing the wider and deeper understanding of the study phenomenon, while others argue that triangulation is actually used to increase study accuracy; in this case triangulation is one of the validity measures. Triangulation is defined as the use of multiple methods, mainly qualitative and quantitative, in studying the same phenomenon for the purpose of increasing study credibility. This implies that triangulation is the combination of two or more methodological approaches, theoretical perspectives, data sources, investigators and analysis methods to study the same phenomenon. However, using both qualitative and quantitative paradigms in the same study has resulted in debate, with some researchers arguing that the two paradigms differ epistemologically and ontologically. Nevertheless, both paradigms are designed towards understanding a particular subject area of interest, and both have strengths and weaknesses. Thus, when they are combined there is a great possibility of neutralizing the flaws of one method and strengthening the benefits of the other, for better research results. To reap the benefits of the two paradigms and minimize the drawbacks of each, the combination of the two approaches is advocated in this article. The quality of our studies on welfare to combat poverty is crucial, especially when we want our conclusions to matter in practice.

  18. Accuracy enhancement of point triangulation probes for linear displacement measurement

    Science.gov (United States)

    Kim, Kyung-Chan; Kim, Jong-Ahn; Oh, SeBaek; Kim, Soo Hyun; Kwak, Yoon Keun

    2000-03-01

    Point triangulation probes (PTBs) fall into a general category of noncontact height or displacement measurement devices. PTBs are widely used for their simple structure, high resolution, and long operating range. However, several factors must be taken into account in order to obtain high accuracy and reliability: measurement errors from inclinations of the object surface, probe signal fluctuations generated by speckle effects, power variation of the light source, electronic noise, and so on. In this paper, we propose a novel signal processing algorithm, named EASDF (expanded average square difference function), for a newly designed PTB which is composed of an incoherent source (LED), a line-scan array detector, a specially selected diffuse reflecting surface, and several optical components. The EASDF, which is a modified correlation function, is able to calculate the displacement between the probe and the object surface effectively even in the presence of inclinations, power fluctuations, and noise.
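
    EASDF itself is the authors' extension; a minimal sketch of the plain average square difference function (ASDF) it builds on, applied to two line-scan signals to estimate the spot displacement (the signal names and lag search range are assumptions):

    ```python
    import numpy as np

    def asdf_shift(ref, sig, max_lag):
        """Average square difference function between two 1D probe signals.
        Returns the integer lag minimizing mean (ref[n] - sig[n + lag])^2,
        i.e. the displacement of the light spot on the line-scan detector."""
        ref = np.asarray(ref, float)
        sig = np.asarray(sig, float)

        def asdf(lag):
            lo = max(0, -lag)                     # valid overlap of the two signals
            hi = min(len(ref), len(sig) - lag)
            d = ref[lo:hi] - sig[lo + lag:hi + lag]
            return np.mean(d * d)

        return min(range(-max_lag, max_lag + 1), key=asdf)
    ```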

  19. I/O-Efficient Construction of Constrained Delaunay Triangulations

    DEFF Research Database (Denmark)

    Agarwal, Pankaj Kumar; Arge, Lars; Yi, Ke

    2005-01-01

    In this paper, we designed and implemented an I/O-efficient algorithm for constructing constrained Delaunay triangulations. If the number of constraining segments is smaller than the memory size, our algorithm runs in expected O((N/B) log_(M/B)(N/B)) I/Os for triangulating N points in the plane, where...

  1. Triangulation positioning system network

    Directory of Open Access Journals (Sweden)

    Sfendourakis Marios

    2017-01-01

    This paper presents ongoing work on localization and positioning through a triangulation procedure for a Fixed Sensors Network (FSN). The FSN has to work as a system. As the triangulation problem becomes highly complicated in cases with large numbers of sensors and transmitters, an adequate grid topology is needed in order to tackle the detection complexity. For that reason a network grid topology is presented, and areas that are problematic and need further analysis are examined. To deal with problems of saturation and False Triangulations (FTRNs), the network system will have to find adequate methods in every sub-area of the Area Of Interest (AOI). Concepts like sensor blindness and overall network blindness are also presented. All these concepts affect the network detection rate and its performance, and ought to be considered so that the network's overall performance is not degraded. Network performance should be monitored continuously, with the right algorithms and methods. It is also shown that as the number of TRNs and FTRNs increases, the Detection Complexity (DC) increases. It is hoped that with further research all the characteristics of a triangulation network for positioning will be obtained and the system will be able to perform autonomously with a high detection rate.

  2. Triangulation and Mixed Methods Designs: Data Integration with New Research Technologies

    Science.gov (United States)

    Fielding, Nigel G.

    2012-01-01

    Data integration is a crucial element in mixed methods analysis and conceptualization. It has three principal purposes: illustration, convergent validation (triangulation), and the development of analytic density or "richness." This article discusses such applications in relation to new technologies for social research, looking at three…

  3. All roads lead to Rome - New search methods for the optimal triangulation problem

    Czech Academy of Sciences Publication Activity Database

    Ottosen, T. J.; Vomlel, Jiří

    2012-01-01

    Roč. 53, č. 9 (2012), s. 1350-1366 ISSN 0888-613X R&D Projects: GA MŠk 1M0572; GA ČR GEICC/08/E010; GA ČR GA201/09/1891 Grant - others:GA MŠk(CZ) 2C06019 Institutional support: RVO:67985556 Keywords : Bayesian networks * Optimal triangulation * Probabilistic inference * Cliques in a graph Subject RIV: BD - Theory of Information Impact factor: 1.729, year: 2012 http://library.utia.cas.cz/separaty/2012/MTR/vomlel-all roads lead to rome - new search methods for the optimal triangulation problem.pdf

  4. Triangulation of Qualitative Methods for the Exploration of Activity Systems in Ergonomics

    Directory of Open Access Journals (Sweden)

    Monika Hackel

    2008-08-01

    Research concerning ergonomic issues in interdisciplinary projects often raises several very specific questions depending on project objectives. To answer these questions, the application of research methods should be thoroughly considered, regarding both the expenditure and the options within the scope of the given resources. The project AQUIMO develops an adaptable modelling tool for mechatronic engineering and creates a related qualification program. The task of social scientific research within this project is to identify requirements from the perspective of the subsequent users. This formative evaluation is based on the approach of "developmental work research" as set forth by ENGESTRÖM and is thus a form of "action research". This paper discusses the triangulation of several qualitative methods for examining difficulties in interdisciplinary collaboration in mechatronic engineering. After a description of both the background and the analytic approach within the project AQUIMO, the methods are briefly described with respect to their advantages and critical points. Their application within the research project AQUIMO is explained from an activity-theoretical perspective. URN: urn:nbn:de:0114-fqs0803158

  5. Laser triangulation method for measuring the size of parking claw

    Science.gov (United States)

    Liu, Bo; Zhang, Ming; Pang, Ying

    2017-10-01

    With the development of science and technology and the maturity of measurement technology, 3D profile measurement has developed rapidly. Three-dimensional measurement technology is widely used in mold manufacturing, industrial inspection, automatic processing and manufacturing, etc. In many situations in scientific research and industrial production it is necessary to transform original mechanical parts into 3D data models on the computer quickly and accurately. At present, many methods have been developed to measure contour size; laser triangulation is one of the most widely used.
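
    For reference, a hedged sketch of the idealized single-spot laser triangulation relation behind such systems (pinhole optics, spot offset x on the detector, baseline b, focal length f; all numbers are illustrative, not from the paper):

    ```python
    def triangulation_range(x_px, pixel_pitch_mm, f_mm, b_mm):
        """Idealized laser triangulation: range z from the spot position
        on the detector, by similar triangles  z = f * b / x.
        x_px           : spot offset from the optical axis, in pixels
        pixel_pitch_mm : detector pixel size in mm
        f_mm, b_mm     : focal length and laser-camera baseline in mm"""
        x_mm = x_px * pixel_pitch_mm
        return f_mm * b_mm / x_mm

    # e.g. a spot 120 px off-axis, 0.01 mm pixels, f = 16 mm, b = 50 mm
    print(triangulation_range(120, 0.01, 16.0, 50.0))   # -> ~666.7 mm
    ```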

  6. TRIANGULATION OF METHODS OF CAREER EDUCATION

    Directory of Open Access Journals (Sweden)

    Marija Turnsek Mikacic

    2015-09-01

    This paper is an overview of current research in the field of career education and career planning. The presented results constitute a model based on insights into different theories and empirical studies about career planning as a building block of personal excellence. We established the credibility, transferability and reliability of the research by means of triangulation. As data sources for triangulation we included essays of education participants and questionnaires. Qualitative analysis provided the framework for the construction of the paradigmatic model and the formulation of the final theory. We formulated a questionnaire on the basis of our own experiences in educating individuals. The quantitative analysis, based on the results of the interviews, confirms the following three hypotheses: the individuals who elaborated a personal career plan and acted accordingly changed their attitudes towards their careers and took control over their lives; in addition, they achieved a high level of self-esteem and self-confidence, in tandem with a perception of personal excellence, in contrast to the individuals who did not participate in career education and did not elaborate a career plan. We used the tools of NLP (neurolinguistic programming) as an additional technique in learning.

  7. Recent development of micro-triangulation for magnet fiducialisation

    CERN Document Server

    Vlachakis, Vasileios; Mainaud Durand, Helene; CERN. Geneva. ATS Department

    2016-01-01

    The micro-triangulation method is proposed as an alternative for magnet fiducialisation. The main objective is to measure horizontal and vertical angles to fiducial points and stretched wires, utilising theodolites equipped with cameras. This study aims to develop various methods, algorithms and software tools to enable the data acquisition and processing. In this paper, we present the first test measurement as an attempt to demonstrate the feasibility of the method and to evaluate its accuracy. The preliminary results are very promising, with accuracy always better than 20 μm for the wire position, and about 40 μm/m for the wire orientation, compared with a coordinate measuring machine.
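
    A hedged sketch of the planar core of such angle measurements (the real micro-triangulation is a full least-squares adjustment over many directions and the wire geometry): two stations with known coordinates each measure an azimuth to the same fiducial, and the position follows by intersecting the two rays.

    ```python
    import numpy as np

    def intersect_directions(s1, az1, s2, az2):
        """Planar triangulation by intersection of two direction rays.
        s1, s2   : station coordinates (x, y)
        az1, az2 : measured azimuths in radians (from +y, i.e. north)
        Solves s1 + t1*d1 = s2 + t2*d2 for the target point;
        fails (singular matrix) if the rays are parallel."""
        d1 = np.array([np.sin(az1), np.cos(az1)])
        d2 = np.array([np.sin(az2), np.cos(az2)])
        A = np.column_stack([d1, -d2])
        t = np.linalg.solve(A, np.asarray(s2, float) - np.asarray(s1, float))
        return np.asarray(s1, float) + t[0] * d1

    p = intersect_directions((0, 0), np.radians(45), (10, 0), np.radians(-45))
    print(p)   # -> [5. 5.]
    ```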

  8. Qualitative to quantitative: linked trajectory of method triangulation in a study on HIV/AIDS in Goa, India.

    Science.gov (United States)

    Bailey, Ajay; Hutter, Inge

    2008-10-01

    With 3.1 million people estimated to be living with HIV/AIDS in India and 39.5 million people globally, the epidemic has posed academics the challenge of identifying behaviours and their underlying beliefs in the effort to reduce the risk of HIV transmission. The Health Belief Model (HBM) is frequently used to identify risk behaviours and adherence behaviour in the field of HIV/AIDS. Risk behaviour studies that apply HBM have been largely quantitative and use of qualitative methodology is rare. The marriage of qualitative and quantitative methods has never been easy. The challenge is in triangulating the methods. Method triangulation has been largely used to combine insights from the qualitative and quantitative methods but not to link both the methods. In this paper we suggest a linked trajectory of method triangulation (LTMT). The linked trajectory aims to first gather individual level information through in-depth interviews and then to present the information as vignettes in focus group discussions. We thus validate information obtained from in-depth interviews and gather emic concepts that arise from the interaction. We thus capture both the interpretation and the interaction angles of the qualitative method. Further, using the qualitative information gained, a survey is designed. In doing so, the survey questions are grounded and contextualized. We employed this linked trajectory of method triangulation in a study on the risk assessment of HIV/AIDS among migrant and mobile men. Fieldwork was carried out in Goa, India. Data come from two waves of studies, first an explorative qualitative study (2003), second a larger study (2004-2005), including in-depth interviews (25), focus group discussions (21) and a survey (n=1259). By employing the qualitative to quantitative LTMT we can not only contextualize the existing concepts of the HBM, but also validate new concepts and identify new risk groups.

  9. The Application of a Multiphase Triangulation Approach to Mixed Methods: The Research of an Aspiring School Principal Development Program

    Science.gov (United States)

    Youngs, Howard; Piggot-Irvine, Eileen

    2012-01-01

    Mixed methods research has emerged as a credible alternative to unitary research approaches. The authors show how a combination of a triangulation convergence model with a triangulation multilevel model was used to research an aspiring school principal development pilot program. The multilevel model is used to show the national and regional levels…

  10. A Sweepline Algorithm for Generalized Delaunay Triangulations

    DEFF Research Database (Denmark)

    Skyum, Sven

    We give a deterministic O(n log n) sweepline algorithm to construct the generalized Voronoi diagram for n points in the plane, or rather its dual, the generalized Delaunay triangulation. The algorithm uses no transformations and is developed solely from the sweepline paradigm together ...

  11. Spectral triangulation: a 3D method for locating single-walled carbon nanotubes in vivo

    Science.gov (United States)

    Lin, Ching-Wei; Bachilo, Sergei M.; Vu, Michael; Beckingham, Kathleen M.; Bruce Weisman, R.

    2016-05-01

    Nanomaterials with luminescence in the short-wave infrared (SWIR) region are of special interest for biological research and medical diagnostics because of favorable tissue transparency and low autofluorescence backgrounds in that region. Single-walled carbon nanotubes (SWCNTs) show well-known sharp SWIR spectral signatures and therefore have potential for noninvasive detection and imaging of cancer tumours, when linked to selective targeting agents such as antibodies. However, such applications face the challenge of sensitively detecting and localizing the source of SWIR emission from inside tissues. A new method, called spectral triangulation, is presented for three dimensional (3D) localization using sparse optical measurements made at the specimen surface. Structurally unsorted SWCNT samples emitting over a range of wavelengths are excited inside tissue phantoms by an LED matrix. The resulting SWIR emission is sampled at points on the surface by a scanning fibre optic probe leading to an InGaAs spectrometer or a spectrally filtered InGaAs avalanche photodiode detector. Because of water absorption, attenuation of the SWCNT fluorescence in tissues is strongly wavelength-dependent. We therefore gauge the SWCNT-probe distance by analysing differential changes in the measured SWCNT emission spectra. SWCNT fluorescence can be clearly detected through at least 20 mm of tissue phantom, and the 3D locations of embedded SWCNT test samples are found with sub-millimeter accuracy at depths up to 10 mm. Our method can also distinguish and locate two embedded SWCNT sources at distinct positions.
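
    A toy version of the distance-gauging idea under a Beer-Lambert assumption (the attenuation coefficients and calibration ratio below are invented for illustration; the paper's analysis of full spectra is more involved):

    ```python
    import numpy as np

    def depth_from_ratio(I1, I2, mu1, mu2, R0=1.0):
        """Toy spectral depth gauge under Beer-Lambert attenuation.
        Two emission bands attenuate as I = I0 * exp(-mu * d), so the
        measured ratio R = I1 / I2 varies as R0 * exp((mu2 - mu1) * d):
            d = ln(R / R0) / (mu2 - mu1)
        mu in 1/mm, depth d in mm; R0 is the calibrated zero-depth ratio."""
        return np.log((I1 / I2) / R0) / (mu2 - mu1)

    # measured band ratio twice the calibration ratio, with assumed
    # attenuation coefficients mu1 = 0.10 /mm and mu2 = 0.25 /mm
    print(depth_from_ratio(2.0, 1.0, 0.10, 0.25))   # -> ~4.62 mm
    ```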

  12. Constructing Delaunay triangulations along space-filling curves

    NARCIS (Netherlands)

    Buchin, K.; Fiat, A.; Sanders, P.

    2009-01-01

    Incremental construction con BRIO using a space-filling curve order for insertion is a popular algorithm for constructing Delaunay triangulations. So far, it has only been analyzed for the case that a worst-case optimal point location data structure is used, which is often avoided in implementations.

  13. Onomatopoeia characters extraction from comic images using constrained Delaunay triangulation

    Science.gov (United States)

    Liu, Xiangping; Shoji, Kenji; Mori, Hiroshi; Toyama, Fubito

    2014-02-01

    A method for extracting onomatopoeia characters from comic images was developed based on the stroke width feature of characters, since in many cases they have a nearly constant stroke width. An image was segmented with a constrained Delaunay triangulation. Connected component grouping was performed based on the triangles generated by the constrained Delaunay triangulation. Stroke width calculation of the connected components was conducted based on the altitudes of the generated triangles. The experimental results proved the effectiveness of the proposed method.

  14. Triangulated categories (AM-148)

    CERN Document Server

    Neeman, Amnon

    2014-01-01

    The first two chapters of this book offer a modern, self-contained exposition of the elementary theory of triangulated categories and their quotients. The simple, elegant presentation of these known results makes these chapters eminently suitable as a text for graduate students. The remainder of the book is devoted to new research, providing, among other material, some remarkable improvements on Brown's classical representability theorem. In addition, the author introduces a class of triangulated categories, the "well generated triangulated categories", and studies their properties. This ...

  15. A Survey on Methods for Reconstructing Surfaces from Unorganized Point Sets

    Directory of Open Access Journals (Sweden)

    Vilius Matiukas

    2011-08-01

    This paper addresses the issue of reconstructing and visualizing surfaces from unorganized point sets. These can be acquired using different techniques, such as 3D laser scanning, computerized tomography, magnetic resonance imaging and multi-camera imaging. The problem of reconstructing surfaces from unorganized point sets is common to many diverse areas, including computer graphics, computer vision, computational geometry and reverse engineering. The paper presents three alternative methods that all use variations in complementary cones to triangulate and reconstruct the tested 3D surfaces. The article evaluates and contrasts the three alternatives.

  16. Robotic tool positioning process using a multi-line off-axis laser triangulation sensor

    Science.gov (United States)

    Pinto, T. C.; Matos, G.

    2018-03-01

    Proper positioning of a friction stir welding head for pin insertion, driven by a closed chain robot, is important to ensure quality repair of cracks. A multi-line off-axis laser triangulation sensor was designed to be integrated to the robot, allowing relative measurements of the surface to be repaired. This work describes the sensor characteristics, its evaluation and the measurement process for tool positioning to a surface point of interest. The developed process uses a point of interest image and a measured point cloud to define the translation and rotation for tool positioning. Sensor evaluation and tests are described. Keywords: laser triangulation, 3D measurement, tool positioning, robotics.
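
    A hedged sketch of the positioning computation (sensor details and robot kinematics omitted): fit a plane to the measured cloud around the point of interest via SVD, then build the rotation aligning the tool axis with the surface normal.

    ```python
    import numpy as np

    def fit_plane_normal(cloud):
        """Least-squares plane through a point cloud: (centroid, unit normal)."""
        c = cloud.mean(axis=0)
        _, _, Vt = np.linalg.svd(cloud - c)
        return c, Vt[-1]                       # smallest singular vector = normal

    def rotation_aligning(a, b):
        """Rotation matrix taking unit vector a onto unit vector b (Rodrigues)."""
        a, b = a / np.linalg.norm(a), b / np.linalg.norm(b)
        v, c = np.cross(a, b), a @ b
        if np.isclose(c, -1.0):                # opposite vectors: 180 degree turn
            axis = np.eye(3)[np.argmin(np.abs(a))]
            v = np.cross(a, axis)
            v /= np.linalg.norm(v)
            return 2 * np.outer(v, v) - np.eye(3)
        K = np.array([[0, -v[2], v[1]],
                      [v[2], 0, -v[0]],
                      [-v[1], v[0], 0]])
        return np.eye(3) + K + K @ K / (1 + c)

    # align the tool z-axis with the local surface normal (synthetic cloud)
    cloud = np.random.default_rng(0).random((200, 3))
    cloud[:, 2] *= 0.01                        # nearly planar patch
    centroid, n = fit_plane_normal(cloud)
    R = rotation_aligning(np.array([0.0, 0.0, 1.0]), n)
    ```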

  17. Discrepancies between qualitative and quantitative evaluation of randomised controlled trial results: achieving clarity through mixed methods triangulation.

    Science.gov (United States)

    Tonkin-Crine, Sarah; Anthierens, Sibyl; Hood, Kerenza; Yardley, Lucy; Cals, Jochen W L; Francis, Nick A; Coenen, Samuel; van der Velden, Alike W; Godycki-Cwirko, Maciek; Llor, Carl; Butler, Chris C; Verheij, Theo J M; Goossens, Herman; Little, Paul

    2016-05-12

    Mixed methods are commonly used in health services research; however, data are not often integrated to explore complementarity of findings. A triangulation protocol is one approach to integrating such data. A retrospective triangulation protocol was carried out on mixed methods data collected as part of a process evaluation of a trial. The multi-country randomised controlled trial found that a web-based training in communication skills (including use of a patient booklet) and the use of a C-reactive protein (CRP) point-of-care test decreased antibiotic prescribing by general practitioners (GPs) for acute cough. The process evaluation investigated GPs' and patients' experiences of taking part in the trial. Three analysts independently compared findings across four data sets: qualitative data collected via semi-structured interviews with (1) 62 patients and (2) 66 GPs, and quantitative data collected via questionnaires with (3) 2886 patients and (4) 346 GPs. Pairwise comparisons were made between data sets and were categorised as agreement, partial agreement, dissonance or silence. Three instances of dissonance occurred in 39 independent findings. GPs and patients reported different views on the use of a CRP test. GPs felt that the test was useful in convincing patients to accept a no-antibiotic decision, but patient data suggested that this was unnecessary if a full explanation was given. Whilst qualitative data indicated all patients were generally satisfied with their consultation, quantitative data indicated highest levels of satisfaction for those receiving a detailed explanation from their GP with a booklet giving advice on self-care. Both qualitative and quantitative data sets indicated higher patient enablement for those in the communication groups who had received a booklet. Use of CRP tests does not appear to engage patients or influence illness perceptions and its effect is more centred on changing clinician behaviour. Communication skills and the patient ...

  18. Feminist Approaches to Triangulation: Uncovering Subjugated Knowledge and Fostering Social Change in Mixed Methods Research

    Science.gov (United States)

    Hesse-Biber, Sharlene

    2012-01-01

    This article explores the deployment of triangulation in the service of uncovering subjugated knowledge and promoting social change for women and other oppressed groups. Feminist approaches to mixed methods praxis create a tight link between the research problem and the research design. An analysis of selected case studies of feminist praxis…

  19. Generation of triangulated random surfaces by the Monte Carlo method in the grand canonical ensemble

    International Nuclear Information System (INIS)

    Zmushko, V.V.; Migdal, A.A.

    1987-01-01

    A model of triangulated random surfaces which is the discrete analog of the Polyakov string is considered. An algorithm is proposed which enables one to study the model by the Monte Carlo method in the grand canonical ensemble. Preliminary results on the determination of the critical index γ are presented

  20. Automated Photogrammetric Image Matching with Sift Algorithm and Delaunay Triangulation

    DEFF Research Database (Denmark)

    Karagiannis, Georgios; Antón Castro, Francesc/François; Mioc, Darka

    2016-01-01

    An algorithm for image matching of multi-sensor and multi-temporal satellite images is developed. The method is based on the SIFT feature detector proposed by Lowe in (Lowe, 1999). First, SIFT feature points are detected independently in two images (reference and sensed image). The features detec...... of each feature set for each image are computed. The isomorphism of the Delaunay triangulations is determined to guarantee the quality of the image matching. The algorithm is implemented in Matlab and tested on World-View 2, SPOT6 and TerraSAR-X image patches....

  1. Tradeoffs in Design Research: Development Oriented Triangulation

    NARCIS (Netherlands)

    Koen van Turnhout; Sabine Craenmehr; Robert Holwerda; Mark Menijn; Jan-Pieter Zwart; René Bakker

    2013-01-01

    The Development Oriented Triangulation (DOT) framework in this paper can spark and focus the debate about mixed-method approaches in HCI. The framework can be used to classify HCI methods, create mixed-method designs, and to align research activities in multidisciplinary projects. The framework is ...

  2. Quantum triangulations. Moduli spaces, strings, and quantum computing

    Energy Technology Data Exchange (ETDEWEB)

    Carfora, Mauro; Marzouli, Annalisa [Univ. degli Studi di Pavia (Italy). Dipt. Fisica Nucleare e Teorica; Istituto Nazionale di Fisica Nucleare e Teorica, Pavia (Italy)

    2012-07-01

    Research on polyhedral manifolds often points to unexpected connections between very distinct aspects of mathematics and physics. In particular, triangulated manifolds play quite a distinguished role in such settings as Riemann moduli space theory, strings and quantum gravity, topological quantum field theory, condensed matter physics, and critical phenomena. Not only do they provide a natural discrete analogue to the smooth manifolds on which physical theories are typically formulated, but their appearance is rather often a consequence of an underlying structure which naturally calls into play non-trivial aspects of representation theory, of complex analysis and topology in a way which makes manifest the basic geometric structures of the physical interactions involved. Yet, in most of the existing literature, triangulated manifolds are still merely viewed as a convenient discretization of a given physical theory to make it more amenable to numerical treatment. The motivation for these lecture notes is thus to provide an approachable introduction to this topic, emphasizing the conceptual aspects, and probing, through a set of case studies, the connection between triangulated manifolds and quantum physics as deeply as possible. This volume addresses applied mathematicians and theoretical physicists working in the field of quantum geometry and its applications. (orig.)

  3. Determination of Shift/Bias in Digital Aerial Triangulation of UAV Imagery Sequences

    Science.gov (United States)

    Wierzbicki, Damian

    2017-12-01

    UAV photogrammetry is currently characterized by largely automated and efficient data processing. Low-altitude imaging is increasingly important in applications such as city mapping, corridor mapping, road and pipeline inspection, and the mapping of large areas, e.g. forests. Additionally, high-resolution video (HD and larger) is increasingly used for low-altitude imaging: on the one hand it delivers many details and characteristics of ground surface features, and on the other hand it presents new challenges in data processing. Therefore, the determination of the elements of exterior orientation plays a substantial role in the detail of digital terrain models and in artefact-free orthophoto generation. In parallel, research is conducted on the quality of images acquired from UAVs and on the quality of products, e.g. orthophotos. Despite this rapid development, UAV photogrammetry still requires Automatic Aerial Triangulation (AAT) on the basis of GPS/INS observations and ground control points. During a low-altitude photogrammetric flight, the approximate elements of exterior orientation registered by the UAV are burdened with shift/bias errors. In this article, methods for determining the shift/bias error are presented. In the process of digital aerial triangulation two solutions are applied. In the first method, the shift/bias error was determined together with the drift/bias error, the elements of exterior orientation and the coordinates of the ground control points. In the second method, the shift/bias error was determined together with the elements of exterior orientation and the coordinates of the ground control points, with the drift/bias error set equal to 0. When the two methods were compared, the difference in the shift/bias error was more than ±0.01 m for all terrain coordinates (XYZ).
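
    A hedged sketch of the simplest reading of the first variant: if the GPS/INS-recorded projection centres differ from their bundle-adjusted values by a constant offset, the least-squares shift/bias is the mean per-axis residual (the full adjustment with drift terms is beyond this sketch; the array names are assumptions):

    ```python
    import numpy as np

    def estimate_shift_bias(recorded_xyz, adjusted_xyz):
        """Least-squares constant shift between GPS/INS-recorded camera
        positions and their bundle-adjusted values.
        Both arrays have shape (N, 3). For the model
            adjusted = recorded + shift
        the least-squares solution is the mean per-axis residual."""
        res = np.asarray(adjusted_xyz, float) - np.asarray(recorded_xyz, float)
        shift = res.mean(axis=0)
        rms = np.sqrt(((res - shift) ** 2).mean(axis=0))  # residual spread
        return shift, rms
    ```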

  4. Pre-processing for Triangulation of Probabilistic Networks

    NARCIS (Netherlands)

    Bodlaender, H.L.; Koster, A.M.C.A.; Eijkhof, F. van den; Gaag, L.C. van der

    2001-01-01

    The currently most efficient algorithm for inference with a probabilistic network builds upon a triangulation of the network's graph. In this paper, we show that pre-processing can help in finding good triangulations for probabilistic networks, that is, triangulations with a minimal maximum ...

  5. Gaussian vector fields on triangulated surfaces

    DEFF Research Database (Denmark)

    Ipsen, John H

    2016-01-01

    ... proven to be very useful for resolving the complex interplay between in-plane ordering of membranes and membrane conformations. In the present work we have developed a procedure for realistic representations of Gaussian models with in-plane vector degrees of freedom on a triangulated surface. The method ...

  6. Modification of the laser triangulation method for measuring the thickness of optical layers

    Science.gov (United States)

    Khramov, V. N.; Adamov, A. A.

    2018-04-01

    The problem of determining the thickness of thin films by the method of laser triangulation is considered. An expression is derived relating the film thickness to the distance between the focused beams on the photodetector. The chosen method can measure thicknesses in the range [0.1, 1] mm. Two individual light marks could be resolved for a minimum film thickness of 0.23 mm; with computer processing of the photographs, 0.10 mm was resolved. The obtained results can be used in ophthalmology for express diagnostics during surgical operations on the corneal layer.

  7. UAV PHOTOGRAMMETRY: BLOCK TRIANGULATION COMPARISONS

    Directory of Open Access Journals (Sweden)

    R. Gini

    2013-08-01

    UAV systems represent a flexible technology able to collect a large amount of high-resolution information, both for metric and interpretation uses. In the framework of experimental tests carried out at Dept. ICA of Politecnico di Milano to validate vector-sensor systems and to assess metric accuracies of images acquired by UAVs, a block of photos taken by a fixed-wing system was triangulated with several software packages. The test field is a rural area included in an Italian park ("Parco Adda Nord"), useful for studying flight and imagery performance on buildings, roads, and cultivated and uncultivated vegetation. The SenseFly UAV, equipped with a Canon Ixus 220HS camera, flew autonomously over the area at a height of 130 m, yielding a block of 49 images divided into 5 strips. Sixteen pre-signalized ground control points, surveyed in the area through a GPS (NRTK) survey, allowed the referencing of the block and accuracy analyses. Approximate values for the exterior orientation parameters (positions and attitudes) were recorded by the flight control system. The block was processed with several software packages: Erdas-LPS, EyeDEA (Univ. of Parma), Agisoft Photoscan and Pix4UAV, in assisted or automatic mode. Comparisons of the results are given in terms of differences among digital surface models, differences in orientation parameters and accuracies, when available. Moreover, image and ground point coordinates obtained by the various software packages were independently used as initial values in a comparative adjustment made by scientific in-house software, which can apply constraints to evaluate the effectiveness of different methods of point extraction and accuracies on ground check points.

  8. A Delaunay Triangulation Approach For Segmenting Clumps Of Nuclei

    International Nuclear Information System (INIS)

    Wen, Quan; Chang, Hang; Parvin, Bahram

    2009-01-01

    Cell-based fluorescence imaging assays have the potential to generate massive amounts of data, which require detailed quantitative analysis. Often, as a result of fixation, labeled nuclei overlap and create a clump of cells. However, it is important to quantify phenotypic readout on a cell-by-cell basis. In this paper, we propose a novel method for decomposing clumps of nuclei using high-level geometric constraints that are derived from low-level features of maximum curvature computed along the contour of each clump. Points of maximum curvature are used as vertices for a Delaunay triangulation (DT), which provides a set of edge hypotheses for decomposing a clump of nuclei. Each hypothesis is subsequently tested against a constraint satisfaction network for a near-optimum decomposition. The proposed method is compared with traditional techniques such as the watershed method with and without markers. The experimental results show that our approach can overcome the deficiencies of the traditional methods and is very effective in separating severely touching nuclei.

  9. 'Triangulating' AMPATH: Demonstration of a multi-perspective ...

    African Journals Online (AJOL)

    For strategic planning, the Kenyan HIV/AIDS programme AMPATH (Academic Model Providing Access to Healthcare) sought to evaluate its performance in 2006. The method used for this evaluation was termed 'triangulation,' because it used information from three different sources – patients, communities, and programme ...

  10. Generation of triangulated random surfaces by means of the Monte Carlo method in the grand canonical ensemble

    International Nuclear Information System (INIS)

    Zmushko, V.V.; Migdal, A.A.

    1987-01-01

    A model of triangulated random surfaces which is the discrete analogue of the Polyakov string is considered in the work. An algorithm is proposed which enables one to study the model by means of the Monte Carlo method in the grand canonical ensemble. Preliminary results are presented on the evaluation of the critical index γ

  11. Stereo-tomography in triangulated models

    Science.gov (United States)

    Yang, Kai; Shao, Wei-Dong; Xing, Feng-yuan; Xiong, Kai

    2018-04-01

    Stereo-tomography is a distinctive tomographic method. It is capable of estimating the scatterer position, the local dip of the scatterer and the background velocity simultaneously. Building a geologically consistent velocity model is always appealing for applied and earthquake seismologists. Differing from previous work that incorporates various regularization techniques into the cost function of stereo-tomography, we think extending stereo-tomography to triangulated models is the most straightforward way to achieve this goal. In this paper, we provide all the Fréchet derivatives of the stereo-tomographic data components with respect to the model components for a slowness-squared triangulated model (or sloth model) in 2D Cartesian coordinates, based on ray perturbation theory for interfaces. A sloth model representation is sparser than a conventional B-spline representation. A sparser model representation leads to a smaller stereo-tomographic (Fréchet) matrix, a higher-accuracy solution when solving linear equations, a faster convergence rate and a lower requirement on the quantity of data. Moreover, a quantitative representation of interfaces strengthens the relationships among different model components, which makes cross regularizations among these components, such as node coordinates, scatterer coordinates and scattering angles, more straightforward and easier to implement. The sensitivity analysis, the model resolution matrix analysis and a series of synthetic data examples demonstrate the correctness of the Fréchet derivatives, the applicability of the regularization terms and the robustness of stereo-tomography in triangulated models. This provides a solid theoretical foundation for real applications in the future.

  12. Delaunay Triangulation as a New Coverage Measurement Method in Wireless Sensor Network

    Science.gov (United States)

    Chizari, Hassan; Hosseini, Majid; Poston, Timothy; Razak, Shukor Abd; Abdullah, Abdul Hanan

    2011-01-01

    Sensing and communication coverage are among the most important trade-offs in Wireless Sensor Network (WSN) design. A minimum bound of sensing coverage is vital in scheduling, target tracking and redeployment phases, as well as providing communication coverage. Some methods measure the coverage as a percentage value, but detailed information has been missing. Two scenarios with equal coverage percentage may not have the same Quality of Coverage (QoC). In this paper, we propose a new coverage measurement method using Delaunay Triangulation (DT). This can provide the value for all coverage measurement tools. Moreover, it categorizes sensors as ‘fat’, ‘healthy’ or ‘thin’ to show the dense, optimal and scattered areas. It can also yield the largest empty area of sensors in the field. Simulation results show that the proposed DT method can achieve accurate coverage information, and provides many tools to compare QoC between different scenarios. PMID:22163792
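
    A hedged sketch of such DT-based coverage tools (the thresholds and the empty-area proxy are assumptions, not the paper's values): triangulate the nodes, take the largest circumradius as an estimate of the largest empty circle, and classify nodes by mean incident-edge length against the sensing radius.

    ```python
    import numpy as np
    from scipy.spatial import Delaunay

    def coverage_report(nodes, r_sense):
        """Delaunay-based coverage summary for 2D sensor positions `nodes`."""
        tri = Delaunay(nodes)
        # circumradius of each triangle: R = abc / (4 * area), via Heron
        a, b, c = (np.linalg.norm(nodes[tri.simplices[:, i]] -
                                  nodes[tri.simplices[:, (i + 1) % 3]], axis=1)
                   for i in range(3))
        s = (a + b + c) / 2
        area = np.sqrt(np.maximum(s * (s - a) * (s - b) * (s - c), 1e-12))
        R = a * b * c / (4 * area)            # Delaunay circumcircles are empty
        # mean incident-edge length per node -> fat / healthy / thin classes
        edges = {tuple(sorted((int(t[i]), int(t[(i + 1) % 3]))))
                 for t in tri.simplices for i in range(3)}
        mean_len = np.zeros(len(nodes))
        deg = np.zeros(len(nodes))
        for i, j in edges:
            d = np.linalg.norm(nodes[i] - nodes[j])
            mean_len[[i, j]] += d
            deg[[i, j]] += 1
        mean_len /= np.maximum(deg, 1)
        label = np.where(mean_len < 0.8 * r_sense, "fat",        # dense area
                np.where(mean_len > 1.6 * r_sense, "thin", "healthy"))
        return R.max(), label   # largest empty-circle estimate, node classes

    nodes = np.random.default_rng(2).random((100, 2)) * 50
    largest_gap, classes = coverage_report(nodes, r_sense=5.0)
    ```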

  13. Hamiltonian Cycles on Random Eulerian Triangulations

    DEFF Research Database (Denmark)

    Guitter, E.; Kristjansen, C.; Nielsen, Jakob Langgaard

    1998-01-01

    ... Considering the case n -> 0, this implies that the system of random Eulerian triangulations equipped with Hamiltonian cycles describes a c = -1 matter field coupled to 2D quantum gravity, as opposed to the system of usual random triangulations equipped with Hamiltonian cycles, which has c = -2. Hence, in this case...

  14. Putting a cap on causality violations in causal dynamical triangulations

    International Nuclear Information System (INIS)

    Ambjoern, Jan; Loll, Renate; Westra, Willem; Zohren, Stefan

    2007-01-01

    The formalism of causal dynamical triangulations (CDT) provides us with a non-perturbatively defined model of quantum gravity, where the sum over histories includes only causal space-time histories. Path integrals of CDT and their continuum limits have been studied in two, three and four dimensions. Here we investigate a generalization of the two-dimensional CDT model, where the causality constraint is partially lifted by introducing branching points with a weight g_s, and demonstrate that the system can be solved analytically in the genus-zero sector. The solution is analytic in a neighborhood around weight g_s = 0 and cannot be analytically continued to g_s = ∞, where the branching is entirely geometric and where one would formally recover standard Euclidean two-dimensional quantum gravity defined via dynamical triangulations or Liouville theory.

  15. A study on the effect of different image centres on stereo triangulation accuracy

    CSIR Research Space (South Africa)

    De Villiers, J

    2015-11-01

    This paper evaluates the effect of mixing the distortion centre, principal point and arithmetic image centre on the distortion correction, focal length determination and resulting real-world stereo vision triangulation. A robotic arm is used...

  16. Marginal elasticity of periodic triangulated origami

    Science.gov (United States)

    Chen, Bryan; Sussman, Dan; Lubensky, Tom; Santangelo, Chris

    Origami, the classical art of folding paper, has inspired much recent work on assembling complex 3D structures from planar sheets. Origami, and more generally hinged structures with rigid panels in which all faces are triangles, have special properties due to a bulk balance of mechanical degrees of freedom and constraints. We study two families of periodic triangulated origami structures, one based on the Miura ori and one based on a kagome-like pattern due to Ron Resch. We point out the consequences of the balance of degrees of freedom and constraints for these "metamaterial plates" and show how the elasticity can be tuned by changing the unit cell geometry.

  17. Internet information triangulation: Design theory and prototype evaluation

    NARCIS (Netherlands)

    Wijnhoven, Alphonsus B.J.M.; Brinkhuis, Michel

    2014-01-01

    Many discussions exist regarding the credibility of information on the Internet. Similar discussions happen on the interpretation of social scientific research data, for which information triangulation has been proposed as a useful method. In this article, we explore a design theory—consisting of a

  18. Potentiation of E-4031-induced torsade de pointes by HMR1556 or ATX-II is not predicted by action potential short-term variability or triangulation.

    Science.gov (United States)

    Michael, G; Dempster, J; Kane, K A; Coker, S J

    2007-12-01

    Torsade de pointes (TdP) can be induced by a reduction in cardiac repolarizing capacity. The aim of this study was to assess whether IKs blockade or enhancement of INa could potentiate TdP induced by IKr blockade and to investigate whether short-term variability (STV) or triangulation of action potentials preceded TdP. Experiments were performed in open-chest, pentobarbital-anaesthetized, alpha-1-adrenoceptor-stimulated, male New Zealand White rabbits, which received three consecutive i.v. infusions of either the IKr blocker E-4031 (1, 3 and 10 nmol kg⁻¹ min⁻¹), the IKs blocker HMR1556 (25, 75 and 250 nmol kg⁻¹ min⁻¹) or E-4031 and HMR1556 combined. In a second study rabbits received either the same doses of E-4031, the INa enhancer, ATX-II (0.4, 1.2 and 4.0 nmol kg⁻¹) or both of these drugs. ECGs and epicardial monophasic action potentials were recorded. HMR1556 alone did not cause TdP but increased E-4031-induced TdP from 25 to 80%. ATX-II alone caused TdP in 38% of rabbits, as did E-4031; 75% of rabbits receiving both drugs had TdP. QT intervals were prolonged by all drugs but the extent of QT prolongation was not related to the occurrence of TdP. No changes in STV were detected and triangulation was only increased after TdP occurred. Giving modulators of ion channels in combination substantially increased TdP but, in this model, neither STV nor triangulation of action potentials could predict TdP.
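
    For readers unfamiliar with the two candidate predictors examined here, the sketch below computes them from a series of action potential durations (APD). The normalisation by √2 follows the common beat-to-beat STV definition of Thomsen et al., and triangulation as APD90 - APD30 follows Hondeghem's usage; both are conventions from the wider literature, not formulas quoted from this record.

    import numpy as np

    def short_term_variability(apd):
        """Beat-to-beat STV: mean |APD(n+1) - APD(n)| / sqrt(2), i.e. the mean
        distance to the identity line of a Poincare plot of consecutive APDs."""
        apd = np.asarray(apd, dtype=float)
        return np.mean(np.abs(np.diff(apd))) / np.sqrt(2.0)

    def ap_triangulation(apd90, apd30):
        """Triangulation: repolarisation time from 30% to 90%, APD90 - APD30."""
        return np.asarray(apd90, float) - np.asarray(apd30, float)

    # Example on a synthetic APD series (ms)
    apd = 250 + np.random.default_rng(1).normal(0, 3, size=30)
    print(f"STV = {short_term_variability(apd):.2f} ms")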

  19. A pedagogic JavaScript program for point location strategies

    KAUST Repository

    de Castro, Pedro; Devillers, Olivier

    2011-01-01

    Point location in triangulations is a classical problem in computational geometry, and walking in a triangulation is often used as the starting point for several nice point location strategies. We present a pedagogic JavaScript program demonstrating some of these strategies, which is available at: www-sop.inria.fr/geometrica/demo/point location strategies/.
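
    The demo itself is written in JavaScript; as general background (not code from the demo), a minimal Python sketch of the classic "visibility walk" used by such strategies is given below, reusing SciPy's triangulation structure. The orientation-test formulation is the textbook one.

    import numpy as np
    from scipy.spatial import Delaunay

    def orient(a, b, p):
        # Twice the signed area of triangle (a, b, p).
        return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

    def visibility_walk(tri, q, start=0):
        """Walk towards q: whenever q and the vertex opposite an edge lie on
        different sides of that edge, step into the neighbor across it.
        Terminates on Delaunay triangulations (arbitrary triangulations need
        a randomized variant to avoid cycles)."""
        t = start
        while True:
            s = tri.simplices[t]
            for i in range(3):
                a, b = tri.points[s[(i + 1) % 3]], tri.points[s[(i + 2) % 3]]
                c = tri.points[s[i]]              # vertex opposite edge (a, b)
                if orient(a, b, q) * orient(a, b, c) < 0:
                    t = tri.neighbors[t][i]       # neighbor opposite vertex i
                    if t == -1:
                        return None               # walked out of the hull
                    break
            else:
                return t                          # no separating edge: found

    pts = np.random.default_rng(2).random((100, 2))
    tri = Delaunay(pts)
    print(visibility_walk(tri, [0.5, 0.5]), tri.find_simplex([0.5, 0.5]))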

  20. Grey signal processing and data reconstruction in the non-diffracting beam triangulation measurement system

    Science.gov (United States)

    Meng, Hao; Wang, Zhongyu; Fu, Jihua

    2008-12-01

    The non-diffracting beam triangulation measurement system possesses the advantages of a longer measurement range, higher theoretical measurement accuracy and higher resolution over the traditional laser triangulation measurement system. Unfortunately, the measurement accuracy of the system is greatly degraded by speckle noise, CCD photoelectric noise and background light noise in practical applications. Hence, effective signal processing methods must be applied to improve the measurement accuracy. In this paper a novel and effective method for removing the noise in the non-diffracting beam triangulation measurement system is proposed. In the method, grey system theory is used to process and reconstruct the measurement signal. By implementing grey dynamic filtering based on the dynamic GM(1,1) model, the noise can be effectively removed from the primary measurement data and the measurement accuracy of the system improved as a result.
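
    As background, here is a minimal sketch of a (static) GM(1,1) grey-model fit; the dynamic filtering described above refits such a model over a sliding window, which is omitted here. The five-point series is an illustrative placeholder.

    import numpy as np

    def gm11_smooth(x0):
        """Fit a GM(1,1) grey model to a positive series x0 and return the
        model-reconstructed (smoothed) series. Steps: accumulate (AGO), fit
        dx1/dt + a*x1 = b by least squares against the background sequence,
        then difference the fitted accumulated series."""
        x0 = np.asarray(x0, dtype=float)
        x1 = np.cumsum(x0)                        # accumulated series
        z1 = 0.5 * (x1[1:] + x1[:-1])             # background (mean) sequence
        B = np.column_stack([-z1, np.ones_like(z1)])
        a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
        k = np.arange(len(x0))
        x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
        return np.concatenate([[x0[0]], np.diff(x1_hat)])

    print(gm11_smooth([2.87, 3.03, 3.04, 3.26, 3.41]).round(3))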

  1. The Ising model on the dynamically triangulated random surface

    International Nuclear Information System (INIS)

    Aleinov, I.D.; Migelal, A.A.; Zmushkow, U.V.

    1990-01-01

    The critical properties of the Ising model on a dynamically triangulated random surface embedded in D-dimensional Euclidean space are investigated. The strong coupling expansion method is used. The transition to the thermodynamic limit is performed by means of continued fractions.

  2. Triangulation in rewriting

    NARCIS (Netherlands)

    Oostrom, V. van; Zantema, Hans

    2012-01-01

    We introduce a process, dubbed triangulation, turning any rewrite relation into a confluent one. It is more direct than usual completion, in the sense that objects connected by a peak are directly oriented rather than their normal forms. We investigate conditions under which this process preserves

  3. Triangulation-based 3D surveying borescope

    Science.gov (United States)

    Pulwer, S.; Steglich, P.; Villringer, C.; Bauer, J.; Burger, M.; Franz, M.; Grieshober, K.; Wirth, F.; Blondeau, J.; Rautenberg, J.; Mouti, S.; Schrader, S.

    2016-04-01

    In this work, a measurement concept based on triangulation was developed for borescopic 3D surveying of surface defects. The integration of such a measurement system into a borescope environment requires excellent space utilization. The triangulation angle, the projected pattern, the numerical apertures of the optical system, and the viewing angle were calculated using partial coherence imaging and geometric optical raytracing methods. Additionally, optical aberrations and defocus were accounted for by including Zernike polynomial coefficients. The measurement system is able to measure objects with a size of 50 μm in all dimensions with an accuracy of ±5 μm. To manage the issue of a low depth of field while using a high-resolution optical system, a wavelength-dependent aperture was integrated. Thereby, we are able to control the depth of field and resolution of the optical system and can use the borescope either in measurement mode, with high resolution and low depth of field, or in inspection mode, with low resolution and higher depth of field. First measurements with a demonstrator system are in good agreement with our simulations.

  4. A TQFT of Turaev-Viro type on shaped triangulations

    Energy Technology Data Exchange (ETDEWEB)

    Kashaev, Rinat [Geneva Univ. (Switzerland); Luo, Feng [Rutgers Univ., Piscataway, NJ (United States). Dept. of Mathematics; Vartanov, Grigory [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)

    2012-10-15

    A shaped triangulation is a finite triangulation of an oriented pseudo three manifold where each tetrahedron carries dihedral angles of an ideal hyperbolic tetrahedron. To each shaped triangulation, we associate a quantum partition function in the form of an absolutely convergent state integral which is invariant under shaped 3-2 Pachner moves and invariant with respect to shape gauge transformations generated by total dihedral angles around internal edges through the Neumann-Zagier Poisson bracket. Similarly to Turaev-Viro theory, the state variables live on edges of the triangulation but take their values on the whole real axis. The tetrahedral weight functions are composed of three hyperbolic gamma functions in a way that they enjoy a manifest tetrahedral symmetry. We conjecture that for shaped triangulations of closed 3-manifolds, our partition function is twice the absolute value squared of the partition function of the Teichmüller TQFT defined by Andersen and Kashaev. This is similar to the known relationship between the Turaev-Viro and the Witten-Reshetikhin-Turaev invariants of three manifolds. We also discuss interpretations of our construction in terms of three-dimensional supersymmetric field theories related to triangulated three-dimensional manifolds.

  5. A TQFT of Turaev-Viro type on shaped triangulations

    International Nuclear Information System (INIS)

    Kashaev, Rinat; Luo, Feng

    2012-10-01

    A shaped triangulation is a finite triangulation of an oriented pseudo three manifold where each tetrahedron carries dihedral angles of an ideal hyperbolic tetrahedron. To each shaped triangulation, we associate a quantum partition function in the form of an absolutely convergent state integral which is invariant under shaped 3-2 Pachner moves and invariant with respect to shape gauge transformations generated by total dihedral angles around internal edges through the Neumann-Zagier Poisson bracket. Similarly to Turaev-Viro theory, the state variables live on edges of the triangulation but take their values on the whole real axis. The tetrahedral weight functions are composed of three hyperbolic gamma functions in a way that they enjoy a manifest tetrahedral symmetry. We conjecture that for shaped triangulations of closed 3-manifolds, our partition function is twice the absolute value squared of the partition function of the Teichmüller TQFT defined by Andersen and Kashaev. This is similar to the known relationship between the Turaev-Viro and the Witten-Reshetikhin-Turaev invariants of three manifolds. We also discuss interpretations of our construction in terms of three-dimensional supersymmetric field theories related to triangulated three-dimensional manifolds.

  6. Non-degenerated Ground States and Low-degenerated Excited States in the Antiferromagnetic Ising Model on Triangulations

    Science.gov (United States)

    Jiménez, Andrea

    2014-02-01

    We study the unexpected asymptotic behavior of the degeneracy of the first few energy levels in the antiferromagnetic Ising model on triangulations of closed Riemann surfaces. There are strong mathematical and physical reasons to expect that the number of ground states (i.e., degeneracy) of the antiferromagnetic Ising model on the triangulations of a fixed closed Riemann surface is exponential in the number of vertices. In the set of plane triangulations, the degeneracy equals the number of perfect matchings of the geometric duals, and thus it is exponential by a recent result of Chudnovsky and Seymour. From the physics point of view, antiferromagnetic triangulations are geometrically frustrated systems, and in such systems exponential degeneracy is predicted. We present results that contradict these predictions. We prove that for each closed Riemann surface S of positive genus, there are sequences of triangulations of S with exactly one ground state. One possible explanation of this phenomenon is that exponential degeneracy would be found in the excited states with energy close to the ground state energy. However, as our second result, we show the existence of a sequence of triangulations of a closed Riemann surface of genus 10 with exactly one ground state such that the degeneracy of each of the 1st, 2nd, 3rd and 4th excited energy levels belongs to O(n), O(n²), O(n³) and O(n⁴), respectively.

  7. Strongly minimal triangulations of (S² × S¹)#3 and (S² ×~ S¹)#3

    Indian Academy of Sciences (India)

    We show that there are exactly 12 such triangulations up to isomorphism, 10 of which are orientable. Keywords: stacked sphere; tight neighbourly triangulation; minimal triangulation.

  8. Observation, innovation and triangulation

    DEFF Research Database (Denmark)

    Hetmar, Vibeke

    2007-01-01

    on experiences from a pilot project in three different classrooms methodological possibilities and problems are presented and discussed: 1) educational criticism, including the concepts of positions, perspectives and connoisseurship, 2) classroom observations and 3) triangulation as a methodological tool....

  9. GEOPOSITIONING PRECISION ANALYSIS OF MULTIPLE IMAGE TRIANGULATION USING LRO NAC LUNAR IMAGES

    Directory of Open Access Journals (Sweden)

    K. Di

    2016-06-01

    This paper presents an empirical analysis of the geopositioning precision of multiple image triangulation using Lunar Reconnaissance Orbiter Camera (LROC) Narrow Angle Camera (NAC) images at the Chang’e-3 (CE-3) landing site. Nine LROC NAC images are selected for comparative analysis of geopositioning precision. Rigorous sensor models of the images are established based on collinearity equations, with interior and exterior orientation elements retrieved from the corresponding SPICE kernels. Rational polynomial coefficients (RPCs) of each image are derived by least squares fitting using a vast number of virtual control points generated according to the rigorous sensor models. Experiments with different combinations of images are performed for comparison. The results demonstrate that the plane coordinates can achieve a precision of 0.54 m to 2.54 m, with a height precision of 0.71 m to 8.16 m, when only two images are used for three-dimensional triangulation. There is a general trend that the geopositioning precision, especially the height precision, improves as the convergence angle of the two images increases from several degrees to about 50°. However, the image matching precision should also be taken into consideration when choosing image pairs for triangulation. The precisions obtained using all 9 images are 0.60 m, 0.50 m and 1.23 m in the along-track, cross-track and height directions, which are better than most combinations of two or more images. However, triangulation with fewer, well-selected images can produce better precision than using all the images.
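
    The reported trend (height precision improving with convergence angle) can be reproduced qualitatively with a toy two-ray triangulation experiment; the geometry, noise level and units below are invented for illustration and are unrelated to the LROC NAC data.

    import numpy as np

    def intersect_rays(c1, a1, c2, a2):
        """Intersect two 2-D rays given origins c1, c2 and direction angles."""
        d1 = np.array([np.cos(a1), np.sin(a1)])
        d2 = np.array([np.cos(a2), np.sin(a2)])
        t, _ = np.linalg.solve(np.column_stack([d1, -d2]), c2 - c1)
        return c1 + t * d1

    rng = np.random.default_rng(0)
    target = np.array([0.0, 100.0])       # ground point 100 units below track
    sigma = np.radians(0.005)             # hypothetical angular noise per ray
    for conv in (5, 15, 30, 50):          # convergence angle (degrees)
        half = np.radians(conv) / 2.0
        c1 = np.array([-100.0 * np.tan(half), 0.0])
        c2 = np.array([+100.0 * np.tan(half), 0.0])
        samples = []
        for _ in range(2000):
            a1 = np.arctan2(*(target - c1)[::-1]) + rng.normal(0.0, sigma)
            a2 = np.arctan2(*(target - c2)[::-1]) + rng.normal(0.0, sigma)
            samples.append(intersect_rays(c1, a1, c2, a2))
        err = np.array(samples)[:, 1].std()
        print(conv, "deg convergence -> height std", round(err, 3))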

  10. Degree-regular triangulations of torus and Klein bottle

    Indian Academy of Sciences (India)

    A triangulation of a connected closed surface is called degree-regular if each of its vertices has the same degree. ... In [5], Datta and Nilakantan have classified all the degree-regular triangulations of closed surfaces on at most 11 vertices.

  11. Looseness and Independence Number of Triangulations on Closed Surfaces

    Directory of Open Access Journals (Sweden)

    Nakamoto Atsuhiro

    2016-08-01

    The looseness of a triangulation G on a closed surface F², denoted by ξ(G), is defined as the minimum number k such that for any surjection c : V(G) → {1, 2, . . . , k + 3}, there is a face uvw of G with c(u), c(v) and c(w) all distinct. We shall bound ξ(G) for triangulations G on closed surfaces by the independence number of G, denoted by α(G). In particular, for a triangulation G on the sphere, we have

  12. Label triangulation

    International Nuclear Information System (INIS)

    May, R.P.

    1983-01-01

    Label triangulation (LT) with neutrons allows the investigation of the quaternary structure of biological multicomponent complexes under native conditions. Provided that the complex can be fully separated into and reconstituted from its individual (protonated and deuterated) components, small angle neutron scattering (SANS) can give selective information on the shapes of and pair distances between these components. Following basic geometrical rules, the spatial arrangement of the components can be reconstructed from these data. LT has so far been successfully applied to the small and large ribosomal subunits and the transcriptase of E. coli. (author)

  13. Triangulation of the monophasic action potential causes flattening of the electrocardiographic T-wave

    DEFF Research Database (Denmark)

    Bhuiyan, Tanveer Ahmed; Graff, Claus; Thomsen, Morten Bækgaard

    2012-01-01

    It has been proposed that triangulation of the cardiac action potential manifests as a broadened, more flat and notched T-wave on the ECG, but to what extent such morphology characteristics are indicative of triangulation is less clear. In this paper, we have analyzed the morphological changes of the action potential under the effect of the IKr blocker sertindole and associated these changes with concurrent changes in the morphology of electrocardiographic T-waves in dogs. We show that, under the effect of sertindole, the peak changes in the morphology of action potentials occur at time points similar to those observed for the peak changes in T-wave morphology on the ECG. We further show that the association between action potential shape and ECG shape is dose-dependent and most prominent at the time corresponding to phase 3 of the action potential.

  14. Interprofessional collaboration from nurses and physicians – A triangulation of quantitative and qualitative data

    Science.gov (United States)

    Schärli, Marianne; Müller, Rita; Martin, Jacqueline S; Spichiger, Elisabeth; Spirig, Rebecca

    2017-01-01

    Background: Interprofessional collaboration between nurses and physicians is a recurrent challenge in daily clinical practice. To ameliorate the situation, quantitative or qualitative studies are conducted. However, the results of these studies have often been limited by the methods chosen. Aim: To describe the synthesis of interprofessional collaboration from the nursing perspective by triangulating quantitative and qualitative data. Method: Data triangulation was performed as a sub-project of the interprofessional Sinergia DRG Research program. Initially, quantitative and qualitative data were analyzed separately in a mixed methods design. By means of triangulation, a "meta-matrix" resulted in a four-step process. Results: The "meta-matrix" displays all relevant quantitative and qualitative results as well as their interrelations on one page. Relevance, influencing factors as well as consequences of interprofessional collaboration for patients, relatives and systems become visible. Conclusion: For the first time, the interprofessional collaboration from the nursing perspective at five Swiss hospitals is shown in a "meta-matrix". The consequences of insufficient collaboration between nurses and physicians are considerable. This is why it is necessary to invest in interprofessional concepts. In the "meta-matrix", the factors which influence the interprofessional collaboration positively or negatively are visible.

  15. A general and Robust Ray-Casting-Based Algorithm for Triangulating Surfaces at the Nanoscale

    Science.gov (United States)

    Decherchi, Sergio; Rocchia, Walter

    2013-01-01

    We present a general, robust, and efficient ray-casting-based approach to triangulating complex manifold surfaces arising in the nano-bioscience field. This feature is inserted in a more extended framework that: i) builds the molecular surface of nanometric systems according to several existing definitions, ii) can import external meshes, iii) performs accurate surface area estimation, iv) performs volume estimation, cavity detection, and conditional volume filling, and v) can color the points of a grid according to their locations with respect to the given surface. We implemented our methods in the publicly available NanoShaper software suite (www.electrostaticszone.eu). Robustness is achieved using the CGAL library and an ad hoc ray-casting technique. Our approach can deal with any manifold surface (including nonmolecular ones). Those explicitly treated here are the Connolly-Richards (SES), the Skin, and the Gaussian surfaces. Test results indicate that it is robust to rotation, scale, and atom displacement. This last aspect is evidenced by cavity detection of the highly symmetric structure of fullerene, which fails when attempted by MSMS and has problems in EDTSurf. In terms of timings, NanoShaper builds the Skin surface three times faster than the single-threaded version in Lindow et al. on a 100,000-atom protein and triangulates it at least ten times more rapidly than the Kruithof algorithm. NanoShaper was integrated with the DelPhi Poisson-Boltzmann equation solver. Its SES grid coloring outperformed the DelPhi counterpart. To test the viability of our method on large systems, we chose one of the biggest molecular structures in the Protein Data Bank, namely the 1VSZ entry, which corresponds to the human adenovirus (180,000 atoms after hydrogen addition). We were able to triangulate the corresponding SES and Skin surfaces (6.2 and 7.0 million triangles, respectively, at a scale of 2 grids per Å) on a mid-range workstation. PMID:23577073

  16. Interferometer predictions with triangulated images

    DEFF Research Database (Denmark)

    Brinch, Christian; Dullemond, C. P.

    2014-01-01

    the synthetic model images. To get the correct values of these integrals, the model images must have the right size and resolution. Insufficient care in these choices can lead to wrong results. We present a new general-purpose scheme for the computation of visibilities of radiative transfer images. Our method requires a model image that is a list of intensities at arbitrarily placed positions on the image-plane. It creates a triangulated grid from these vertices, and assumes that the intensity inside each triangle of the grid is a linear function. The Fourier integral over each triangle is then evaluated with an analytic expression, and the complex visibility of the entire image is then the sum over all triangles. The result is a robust Fourier transform that does not suffer from aliasing effects due to grid regularities. The method automatically ensures that all structure contained in the model gets reflected...

  17. Dynamical triangulated fermionic surfaces

    International Nuclear Information System (INIS)

    Ambjoern, J.; Varsted, S.

    1990-12-01

    We perform Monte Carlo simulations of randomly triangulated random surfaces which have fermionic world-sheet scalars θ_i associated with each vertex i in addition to the usual bosonic world-sheet scalars χ_i^μ. The fermionic degrees of freedom force the internal metrics of the string to be less singular than the internal metric of the pure bosonic string. (orig.)

  18. An Efficient Method to Create Digital Terrain Models from Point Clouds Collected by Mobile LiDAR Systems

    Science.gov (United States)

    Gézero, L.; Antunes, C.

    2017-05-01

    Digital terrain models (DTM) play an essential role in all types of road maintenance, water supply and sanitation projects. The demand for such information is greater in developing countries, where the lack of infrastructure is higher. In recent years, the use of Mobile LiDAR Systems (MLS) has proved to be a very efficient technique for the acquisition of precise and dense point clouds. These point clouds can be a solution for obtaining the data needed to produce DTMs in remote areas, mainly due to the safety, precision, speed of acquisition and the detail of the information gathered. However, point cloud filtering and algorithms to separate "terrain points" from "non-terrain points", quickly and consistently, remain a challenge that has caught the interest of researchers. This work presents a method to create a DTM from point clouds collected by MLS. The method is based on two iterative steps. The first step reduces the point cloud to a set of points that represent the terrain's shape, with the distance between points inversely proportional to the terrain variation. The second step applies a Delaunay triangulation to the points resulting from the first step. The achieved results encourage a wider use of this technology as a solution for large-scale DTM production in remote areas.
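
    A compressed sketch of the two-step pipeline follows. Step 1 here keeps the lowest return per grid cell, a common ground-filtering heuristic standing in for the paper's variable-density reduction (where point spacing adapts to terrain variation); step 2 is the Delaunay triangulation of the retained points into a TIN.

    import numpy as np
    from scipy.spatial import Delaunay

    def lowest_point_per_cell(xyz, cell=1.0):
        """Step 1 (simplified): keep the lowest point in every cell x cell
        column as a cheap proxy for separating terrain from other returns."""
        keys = np.floor(xyz[:, :2] / cell).astype(np.int64)
        best = {}
        for i, key in enumerate(map(tuple, keys)):
            if key not in best or xyz[i, 2] < xyz[best[key], 2]:
                best[key] = i
        return xyz[sorted(best.values())]

    def build_tin(xyz, cell=1.0):
        """Step 2: triangulate the reduced ground points (x, y) into a TIN."""
        ground = lowest_point_per_cell(xyz, cell)
        return ground, Delaunay(ground[:, :2])

    cloud = np.random.default_rng(3).uniform(0, 50, size=(20000, 3))
    ground, tin = build_tin(cloud, cell=2.0)
    print(len(ground), "ground points,", len(tin.simplices), "triangles")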

  19. Optimizing 3D Triangulations to Recapture Sharp Edges

    DEFF Research Database (Denmark)

    Bærentzen, Jakob Andreas

    2006-01-01

    In this report, a technique for optimizing 3D triangulations is proposed. The method seeks to minimize an energy defined as a sum of energy terms for each edge in a triangle mesh. The main contribution is a novel per-edge energy which strikes a balance between penalizing dihedral angle yet allowing sharp edges. The energy is minimized using edge swapping, and this can be done either in a greedy fashion or using simulated annealing. The latter is more costly, but effectively avoids local minima. The method has been used on a number of models. Particularly good results have been obtained on digital...

  20. Triangulating and guarding realistic polygons

    NARCIS (Netherlands)

    Aloupis, G.; Bose, P.; Dujmovic, V.; Gray, C.M.; Langerman, S.; Speckmann, B.

    2014-01-01

    We propose a new model of realistic input: k-guardable objects. An object is k-guardable if its boundary can be seen by k guards. We show that k-guardable polygons generalize two previously identified classes of realistic input. Following this, we give two simple algorithms for triangulating

  1. Employee-satisfaction: A triangulation approach

    Directory of Open Access Journals (Sweden)

    P. J. Visser

    1997-06-01

    The research on employee satisfaction was conducted in the manufacturing industry. The sample consisted of 543 employees. The methodology could be described as a "triangulation approach", where a combination of quantitative and qualitative measurements was utilised and the results of both types of measurement integrated in the study of the construct. The research confirms existing findings that, although the measurement of dimensions such as equitable rewards, working conditions, supportive colleagues and job content yields results on the level of employee satisfaction, a single question, namely "How satisfied are you with your job?", compares favourably with the general index. The findings also suggest the advantage of complementing the quantitative data with qualitative information obtained from individual interviews. The conclusions confirm the value of a qualitative method in cross-cultural research in an African environment.

  2. Triangulating and guarding realistic polygons

    NARCIS (Netherlands)

    Aloupis, G.; Bose, P.; Dujmovic, V.; Gray, C.M.; Langerman, S.; Speckmann, B.

    2008-01-01

    We propose a new model of realistic input: k-guardable objects. An object is k-guardable if its boundary can be seen by k guards in the interior of the object. In this abstract, we describe a simple algorithm for triangulating k-guardable polygons. Our algorithm, which is easily implementable, takes

  3. Measuring and Controlling Fairness of Triangulations

    KAUST Repository

    Jiang, Caigui; Günther, Felix; Wallner, Johannes; Pottmann, Helmut

    2016-01-01

    of fairness must take new aspects into account. We use concepts from discrete differential geometry (star-shaped Gauss images) to express fairness, and we also demonstrate how fairness can be incorporated into interactive geometric design of triangulated

  4. Path integral measure and triangulation independence in discrete gravity

    Science.gov (United States)

    Dittrich, Bianca; Steinhaus, Sebastian

    2012-02-01

    A path integral measure for gravity should also preserve the fundamental symmetry of general relativity, which is diffeomorphism symmetry. In previous work, we argued that a successful implementation of this symmetry into discrete quantum gravity models would imply discretization independence. We therefore consider the requirement of triangulation independence for the measure in (linearized) Regge calculus, which is a discrete model for quantum gravity, appearing in the semi-classical limit of spin foam models. To this end we develop a technique to evaluate the linearized Regge action associated to Pachner moves in 3D and 4D and show that it has a simple, factorized structure. We succeed in finding a local measure for 3D (linearized) Regge calculus that leads to triangulation independence. This measure factor coincides with the asymptotics of the Ponzano Regge Model, a 3D spin foam model for gravity. We furthermore discuss to which extent one can find a triangulation independent measure for 4D Regge calculus and how such a measure would be related to a quantum model for 4D flat space. To this end, we also determine the dependence of classical Regge calculus on the choice of triangulation in 3D and 4D.

  5. Triangulation in Friedmann's cosmological model

    International Nuclear Information System (INIS)

    Fagundes, H.V.

    1977-01-01

    In Friedmann's model, physical 3-space has curvature K = constant. In the cases of greatest interest (K ≠ 0), triangulation for the measurement of great distances should be based on non-Euclidean geometries: Riemannian (or doubly elliptic) geometry for a closed universe and Bolyai-Lobachevsky (or hyperbolic) geometry for an open universe.

  6. The computation of fixed points and applications

    CERN Document Server

    Todd, Michael J

    1976-01-01

    Fixed-point algorithms have diverse applications in economics, optimization, game theory and the numerical solution of boundary-value problems. Since Scarf's pioneering work [56,57] on obtaining approximate fixed points of continuous mappings, a great deal of research has been done in extending the applicability and improving the efficiency of fixed-point methods. Much of this work is available only in research papers, although Scarf's book [58] gives a remarkably clear exposition of the power of fixed-point methods. However, the algorithms described by Scarf have been superseded by the more sophisticated restart and homotopy techniques of Merrill [~8,~9] and Eaves and Saigal [1~,16]. To understand the more efficient algorithms one must become familiar with the notions of triangulation and simplicial approximation, whereas Scarf stresses the concept of primitive set. These notes are intended to introduce to a wider audience the most recent fixed-point methods and their applications. Our approach is therefore ...

  7. Triangulating laser profilometer as a navigational aid for the blind: optical aspects

    Science.gov (United States)

    Farcy, R.; Denise, B.; Damaschini, R.

    1996-03-01

    We propose a navigational aid approach for the blind that relies on active optical profilometry with real-time electrotactile interfacing on the skin. Here we are concerned with the optical parts of this system. We point out the particular requirements the profilometer must satisfy to meet the needs of blind people. We show experimentally that an adequate compromise is possible, consisting of a compact class I IR laser-diode triangulation profilometer with the following characteristics: angular resolution, 20-ms acquisition time per distance measurement, and a 60-degree angular scanning field.

  8. An overview of the stereo correlation and triangulation formulations used in DICe.

    Energy Technology Data Exchange (ETDEWEB)

    Turner, Daniel Z. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-02-01

    This document provides a detailed overview of the stereo correlation algorithm and the triangulation formulation used in the Digital Image Correlation Engine (DICe) to triangulate three-dimensional motion in space given the image coordinates and camera calibration parameters.
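
    The report itself details DICe's exact formulation; as generic background, the standard linear (direct linear transform) two-view triangulation that such pipelines build on can be sketched as follows, with the projection matrices and pixel coordinates as placeholder inputs.

    import numpy as np

    def triangulate_dlt(P1, P2, x1, x2):
        """Homogeneous linear (DLT) triangulation of one 3-D point.
        P1, P2: 3x4 projection matrices; x1, x2: matching (u, v) pixels.
        Each view contributes rows u*P[2] - P[0] and v*P[2] - P[1]; the
        least-squares solution is the last right singular vector of A."""
        A = np.vstack([x1[0] * P1[2] - P1[0],
                       x1[1] * P1[2] - P1[1],
                       x2[0] * P2[2] - P2[0],
                       x2[1] * P2[2] - P2[1]])
        X = np.linalg.svd(A)[2][-1]
        return X[:3] / X[3]

    # Toy check: two axis-aligned cameras with unit baseline observing a point.
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
    X = np.array([0.2, 0.1, 5.0, 1.0])
    x1 = (P1 @ X)[:2] / (P1 @ X)[2]
    x2 = (P2 @ X)[:2] / (P2 @ X)[2]
    print(triangulate_dlt(P1, P2, x1, x2))    # -> approx [0.2, 0.1, 5.0]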

  9. Solving the Einstein constraint equations on multi-block triangulations using finite element methods

    Energy Technology Data Exchange (ETDEWEB)

    Korobkin, Oleg; Pazos, Enrique [Department of Physics and Astronomy, Louisiana State University, Baton Rouge, LA 70803 (United States); Aksoylu, Burak [Center for Computation and Technology, Louisiana State University, Baton Rouge, LA 70803 (United States); Holst, Michael [Department of Mathematics, University of California at San Diego 9500 Gilman Drive La Jolla, CA 92093-0112 (United States); Tiglio, Manuel [Department of Physics, University of Maryland, College Park, MD 20742 (United States)

    2009-07-21

    In order to generate initial data for nonlinear relativistic simulations, one needs to solve the Einstein constraints, which can be cast into a coupled set of nonlinear elliptic equations. Here we present an approach for solving these equations on three-dimensional multi-block domains using finite element methods. We illustrate our approach on a simple example of Brill wave initial data, with the constraints reducing to a single linear elliptic equation for the conformal factor psi. We use quadratic Lagrange elements on semi-structured simplicial meshes, obtained by triangulation of multi-block grids. In the case of uniform refinement the scheme is superconvergent at most mesh vertices, due to local symmetry of the finite element basis with respect to local spatial inversions. We show that in the superconvergent case subsequent unstructured mesh refinements do not improve the quality of our initial data. As proof of concept that this approach is feasible for generating multi-block initial data in three dimensions, after constructing the initial data we evolve them in time using a high-order finite-differencing multi-block approach and extract the gravitational waves from the numerical solution.

  10. Solving the Einstein constraint equations on multi-block triangulations using finite element methods

    International Nuclear Information System (INIS)

    Korobkin, Oleg; Pazos, Enrique; Aksoylu, Burak; Holst, Michael; Tiglio, Manuel

    2009-01-01

    In order to generate initial data for nonlinear relativistic simulations, one needs to solve the Einstein constraints, which can be cast into a coupled set of nonlinear elliptic equations. Here we present an approach for solving these equations on three-dimensional multi-block domains using finite element methods. We illustrate our approach on a simple example of Brill wave initial data, with the constraints reducing to a single linear elliptic equation for the conformal factor ψ. We use quadratic Lagrange elements on semi-structured simplicial meshes, obtained by triangulation of multi-block grids. In the case of uniform refinement the scheme is superconvergent at most mesh vertices, due to local symmetry of the finite element basis with respect to local spatial inversions. We show that in the superconvergent case subsequent unstructured mesh refinements do not improve the quality of our initial data. As proof of concept that this approach is feasible for generating multi-block initial data in three dimensions, after constructing the initial data we evolve them in time using a high-order finite-differencing multi-block approach and extract the gravitational waves from the numerical solution.

  11. A new approach for categorizing pig lying behaviour based on a Delaunay triangulation method.

    Science.gov (United States)

    Nasirahmadi, A; Hensel, O; Edwards, S A; Sturm, B

    2017-01-01

    Machine vision-based monitoring of pig lying behaviour is a fast and non-intrusive approach that could be used to improve animal health and welfare. Four pens with 22 pigs in each were selected at a commercial pig farm and monitored for 15 days using top view cameras. Three thermal categories were selected relative to room setpoint temperature. An image processing technique based on Delaunay triangulation (DT) was utilized. Different lying patterns (close, normal and far) were defined regarding the perimeter of each DT triangle and the percentages of each lying pattern were obtained in each thermal category. A method using a multilayer perceptron (MLP) neural network, to automatically classify group lying behaviour of pigs into three thermal categories, was developed and tested for its feasibility. The DT features (mean value of perimeters, maximum and minimum length of sides of triangles) were calculated as inputs for the MLP classifier. The network was trained, validated and tested and the results revealed that MLP could classify lying features into the three thermal categories with high overall accuracy (95.6%). The technique indicates that a combination of image processing, MLP classification and mathematical modelling can be used as a precise method for quantifying pig lying behaviour in welfare investigations.
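
    The sketch below mirrors the described pipeline at a high level: Delaunay features per image, fed to a small MLP. The training data are random placeholders, and scikit-learn's MLPClassifier stands in for whatever network topology the authors used.

    import numpy as np
    from scipy.spatial import Delaunay
    from sklearn.neural_network import MLPClassifier

    def dt_features(positions):
        """Features named in the abstract: mean triangle perimeter plus the
        maximum and minimum triangle side lengths."""
        tri = Delaunay(positions)
        sides, perims = [], []
        for s in tri.simplices:
            a, b, c = positions[s]
            e = [np.linalg.norm(a - b), np.linalg.norm(b - c),
                 np.linalg.norm(c - a)]
            sides += e
            perims.append(sum(e))
        return [np.mean(perims), max(sides), min(sides)]

    rng = np.random.default_rng(0)
    X = np.array([dt_features(rng.uniform(0, 10, (22, 2))) for _ in range(90)])
    y = rng.integers(0, 3, size=90)           # placeholder thermal labels
    clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000).fit(X, y)
    print("training accuracy:", clf.score(X, y))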

  12. The chromatic class and the chromatic number of the planar conjugated triangulation

    OpenAIRE

    Malinina, Natalia

    2013-01-01

    This material is dedicated to the estimation of the chromatic number and the chromatic class of the conjugated triangulation (first conversion), and also of the second conversion, of the planar triangulation. The paper also introduces some new hypotheses which are equivalent to the Four Color Problem.

  13. Accurate measurement of surface areas of anatomical structures by computer-assisted triangulation of computed tomography images

    Energy Technology Data Exchange (ETDEWEB)

    Allardice, J.T.; Jacomb-Hood, J.; Abulafi, A.M.; Williams, N.S. (Royal London Hospital (United Kingdom)); Cookson, J.; Dykes, E.; Holman, J. (London Hospital Medical College (United Kingdom))

    1993-05-01

    There is a need for accurate surface area measurement of internal anatomical structures in order to define light dosimetry in adjunctive intraoperative photodynamic therapy (AIOPDT). The authors investigated whether computer-assisted triangulation of serial sections generated by computed tomography (CT) scanning can give an accurate assessment of the surface area of the walls of the true pelvis after anterior resection and before colorectal anastomosis. They show that the technique of paper density tessellation is an acceptable method of measuring the surface areas of phantom objects, with a maximum error of 0.5%, and is used as the gold standard. Computer-assisted triangulation of CT images of standard geometric objects and accurately constructed pelvic phantoms gives a surface area assessment with a maximum error of 2.5% compared with the gold standard. The CT images of 20 patients' pelves have been analysed by computer-assisted triangulation, and this shows that the surface area of the walls varies from 143 cm² to 392 cm². (Author).
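
    Once a triangulated surface is available (whether from CT contours or elsewhere), its area reduces to a cross-product sum over the triangles. A minimal sketch, with a single right triangle as stand-in data:

    import numpy as np

    def mesh_surface_area(vertices, faces):
        """Total area of a triangulated surface: half the norm of the cross
        product of two edge vectors, summed over all faces."""
        v = np.asarray(vertices, dtype=float)
        f = np.asarray(faces, dtype=int)
        cross = np.cross(v[f[:, 1]] - v[f[:, 0]], v[f[:, 2]] - v[f[:, 0]])
        return 0.5 * np.linalg.norm(cross, axis=1).sum()

    verts = [[0, 0, 0], [1, 0, 0], [0, 1, 0]]   # unit right triangle, area 0.5
    print(mesh_surface_area(verts, [[0, 1, 2]]))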

  14. Drug repurposing by integrated literature mining and drug–gene–disease triangulation

    DEFF Research Database (Denmark)

    Sun, Peng; Guo, Jiong; Winnenburg, Rainer

    2017-01-01

    We review recent developments in computational drug repositioning and introduce the utilized data sources. Afterwards, we introduce a new data fusion model based on n-cluster editing as a novel multi-source triangulation strategy, which was further combined with semantic literature mining. Our evaluation suggests that utilizing drug–gene–disease triangulation coupled to sophisticated text analysis is a robust approach for identifying new drug candidates for repurposing.

  15. Measuring teamwork in primary care: Triangulation of qualitative and quantitative data.

    Science.gov (United States)

    Brown, Judith Belle; Ryan, Bridget L; Thorpe, Cathy; Markle, Emma K R; Hutchison, Brian; Glazier, Richard H

    2015-09-01

    This article describes the triangulation of qualitative dimensions, reflecting high functioning teams, with the results of standardized teamwork measures. The study used a mixed methods design using qualitative and quantitative approaches to assess teamwork in 19 Family Health Teams in Ontario, Canada. This article describes dimensions from the qualitative phase using grounded theory to explore the issues and challenges to teamwork. Two quantitative measures were used in the study, the Team Climate Inventory (TCI) and the Providing Effective Resources and Knowledge (PERK) scale. For the triangulation analysis, the mean scores of these measures were compared with the qualitatively derived ratings for the dimensions. The final sample for the qualitative component was 107 participants. The qualitative analysis identified 9 dimensions related to high team functioning such as common philosophy, scope of practice, conflict resolution, change management, leadership, and team evolution. From these dimensions, teams were categorized numerically as high, moderate, or low functioning. Three hundred seventeen team members completed the survey measures. Mean site scores for the TCI and PERK were 3.87 and 3.88, respectively (out of 5). The TCI was associated with all dimensions except for team location, space allocation, and executive director leadership. The PERK was associated with all dimensions except team location. Data triangulation provided qualitative and quantitative evidence of what constitutes teamwork. Leadership was pivotal in forging a common philosophy and encouraging team collaboration. Teams used conflict resolution strategies and adapted to the changes they encountered. These dimensions advanced the team's evolution toward a high functioning team. (c) 2015 APA, all rights reserved.

  16. Scaling analyses of the spectral dimension in 3-dimensional causal dynamical triangulations

    Science.gov (United States)

    Cooperman, Joshua H.

    2018-05-01

    dimension as a physical observable with which to delineate renormalization group trajectories in the hope of taking a continuum limit of causal dynamical triangulations at a nontrivial ultraviolet fixed point (Ambjørn et al 2016 Phys. Rev. D 93 104032, 2014 Class. Quantum Grav. 31 165003, Cooperman 2016 Gen. Relativ. Gravit. 48 1, Cooperman 2016 arXiv:1604.01798, Coumbe and Jurkiewicz 2015 J. High Energy Phys. JHEP03(2015)151).

  17. Options for a health system researcher to choose in Meta Review (MR) approaches - Meta Narrative (MN) and Meta Triangulation (MT)

    Directory of Open Access Journals (Sweden)

    Sanjeev Davey

    2015-01-01

    Two new approaches in systematic reviewing are ready for uptake in the arena of health system research: the meta-narrative review (MNR), which a health researcher can use for topics that are conceptualized and studied differently by different types of researchers for policy decisions, and the meta-triangulation review (MTR), performed to build theory for studying multifaceted phenomena characterized by expansive and contested research domains. A critical look at which meta-review approach is better, the meta-narrative review or the meta-triangulation review, can therefore give new insights to a health system researcher. A systematic review of the two key terms "meta-narrative review" and "meta-triangulation review" in health system research was carried out in key search engines such as PubMed, the Cochrane Library, BioMed Central and Google Scholar, covering the 20 years up to 21 March 2014. Studies from both the developed and the developing world were included in any form and scope to draw the final conclusions; however, unpublished data from theses were not included in the systematic review. The meta-narrative review is a type of systematic review which can be used for a wide range of topics and questions involving judgments and inferences in public health. The meta-triangulation review, on the other hand, is a three-phased, qualitative meta-analysis process which can be used to explore variations in the assumptions of alternative paradigms, gain insights into these multiple paradigms at one point in time, and address emerging themes and the resulting theories.

  18. Image matching for digital close-range stereo photogrammetry based on constraints of Delaunay triangulated network and epipolar-line

    Science.gov (United States)

    Zhang, K.; Sheng, Y. H.; Li, Y. Q.; Han, B.; Liang, Ch.; Sha, W.

    2006-10-01

    In the field of digital photogrammetry and computer vision, the determination of conjugate points in a stereo image pair, referred to as "image matching", is the critical step in realizing automatic surveying and recognition. Traditional matching methods encounter problems in digital close-range stereo photogrammetry because changes in gray-scale or texture are not obvious in close-range stereo images. The main shortcoming of traditional matching methods is that the geometric information of matching points is not fully used, which leads to wrong matches in regions with poor texture. To fully use the geometric and gray-scale information, a new stereo image matching algorithm is proposed in this paper, considering the characteristics of digital close-range photogrammetry. Compared with traditional matching methods, the new algorithm makes three improvements. Firstly, a shape factor, fuzzy mathematics and gray-scale projection are introduced into the design of a synthetical matching measure. Secondly, the topological connectivity of matching points in a Delaunay triangulated network, together with the epipolar line, is used to decide the matching order and to narrow the search scope for the conjugate of each matching point. Lastly, the theory of parameter adjustment with constraints is introduced into least-squares image matching to carry out subpixel-level matching under the epipolar-line constraint. The new algorithm is applied to actual stereo images of a building taken by a digital close-range photogrammetric system. The experimental results show that the algorithm achieves higher matching speed and accuracy than a pyramid image matching algorithm based on gray-scale correlation.
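
    The synthetical measure above combines several cues; as a stripped-down baseline for the epipolar part alone, the sketch below derives the epipolar line from a fundamental matrix and scores candidate windows along it by normalized cross-correlation. The fundamental matrix F, the window size and the grayscale image arrays are assumed inputs, and the refinement to subpixel accuracy by constrained least-squares matching is omitted.

    import numpy as np

    def ncc(a, b):
        a = a - a.mean()
        b = b - b.mean()
        d = np.linalg.norm(a) * np.linalg.norm(b)
        return (a * b).sum() / d if d > 0 else -1.0

    def match_along_epipolar(left, right, x, F, win=11):
        """Find the conjugate of integer pixel x = (col, row) in `right` by
        sliding a window along the epipolar line l = F @ (x, 1); assumes the
        line is not vertical (l[1] != 0)."""
        h = win // 2
        l = F @ np.array([x[0], x[1], 1.0])
        tmpl = left[x[1] - h:x[1] + h + 1, x[0] - h:x[0] + h + 1]
        best_score, best_uv = -2.0, None
        for u in range(h, right.shape[1] - h):
            v = int(round(-(l[0] * u + l[2]) / l[1]))  # solve l . (u,v,1) = 0
            if h <= v < right.shape[0] - h:
                score = ncc(tmpl, right[v - h:v + h + 1, u - h:u + h + 1])
                if score > best_score:
                    best_score, best_uv = score, (u, v)
        return best_uv, best_score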

  19. Altitude, Orthocenter of a Triangle and Triangulation

    Directory of Open Access Journals (Sweden)

    Coghetto Roland

    2016-03-01

    We introduce the altitudes of a triangle (the cevians perpendicular to the opposite sides). Using the generalized Ceva’s Theorem, we prove the existence and uniqueness of the orthocenter of a triangle [7]. Finally, we formalize in Mizar [1] some formulas [2] to calculate distance using triangulation.

  20. On-Line Metrology with Conoscopic Holography: Beyond Triangulation

    Directory of Open Access Journals (Sweden)

    Ignacio Álvarez

    2009-09-01

    On-line non-contact surface inspection with high precision is still an open problem. Laser triangulation techniques are the most common solution for this kind of system, but there exist fundamental limitations to their applicability when high precision, long standoffs or large apertures are needed, and when there are difficult operating conditions. Other methods are, in general, not applicable in hostile environments or are inadequate for on-line measurement. In this paper we review the latest research in Conoscopic Holography, an interferometric technique that has been applied successfully in this kind of application, ranging from submicrometric roughness measurements to long-standoff sensors for surface defect detection in steel at high temperatures.

  1. The finite body triangulation: algorithms, subgraphs, homogeneity estimation and application.

    Science.gov (United States)

    Carson, Cantwell G; Levine, Jonathan S

    2016-09-01

    The concept of a finite body Dirichlet tessellation has been extended to that of a finite body Delaunay 'triangulation' to provide a more meaningful description of the spatial distribution of nonspherical secondary phase bodies in 2- and 3-dimensional images. A finite body triangulation (FBT) consists of a network of minimum edge-to-edge distances between adjacent objects in a microstructure. From this are also obtained the characteristic object chords formed by the intersection of the object boundary with the finite body tessellation. These two sets of distances form the basis of a parsimonious homogeneity estimation. The characteristics of the spatial distribution are then evaluated with respect to the distances between objects and the distances within them. Quantitative analysis shows that more physically representative distributions can be obtained by selecting subgraphs, such as the relative neighbourhood graph and the minimum spanning tree, from the finite body tessellation. To demonstrate their potential, we apply these methods to 3-dimensional X-ray computed tomographic images of foamed cement and their 2-dimensional cross sections. The Python computer code used to estimate the FBT is made available. Other applications for the algorithm - such as porous media transport and crack-tip propagation - are also discussed. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.
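
    A toy version of the distance graph underlying the FBT can be assembled from pairwise minimum boundary-to-boundary distances, from which subgraphs such as the minimum spanning tree are extracted. Restricting pairs to adjacent bodies, and the object-chord computation, are left out of this sketch; it is not the authors' published code.

    import numpy as np
    from scipy.spatial.distance import cdist
    from scipy.sparse.csgraph import minimum_spanning_tree

    def min_body_distances(boundaries):
        """Minimum edge-to-edge distance for every pair of objects, each
        object given as an (n_i, 2) array of boundary point coordinates."""
        n = len(boundaries)
        D = np.zeros((n, n))
        for i in range(n):
            for j in range(i + 1, n):
                D[i, j] = D[j, i] = cdist(boundaries[i], boundaries[j]).min()
        return D

    rng = np.random.default_rng(4)
    blobs = [rng.normal(loc, 0.3, size=(40, 2))
             for loc in ([0, 0], [5, 1], [2, 4])]
    mst = minimum_spanning_tree(min_body_distances(blobs)).toarray()
    print(mst.round(2))                 # one of the FBT subgraphs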

  2. The relationships between stressful life events during childhood and differentiation of self and intergenerational triangulation in adulthood.

    Science.gov (United States)

    Peleg, Ora

    2014-12-01

    This study examined the relationships between stressful life events in childhood and differentiation of self and intergenerational triangulation in adulthood. The sample included 217 students (173 females and 44 males) from a college in northern Israel. Participants completed the Hebrew versions of Life Events Checklist (LEC), Differentiation of Self Inventory-Revised (DSI-R) and intergenerational triangulation (INTRI). The main findings were that levels of stressful life events during childhood and adolescence among both genders were positively correlated with the levels of fusion with others and intergenerational triangulation. The levels of positive life events were negatively related to levels of emotional reactivity, emotional cut-off and intergenerational triangulation. Levels of stressful life events in females were positively correlated with emotional reactivity. Intergenerational triangulation was correlated with emotional reactivity, emotional cut-off, fusion with others and I-position. Findings suggest that families that experience higher levels of stressful life events may be at risk for higher levels of intergenerational triangulation and lower levels of differentiation of self. © 2014 International Union of Psychological Science.

  3. Summations over equilaterally triangulated surfaces and the critical string measure

    International Nuclear Information System (INIS)

    Smit, D.J.; Lawrence Berkeley Lab., CA

    1992-01-01

    We propose a new approach to the summation over dynamically triangulated Riemann surfaces which does not rely on properties of the potential in a matrix model. Instead, we formulate a purely algebraic discretization of the critical string path integral. This is combined with a technique which assigns to each equilateral triangulation of a two-dimensional surface a Riemann surface defined over a certain finite extension of the field of rational numbers, i.e. an arithmetic surface. Thus we establish a new formulation in which the sum over randomly triangulated surfaces defines an invariant measure on the moduli space of arithmetic surfaces. It is shown that, because of this, it is far from obvious that this measure for large genera approximates the measure defined by the continuum theory, i.e. Liouville theory or critical string theory. In low genus this subtlety does not exist. In the case of critical string theory we explicitly compute the volume of the moduli space of arithmetic surfaces in terms of the modular height function and show that for low genus it correctly approximates the continuum measure. We also discuss a continuum limit which bears some resemblance to a double scaling limit in matrix models. (orig.)

  4. Shared decision-making in medical encounters regarding breast cancer treatment: the contribution of methodological triangulation.

    Science.gov (United States)

    Durif-Bruckert, C; Roux, P; Morelle, M; Mignotte, H; Faure, C; Moumjid-Ferdjaoui, N

    2015-07-01

    The aim of this study on shared decision-making in the doctor-patient encounter about surgical treatment for early-stage breast cancer, conducted in a regional cancer centre in France, was to further the understanding of patient perceptions on shared decision-making. The study used methodological triangulation to collect data (both quantitative and qualitative) about patient preferences in the context of a clinical consultation in which surgeons followed a shared decision-making protocol. Data were analysed from a multi-disciplinary research perspective (social psychology and health economics). The triangulated data collection methods were questionnaires (n = 132), longitudinal interviews (n = 47) and observations of consultations (n = 26). Methodological triangulation revealed levels of divergence and complementarity between qualitative and quantitative results that suggest new perspectives on the three inter-related notions of decision-making, participation and information. Patients' responses revealed important differences between shared decision-making and participation per se. The authors note that subjecting patients to a normative behavioural model of shared decision-making in an era when paradigms of medical authority are shifting may undermine the patient's quest for what he or she believes is a more important right: a guarantee of the best care available. © 2014 John Wiley & Sons Ltd.

  5. Relating covariant and canonical approaches to triangulated models of quantum gravity

    International Nuclear Information System (INIS)

    Arnsdorf, Matthias

    2002-01-01

    In this paper we explore the relation between covariant and canonical approaches to quantum gravity and BF theory. We will focus on the dynamical triangulation and spin-foam models, which have in common that they can be defined in terms of sums over spacetime triangulations. Our aim is to show how we can recover these covariant models from a canonical framework by providing two regularizations of the projector onto the kernel of the Hamiltonian constraint. This link is important for the understanding of the dynamics of quantum gravity. In particular, we will see how in the simplest dynamical triangulation model we can recover the Hamiltonian constraint via our definition of the projector. Our discussion of spin-foam models will show how the elementary spin-network moves in loop quantum gravity, which were originally assumed to describe the Hamiltonian constraint action, are in fact related to the time-evolution generated by the constraint. We also show that the Immirzi parameter is important for the understanding of a continuum limit of the theory

  6. Saddle-points of a two dimensional random lattice theory

    International Nuclear Information System (INIS)

    Pertermann, D.

    1985-07-01

    A two-dimensional random lattice theory with a free massless scalar field is considered. We analyse the field-theoretic generating functional for any given choice of positions of the lattice sites. Asking for saddle-points of this generating functional with respect to the positions, we find the hexagonal lattice and a triangulated version of the hypercubic lattice as candidates. The investigation of the neighbourhood of a single lattice site yields triangulated rectangles and regular polygons extremizing the above generating functional at the local level. (author)

  7. Dynamically triangulated surfaces - some analytical results

    International Nuclear Information System (INIS)

    Kostov, I.K.

    1987-01-01

    We give a brief review of the analytical results concerning the model of dynamically triangulated surfaces. We discuss the possible types of critical behaviour (depending on the dimension D of the embedding space) and the exact solutions obtained for D=0 and D=-2. The latter are important as a check of the Monte Carlo simulations applied to study the model in more physical dimensions. They also give some general insight into its critical properties

  8. Indirect measurement of molten steel level in tundish based on laser triangulation

    Science.gov (United States)

    Su, Zhiqi; He, Qing; Xie, Zhi

    2016-03-01

    For real-time and precise measurement of the molten steel level in a tundish during continuous casting, both the slag level and the slag thickness are needed. The problem of slag thickness measurement was solved in our previous work. In this paper, a systematic solution for slag level measurement based on laser triangulation is proposed. Unlike traditional laser triangulation, several measures have been taken to ensure precision and robustness. First, a laser line is adopted for multi-position measurement to overcome the deficiency of a single-point laser range finder on the uneven surface of the slag. Second, the key parameters, such as the installation angle and the minimum required laser power, are analyzed and determined from gray-body radiation theory to fulfill the rigorous requirement of measurement accuracy. Third, two kinds of severe noise in the acquired images, caused by heat radiation and electromagnetic interference (EMI), are removed using the morphological characteristics of the liquid slag and the color difference between the EMI and the laser signals, respectively. Fourth, as false targets created by stationary slag usually disrupt the measurement, valid slag signals are distinguished from false ones when calculating the slag level. The molten steel level is then obtained by subtracting the slag thickness from the slag level. The measuring error of this solution has been verified in steel plant applications: ±2.5 mm during steady casting and ±3.2 mm at the end of casting.
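    The final arithmetic of the solution is simple once the triangulation scale is calibrated: convert the laser-line offsets to heights, take a robust statistic over the line to cope with the uneven slag surface, and subtract the independently measured slag thickness. The sketch below assumes a hypothetical pre-calibrated linear pixel-to-height factor, which hides the installation-angle geometry analyzed in the paper.

```python
import numpy as np

def steel_level_from_laser_line(pixel_offsets, mm_per_pixel, slag_thickness_mm):
    """Estimate the molten steel level from a laser-line triangulation profile.

    pixel_offsets     : offsets of the detected laser line (one per image column)
    mm_per_pixel      : pre-calibrated triangulation scale factor (hypothetical)
    slag_thickness_mm : slag thickness obtained by a separate measurement
    """
    heights = np.asarray(pixel_offsets, dtype=float) * mm_per_pixel
    # The median over the line copes with the uneven slag surface better
    # than a single-point range finder would.
    slag_level = np.median(heights)
    return slag_level - slag_thickness_mm

print(steel_level_from_laser_line([102, 98, 101, 99, 150], 0.5, 30.0))
```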

  9. Indirect measurement of molten steel level in tundish based on laser triangulation

    Energy Technology Data Exchange (ETDEWEB)

    Su, Zhiqi; He, Qing, E-mail: heqing@ise.neu.edu.cn; Xie, Zhi [State Key Laboratory of Synthetical Automation for Process Industries, School of Information Science and Engineering, Northeastern University, Shenyang 110819 (China)

    2016-03-15

    For real-time and precise measurement of the molten steel level in a tundish during continuous casting, both the slag level and the slag thickness are needed. The problem of slag thickness measurement was solved in our previous work. In this paper, a systematic solution for slag level measurement based on laser triangulation is proposed. Unlike traditional laser triangulation, several measures have been taken to ensure precision and robustness. First, a laser line is adopted for multi-position measurement to overcome the deficiency of a single-point laser range finder on the uneven surface of the slag. Second, the key parameters, such as the installation angle and the minimum required laser power, are analyzed and determined from gray-body radiation theory to fulfill the rigorous requirement of measurement accuracy. Third, two kinds of severe noise in the acquired images, caused by heat radiation and electromagnetic interference (EMI), are removed using the morphological characteristics of the liquid slag and the color difference between the EMI and the laser signals, respectively. Fourth, as false targets created by stationary slag usually disrupt the measurement, valid slag signals are distinguished from false ones when calculating the slag level. The molten steel level is then obtained by subtracting the slag thickness from the slag level. The measuring error of this solution has been verified in steel plant applications: ±2.5 mm during steady casting and ±3.2 mm at the end of casting.

  10. A grand-canonical ensemble of randomly triangulated surfaces

    International Nuclear Information System (INIS)

    Jurkiewicz, J.; Krzywicki, A.; Petersson, B.

    1986-01-01

    An algorithm is presented generating the grand-canonical ensemble of discrete, randomly triangulated Polyakov surfaces. The algorithm is used to calculate the susceptibility exponent, which controls the existence of the continuum limit of the considered model, for the dimensionality of the embedding space ranging from 0 to 20. (orig.)

  11. Flattening of the electrocardiographic T-wave is a sign of proarrhythmic risk and a reflection of action potential triangulation

    DEFF Research Database (Denmark)

    Bhuiyan, Tanveer Ahmed; Graff, Claus; Kanters, J.K.

    2013-01-01

    Drug-induced triangulation of the cardiac action potential is associated with increased risk of arrhythmic events. It has been suggested that triangulation causes a flattening of the electrocardiographic T-wave but the relationship between triangulation, T-wave flattening and onset of arrhythmia ...

  12. Insights from triangulation of two purchase choice elicitation methods to predict social decision making in healthcare.

    Science.gov (United States)

    Whitty, Jennifer A; Rundle-Thiele, Sharyn R; Scuffham, Paul A

    2012-03-01

    Discrete choice experiments (DCEs) and the Juster scale are accepted methods for the prediction of individual purchase probabilities. Nevertheless, these methods have seldom been applied to a social decision-making context. To gain an overview of social decisions for a decision-making population through data triangulation, these two methods were used to understand purchase probability in a social decision-making context. We report an exploratory social decision-making study of pharmaceutical subsidy in Australia. A DCE and selected Juster scale profiles were presented to current and past members of the Australian Pharmaceutical Benefits Advisory Committee and its Economic Subcommittee. Across 66 observations derived from 11 respondents for 6 different pharmaceutical profiles, there was a small overall median difference of 0.024 in the predicted probability of public subsidy (p = 0.003), with the Juster scale predicting the higher likelihood. While consistency was observed at the extremes of the probability scale, the funding probability differed over the mid-range of profiles. There was larger variability in the DCE than Juster predictions within each individual respondent, suggesting the DCE is better able to discriminate between profiles. However, large variation was observed between individuals in the Juster scale but not DCE predictions. It is important to use multiple methods to obtain a complete picture of the probability of purchase or public subsidy in a social decision-making context until further research can elaborate on our findings. This exploratory analysis supports the suggestion that the mixed logit model, which was used for the DCE analysis, may fail to adequately account for preference heterogeneity in some contexts.

  13. Public health triangulation: approach and application to synthesizing data to understand national and local HIV epidemics

    Directory of Open Access Journals (Sweden)

    Aberle-Grasse John

    2010-07-01

    Full Text Available Abstract Background Public health triangulation is a process for reviewing, synthesising and interpreting secondary data from multiple sources that bear on the same question to make public health decisions. It can be used to understand the dynamics of HIV transmission and to measure the impact of public health programs. While traditional intervention research and meta-analysis would be ideal sources of information for public health decision making, they are infrequently available, and often decisions can be based only on surveillance and survey data. Methods The process involves examination of a wide variety of data sources, including biological, behavioral and program data, and seeks input from stakeholders to formulate meaningful public health questions. Finally, and most importantly, it uses the results to inform public health decision-making. There are 12 discrete steps in the triangulation process, which include identification and assessment of key questions, identification of data sources, refining questions, gathering data and reports, assessing the quality of those data and reports, formulating hypotheses to explain trends in the data, corroborating or refining working hypotheses, drawing conclusions, communicating results and recommendations and taking public health action. Results Triangulation can be limited by the quality of the original data, the potential for ecological fallacy and "data dredging", and the reproducibility of results. Conclusions Nonetheless, we believe that public health triangulation allows for the interpretation of data sets that cannot be analyzed using meta-analysis and can be a helpful adjunct to surveillance, to formal public health intervention research and to monitoring and evaluation, which in turn lead to improved national strategic planning and resource allocation.

  14. Spectral triangulation molecular contrast optical coherence tomography with indocyanine green as the contrast agent

    OpenAIRE

    Yang, Changhuei; McGuckin, Laura E. L.; Simon, John D.; Choma, Michael A.; Applegate, Brian E.; Izatt, Joseph A.

    2004-01-01

    We report a new molecular contrast optical coherence tomography (MCOCT) implementation that profiles the contrast agent distribution in a sample by measuring the agent's spectral differential absorption. The method, spectral triangulation MCOCT, can effectively suppress contributions from spectrally dependent scattering in the sample without a priori knowledge of the scattering properties. We demonstrate molecular imaging with this new MCOCT modality by mapping the distribution of indocyani...

  15. Random discrete Morse theory and a new library of triangulations

    DEFF Research Database (Denmark)

    Benedetti, Bruno; Lutz, Frank Hagen

    2014-01-01

    We introduce random discrete Morse theory as a computational scheme to measure the complexity of a triangulation. The idea is to try to quantify the frequency of discrete Morse matchings with few critical cells. Our measure will depend on the topology of the space, but also on how nicely the space...... is triangulated. The scheme we propose looks for optimal discrete Morse functions with an elementary random heuristic. Despite its naiveté, this approach turns out to be very successful even in the case of huge inputs. In our view, the existing libraries of examples in computational topology are “too easy......” for testing algorithms based on discrete Morse theory. We propose a new library containing more complicated (and thus more meaningful) test examples....

  16. Three-dimensional point-cloud room model in room acoustics simulations

    DEFF Research Database (Denmark)

    Markovic, Milos; Olesen, Søren Krarup; Hammershøi, Dorte

    2013-01-01

    acquisition and its representation with a 3D point-cloud model, as well as utilization of such a model for the room acoustics simulations. A room is scanned with a commercially available input device (Kinect for Xbox360) in two different ways; the first one involves the device placed in the middle of the room...... and rotated around the vertical axis while for the second one the device is moved within the room. Benefits of both approaches were analyzed. The device's depth sensor provides a set of points in a three-dimensional coordinate system which represents scanned surfaces of the room interior. These data are used...... to build a 3D point-cloud model of the room. Several models are created to meet requirements of different room acoustics simulation algorithms: plane fitting and uniform voxel grid for geometric methods and triangulation mesh for the numerical methods. Advantages of the proposed method over the traditional...

  17. Three-dimensional point-cloud room model for room acoustics simulations

    DEFF Research Database (Denmark)

    Markovic, Milos; Olesen, Søren Krarup; Hammershøi, Dorte

    2013-01-01

    acquisition and its representation with a 3D point-cloud model, as well as utilization of such a model for the room acoustics simulations. A room is scanned with a commercially available input device (Kinect for Xbox360) in two different ways; the first one involves the device placed in the middle of the room...... and rotated around the vertical axis while for the second one the device is moved within the room. Benefits of both approaches were analyzed. The device's depth sensor provides a set of points in a three-dimensional coordinate system which represents scanned surfaces of the room interior. These data are used...... to build a 3D point-cloud model of the room. Several models are created to meet requirements of different room acoustics simulation algorithms: plane fitting and uniform voxel grid for geometric methods and triangulation mesh for the numerical methods. Advantages of the proposed method over the traditional...

  18. Large N Limits in Tensor Models: Towards More Universality Classes of Colored Triangulations in Dimension d≥2

    Science.gov (United States)

    Bonzom, Valentin

    2016-07-01

    We review an approach which aims at studying discrete (pseudo-)manifolds in dimension d ≥ 2, called random tensor models. More specifically, we insist on generalizing the two-dimensional notion of p-angulations to higher dimensions. To do so, we consider families of triangulations built out of simplices with colored faces. Those simplices can be glued to form new building blocks, called bubbles, which are pseudo-manifolds with boundaries. Bubbles can in turn be glued together to form triangulations. The main challenge is to classify the triangulations built from a given set of bubbles with respect to their numbers of bubbles and simplices of codimension two. While the colored triangulations which maximize the number of simplices of codimension two at fixed number of simplices are series-parallel objects called melonic triangulations, this is not always true anymore when restricting attention to colored triangulations built from specific bubbles. This opens up the possibility of new universality classes of colored triangulations. We present three existing strategies to find those universality classes. The first two strategies consist in building new bubbles from old ones for which the problem can be solved. The third strategy is a bijection between those colored triangulations and stuffed, edge-colored maps, which are some sort of hypermaps whose hyperedges are replaced with edge-colored maps. We then show that the present approach can lead to enumeration results and identification of universality classes, by working out the example of quartic tensor models. They feature a tree-like phase, a planar phase similar to two-dimensional quantum gravity and a phase transition between them which is interpreted as a proliferation of baby universes. While this work is written in the context of random tensors, it is almost exclusively of combinatorial nature and we hope it is accessible to interested readers who are not familiar with random matrices, tensors and quantum

  19. Simulations of four-dimensional simplicial quantum gravity as dynamical triangulation

    International Nuclear Information System (INIS)

    Agishtein, M.E.; Migdal, A.A.

    1992-01-01

    In this paper, Four-Dimensional Simplicial Quantum Gravity is simulated using the dynamical triangulation approach. The authors studied simplicial manifolds of spherical topology and found the critical line for the cosmological constant as a function of the gravitational one, separating the phases of open and closed Universe. When the bare cosmological constant approaches this line from above, the four-volume grows: the authors reached about 5 × 10^4 simplices, which proved to be sufficient for the statistical limit of infinite volume. However, for the genuine continuum theory of gravity, the parameters of the lattice model should be further adjusted to reach the second-order phase transition point, where the correlation length grows to infinity. The authors varied the gravitational constant and found a first-order phase transition, similar to the one found in the three-dimensional model, except that in 4D the fluctuations are rather large at the transition point, so that it is close to a second-order phase transition. The average curvature in cutoff units is large and positive in one phase (gravity), and small and negative in the other (antigravity). The authors studied the fractal geometry of both phases, using the heavy-particle propagator to define the geodesic map, as well as with the old approach using the shortest lattice paths

  20. Methodological triangulation in work life research

    DEFF Research Database (Denmark)

    Warring, Niels

    Based on examples from two research projects on preschool teachers' work, the paper will discuss potentials and challenges in methodological triangulation in work life research. Analysis of ethnographic and phenomenologically inspired observations of everyday life in day care centers formed the basis...... for individual interviews and informal talks with employees. The interviews and conversations were based on a critical hermeneutic approach. The analysis of observations and interviews constituted a knowledge base as the project went into the last phase: action research workshops. In the workshops findings from...

  1. An in-situ measuring method for planar straightness error

    Science.gov (United States)

    Chen, Xi; Fu, Luhua; Yang, Tongyu; Sun, Changku; Wang, Zhong; Zhao, Yan; Liu, Changjie

    2018-01-01

    According to some current problems in measuring the planar shape error of a workpiece, an in-situ measuring method based on laser triangulation is presented in this paper. The method avoids the inefficiency of traditional methods such as the knife straightedge, as well as the time and cost requirements of a coordinate measuring machine (CMM). A laser-based measuring head is designed and installed on the spindle of a numerical control (NC) machine. The measuring head moves along a planned path over the measuring points. The spatial coordinates of the measuring points are obtained by combining the laser triangulation displacement sensor with the coordinate system of the NC machine, which makes the required measurement indicators achievable. The planar straightness error is evaluated using particle swarm optimization (PSO). To verify the feasibility and accuracy of the measuring method, simulation experiments were implemented with a CMM. Comparing the measurement results of the measuring head with the corresponding values obtained by a composite measuring machine verifies that the method can realize high-precision, automatic measurement of the planar straightness error of a workpiece.
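    As an illustration of the evaluation step, the sketch below searches for the minimum-zone straightness error with a bare-bones particle swarm; all names and parameter values are hypothetical, and the paper's actual PSO formulation may differ. Since the intercept of the reference line drops out of the residual span, the swarm only needs to search over the slope.

```python
import numpy as np

rng = np.random.default_rng(0)

def straightness_span(a, x, z):
    """Minimum-zone objective: span of residuals about a line of slope a
    (the intercept drops out of the span)."""
    r = z - a * x
    return r.max() - r.min()

def pso_straightness(x, z, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """A minimal particle swarm search for the slope minimizing the residual span."""
    pos = rng.uniform(-1.0, 1.0, n_particles)          # candidate slopes
    vel = np.zeros(n_particles)
    pbest = pos.copy()
    pbest_val = np.array([straightness_span(p, x, z) for p in pos])
    gbest = pbest[pbest_val.argmin()]
    for _ in range(iters):
        r1, r2 = rng.random(n_particles), rng.random(n_particles)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        val = np.array([straightness_span(p, x, z) for p in pos])
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], val[improved]
        gbest = pbest[pbest_val.argmin()]
    return gbest, straightness_span(gbest, x, z)

# Synthetic profile: a tilted, slightly bowed surface plus measurement noise.
x = np.linspace(0.0, 100.0, 50)
z = 0.02 * x + 0.001 * (x - 50) ** 2 / 50 + rng.normal(0, 0.002, x.size)
slope, error = pso_straightness(x, z)
print(f"straightness error = {error:.4f} mm")
```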

  2. Classification and Filtering of Constrained Delaunay Triangulation for Automated Building Aggregation

    Directory of Open Access Journals (Sweden)

    GUO Peipei

    2016-08-01

    Full Text Available Building aggregation is an important part of research on large-scale map generalization. A triangulation-based approach is proposed from the perspective of shape features, and six measure parameters for triangles in a constrained Delaunay triangulation are defined. First, the six measure parameters are used to determine which triangles are retained and which are erased. Then, the contours of the retained triangles, acting as bridge areas between buildings, are automatically identified and right-angle processed. The buildings are then aggregated, with right-angle features retained, by merging the bridge areas with the buildings they connect. Finally, the approach is verified on actual data; experimental results show that it is efficient and practical.
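    The core filtering idea can be sketched as follows: triangulate the building vertices and keep only triangles whose longest edge is short enough to bridge nearby buildings. Note that scipy provides an unconstrained Delaunay triangulation and a single illustrative measure is used here, whereas the paper uses a constrained triangulation and six measure parameters, so this is only a rough analogue.

```python
import numpy as np
from scipy.spatial import Delaunay

def classify_bridge_triangles(points, max_edge=15.0):
    """Toy triangle filtering for building aggregation: keep triangles whose
    longest edge is below the threshold; discarded long triangles span
    unrelated, distant buildings."""
    tri = Delaunay(points)
    kept = []
    for simplex in tri.simplices:
        p = points[simplex]
        edges = [np.linalg.norm(p[i] - p[(i + 1) % 3]) for i in range(3)]
        if max(edges) <= max_edge:
            kept.append(simplex)
    return np.array(kept)

pts = np.array([[0, 0], [10, 0], [0, 10], [10, 10], [40, 0], [50, 0], [40, 10]], float)
print(classify_bridge_triangles(pts))
```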

  3. The Extraction of Road Boundary from Crowdsourcing Trajectory Using Constrained Delaunay Triangulation

    Directory of Open Access Journals (Sweden)

    YANG Wei

    2017-02-01

    Full Text Available Accurately extracting road boundaries from crowdsourced trajectory lines is still hard work. This study therefore presents a new approach that uses vehicle trajectory lines to extract road boundaries. First, a constrained Delaunay triangulation is constructed within the interpolated track lines, and road boundary descriptors are calculated from triangle edge lengths and Voronoi cells. A road boundary recognition model is established by integrating the two boundary descriptors. Then, based on seed polygons, a region-growing method is proposed to extract the road boundary. Finally, taxi GPS traces in Beijing were used to verify the validity of the novel method, and the results showed that our method is suitable for GPS traces of varying density, complex road structure and different time intervals.

  4. Reconstructing Surface Triangulations by Their Intersection Matrices

    Directory of Open Access Journals (Sweden)

    Arocha Jorge L.

    2015-08-01

    Full Text Available The intersection matrix of a simplicial complex has entries equal to the rank of the intersection of its facets. We prove that this matrix is enough to define, up to isomorphism, a triangulation of a surface.

  5. Quantum Computing in Decoherence-Free Subspace Constructed by Triangulation

    OpenAIRE

    Bi, Qiao; Guo, Liu; Ruda, H. E.

    2010-01-01

    A formalism for quantum computing in decoherence-free subspaces is presented. The constructed subspaces are partially triangulated with respect to an index related to the environment. The quantum states in the subspaces are projected states governed by a subdynamic kinetic equation. These projected states can be used to perform ideal quantum logical operations without decoherence.

  6. Lymphoscintigraphy and triangulated body marking for morbidity reduction during sentinel node biopsy in breast cancer.

    Science.gov (United States)

    Krynyckyi, Borys R; Shafir, Michail K; Kim, Suk Chul; Kim, Dong Wook; Travis, Arlene; Moadel, Renee M; Kim, Chun K

    2005-11-08

    Current trends in patient care include the desire to minimize the invasiveness of procedures and interventions. This aim is reflected in the increasing utilization of sentinel lymph node biopsy, which results in a lower level of morbidity in breast cancer staging than extensive conventional axillary dissection. Optimized lymphoscintigraphy with triangulated body marking is a clinical option that can reduce morbidity further than when a hand-held gamma probe alone is utilized. Unfortunately, it is often either overlooked or not fully understood, and thus not utilized, resulting in the unnecessary loss of an opportunity to further reduce morbidity. Optimized lymphoscintigraphy and triangulated body marking provide a detailed three-dimensional map of the number and location of the sentinel nodes, available before the first incision is made. The number and location of the nodes, and their relevance based on the time/sequence of appearance, can all influence 1) where the incision is made, 2) how extensive the dissection is, and 3) how many nodes are removed. In addition, complex patterns can arise from the injections, including prominent lymphatic channels, pseudo-sentinel nodes, echelon and reverse-echelon nodes and even contamination, which are much more difficult to access with the probe only. With the detailed information provided by optimized lymphoscintigraphy and triangulated body marking, the surgeon can approach the axilla in a more enlightened fashion than with the less informed probe-only method. This allows for better planning, resulting in the best cosmetic effect and less trauma to the tissues, further reducing morbidity while maintaining adequate sampling of the sentinel node(s).

  7. Depth measurements of drilled holes in bone by laser triangulation for the field of oral implantology

    Science.gov (United States)

    Quest, D.; Gayer, C.; Hering, P.

    2012-01-01

    Laser osteotomy is one possible method of preparing beds for dental implants in the human jaw. A major problem in using this contactless treatment modality is the lack of haptic feedback for controlling the depth while drilling the implant bed. A contactless measurement system based on laser triangulation is presented as a new procedure to overcome this problem. Together with a tomographic image, the actual position of the laser ablation in the bone can be calculated. Furthermore, the laser response is fast enough to pose little risk to surrounding sensitive areas such as nerves and blood vessels. Two different bone structures exist in the jaw, namely cancellous bone and compact bone. Samples of both bone structures were examined with test drillings performed either by laser osteotomy or by a conventional rotating drilling tool. The depth of these holes was measured using laser triangulation. The results and the setup are reported in this study.

  8. Two Strategies for Qualitative Content Analysis: An Intramethod Approach to Triangulation.

    Science.gov (United States)

    Renz, Susan M; Carrington, Jane M; Badger, Terry A

    2018-04-01

    The overarching aim of qualitative research is to gain an understanding of certain social phenomena. Qualitative research involves the studied use and collection of empirical materials, all to describe moments and meanings in individuals' lives. Data derived from these various materials require a form of analysis of the content, focusing on written or spoken language as communication, to provide context and understanding of the message. Qualitative research often involves the collection of data through extensive interviews, note taking, and tape recording. These methods are time- and labor-intensive. With the advances in computerized text analysis software, the practice of combining methods to analyze qualitative data can assist the researcher in making large data sets more manageable and enhance the trustworthiness of the results. This article will describe a novel process of combining two methods of qualitative data analysis, or intramethod triangulation, as a means to provide a deeper analysis of text.

  9. Quantum Computing in Decoherence-Free Subspace Constructed by Triangulation

    Directory of Open Access Journals (Sweden)

    Qiao Bi

    2010-01-01

    Full Text Available A formalism for quantum computing in decoherence-free subspaces is presented. The constructed subspaces are partially triangulated with respect to an index related to the environment. The quantum states in the subspaces are projected states governed by a subdynamic kinetic equation. These projected states can be used to perform ideal quantum logical operations without decoherence.

  10. Matching fields and lattice points of simplices

    OpenAIRE

    Loho, Georg; Smith, Ben

    2018-01-01

    We show that the Chow covectors of a linkage matching field define a bijection of lattice points and we demonstrate how one can recover the linkage matching field from this bijection. This resolves two open questions from Sturmfels & Zelevinsky (1993) on linkage matching fields. For this, we give an explicit construction that associates a bipartite incidence graph of an ordered partition of a common set to all lattice points in a dilated simplex. Given a triangulation of a product of two simp...

  11. A Novel Model of Conforming Delaunay Triangulation for Sensor Network Configuration

    Directory of Open Access Journals (Sweden)

    Yan Ma

    2015-01-01

    Full Text Available Delaunay refinement is a technique for generating unstructured meshes of triangles for sensor network configuration in engineering practice. A new method for solving the Delaunay triangulation problem, called the endpoint triangle's circumcircle model (ETCM), is proposed in this paper. Compared with the original fractional node refinement algorithms, the proposed algorithm achieves good refinement stability at the lowest time cost. Simulations are performed on five aspects, including refinement stability, the number of additional nodes, time cost, mesh quality after inserting additional nodes, and the aspect-ratio improvement from a single additional node. All experimental results show the advantages of the proposed algorithm over existing algorithms and confirm the algorithm analysis.
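    The circumcircle primitive at the heart of such refinement models is easy to state in closed form; the sketch below computes the circumcenter and circumradius of a triangle from the perpendicular-bisector equations. This is a generic Delaunay-refinement building block, not the ETCM algorithm itself.

```python
import numpy as np

def circumcircle(a, b, c):
    """Circumcenter and circumradius of triangle abc in 2D, solved in
    closed form from the perpendicular-bisector equations."""
    ax, ay = a; bx, by = b; cx, cy = c
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-12:
        raise ValueError("degenerate (collinear) triangle")
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    center = np.array([ux, uy])
    return center, np.linalg.norm(center - np.asarray(a, dtype=float))

center, r = circumcircle((0, 0), (4, 0), (0, 3))
print(center, r)   # -> [2.  1.5] 2.5
```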

  12. Proposals for the Operationalisation of the Discourse Theory of Laclau and Mouffe Using a Triangulation of Lexicometrical and Interpretative Methods

    Directory of Open Access Journals (Sweden)

    Georg Glasze

    2007-05-01

    Full Text Available The discourse theory of Ernesto LACLAU and Chantal MOUFFE brings together three elements: the FOUCAULTian notion of discourse, the (post-)MARXist notion of hegemony, and the poststructuralist writings of Jacques DERRIDA and Roland BARTHES. Discourses are regarded as temporary fixations of differential relations. Meaning, i.e. any social "objectivity", is conceptualised as an effect of such a fixation. The discussion on an appropriate operationalisation of such a discourse theory is just beginning. In this paper, it is argued that a triangulation of two linguistic methods is appropriate for revealing temporary fixations: by means of corpus-driven lexicometric procedures as well as by the analysis of narrative patterns, the regularities of the linkage of elements can be analysed (for example, in diachronic comparisons). The example of a geographic research project shows how, in so doing, the historically contingent constitution of an international community and "world region" can be analysed. URN: urn:nbn:de:0114-fqs0702143

  13. Positioning, alignment and absolute pointing of the ANTARES neutrino telescope

    International Nuclear Information System (INIS)

    Fehr, F; Distefano, C

    2010-01-01

    A precise detector alignment and absolute pointing is crucial for point-source searches. The ANTARES neutrino telescope utilises an array of hydrophones, tiltmeters and compasses for the relative positioning of the optical sensors. The absolute calibration is accomplished by long-baseline low-frequency triangulation of the acoustic reference devices in the deep sea with a differential GPS system at the sea surface. The absolute pointing can be independently verified by detecting the shadow of the Moon in cosmic rays.
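    Acoustic long-baseline positioning reduces to recovering a position from measured ranges to known reference points. The sketch below solves this by Gauss-Newton least squares under idealized assumptions (sound speed folded into the ranges, no timing bias); it is illustrative only and unrelated to the actual ANTARES calibration software.

```python
import numpy as np

def trilaterate(anchors, ranges, iters=20):
    """Gauss-Newton recovery of a receiver position from ranges to known
    reference positions (idealized long-baseline triangulation)."""
    x = anchors.mean(axis=0)           # initial guess at the centroid
    for _ in range(iters):
        diff = x - anchors             # (n, 3)
        dist = np.linalg.norm(diff, axis=1)
        J = diff / dist[:, None]       # Jacobian of the range model
        r = dist - ranges              # residuals
        dx, *_ = np.linalg.lstsq(J, -r, rcond=None)
        x = x + dx
    return x

anchors = np.array([[0, 0, 0], [100, 0, 5], [0, 100, -5], [100, 100, 0]], float)
true = np.array([40.0, 60.0, -20.0])
ranges = np.linalg.norm(anchors - true, axis=1)
print(trilaterate(anchors, ranges))    # ~ [40. 60. -20.]
```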

  14. Post-Processing in the Material-Point Method

    DEFF Research Database (Denmark)

    Andersen, Søren; Andersen, Lars Vabbersgaard

    The material-point method (MPM) is a numerical method for dynamic or static analysis of solids using a discretization in time and space. The method has shown to be successful in modelling physical problems involving large deformations, which are difficult to model with traditional numerical tools...... such as the finite element method. In the material-point method, a set of material points is utilized to track the problem in time and space, while a computational background grid is utilized to obtain spatial derivatives relevant to the physical problem. Currently, the research within the material-point method......-point method. The first idea involves associating a volume with each material point and displaying the deformation of this volume. In the discretization process, the physical domain is divided into a number of smaller volumes each represented by a simple shape; here quadrilaterals are chosen for the presented...

  15. Measuring and Controlling Fairness of Triangulations

    KAUST Repository

    Jiang, Caigui

    2016-09-30

    The fairness of meshes that represent geometric shapes is a topic that has been studied extensively and thoroughly. However, the focus in such considerations often is not on the mesh itself, but rather on the smooth surface approximated by it, and fairness essentially expresses a mesh’s suitability for purposes such as visualization or simulation. This paper focusses on meshes in the architectural context, where vertices, edges, and faces of meshes are often highly visible, and any notion of fairness must take new aspects into account. We use concepts from discrete differential geometry (star-shaped Gauss images) to express fairness, and we also demonstrate how fairness can be incorporated into interactive geometric design of triangulated freeform skins.

  16. Chromatic polynomials of planar triangulations, the Tutte upper bound and chromatic zeros

    International Nuclear Information System (INIS)

    Shrock, Robert; Xu Yan

    2012-01-01

    Tutte proved that if G_pt is a planar triangulation and P(G_pt, q) is its chromatic polynomial, then |P(G_pt, τ+1)| ≤ (τ−1)^(n−5), where τ = (1+√5)/2 and n is the number of vertices in G_pt. Here we study the ratio r(G_pt) = |P(G_pt, τ+1)|/(τ−1)^(n−5) for a variety of planar triangulations. We construct infinite recursive families of planar triangulations G_pt,m depending on a parameter m linearly related to n and show that if P(G_pt,m, q) only involves a single power of a polynomial, then r(G_pt,m) approaches zero exponentially fast as n → ∞. We also construct infinite recursive families for which P(G_pt,m, q) is a sum of powers of certain functions and show that for these, r(G_pt,m) may approach a finite nonzero constant as n → ∞. The connection between the Tutte upper bound and the observed chromatic zero(s) near to τ+1 is investigated. We report the first known graph for which the zero(s) closest to τ+1 is not real, but instead is a complex-conjugate pair. Finally, we discuss connections with the nonzero ground-state entropy of the Potts antiferromagnet on these families of graphs. (paper)

  17. Quantum gravity from simplices: analytical investigations of causal dynamical triangulations

    NARCIS (Netherlands)

    Benedetti, D.

    2007-01-01

    A potentially powerful approach to quantum gravity has been developed over the last few years under the name of Causal Dynamical Triangulations. Although these models can be solved exactly in a variety of ways in the case of pure gravity in (1+1) dimensions, it is difficult to extend any of the

  18. Investigating methods of stream planform identification

    Science.gov (United States)

    Lohberg, M. M.; Lusk, K.; Miller, D.; Stonedahl, F.; Stonedahl, S. H.

    2013-12-01

    Stream planforms are used to map scientific measurements, estimate volumetric discharge, and model stream flow. Changes in these planforms can be used to quantify erosion and water level fluctuations. This research investigated five cost-effective methods of identifying stream planforms: (1) consumer-grade digital camera GPS (2) multi-view stereo 3D scene reconstruction (using Microsoft Photosynth (TM)) (3) a cross-sectional measurement approach (4) a triangulation-based measurement approach and (5) the 'square method' - a novel photogrammetric procedure which involved floating a large wooden square in the stream, photographing the square and banks from numerous angles and then using the square to correct for perspective and extract the outline (using custom post-processing software). Data for each of the five methods was collected at Blackhawk Creek in Davenport, Iowa. Additionally we placed 30 control points near the banks of the stream and measured 88 lengths between these control points. We measured or calculated the locations of these control points with each of our five methods and calculated the average percent error associated with each method using the predicted control point locations. The effectiveness of each method was evaluated in terms of accuracy, affordability, environmental intrusiveness, and ease of use. The camera equipped with GPS proved to be a very ineffective method due to an extremely high level of error, 289%. The 3D point cloud extracted from Photosynth was missing markers for many of the control points, so the error calculation (which yielded 11.7%) could only be based on five of the 88 lengths and is thus highly uncertain. The two non-camera methods (cross-sectional and triangulation measurements) resulted in low percent error (2.04% and 1.31% respectively) relative to the control point lengths, but these methods were very time consuming, exhausting, and only provided low resolution outlines. High resolution data collection would

  19. (2+1)-dimensional quantum gravity as the continuum limit of causal dynamical triangulations

    International Nuclear Information System (INIS)

    Benedetti, D.; Loll, R.; Zamponi, F.

    2007-01-01

    We perform a nonperturbative sum over geometries in a (2+1)-dimensional quantum gravity model given in terms of causal dynamical triangulations. Inspired by the concept of triangulations of product type introduced previously, we impose an additional notion of order on the discrete, causal geometries. This simplifies the combinatorial problem of counting geometries just enough to enable us to calculate the transfer matrix between boundary states labeled by the area of the spatial universe, as well as the corresponding quantum Hamiltonian of the continuum theory. This is the first time in dimension larger than 2 that a Hamiltonian has been derived from such a model by mainly analytical means, and it opens the way for a better understanding of scaling and renormalization issues

  20. Triangulation based inclusion probabilities: a design-unbiased sampling approach

    OpenAIRE

    Fehrmann, Lutz; Gregoire, Timothy; Kleinn, Christoph

    2011-01-01

    A probabilistic sampling approach for design-unbiased estimation of area-related quantitative characteristics of spatially dispersed population units is proposed. The developed field protocol includes a fixed number of 3 units per sampling location and is based on partial triangulations over their natural neighbors to derive the individual inclusion probabilities. The performance of the proposed design is tested in comparison to fixed area sample plots in a simulation with two forest stands. ...

  1. Three-Dimensional Reconstruction Optical System Using Shadows Triangulation

    Science.gov (United States)

    Barba, J. Leiner; Vargas, Q. Lorena; Torres, M. Cesar; Mattos, V. Lorenzo

    2008-04-01

    In this work, a three-dimensional reconstruction system is developed using the Shades3D tool of the Matlab® programming language and low-cost materials: a webcam, a stick, a weak structured-lighting system composed of a desk lamp, and an observation plane on which the object is located. The reconstruction is obtained through a triangulation process executed after acquiring a sequence of images of the scene with a shadow projected on the object; additionally, an image filtering process is applied to obtain only the part of the scene to be reconstructed. Beforehand, a calibration process is needed to determine the camera's internal geometric and optical characteristics (intrinsic parameters) and the 3D position and orientation of the camera frame relative to a certain world coordinate system (extrinsic parameters). The lamp and the stick produce a shadow which scans the object; in this technique it is not necessary to know the position of the light source; instead, the triangulation uses the shadow plane produced by the intersection between the stick and the illumination pattern. The webcam captures all images with the shadow scanning the object, and the Shades3D tool processes all information taking into account the captured images and calibration parameters. The technique is also evaluated for the reconstruction of parts of the human body and its application in the detection of external abnormalities and the elaboration of prostheses or implants.
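    The geometric core of shadow triangulation is a ray-plane intersection: each shadowed pixel back-projects to a camera ray, which is intersected with the current shadow plane. The sketch below illustrates this under simplifying assumptions (camera at the world origin, a known intrinsic matrix K from the calibration step, and a shadow plane given by a point and normal); all names and values are hypothetical.

```python
import numpy as np

def triangulate_shadow_point(pixel, K, plane_point, plane_normal):
    """Intersect the camera ray through `pixel` with the shadow plane.

    K comes from the calibration step; the shadow plane would be estimated
    from the stick/lamp geometry at each frame. The camera centre is taken
    as the origin of the world frame for simplicity."""
    u, v = pixel
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])   # back-projected direction
    ray /= np.linalg.norm(ray)
    t = plane_point @ plane_normal / (ray @ plane_normal)
    return t * ray                                    # 3D point on the object

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
p = triangulate_shadow_point((350, 260), K,
                             plane_point=np.array([0.0, 0.0, 500.0]),
                             plane_normal=np.array([0.3, 0.0, -1.0]))
print(p)
```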

  2. Triangulation of written assessments from patients, teachers and students: useful for students and teachers?

    Science.gov (United States)

    Gran, Sarah Frandsen; Braend, Anja Maria; Lindbaek, Morten

    2010-01-01

    Many medical students in general practice clerkships experience a lack of observation-based feedback. The StudentPEP project combined written feedback from patients, observing teachers and students. This study analyzes the perceived usefulness of triangulated written feedback. A total of 71 general practitioners and 79 medical students at the University of Oslo completed project evaluation forms after a 6-week clerkship. A principal component analysis was performed to find structures within the questionnaire. Regression analysis was performed regarding students' answers to whether StudentPEP was worthwhile. Free-text answers were analyzed qualitatively. Student and teacher responses were mixed within six subscales, with highest agreement on 'Teachers oral and written feedback' and 'Attitude to patient evaluation'. Fifty-four per cent of the students agreed that the triangulation gave concrete feedback on their weaknesses, and 59% valued the teachers' feedback provided. Two statements regarding the teacher's attitudes towards StudentPEP were significantly associated with the student's perception of worthwhileness. Qualitative analysis showed that patient evaluations were encouraging or distrusted. Some students thought that StudentPEP ensured observation and feedback. The patient evaluations increased the students' awareness of the patient perspective. A majority of the students considered the triangulated written feedback beneficial, although time-consuming. The teacher's attitudes strongly influenced how the students perceived the usefulness of StudentPEP.

  3. Exploring Torus Universes in Causal Dynamical Triangulations

    DEFF Research Database (Denmark)

    Budd, Timothy George; Loll, R.

    2013-01-01

    Motivated by the search for new observables in nonperturbative quantum gravity, we consider Causal Dynamical Triangulations (CDT) in 2+1 dimensions with the spatial topology of a torus. This system is of particular interest, because one can study not only the global scale factor, but also global...... shape variables in the presence of arbitrary quantum fluctuations of the geometry. Our initial investigation focusses on the dynamics of the scale factor and uncovers a qualitatively new behaviour, which leads us to investigate a novel type of boundary conditions for the path integral. Comparing large....... Apart from setting the stage for the analysis of shape dynamics on the torus, the new set-up highlights the role of nontrivial boundaries and topology....

  4. A density based algorithm to detect cavities and holes from planar points

    Science.gov (United States)

    Zhu, Jie; Sun, Yizhong; Pang, Yueyong

    2017-12-01

    Delaunay-based shape reconstruction algorithms are widely used for approximating shape from planar points. However, these algorithms cannot ensure the optimality of the varied reconstructed cavity and hole boundaries. This inadequate reconstruction can be attributed primarily to the lack of an efficient mathematical formulation for the two structures (hole and cavity). In this paper, we develop an efficient algorithm for generating cavities and holes from planar points. The algorithm yields the final boundary through iterative removal of triangles from the Delaunay triangulation. It is divided into two main steps, namely rough and refined shape reconstruction. The rough shape reconstruction is controlled by a relative parameter. Based on the rough result, the refined shape reconstruction aims to detect holes and pure cavities. A cavity or hole is conceptualized as a low-density region surrounded by a high-density region. With this structure, cavities and holes are characterized by a mathematical formulation called the compactness of a point, formed by the length variation of the edges incident to the point in the Delaunay triangulation. The boundaries of cavities and holes are then found by locating a gradient change in the compactness of the point set. Experimental comparison with other shape reconstruction approaches shows that the proposed algorithm accurately yields the boundaries of cavities and holes for varying point set densities and distributions.
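    One plausible reading of the point-compactness measure is sketched below: for each point, the variation of the lengths of its incident Delaunay edges is summarized by a coefficient of variation, so points adjacent to a hole or cavity (where short in-cluster edges meet long edges spanning the empty region) score high. The exact formula in the paper may differ; this is an illustration only.

```python
import numpy as np
from scipy.spatial import Delaunay

def point_compactness(points):
    """Coefficient of variation of the Delaunay edge lengths incident to
    each point, as one illustrative length-variation measure."""
    tri = Delaunay(points)
    incident = {i: set() for i in range(len(points))}
    for s in tri.simplices:
        for i in range(3):
            a, b = s[i], s[(i + 1) % 3]
            incident[a].add(b)
            incident[b].add(a)
    comp = np.zeros(len(points))
    for i, nbrs in incident.items():
        lengths = np.array([np.linalg.norm(points[i] - points[j]) for j in nbrs])
        comp[i] = lengths.std() / lengths.mean()
    return comp

rng = np.random.default_rng(1)
pts = rng.random((200, 2))
pts = pts[np.linalg.norm(pts - 0.5, axis=1) > 0.2]   # punch a hole in the set
c = point_compactness(pts)
print("high-variation (boundary-like) points:", np.sum(c > c.mean() + c.std()))
```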

  5. Euclidean Dynamical Triangulation revisited: is the phase transition really 1st order?

    International Nuclear Information System (INIS)

    Rindlisbacher, Tobias; Forcrand, Philippe de

    2015-01-01

    The transition between the two phases of 4D Euclidean Dynamical Triangulation (http://dx.doi.org/10.1016/0370-2693(92)90709-D) was long believed to be of second order until in 1996 first order behavior was found for sufficiently large systems (http://dx.doi.org/10.1016/0550-3213(96)00214-3, http://dx.doi.org/10.1016/S0370-2693(96)01277-4). However, one may wonder if this finding was affected by the numerical methods used: to control volume fluctuations, in both studies (http://dx.doi.org/10.1016/0550-3213(96)00214-3, http://dx.doi.org/10.1016/S0370-2693(96)01277-4) an artificial harmonic potential was added to the action and in (http://dx.doi.org/10.1016/S0370-2693(96)01277-4) measurements were taken after a fixed number of accepted instead of attempted moves which introduces an additional error. Finally the simulations suffer from strong critical slowing down which may have been underestimated. In the present work, we address the above weaknesses: we allow the volume to fluctuate freely within a fixed interval; we take measurements after a fixed number of attempted moves; and we overcome critical slowing down by using an optimized parallel tempering algorithm (http://dx.doi.org/10.1088/1742-5468/2010/01/P01020). With these improved methods, on systems of size up to N_4=64k 4-simplices, we confirm that the phase transition is 1st order. In addition, we discuss a local criterion to decide whether parts of a triangulation are in the elongated or crumpled state and describe a new correspondence between EDT and the balls in boxes model. The latter gives rise to a modified partition function with an additional, third coupling. Finally, we propose and motivate a class of modified path-integral measures that might remove the metastability of the Markov chain and turn the phase transition into 2nd order.

  6. The Family System and Depressive Symptoms during the College Years: Triangulation, Parental Differential Treatment, and Sibling Warmth as Predictors.

    Science.gov (United States)

    Ponappa, Sujata; Bartle-Haring, Suzanne; Holowacz, Eugene; Ferriby, Megan

    2017-01-01

    Guided by Bowen theory, we investigated the relationships between parent-child triangulation, parental differential treatment (PDT), sibling warmth, and individual depressive symptoms in a sample of 77 sibling dyads, aged 18-25 years, recruited through undergraduate classes at a U.S. public University. Results of the actor-partner interdependence models suggested that being triangulated into parental conflict was positively related to both siblings' perception of PDT; however, as one sibling felt triangulated, the other perceived reduced levels of PDT. For both siblings, the perception of higher levels of PDT was related to decreased sibling warmth and higher sibling warmth was associated with fewer depressive symptoms. The implications of these findings for research and the treatment of depression in the college-aged population are discussed. © 2016 American Association for Marriage and Family Therapy.

  7. HIERARCHICAL REGULARIZATION OF POLYGONS FOR PHOTOGRAMMETRIC POINT CLOUDS OF OBLIQUE IMAGES

    Directory of Open Access Journals (Sweden)

    L. Xie

    2017-05-01

    Full Text Available Despite the success of multi-view stereo (MVS) reconstruction from massive oblique images at city scale, only point clouds and triangulated meshes are available from existing MVS pipelines; these are topologically defect-laden, free of semantic information and hard to edit and manipulate interactively in further applications. On the other hand, 2D polygons and polygonal models are still the industrial standard. However, extraction of 2D polygons from MVS point clouds is still a non-trivial task, given that the boundaries of the detected planes are zigzagged and regularities, such as parallelism and orthogonality, are not preserved. Aiming to solve these issues, this paper proposes a hierarchical polygon regularization method for photogrammetric point clouds from existing MVS pipelines, comprising a local and a global level. After boundary-point extraction, e.g. using alpha shapes, the local level consolidates the original points by refining their orientation and position using linear priors. The points are then grouped into local segments by forward searching. In the global level, regularities are enforced through a labeling process which encourages segments to share the same label, where a shared label indicates that the segments are parallel or orthogonal. This is formulated as a Markov Random Field and solved efficiently. Preliminary results are obtained with point clouds from aerial oblique images and compared with two classical regularization methods, revealing that the proposed method is more powerful in abstracting a single building and is promising for further 3D polygonal model reconstruction and GIS applications.
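    As a crude stand-in for the global labeling stage, the sketch below snaps boundary-segment orientations to a dominant direction so that every segment ends up parallel or orthogonal to it; the actual method solves a Markov Random Field over segment labels, so this is only a toy illustration.

```python
import numpy as np

def regularize_directions(angles):
    """Snap segment orientations (radians) to a dominant building direction
    so that all segments end up parallel or orthogonal to it."""
    a = np.asarray(angles) % (np.pi / 2)
    # Dominant direction as a circular mean over the pi/2-periodic angles.
    theta = np.arctan2(np.sin(4 * a).mean(), np.cos(4 * a).mean()) / 4.0
    theta %= np.pi / 2
    # Each segment keeps its nearest multiple of 90 degrees relative to theta.
    k = np.round((np.asarray(angles) - theta) / (np.pi / 2))
    return theta + k * np.pi / 2

segs = np.deg2rad([1.0, 88.0, -2.0, 91.5, 179.0])
print(np.rad2deg(regularize_directions(segs)))   # all parallel/orthogonal
```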

  8. Theoretical triangulation as an approach for revealing the complexity of a classroom discussion

    NARCIS (Netherlands)

    van Drie, J.; Dekker, R.

    2013-01-01

    In this paper we explore the value of theoretical triangulation as a methodological approach for the analysis of classroom interaction. We analyze an excerpt of a whole-class discussion in history from three theoretical perspectives: interactivity of the discourse, conceptual level raising and

  9. Pilot points method for conditioning multiple-point statistical facies simulation on flow data

    Science.gov (United States)

    Ma, Wei; Jafarpour, Behnam

    2018-05-01

    We propose a new pilot points method for conditioning discrete multiple-point statistical (MPS) facies simulation on dynamic flow data. While conditioning MPS simulation on static hard data is straightforward, calibration against nonlinear flow data is nontrivial. The proposed method generates conditional models from a conceptual model of geologic connectivity, known as a training image (TI), by strategically placing and estimating pilot points. To place pilot points, a score map is generated based on three sources of information: (i) the uncertainty in facies distribution, (ii) the model response sensitivity information, and (iii) the observed flow data. Once the pilot points are placed, the facies values at these points are inferred from production data and then are used, along with available hard data at well locations, to simulate a new set of conditional facies realizations. While facies estimation at the pilot points can be performed using different inversion algorithms, in this study the ensemble smoother (ES) is adopted to update permeability maps from production data, which are then used to statistically infer facies types at the pilot point locations. The developed method combines the information in the flow data and the TI by using the former to infer facies values at selected locations away from the wells and the latter to ensure consistent facies structure and connectivity away from measurement locations. Several numerical experiments are used to evaluate the performance of the developed method and to discuss its important properties.
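    For the inversion step, a textbook ensemble smoother update can be sketched as follows: the parameter ensemble is shifted by a Kalman-type gain built from ensemble cross-covariances. This is a generic ES step under linear-Gaussian assumptions, not the paper's full pilot-point workflow; all names and dimensions are hypothetical.

```python
import numpy as np

def ensemble_smoother_update(M, D, d_obs, obs_err_std):
    """One ensemble-smoother update: shift the parameter ensemble M using the
    cross-covariance with simulated data D (columns = ensemble members)."""
    ne = M.shape[1]
    A = M - M.mean(axis=1, keepdims=True)
    Y = D - D.mean(axis=1, keepdims=True)
    C_md = A @ Y.T / (ne - 1)
    C_dd = Y @ Y.T / (ne - 1)
    R = obs_err_std**2 * np.eye(len(d_obs))
    # Perturb the observations per member, as is standard for the ES.
    d_pert = d_obs[:, None] + obs_err_std * np.random.default_rng(0).standard_normal(D.shape)
    K = C_md @ np.linalg.inv(C_dd + R)
    return M + K @ (d_pert - D)

# Tiny synthetic example: 5 parameters, 3 observations, 50 members.
rng = np.random.default_rng(1)
M = rng.standard_normal((5, 50))
G = rng.standard_normal((3, 5))       # linear forward model
D = G @ M
d_obs = G @ rng.standard_normal(5)
print(ensemble_smoother_update(M, D, d_obs, 0.1).shape)   # (5, 50)
```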

  10. Location of the axis of underground mines by preliminary terrain plotting

    Energy Technology Data Exchange (ETDEWEB)

    Petrov, M

    1979-11-01

    Describes a method for locating underground mines by limited-range surveys. The method can be used if both the entrance and exit of an underground mine can be observed from higher ground, either from one point or from two visually connected points. The method combines open traverse and working-on-to-line procedures; transit and optical range finders are used to establish the direction and length of the mine; these data are then integrated into the official triangulation network by measuring angles and distances to the nearest triangulation points. The method is advantageous in that it eliminates the paraphernalia of the standard triangulation method, reduces the survey time to 15-20 days, saves labor and supplies, and enables visual control of operations. (In Bulgarian)

  11. Quantum triangulations moduli space, quantum computing, non-linear sigma models and Ricci flow

    CERN Document Server

    Carfora, Mauro

    2017-01-01

    This book discusses key conceptual aspects and explores the connection between triangulated manifolds and quantum physics, using a set of case studies ranging from moduli space theory to quantum computing to provide an accessible introduction to this topic. Research on polyhedral manifolds often reveals unexpected connections between very distinct aspects of mathematics and physics. In particular, triangulated manifolds play an important role in settings such as Riemann moduli space theory, strings and quantum gravity, topological quantum field theory, condensed matter physics, critical phenomena and complex systems. Not only do they provide a natural discrete analogue to the smooth manifolds on which physical theories are typically formulated, but their appearance is also often a consequence of an underlying structure that naturally calls into play non-trivial aspects of representation theory, complex analysis and topology in a way that makes the basic geometric structures of the physical interactions involv...

  12. Interior-Point Methods for Linear Programming: A Review

    Science.gov (United States)

    Singh, J. N.; Singh, D.

    2002-01-01

    The paper reviews some recent advances in interior-point methods for linear programming and indicates directions in which future progress can be made. Most interior-point methods belong to one of three categories: affine-scaling methods, potential reduction methods and central path methods. These methods are discussed together with…

  13. A volume-based method for denoising on curved surfaces

    KAUST Repository

    Biddle, Harry

    2013-09-01

    We demonstrate a method for removing noise from images or other data on curved surfaces. Our approach relies on in-surface diffusion: we formulate both the Gaussian diffusion and Perona-Malik edge-preserving diffusion equations in a surface-intrinsic way. Using the Closest Point Method, a recent technique for solving partial differential equations (PDEs) on general surfaces, we obtain a very simple algorithm where we merely alternate a time step of the usual Gaussian diffusion (and similarly Perona-Malik) in a small 3D volume containing the surface with an interpolation step. The method uses a closest point function to represent the underlying surface and can treat very general surfaces. Experimental results include image filtering on smooth surfaces, open surfaces, and general triangulated surfaces. © 2013 IEEE.
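    A minimal sketch of the alternation the authors describe, for the toy case of noisy data on the unit circle embedded in a 2D grid: each iteration takes one explicit heat step in the embedding space and then re-extends the field by sampling it at each node's closest point on the circle. Grid sizes and step counts are arbitrary, and a production implementation would use the banded grids and interpolation stencils of the Closest Point Method literature; this is not the paper's code.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

n = 121
x = np.linspace(-2.0, 2.0, n)
h = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")
theta = np.arctan2(Y, X)
cp = np.stack([np.cos(theta), np.sin(theta)], axis=-1)    # closest points on the circle

rng = np.random.default_rng(0)
u = np.sin(3 * theta) + rng.normal(0.0, 0.3, (n, n))      # noisy, normal-extended data

dt = 0.1 * h**2
for _ in range(100):
    # One explicit 5-point heat step in the embedding space.
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
           np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u) / h**2
    u = u + dt * lap
    # Re-extension: sample the diffused field at the closest points so the
    # data stays constant along normals to the circle.
    interp = RegularGridInterpolator((x, x), u)
    u = interp(cp.reshape(-1, 2)).reshape(n, n)

# Read off the denoised signal on the circle itself.
t = np.linspace(-np.pi, np.pi, 16)
vals = RegularGridInterpolator((x, x), u)(np.stack([np.cos(t), np.sin(t)], -1))
# Difference from the clean signal: residual noise plus a small
# diffusion-induced amplitude loss.
print(np.round(vals - np.sin(3 * t), 2))
```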

  15. Selective Integration in the Material-Point Method

    DEFF Research Database (Denmark)

    Andersen, Lars; Andersen, Søren; Damkilde, Lars

    2009-01-01

    The paper deals with stress integration in the material-point method. In order to avoid parasitic shear in bending, a formulation is proposed, based on selective integration in the background grid that is used to solve the governing equations. The suggested integration scheme is compared to a traditional material-point-method computation in which the stresses are evaluated at the material points. The deformation of a cantilever beam is analysed, assuming elastic or elastoplastic material behaviour.

  16. Improved laser-based triangulation sensor with enhanced range and resolution through adaptive optics-based active beam control.

    Science.gov (United States)

    Reza, Syed Azer; Khwaja, Tariq Shamim; Mazhar, Mohsin Ali; Niazi, Haris Khan; Nawab, Rahma

    2017-07-20

    Various existing target ranging techniques are limited in terms of the dynamic range of operation and measurement resolution. These limitations arise as a result of a particular measurement methodology, the finite processing capability of the hardware components deployed within the sensor module, and the medium through which the target is viewed. Generally, improving the sensor range adversely affects its resolution and vice versa. Often, a distance sensor is designed for an optimal range/resolution setting depending on its intended application. Optical triangulation is broadly classified as a spatial-signal-processing-based ranging technique and measures target distance from the location of the reflected spot on a position sensitive detector (PSD). In most triangulation sensors that use lasers as a light source, beam divergence, which severely affects the sensor measurement range, is often ignored in calculations. In this paper, we first discuss in detail the limitations to ranging imposed by beam divergence, which, in effect, sets the sensor dynamic range. Next, we show how the resolution of laser-based triangulation sensors is limited by the inter-pixel pitch of a finite-sized PSD. Through the use of tunable focus lenses (TFLs), we then propose a novel design of a triangulation-based optical rangefinder that improves both the sensor resolution and its dynamic range through adaptive electronic control of the beam propagation parameters. We present the theory and operation of the proposed sensor and clearly demonstrate a range and resolution improvement with the use of TFLs. Experimental results in support of our claims are shown to be in strong agreement with theory.
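
    The range/resolution trade-off discussed above can be made concrete with the textbook triangulation relation z = f·b/d (thin-lens model with focal length f, baseline b and spot displacement d on the PSD). The numbers below are invented for illustration and are not the paper's sensor parameters:

    ```python
    f = 0.025          # imaging lens focal length [m] (assumed)
    b = 0.10           # laser-to-PSD baseline [m] (assumed)
    pitch = 10e-6      # inter-pixel pitch of the PSD [m] (assumed)

    def depth(d):
        """Target distance z = f*b/d from the spot displacement d on the PSD."""
        return f * b / d

    def one_pixel_depth_step(z):
        """Range change that moves the spot by one pixel at distance z:
        since d = f*b/z, |dz| = z**2 * pitch / (f*b)."""
        return z**2 * pitch / (f * b)

    print(f"spot displacement 2 mm -> z = {depth(0.002):.2f} m")
    for z in (0.5, 1.0, 2.0, 5.0):
        print(f"z = {z:4.1f} m -> one-pixel depth step ~ "
              f"{one_pixel_depth_step(z) * 1e3:.2f} mm")
    ```

    The one-pixel depth step grows quadratically with range, which is why a fixed optical configuration must trade resolution for dynamic range.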

  17. Visualization of 2-D and 3-D fields from its value in a finite number of points

    International Nuclear Information System (INIS)

    Dari, E.A.; Venere, M.J.

    1990-01-01

    This work describes a method for the visualization of two- and three-dimensional fields, given their values at a finite number of points. These data may originate from experimental measurements, numerical results, or any other source. For the field interpolation, the space is divided into simplices (triangles or tetrahedra), using the Watson algorithm to obtain the Delaunay triangulation. Inside each simplex, linear interpolation is assumed. The visualization is accomplished by means of Finite Elements post-processors, capable of handling unstructured meshes, which were also developed by the authors. (Author) [es]
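
    The same pipeline, Delaunay triangulation of scattered samples followed by linear interpolation inside each simplex, can be sketched in a few lines with SciPy's built-in Delaunay code standing in for the Watson algorithm (the data here are synthetic):

    ```python
    import numpy as np
    from scipy.spatial import Delaunay
    from scipy.interpolate import LinearNDInterpolator

    rng = np.random.default_rng(0)
    pts = rng.random((500, 2))                 # scattered measurement locations
    vals = np.sin(6 * pts[:, 0]) * pts[:, 1]   # field value at each point

    tri = Delaunay(pts)                        # simplicial decomposition of space
    interp = LinearNDInterpolator(tri, vals)   # linear interpolation per triangle

    # Evaluate on a regular grid, e.g. for a post-processor or a contour plot
    gx, gy = np.meshgrid(np.linspace(0, 1, 200), np.linspace(0, 1, 200))
    field = interp(gx, gy)                     # NaN outside the convex hull
    ```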

  18. A Parallel Non-Overlapping Domain-Decomposition Algorithm for Compressible Fluid Flow Problems on Triangulated Domains

    Science.gov (United States)

    Barth, Timothy J.; Chan, Tony F.; Tang, Wei-Pai

    1998-01-01

    This paper considers an algebraic preconditioning algorithm for hyperbolic-elliptic fluid flow problems. The algorithm is based on a parallel non-overlapping Schur complement domain-decomposition technique for triangulated domains. In the Schur complement technique, the triangulation is first partitioned into a number of non-overlapping subdomains and interfaces. This suggests a reordering of triangulation vertices which separates subdomain and interface solution unknowns. The reordering induces a natural 2 x 2 block partitioning of the discretization matrix. Exact LU factorization of this block system yields a Schur complement matrix which couples subdomains and the interface together. The remaining sections of this paper present a family of approximate techniques for both constructing and applying the Schur complement as a domain-decomposition preconditioner. The approximate Schur complement serves as an algebraic coarse space operator, thus avoiding the known difficulties associated with the direct formation of a coarse space discretization. In developing Schur complement approximations, particular attention has been given to improving sequential and parallel efficiency of implementations without significantly degrading the quality of the preconditioner. A computer code based on these developments has been tested on the IBM SP2 using MPI message passing protocol. A number of 2-D calculations are presented for both scalar advection-diffusion equations as well as the Euler equations governing compressible fluid flow to demonstrate performance of the preconditioning algorithm.
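
    The block elimination at the heart of the Schur complement technique can be illustrated with a small dense system (sizes and entries are invented; a real implementation keeps the blocks sparse and only approximates S):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    ns, ni = 8, 3                               # subdomain and interface unknowns
    A = rng.random((ns + ni, ns + ni)) + (ns + ni) * np.eye(ns + ni)

    A_ss = A[:ns, :ns]                          # subdomain-subdomain block
    A_si = A[:ns, ns:]                          # subdomain-interface coupling
    A_is = A[ns:, :ns]
    A_ii = A[ns:, ns:]

    # Schur complement coupling the subdomains through the interface
    S = A_ii - A_is @ np.linalg.solve(A_ss, A_si)

    # Exact block-LU solve of A x = b using S
    b = rng.random(ns + ni)
    y = np.linalg.solve(A_ss, b[:ns])
    x_i = np.linalg.solve(S, b[ns:] - A_is @ y)        # interface unknowns
    x_s = np.linalg.solve(A_ss, b[:ns] - A_si @ x_i)   # subdomain unknowns
    x = np.concatenate([x_s, x_i])
    assert np.allclose(A @ x, b)
    ```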

  19. DETECTION OF SLOPE MOVEMENT BY COMPARING POINT CLOUDS CREATED BY SFM SOFTWARE

    Directory of Open Access Journals (Sweden)

    K. Oda

    2016-06-01

    This paper proposes a movement detection method for point clouds created by SfM software, without setting any on-site georeferenced points. SfM software, like Smart3DCapture, PhotoScan, and Pix4D, is convenient for non-professional operators of photogrammetry, because these systems simply require a sequence of photos and output point clouds with a colour index corresponding to the colour of the original image pixel where each point is projected. SfM software can execute aerial triangulation and create dense point clouds fully automatically. This is useful when monitoring the motion of unstable slopes, or of loose rocks on slopes along roads or railroads. Most existing methods, however, use mesh-based DSMs for comparing point clouds before/after movement, and they cannot be applied in cases where part of the slope forms overhangs. In some cases the movement is also smaller than the precision of the ground control points, and registering two point clouds with GCPs is not appropriate. The change detection method in this paper adopts the CCICP (Classification and Combined ICP) algorithm for registering the point clouds before/after movement. The CCICP algorithm is a type of ICP (Iterative Closest Point) which minimizes point-to-plane and point-to-point distances simultaneously, and also rejects incorrect correspondences based on point classification by PCA (Principal Component Analysis). Precision tests show that the CCICP method can register two point clouds to within the order of one pixel of the original images. Ground control points set on site are useful for the initial alignment of the two point clouds. If there are no GCPs on the site of the slopes, the initial alignment is achieved by measuring feature points as ground control points in the point cloud before movement, and creating the point cloud after movement with these ground control points. When the motion is a rigid transformation, as in the case of a loose rock moving in a slope, motion including rotation can be analysed by executing CCICP for a
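
    For orientation, here is a bare-bones point-to-point ICP with SVD-based pose estimation; the CCICP algorithm of the paper adds PCA-based point classification and a point-to-plane term on top of this basic loop:

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def icp(src, dst, iters=30):
        """Rigidly align src (N,3) to dst (M,3); returns rotation R and translation t."""
        R, t = np.eye(3), np.zeros(3)
        cur = src.copy()
        tree = cKDTree(dst)
        for _ in range(iters):
            _, idx = tree.query(cur)            # closest-point correspondences
            matched = dst[idx]
            # Kabsch step: best rotation between the centered point sets
            mu_s, mu_d = cur.mean(axis=0), matched.mean(axis=0)
            H = (cur - mu_s).T @ (matched - mu_d)
            U, _, Vt = np.linalg.svd(H)
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
            R_step = Vt.T @ D @ U.T             # D guards against reflections
            t_step = mu_d - R_step @ mu_s
            cur = cur @ R_step.T + t_step
            R, t = R_step @ R, R_step @ t + t_step   # accumulate the pose
        return R, t
    ```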

  20. Method Points: towards a metric for method complexity

    Directory of Open Access Journals (Sweden)

    Graham McLeod

    1998-11-01

    A metric for method complexity is proposed as an aid to choosing between competing methods, as well as to validating the effects of method integration or the products of method engineering work. It is based upon a generic method representation model previously developed by the author and an adaptation of concepts used in the popular Function Point metric for system size. The proposed technique is illustrated by comparing two popular I.E. deliverables with counterparts in the object-oriented Unified Modeling Language (UML). The paper recommends ways to improve the practical adoption of new methods.

  1. An Array of Qualitative Data Analysis Tools: A Call for Data Analysis Triangulation

    Science.gov (United States)

    Leech, Nancy L.; Onwuegbuzie, Anthony J.

    2007-01-01

    One of the most important steps in the qualitative research process is analysis of data. The purpose of this article is to provide elements for understanding multiple types of qualitative data analysis techniques available and the importance of utilizing more than one type of analysis, thus utilizing data analysis triangulation, in order to…

  2. Ising model of a randomly triangulated random surface as a definition of fermionic string theory

    International Nuclear Information System (INIS)

    Bershadsky, M.A.; Migdal, A.A.

    1986-01-01

    Fermionic degrees of freedom are added to randomly triangulated planar random surfaces. It is shown that the Ising model on a fixed graph is equivalent to a certain Majorana fermion theory on the dual graph. (orig.)

  3. Parametric methods for spatial point processes

    DEFF Research Database (Denmark)

    Møller, Jesper

    (This text is submitted for the volume 'A Handbook of Spatial Statistics', edited by A.E. Gelfand, P. Diggle, M. Fuentes, and P. Guttorp, to be published by Chapman and Hall/CRC Press, and planned to appear as Chapter 4.4 with the title 'Parametric methods'.) This chapter considers inference procedures for parametric spatial point process models. The widespread use of sensible but ad hoc methods based on summary statistics of the kind studied in Chapter 4.3 has over the last two decades been supplemented by likelihood-based methods for parametric spatial point process models. Likelihood-based inference is studied in Section 4, and Bayesian inference in Section 5. As the development in computer technology and computational statistics continues, computationally-intensive simulation-based methods for likelihood inference will probably play an increasing role in the statistical analysis of spatial point processes.

  4. Technique Triangulation for Validation in Directed Content Analysis

    Directory of Open Access Journals (Sweden)

    Áine M. Humble PhD

    2009-09-01

    Division of labor in wedding planning varies for first-time marriages, with three types of couples—traditional, transitional, and egalitarian—identified, but nothing is known about wedding planning for remarrying individuals. Using semistructured interviews, the author interviewed 14 couples in which at least one person had remarried and used directed content analysis to investigate the extent to which the aforementioned typology could be transferred to this different context. In this paper she describes how a triangulation of analytic techniques provided validation for couple classifications and also helped with moving beyond “blind spots” in data analysis. Analytic approaches were the constant comparative technique, rank order comparison, and visual representation of coding, using MAXQDA 2007's tool called TextPortraits.

  5. A constrained Delaunay discretization method for adaptively meshing highly discontinuous geological media

    Science.gov (United States)

    Wang, Yang; Ma, Guowei; Ren, Feng; Li, Tuo

    2017-12-01

    A constrained Delaunay discretization method is developed to generate high-quality doubly adaptive meshes of highly discontinuous geological media. Complex features such as three-dimensional discrete fracture networks (DFNs), tunnels, shafts, slopes, boreholes, water curtains, and drainage systems are taken into account in the mesh generation. The constrained Delaunay triangulation method is used to create adaptive triangular elements on planar fractures. Persson's algorithm (Persson, 2005), based on an analogy between triangular elements and spring networks, is enriched to automatically discretize a planar fracture into mesh points with varying density and a smooth quality gradient. The triangulated planar fractures are treated as planar straight-line graphs (PSLGs) to construct a piecewise-linear complex (PLC) for constrained Delaunay tetrahedralization. This guarantees the doubly adaptive characteristic of the resulting mesh: the mesh is adaptive not only along fractures but also in space. The quality of the elements is compared with the results from an existing method. It is verified that the present method can generate smoother elements and a better distribution of element aspect ratios. Two numerical simulations are implemented to demonstrate that the present method can be applied to various simulations of complex geological media that contain a large number of discontinuities.
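
    A toy version of the spring analogy borrowed from Persson's algorithm, with a uniform target edge length (the paper's density control and smooth quality gradient are omitted, and all constants below are arbitrary):

    ```python
    import numpy as np
    from scipy.spatial import Delaunay

    rng = np.random.default_rng(2)
    pts = rng.random((200, 2))                  # points on a unit-square "fracture"

    for _ in range(50):
        tri = Delaunay(pts)                     # re-triangulate after each move
        # Unique edges of the current triangulation
        e = np.vstack([tri.simplices[:, [0, 1]],
                       tri.simplices[:, [1, 2]],
                       tri.simplices[:, [0, 2]]])
        e = np.unique(np.sort(e, axis=1), axis=0)
        vec = pts[e[:, 0]] - pts[e[:, 1]]
        L = np.linalg.norm(vec, axis=1)
        L0 = 1.2 * np.sqrt(2.0 / len(pts))      # slightly oversized rest length
        f = np.maximum(L0 - L, 0.0) / L         # repulsive-only spring force
        force = np.zeros_like(pts)
        np.add.at(force, e[:, 0],  f[:, None] * vec)
        np.add.at(force, e[:, 1], -f[:, None] * vec)
        pts += 0.2 * force                      # damped explicit update
        np.clip(pts, 0.0, 1.0, out=pts)         # keep points inside the domain
    ```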

  6. Fixed-point data-collection method of video signal

    International Nuclear Information System (INIS)

    Tang Yu; Yin Zejie; Qian Weiming; Wu Xiaoyi

    1997-01-01

    The author describes a fixed-point data-collection method for video signals. The method provides the idea of fixed-point data collection, and has been successfully applied in research on real-time radiography of dose fields, a project supported by the National Science Fund.

  7. A new comparison method for dew-point generators

    Science.gov (United States)

    Heinonen, Martti

    1999-12-01

    A new method for comparing dew-point generators was developed at the Centre for Metrology and Accreditation. In this method, the generators participating in a comparison are compared with a transportable saturator unit using a dew-point comparator. The method was tested by constructing a test apparatus and by comparing it with the MIKES primary dew-point generator several times in the dew-point temperature range from -40 to +75 °C. The expanded uncertainty (k = 2) of the apparatus was estimated to be between 0.05 and 0.07 °C and the difference between the comparator system and the generator is well within these limits. In particular, all of the results obtained in the range below 0 °C are within ±0.03 °C. It is concluded that a new type of a transfer standard with characteristics most suitable for dew-point comparisons can be developed on the basis of the principles presented in this paper.

  8. Analysis of relationship between registration performance of point cloud statistical model and generation method of corresponding points

    International Nuclear Information System (INIS)

    Yamaoka, Naoto; Watanabe, Wataru; Hontani, Hidekata

    2010-01-01

    When constructing a statistical point cloud model, corresponding points must usually be calculated, and the constructed statistical model will differ depending on the method used to calculate them. This article examines the effect on statistical models of human organs of different methods for calculating corresponding points. We validated the performance of each statistical model by registering the surface of an organ in a 3D medical image. We compare two methods for calculating corresponding points. The first, 'Generalized Multi-Dimensional Scaling (GMDS)', determines the corresponding points from the shapes of two curved surfaces. The second, the 'Entropy-based Particle system', chooses corresponding points by statistically evaluating a number of curved surfaces. With these methods we construct the statistical models, and using these models we conducted registration with the medical image. For the estimation we use non-parametric belief propagation, which estimates not only the position of the organ but also the probability density of the organ position. We evaluate how the two methods for calculating corresponding points affect the statistical model through the change in probability density at each point. (author)

  9. Analysis of Stress Updates in the Material-point Method

    DEFF Research Database (Denmark)

    Andersen, Søren; Andersen, Lars

    2009-01-01

    The material-point method (MPM) is a new numerical method for the analysis of large-strain engineering problems. The MPM applies a dual formulation, where the state of the problem (mass, stress, strain, velocity etc.) is tracked using a finite set of material points while the governing equations are solved on a background computational grid. Several references state that one of the main advantages of the material-point method is the easy application of complicated material behaviour, as the constitutive response is updated individually for each material point. However, as discussed here, the MPM way...

  10. THE GROWTH POINTS OF STATISTICAL METHODS

    OpenAIRE

    Orlov A. I.

    2014-01-01

    On the basis of a new paradigm of applied mathematical statistics, data analysis and economic-mathematical methods, five topical areas in which modern applied statistics is developing are identified and discussed, i.e. five "growth points": nonparametric statistics, robustness, computer-statistical methods, statistics of interval data, and statistics of non-numeric data

  11. The Marginalized "Model" Minority: An Empirical Examination of the Racial Triangulation of Asian Americans

    Science.gov (United States)

    Xu, Jun; Lee, Jennifer C.

    2013-01-01

    In this article, we propose a shift in race research from a one-dimensional hierarchical approach to a multidimensional system of racial stratification. Building upon Claire Kim's (1999) racial triangulation theory, we examine how the American public rates Asians relative to blacks and whites along two dimensions of racial stratification: racial…

  12. First Instances of Generalized Expo-Rational Finite Elements on Triangulations

    Science.gov (United States)

    Dechevsky, Lubomir T.; Zanaty, Peter; Lakså, Arne; Bang, Børre

    2011-12-01

    In this communication we consider a construction of simplicial finite elements on triangulated two-dimensional polygonal domains. This construction is, in some sense, dual to the construction of generalized expo-rational B-splines (GERBS). The main result is in the obtaining of new polynomial simplicial patches of the first several lowest possible total polynomial degrees which exhibit Hermite interpolatory properties. The derivation of these results is based on the theory of piecewise polynomial GERBS called Euler Beta-function B-splines. We also provide 3-dimensional visualization of the graphs of the new polynomial simplicial patches and their control polygons.

  13. A graph-based method for fitting planar B-spline curves with intersections

    Directory of Open Access Journals (Sweden)

    Pengbo Bo

    2016-01-01

    The problem of fitting B-spline curves to planar point clouds is studied in this paper. A novel method is proposed to deal with the most challenging case, where multiple intersecting curves or curves with self-intersection are necessary for shape representation. A method based on Delaunay triangulation of the data points is developed to identify connected components, which is also capable of removing outliers. A skeleton representation is utilized to capture the topological structure, which is further used to create a weighted graph for deciding the merging of curve segments. Different from existing approaches, which utilize local shape information near intersections, our method considers the shape characteristics of curve segments in a larger scope and is thus capable of giving more satisfactory results. By fitting each group of data points with a B-spline curve, we solve the problem of curve structure reconstruction from point clouds, as well as the vectorization of simple line-drawing images by reconstruction of the drawn lines.
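
    Once the graph analysis has grouped and ordered the points of one curve segment, the final fit reduces to a standard smoothing B-spline, e.g. with SciPy (synthetic, self-intersecting figure-eight data; the smoothing factor is an arbitrary choice):

    ```python
    import numpy as np
    from scipy.interpolate import splprep, splev

    # Hypothetical ordered points along one recovered curve segment; the
    # figure-eight self-intersects in the plane but is simple in parameter space
    rng = np.random.default_rng(3)
    t = np.linspace(0.0, 2.0 * np.pi, 80)
    x = np.cos(t) + 0.01 * rng.standard_normal(80)
    y = np.sin(2.0 * t) + 0.01 * rng.standard_normal(80)

    # Cubic smoothing B-spline through the ordered points
    tck, u = splprep([x, y], s=0.05, k=3)
    xs, ys = splev(np.linspace(0.0, 1.0, 400), tck)   # dense samples on the fit
    ```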

  14. Automatic 3D Building Detection and Modeling from Airborne LiDAR Point Clouds

    Science.gov (United States)

    Sun, Shaohui

    Urban reconstruction, with an emphasis on man-made structure modeling, is an active research area with broad impact on several potential applications. It combines photogrammetry, remote sensing, computer vision, and computer graphics. Even though a huge volume of work has been done, many problems remain unsolved, and automation is one of the key focus areas in this research. In this work, a fast, completely automated method to create 3D watertight building models from airborne LiDAR (Light Detection and Ranging) point clouds is presented. The developed method analyzes the scene content and produces multi-layer rooftops, with complex rigorous boundaries and vertical walls that connect rooftops to the ground. The graph cuts algorithm is used to separate vegetative elements from the rest of the scene content, based on local analysis of the properties of the implicit surface patch. The ground terrain and building rooftop footprints are then extracted using the developed strategy, a two-step hierarchical Euclidean clustering. The method adopts a "divide-and-conquer" scheme: once the building footprints are segmented from the terrain and vegetative areas, the whole scene is divided into individual independent processing units which represent potential points on the rooftop. For each individual building region, significant features on the rooftop are further detected using a specifically designed region-growing algorithm with surface smoothness constraints. The principal orientation of each building rooftop feature is calculated using a minimum bounding box fitting technique and is used to guide the refinement of the shapes and boundaries of the rooftop parts. Boundaries for all of these features are refined for the purpose of producing a strict description. Once the description of the rooftops is achieved, polygonal mesh models are generated by creating surface patches with outlines defined by detected

  15. Making the Most of Obesity Research: Developing Research and Policy Objectives through Evidence Triangulation

    Science.gov (United States)

    Oliver, Kathryn; Aicken, Catherine; Arai, Lisa

    2013-01-01

    Drawing lessons from research can help policy makers make better decisions. If a large and methodologically varied body of research exists, as with childhood obesity, this is challenging. We present new research and policy objectives for child obesity developed by triangulating user involvement data with a mapping study of interventions aimed at…

  16. Triangulation-based edge measurement using polyview optics

    Science.gov (United States)

    Li, Yinan; Kästner, Markus; Reithmeier, Eduard

    2018-04-01

    Laser triangulation sensors are widely used in industry and research as non-contact measurement devices for profile measurements and quantitative inspections. Some technical applications, e.g. edge measurements, usually require either a configuration of a single sensor plus a translation stage or a configuration of multiple sensors, so that a measurement range beyond the scope of a single sensor can be covered. However, the cost of both configurations is high, owing to the additional rotational axis or the additional sensor. This paper presents a special measurement system for strongly curved surfaces based on a single-sensor configuration. Utilizing self-designed polyview optics and a calibration process, the proposed measurement system achieves a field of view (FOV) of over 180° with high measurement accuracy as well as low cost. The capability of this measurement system is discussed in detail on the basis of experimental data.

  17. Adjustment technique without explicit formation of normal equations /conjugate gradient method/

    Science.gov (United States)

    Saxena, N. K.

    1974-01-01

    For the simultaneous adjustment of a large geodetic triangulation system, a semi-iterative technique known as the conjugate gradient (CG) method is modified and used successfully. In this technique the original observation equations are used directly, so the explicit formation of normal equations is avoided, saving 'huge' computer storage space in the case of triangulation systems. The method is suitable even for very poorly conditioned systems, where a solution is obtained only after more iterations. A detailed study of the CG method for application to large geodetic triangulation systems was carried out, also considering constraint equations together with observation equations. It was programmed and tested on systems ranging from two unknowns and three equations up to 804 unknowns and 1397 equations. When real data (573 unknowns, 965 equations) from a 1858-km-long triangulation system were used, a solution vector accurate to four decimal places was obtained in 2.96 min after 1171 iterations (i.e., 2.0 times the number of unknowns).
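
    The idea of running CG on the observation equations without ever forming the normal equations survives today as the CGLS algorithm; a compact sketch follows (the dimensions in the demo merely echo the 1397-equation test mentioned above, with a synthetic design matrix):

    ```python
    import numpy as np

    def cgls(A, b, iters=2000, tol=1e-12):
        """CG applied to the normal equations A.T @ A @ x = A.T @ b, touching A
        only through products A @ v and A.T @ v, so A.T @ A is never formed."""
        x = np.zeros(A.shape[1])
        r = b - A @ x                 # residual of the observation equations
        s = A.T @ r                   # residual of the implicit normal equations
        p = s.copy()
        gamma = s @ s
        for _ in range(iters):
            q = A @ p
            alpha = gamma / (q @ q)
            x += alpha * p
            r -= alpha * q
            s = A.T @ r
            gamma_new = s @ s
            if np.sqrt(gamma_new) < tol:
                break
            p = s + (gamma_new / gamma) * p
            gamma = gamma_new
        return x

    A = np.random.default_rng(9).standard_normal((1397, 804))
    x_true = np.ones(804)
    x = cgls(A, A @ x_true)
    print(np.max(np.abs(x - x_true)))   # near machine precision for this test
    ```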

  18. Efficient point cloud data processing in shipbuilding: Reformative component extraction method and registration method

    Directory of Open Access Journals (Sweden)

    Jingyu Sun

    2014-07-01

    To survive in the current shipbuilding industry, it is of vital importance for shipyards to have the accuracy of ship components evaluated efficiently during most of the manufacturing steps. Evaluating component accuracy by comparing each component's point cloud data, scanned by laser scanners, with the ship's design data in CAD format cannot be processed efficiently when (1) the components extracted from the point cloud data contain irregular obstacles, or (2) the registration of the two data sets has no clear direction setting. This paper presents reformative point cloud data processing methods to solve these problems. A k-d tree construction of the point cloud data speeds up the neighbor search for each point. A region growing method performed on the neighbor points of a seed point extracts the continuous part of a component, while curved surface fitting and B-spline curve fitting at the edge of the continuous part recognize neighboring domains of the same component divided by obstacle shadows. The ICP (Iterative Closest Point) algorithm registers the two data sets after the proper registration direction is decided by principal component analysis. In experiments conducted at the shipyard, 200 curved shell plates were extracted from the scanned point cloud data and registered to the designed CAD data using the proposed methods for accuracy evaluation. Results show that the proposed methods support efficient point cloud data processing for accuracy evaluation in practice.
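
    The k-d tree plus region-growing combination reads roughly as follows (a connectivity-only sketch; the paper's version additionally uses surface and B-spline fitting to bridge the obstacle shadows, and the radius here is arbitrary):

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def grow_region(cloud, seed_idx, radius=0.05):
        """Collect all points connected to the seed through radius-neighborhoods."""
        tree = cKDTree(cloud)                  # k-d tree makes each query cheap
        region, frontier = {seed_idx}, [seed_idx]
        while frontier:
            i = frontier.pop()
            for j in tree.query_ball_point(cloud[i], radius):
                if j not in region:
                    region.add(j)
                    frontier.append(j)
        return np.fromiter(region, dtype=int)

    cloud = np.random.default_rng(10).random((5000, 3))   # placeholder scan
    component = grow_region(cloud, seed_idx=0)
    ```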

  19. Material-Point Method Analysis of Bending in Elastic Beams

    DEFF Research Database (Denmark)

    Andersen, Søren Mikkel; Andersen, Lars

    2007-01-01

    The aim of this paper is to test different types of spatial interpolation for the material-point method. The interpolations include quadratic elements and cubic splines. A brief introduction to the material-point method is given. Simple linear-elastic problems are tested, including the classical cantilevered beam problem. As shown in the paper, the use of negative shape functions is not consistent with the material-point method in its current form, necessitating other types of interpolation such as cubic splines in order to obtain smoother representations of field quantities. It is shown...

  20. Triangulating case-finding tools for patient safety surveillance: a cross-sectional case study of puncture/laceration.

    Science.gov (United States)

    Taylor, Jennifer A; Gerwin, Daniel; Morlock, Laura; Miller, Marlene R

    2011-12-01

    To evaluate the need for triangulating case-finding tools in patient safety surveillance. This study applied four case-finding tools to error-associated patient safety events to identify and characterise the spectrum of events captured by these tools, using puncture or laceration as an example for in-depth analysis. Retrospective hospital discharge data were collected for calendar year 2005 (n=48,418) from a large, urban medical centre in the USA. The study design was cross-sectional and used data linkage to identify the cases captured by each of four case-finding tools. Three case-finding tools (International Classification of Diseases external (E) and nature (N) of injury codes, Patient Safety Indicators (PSI)) were applied to the administrative discharge data to identify potential patient safety events. The fourth tool was Patient Safety Net, a web-based voluntary patient safety event reporting system. The degree of mutual exclusion among detection methods was substantial. For example, when linking puncture or laceration on unique identifiers, out of 447 potential events, 118 were identical between PSI and E-codes, 152 were identical between N-codes and E-codes and 188 were identical between PSI and N-codes. Only 100 events that were identified by PSI, E-codes and N-codes were identical. Triangulation of multiple tools through data linkage captures potential patient safety events most comprehensively. Existing detection tools target patient safety domains differently, and consequently capture different occurrences, necessitating the integration of data from a combination of tools to fully estimate the total burden.

  1. Hand-held triangulation laser profilometer with audio output for blind people

    Science.gov (United States)

    Farcy, R.; Damaschini, R.

    1998-06-01

    We describe a device currently under industrial development which will give to the blind a means of three-dimensional space perception. It consists of a 350 g hand-held triangulating laser telemeter including electronic parts and batteries, with auditory feedback either inside the apparatus or close to the ear. The microprocessor unit converts in real time the distance measured by the telemeter into a musical note. Scanning the space with an adequate movement of the hand produces musical lines corresponding to the profiles of the environment. We discuss the optical configuration of the system relative to our first year of clinical experimentation.

  2. Restoration of an object from its complex cross sections and surface smoothing of the object

    International Nuclear Information System (INIS)

    Agui, Takeshi; Arai, Kiyoshi; Nakajima, Masayuki

    1990-01-01

    In clinical medicine, restoring the surface of a three-dimensional object from a set of parallel cross sections obtained by CT or MRI is useful in diagnosis. A method of connecting pairs of contours on neighboring cross sections by triangular patches is generally used for this restoration. This method, however, involves a complex triangulation algorithm and requires numerous calculations when surface smoothing is executed. In our new method, the positions of sampling points are expressed in cylindrical coordinates. Sampling points, including auxiliary points, are extracted and connected using a simple algorithm. Surface smoothing is executed by moving the sampling points. This method extends the application scope of restoring objects by triangulation. (author)

  3. IMAGE TO POINT CLOUD METHOD OF 3D-MODELING

    Directory of Open Access Journals (Sweden)

    A. G. Chibunichev

    2012-07-01

    This article describes a method of constructing 3D models of objects (buildings, monuments) based on digital images and a point cloud obtained by a terrestrial laser scanner. The first step is the automated determination of the exterior orientation parameters of a digital image. Corresponding points between the image and the point cloud must be found for this operation. Before the search for corresponding points, a quasi image of the point cloud is generated. The SIFT algorithm is then applied to the quasi image and the real image to find corresponding points, from which the exterior orientation parameters of the image are calculated. The second step is the construction of the vector object model. Vectorization is performed by an operator in an interactive mode using a single image; the spatial coordinates of the model are calculated automatically from the cloud points. In addition, automatic edge detection with interactive editing is available: edge detection is performed on the point cloud and on the image, with subsequent identification of the correct edges. Experimental studies of the method have demonstrated its efficiency in the case of building facade modeling.
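
    The correspondence step might look as follows with OpenCV's SIFT implementation; the file names are placeholders, and the final exterior-orientation call is only indicated in a comment since it needs the camera matrix and the 3D coordinates behind each quasi-image pixel:

    ```python
    import cv2

    quasi = cv2.imread("quasi_image.png", cv2.IMREAD_GRAYSCALE)   # rendered cloud
    photo = cv2.imread("facade_photo.png", cv2.IMREAD_GRAYSCALE)  # real image

    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(quasi, None)
    kp2, des2 = sift.detectAndCompute(photo, None)

    # Lowe's ratio test on the two best matches keeps only distinctive pairs
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
            if m.distance < 0.75 * n.distance]

    # Each quasi-image pixel maps back to a 3D cloud point, so the matches yield
    # 2D-3D pairs from which exterior orientation could be estimated, e.g. via
    # cv2.solvePnPRansac(object_points, image_points, K, dist_coeffs).
    ```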

  4. An improved triangulation laser rangefinder using a custom CMOS HDR linear image sensor

    Science.gov (United States)

    Liscombe, Michael

    3-D triangulation laser rangefinders are used in many modern applications, from terrain mapping to biometric identification. Although a wide variety of designs have been proposed, laser speckle noise still imposes a fundamental limitation on range accuracy. This work proposes a new triangulation laser rangefinder designed specifically to mitigate the effects of laser speckle noise. The proposed rangefinder uses a precision linear translator to laterally reposition the imaging system (e.g., image sensor and imaging lens). For a given spatial location of the laser spot, capturing N spatially uncorrelated laser spot profiles is shown to improve range accuracy by a factor of √N. This technique has many advantages over past speckle-reduction technologies, such as a fixed system cost and form factor, and the ability to virtually eliminate laser speckle noise. These advantages are made possible through spatial diversity and come at the cost of increased acquisition time. The rangefinder makes use of the ICFYKWG1 linear image sensor, a custom CMOS sensor developed at the Vision Sensor Laboratory (York University). Tests are performed on the image sensor's innovative high dynamic range technology to determine its effects on range accuracy. As expected, experimental results show that the sensor provides a trade-off between dynamic range and range accuracy.
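
    The √N behaviour is the usual statistics of averaging uncorrelated estimates, which a short Monte Carlo confirms (Gaussian centroid errors are assumed purely for illustration, not taken from the thesis):

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    sigma = 1.0                                  # std of a single centroid estimate
    for N in (1, 4, 16, 64):
        # Average N uncorrelated estimates, repeated 100k times
        est = rng.normal(0.0, sigma, size=(100_000, N)).mean(axis=1)
        print(f"N = {N:3d}: std of averaged estimate = {est.std():.3f}"
              f"  (1/sqrt(N) prediction: {sigma / np.sqrt(N):.3f})")
    ```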

  5. C-point and V-point singularity lattice formation and index sign conversion methods

    Science.gov (United States)

    Kumar Pal, Sushanta; Ruchi; Senthilkumaran, P.

    2017-06-01

    The generic singularities in an ellipse field are C-points, namely stars, lemons and monstars, in a polarization distribution with C-point indices (-1/2), (+1/2) and (+1/2) respectively. Similar to C-point singularities, there are V-point singularities that occur in a vector field and are characterized by a Poincare-Hopf index of integer value. In this paper we show that the superposition of three homogeneously polarized beams in different linear states leads to the formation of a polarization singularity lattice. Three point sources at the focal plane of the lens are used to create three interfering plane waves. A radial/azimuthal polarization converter (S-wave plate) placed near the focal plane modulates the polarization states of the three beams. The interference pattern is found to host C-points and V-points in a hexagonal lattice. The C-points occur at intensity maxima and V-points occur at intensity minima. Modulating the state of polarization (SOP) of the three plane waves from radial to azimuthal does not essentially change the nature of the polarization singularity lattice, as the Poincare-Hopf index for both radial and azimuthal polarization distributions is (+1). Hence a transformation from a star to a lemon is not trivial, as such a transformation requires not a single SOP change, but a change in the whole spatial SOP distribution. Further, there is no change in the lattice structure, and the C- and V-points appear at the locations where they were present earlier. Hence, to convert an interlacing star and V-point lattice into an interlacing lemon and V-point lattice, the interferometer requires modification. We show for the first time a method to change the polarity of C-point and V-point indices. This means that lemons can be converted into stars and stars can be converted into lemons. Similarly, a positive V-point can be converted to a negative V-point and vice versa. The intensity distribution in all these lattices is invariant as the SOPs of the three beams are changed in an

  6. From causal dynamical triangulations to astronomical observations

    Science.gov (United States)

    Mielczarek, Jakub

    2017-09-01

    This letter discusses phenomenological aspects of the dimensional reduction predicted by the Causal Dynamical Triangulations (CDT) approach to quantum gravity. The deformed form of the dispersion relation for fields defined on the CDT space-time is reconstructed. Using the Fermi satellite observations of the GRB 090510 source we find that the energy scale of the dimensional reduction is $E_* > 0.7\sqrt{4-d_{\mathrm{UV}}} \cdot 10^{10}\ \mathrm{GeV}$ at 95% CL, where $d_{\mathrm{UV}}$ is the value of the spectral dimension in the UV limit. By applying the deformed dispersion relation to cosmological perturbations it is shown that, for a scenario in which the primordial perturbations are formed in the UV region, the scalar power spectrum is $P_S \propto k^{n_S-1}$, where $n_S - 1 \approx \frac{3r(d_{\mathrm{UV}}-2)}{(d_{\mathrm{UV}}-1)r - 48}$. Here, $r$ is the tensor-to-scalar ratio. We find that, within the considered model, the deviation from scale invariance ($n_S = 1$) predicted by CDT is in contradiction with the up-to-date Planck and BICEP2 data.

  7. Fixed-topology Lorentzian triangulations: Quantum Regge Calculus in the Lorentzian domain

    Science.gov (United States)

    Tate, Kyle; Visser, Matt

    2011-11-01

    A key insight used in developing the theory of Causal Dynamical Triangulations (CDTs) is to use the causal (or light-cone) structure of Lorentzian manifolds to restrict the class of geometries appearing in the Quantum Gravity (QG) path integral. By exploiting this structure the models developed in CDTs differ from the analogous models developed in the Euclidean domain, models of (Euclidean) Dynamical Triangulations (DT), and the corresponding Lorentzian results are in many ways more "physical". In this paper we use this insight to formulate a Lorentzian signature model that is analogous to the Quantum Regge Calculus (QRC) approach to Euclidean Quantum Gravity. We exploit another crucial fact about the structure of Lorentzian manifolds, namely that certain simplices are not constrained by the triangle inequalities present in Euclidean signature. We show that this model is not related to QRC by a naive Wick rotation; this serves as another demonstration that the sum over Lorentzian geometries is not simply related to the sum over Euclidean geometries. By removing the triangle inequality constraints, there is more freedom to perform analytical calculations, and in addition numerical simulations are more computationally efficient. We first formulate the model in 1 + 1 dimensions, and derive scaling relations for the pure gravity path integral on the torus using two different measures. It appears relatively easy to generate "large" universes, both in spatial and temporal extent. In addition, loop-to-loop amplitudes are discussed, and a transfer matrix is derived. We then also discuss the model in higher dimensions.

  8. An evaluation of orthopaedic nurses’ participation in an educational intervention promoting research utilization – A triangulation convergence model

    DEFF Research Database (Denmark)

    Berthelsen, Connie Bøttcher; Hølge-Hazelton, Bibi

    2016-01-01

    Aims and objectives: To describe the orthopaedic nurses' experiences regarding the relevance of an educational intervention and their personal and contextual barriers to participation in the intervention. Background: One of the largest barriers against nurses' research usage in clinical practice is the lack of participation. A previous survey identified 32 orthopaedic nurses as interested in participating in nursing research. An educational intervention was conducted to increase the orthopaedic nurses' research knowledge and competencies. However, only an average of six nurses participated. Design: A triangulation convergence model was applied through a mixed methods design to combine quantitative results and qualitative findings for evaluation. Methods: Data were collected from 2013–2014 from 32 orthopaedic nurses in a Danish regional hospital through a newly developed 21-item questionnaire and two focus...

  9. Strike Point Control on EAST Using an Isoflux Control Method

    International Nuclear Information System (INIS)

    Xing Zhe; Xiao Bingjia; Luo Zhengping; Walker, M. L.; Humphreys, D. A.

    2015-01-01

    For advanced tokamaks, the particle deposition and thermal load on the divertor are a major challenge. By moving the strike points on the divertor target plates, the position of particle deposition and thermal load can be shifted. The poloidal field (PF) coil currents can be adjusted to achieve strike point position feedback control. Using the isoflux control method, the strike point position can be controlled by controlling the X point position. On the basis of experimental data, we establish relational expressions between the X point position and the strike point position. Benchmark experiments were carried out to validate the correctness and robustness of the control methods. The strike point position was successfully controlled following our commands in EAST operation. (paper)

  10. Novel Ratio Subtraction and Isoabsorptive Point Methods for ...

    African Journals Online (AJOL)

    Purpose: To develop and validate two innovative spectrophotometric methods used for the simultaneous determination of ambroxol hydrochloride and doxycycline in their binary mixture. Methods: Ratio subtraction and isoabsorptive point methods were used for the simultaneous determination of ambroxol hydrochloride ...

  11. Nonparametric Change Point Diagnosis Method of Concrete Dam Crack Behavior Abnormality

    Directory of Open Access Journals (Sweden)

    Zhanchao Li

    2013-01-01

    The diagnosis of abnormality in concrete dam crack behavior has always been a hot spot and a difficulty in the safety monitoring of hydraulic structures. Based on the behavior of concrete dam crack abnormality in parametric and nonparametric statistical models, the internal relation between crack behavior abnormality and statistical change point theory is analyzed in depth, from the structural instability of the parametric statistical model to the change in the sequence distribution law of the nonparametric statistical model. On this basis, through reduction of the change point problem, establishment of a basic nonparametric change point model, and asymptotic analysis of the test method for the basic change point problem, a nonparametric change point diagnosis method for concrete dam crack behavior abnormality is created, taking into account that in practice crack behavior may exhibit multiple abnormality points. The method is applied to an actual project, demonstrating its effectiveness and scientific reasonableness. The method has a complete theoretical basis and strong practicality, with broad application prospects in actual projects.
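
    As a generic illustration of the nonparametric idea, a shift in the distribution of a monitoring sequence rather than in model parameters, here is a textbook rank-based (Pettitt-type) change-point statistic; it is not the diagnosis method developed in the paper:

    ```python
    import numpy as np

    def pettitt(x):
        """Return the most likely single change point index and the statistic |U|."""
        x = np.asarray(x, dtype=float)
        n = len(x)
        sgn = np.sign(x[None, :] - x[:, None])    # sign(x_j - x_i) for all pairs
        # U_t compares the first t observations with the remaining n - t
        U = np.array([sgn[:t, t:].sum() for t in range(1, n)])
        t_hat = int(np.argmax(np.abs(U))) + 1
        return t_hat, abs(U[t_hat - 1])

    # Synthetic monitoring series with a distribution shift after sample 60
    series = np.concatenate([np.random.default_rng(6).normal(0.0, 1.0, 60),
                             np.random.default_rng(7).normal(1.2, 1.0, 40)])
    print(pettitt(series))
    ```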

  12. Pointing Verification Method for Spaceborne Lidars

    Directory of Open Access Journals (Sweden)

    Axel Amediek

    2017-01-01

    High precision acquisition of atmospheric parameters from the air or space by means of lidar requires accurate knowledge of laser pointing. Discrepancies between the assumed and actual pointing can introduce large errors due to the Doppler effect or a wrongly assumed air pressure at ground level. In this paper, a method for precisely quantifying these discrepancies for airborne and spaceborne lidar systems is presented. The method is based on comparing ground elevations derived from the lidar ranging data with high-resolution topography data obtained from a digital elevation model (DEM), and it allows for the derivation of the lateral and longitudinal deviation of the laser beam propagation direction. The applicability of the technique is demonstrated using experimental data from an airborne lidar system, confirming that geo-referencing of the lidar ground spot trace with an uncertainty of less than 10 m with respect to the DEM can be obtained.

  13. Triangulation of Methods in Labour Studies in Nigeria: Reflections ...

    African Journals Online (AJOL)

    One of the distinctive aspects of social science research in Nigeria as in other ... method in their investigations while relegating qualitative methods to the background. In labour studies, adopting only quantitative method to studying workers ...

  14. Exploring Forms of Triangulation to Facilitate Collaborative Research Practice: Reflections From a Multidisciplinary Research Group

    Directory of Open Access Journals (Sweden)

    Tarja Tiainen

    2006-10-01

    This article contains critical reflections of a multidisciplinary research group studying the human and technological dynamics around some newly offered electronic services in a specific rural area of Finland. For their research, the group adopted ethnography. Facing the challenges of doing ethnographic research in a multidisciplinary setting, the group evolved its own breed of research practice based on multiple forms of triangulation. This implied the use of multiple data sources, methods, theories, and researchers, in different combinations. One of the outcomes of the work is a model for collaborative research. It highlights, among other things, the importance of creating a climate for collaboration within the research group and following a process of individual and collaborative writing to achieve the potential benefits of such research. The article also identifies a set of remaining challenges relevant to collaborative research.

  15. 1:500 Scale Aerial Triangulation Test with Unmanned Airship in Hubei Province

    International Nuclear Information System (INIS)

    Feifei, Xie; Zongjian, Lin; Dezhu, Gui

    2014-01-01

    A new UAVS (Unmanned Aerial Vehicle System) for low-altitude aerial photogrammetry is introduced for fine surveying and mapping, comprising the airship platform, a four-combined wide-angle camera as the sensor system, and the photogrammetry software MAP-AT. Based on a test of this system in Hubei province, covering the working condition of the airship, the quality of the image data and the data processing report, it is demonstrated that this low-altitude aerial photogrammetric system meets the precision requirements of 1:500 scale aerial triangulation. This work provides a possibility for fine surveying and mapping.

  16. TUNNEL POINT CLOUD FILTERING METHOD BASED ON ELLIPTIC CYLINDRICAL MODEL

    Directory of Open Access Journals (Sweden)

    N. Zhu

    2016-06-01

    The large number of bolts and screws attached to the subway shield ring plates, along with the many metal stents and electrical equipment accessories mounted on the tunnel walls, cause the laser point cloud data to include many non-tunnel-section points (hereinafter referred to as non-points), which affect the accuracy of modeling and deformation monitoring. This paper proposes a filtering method for the point cloud based on an elliptic cylindrical model. The original laser point cloud data are first projected onto a horizontal plane, and a searching algorithm is given to extract the edge points of both sides, which are further used to fit the tunnel central axis. Along the axis the point cloud is segmented regionally and then fitted as a smooth elliptic cylindrical surface by means of iteration. This processing enables the automatic filtering of the inner-wall non-points. Two groups of experiments showed consistent results: the elliptic cylindrical model based method can effectively filter out the non-points and meet the accuracy requirements for subway deformation monitoring. The method provides a new mode for the periodic monitoring of all-around tunnel section deformation in routine subway operation and maintenance.

  17. Natural Preconditioning and Iterative Methods for Saddle Point Systems

    KAUST Repository

    Pestana, Jennifer

    2015-01-01

    © 2015 Society for Industrial and Applied Mathematics. The solution of quadratic or locally quadratic extremum problems subject to linear(ized) constraints gives rise to linear systems in saddle point form. This is true whether in the continuous or the discrete setting, so saddle point systems arising from the discretization of partial differential equation problems, such as those describing electromagnetic problems or incompressible flow, lead to equations with this structure, as do, for example, interior point methods and the sequential quadratic programming approach to nonlinear optimization. This survey concerns iterative solution methods for these problems and, in particular, shows how the problem formulation leads to natural preconditioners which guarantee a fast rate of convergence of the relevant iterative methods. These preconditioners are related to the original extremum problem and their effectiveness - in terms of rapidity of convergence - is established here via a proof of general bounds on the eigenvalues of the preconditioned saddle point matrix on which iteration convergence depends.
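
    The prototype system and the "natural" block preconditioner the survey refers to are the standard textbook forms:

    ```latex
    \[
    \begin{pmatrix} A & B^{T} \\ B & 0 \end{pmatrix}
    \begin{pmatrix} u \\ p \end{pmatrix}
    =
    \begin{pmatrix} f \\ g \end{pmatrix},
    \qquad
    \mathcal{P} =
    \begin{pmatrix} A & 0 \\ 0 & S \end{pmatrix},
    \qquad
    S = B A^{-1} B^{T}.
    \]
    ```

    With the exact Schur complement S, the preconditioned matrix has at most three distinct eigenvalues, 1 and (1 ± √5)/2, so a Krylov method such as MINRES converges in at most three iterations; practical preconditioners replace A and S by cheap, spectrally equivalent approximations.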

  18. Genealogical series method. Hyperpolar points screen effect

    International Nuclear Information System (INIS)

    Gorbatov, A.M.

    1991-01-01

    The fundamental quantities of the genealogical series method, the genealogical integrals (sandwiches), have been investigated. The hyperpolar-point screen effect has been found. It allows one to calculate the sandwiches for fermion systems with a large number of particles and to ascertain the validity of the iterated-potential method as well. For the first time, the genealogical series method has been realized numerically for a central spin-independent potential.

  19. The Purification Method of Matching Points Based on Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    DONG Yang

    2017-02-01

    The traditional purification method of matching points usually uses a small number of the points as initial input. Though it can meet most point-constraint requirements, the iterative purification solution easily falls into local extrema, which results in the loss of correct matching points. To solve this problem, we introduce the principal component analysis method, using the whole point set as initial input. Through stepwise elimination of mismatching points and robust solving, a more accurate global optimal solution can be obtained, which reduces the omission rate of correct matching points and thus achieves a better purification effect. Experimental results show that this method can obtain the global optimal solution under a certain original false matching rate, and can decrease or avoid the omission of correct matching points.

  20. Triangulated Proxy Reporting: a technique for improving how communication partners come to know people with severe cognitive impairment.

    Science.gov (United States)

    Lyons, Gordon; De Bortoli, Tania; Arthur-Kelly, Michael

    2017-09-01

    This paper explains and demonstrates the pilot application of Triangulated Proxy Reporting (TPR), a practical technique for enhancing communication around people who have severe cognitive impairment (SCI). An introduction explains SCI and how this impacts on communication, and consequently on quality of care and quality of life. This is followed by an explanation of TPR and its origins in triangulation research techniques. An illustrative vignette explicates its utility and value in a group home for a resident with profound multiple disabilities. The Discussion and Conclusion sections propose the wider application of TPR for different cohorts of people with SCI, their communication partners and service providers. TPR presents as a practical technique for enhancing communication interactions with people who have SCI. The paper demonstrates the potential of the technique for improving engagement among those with profound multiple disabilities, severe acquired brain injury and advanced dementia, and their partners in and across different care settings. Implications for Rehabilitation: Triangulated Proxy Reporting (TPR) shows potential to improve communication between people with severe cognitive impairments and their communication partners. TPR can lead to improved quality of care and quality of life for people with profound multiple disabilities, very advanced dementia and severe acquired brain injury, who are otherwise very difficult to support. TPR is a relatively simple and inexpensive technique that service providers can incorporate into practice to improve communication between clients with severe cognitive impairments, their carers and other support professionals.

  1. Automated matching of corresponding seed images of three simulator radiographs to allow 3D triangulation of implanted seeds

    Science.gov (United States)

    Altschuler, Martin D.; Kassaee, Alireza

    1997-02-01

    To match corresponding seed images in different radiographs so that the 3D seed locations can be triangulated automatically and without ambiguity requires (at least) three radiographs taken from different perspectives, and an algorithm that finds the proper permutations of the seed-image indices. Matching corresponding images in only two radiographs introduces inherent ambiguities which can be resolved only with the use of non-positional information obtained with intensive human effort. Matching images in three or more radiographs is an 'NP (non-deterministic polynomial time)-complete' problem. Although the matching problem is fundamental, current methods for three-radiograph seed-image matching use 'local' (seed-by-seed) methods that may lead to incorrect matchings. We describe a permutation-sampling method which not only gives good 'global' (full-permutation) matches for the NP-complete three-radiograph seed-matching problem, but also determines the reliability of the radiographic data themselves, namely, whether the patient moved in the interval between radiographic perspectives.
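
    To see why sampling is needed, consider the brute-force alternative: given a residual cost[i, j, k] for pairing seed image i in view 1 with j in view 2 and k in view 3 (e.g., a triangulation backprojection error; the cost array is assumed precomputed), the exact optimum requires scanning (n!)² permutation pairs, which is hopeless beyond a handful of seeds:

    ```python
    import numpy as np
    from itertools import permutations

    def best_match(cost):
        """cost[i, j, k]: residual if seed image i (view 1) is paired with j
        (view 2) and k (view 3). Returns the permutation pair of minimal cost."""
        n = cost.shape[0]
        best, best_pair = np.inf, None
        for p2 in permutations(range(n)):          # n! candidates for view 2
            for p3 in permutations(range(n)):      # n! candidates for view 3
                c = sum(cost[i, p2[i], p3[i]] for i in range(n))
                if c < best:
                    best, best_pair = c, (p2, p3)
        return best_pair, best

    cost = np.random.default_rng(11).random((5, 5, 5))   # placeholder residuals
    print(best_match(cost))
    ```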

  2. Restrictions on Measurement of Roughness of Textile Fabrics by Laser Triangulation: A Phenomenological Approach

    International Nuclear Information System (INIS)

    Berberi, Pellumb; Tabaku, Burhan

    2010-01-01

    The laser triangulation method is one of the methods used for contactless measurement of the roughness of textile fabrics. The method is based on measuring the distance between the sensor and the object by imaging the light scattered from the surface. However, experimental results, especially for high values of roughness, show a strong dependence on the duration of the exposure to the laser pulses. Both very short and very long exposure times cause the appearance, on the surface of the scanned textile, of pixels with fictive peak heights. The number of fictive peaks increases as the exposure time decreases down to 0.1 ms, and increases as the exposure time increases up to 100 ms. The appearance of fictive peaks leads to a nonrealistic increase in the measured roughness of the surface both for short exposure times and for long exposure times, reaching a minimum somewhere in the region of medium exposure times, 1 to 2 ms. The above effect calls for a careful analysis of the experimental data and also becomes an important restriction of the method. In this paper we attempt a phenomenological approach to the mechanisms leading to these effects. We suppose that the effect is related both to the scattering properties of the scanned surface and to the physical parameters of the CCD sensors. The first factor becomes more important in the region of long exposure times, while the second factor becomes more important in the region of short exposure times.

  3. Source splitting via the point source method

    International Nuclear Information System (INIS)

    Potthast, Roland; Fazi, Filippo M; Nelson, Philip A

    2010-01-01

    We introduce a new algorithm for source identification and field splitting based on the point source method (Potthast 1998 A point-source method for inverse acoustic and electromagnetic obstacle scattering problems IMA J. Appl. Math. 61 119–40; Potthast R 1996 A fast new method to solve inverse scattering problems Inverse Problems 12 731–42). The task is to separate the sound fields $u_j$, $j = 1, \dots, n$, of $n \in \mathbb{N}$ sound sources supported in different bounded domains $G_1, \dots, G_n$ in $\mathbb{R}^3$ from measurements of the field on some microphone array, mathematically speaking from the knowledge of the sum of the fields $u = u_1 + \dots + u_n$ on some open subset $\Lambda$ of a plane. The main idea of the scheme is to calculate filter functions $g_1, \dots, g_n$, $n \in \mathbb{N}$, to construct $u_l$ for $l = 1, \dots, n$ from $u|_{\Lambda}$ in the form $$u_l(x) = \int_{\Lambda} g_{l,x}(y)\, u(y)\, \mathrm{d}s(y), \qquad l = 1, \dots, n. \tag{1}$$ We will provide the complete mathematical theory for the field splitting via the point source method. In particular, we describe uniqueness, solvability of the problem and convergence and stability of the algorithm. In the second part we describe the practical realization of the splitting for real data measurements carried out at the Institute of Sound and Vibration Research at Southampton, UK. A practical demonstration of the original recording and the splitting results for real data is available online

  4. Method to Minimize the Low-Frequency Neutral-Point Voltage Oscillations With Time-Offset Injection for Neutral-Point-Clamped Inverters

    DEFF Research Database (Denmark)

    Choi, Ui-Min; Blaabjerg, Frede; Lee, Kyo-Beum

    2015-01-01

    This paper proposes a method to reduce the low-frequency neutral-point voltage oscillations. The neutral-point voltage oscillations are considerably reduced by adding a time offset to the three-phase turn-on times. The proper time offset is simply calculated considering the phase currents and dwell time of small- and medium-voltage vectors. However, if the power factor is lower, there is a limitation to eliminating neutral-point oscillations. In this case, the proposed method can be improved by changing the switching sequence properly. Additionally, a method for neutral-point voltage balancing...

  5. Material-point Method Analysis of Bending in Elastic Beams

    DEFF Research Database (Denmark)

    Andersen, Søren Mikkel; Andersen, Lars

    The aim of this paper is to test different types of spatial interpolation for the material-point method. The interpolations include quadratic elements and cubic splines. A brief introduction to the material-point method is given. Simple linear-elastic problems are tested, including the classical cantilevered beam problem. As shown in the paper, the use of negative shape functions is not consistent with the material-point method in its current form, necessitating other types of interpolation such as cubic splines in order to obtain smoother representations of field quantities. It is shown...

  6. Taylor's series method for solving the nonlinear point kinetics equations

    International Nuclear Information System (INIS)

    Nahla, Abdallah A.

    2011-01-01

    Highlights: → Taylor's series method for the nonlinear point kinetics equations is applied. → General-order derivatives are derived for this system. → Stability of Taylor's series method is studied. → Taylor's series method is A-stable for negative reactivity. → Taylor's series method is an accurate computational technique. - Abstract: Taylor's series method for solving the point reactor kinetics equations with multiple groups of delayed neutrons in the presence of Newtonian temperature feedback reactivity is applied and programmed in FORTRAN. This system is a set of coupled, stiff, nonlinear ordinary differential equations. The numerical method is based on the successive derivatives of the neutron density, the precursor concentrations of the i-th group of delayed neutrons, and the reactivity. The r-th order derivatives are derived. The stability of Taylor's series method is discussed. Three sets of applications are computed: step, ramp, and temperature feedback reactivities. Taylor's series method is an accurate computational technique and is stable for negative step, negative ramp, and temperature feedback reactivities. This method is more useful than traditional methods for solving the nonlinear point kinetics equations.
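
    As a hedged illustration of the idea (not the paper's multi-group, temperature-feedback implementation), the sketch below applies a Taylor-series step to one-group point kinetics with constant reactivity. The system is then linear, x' = Ax, so the k-th derivative is simply A^k x; the constants are illustrative only.

      import numpy as np

      # one delayed-neutron group, constant reactivity: x = [n, C], x' = A x
      rho, beta, lam, Lam = -0.003, 0.0065, 0.08, 1e-4   # illustrative reactor constants
      A = np.array([[(rho - beta) / Lam, lam],
                    [beta / Lam,        -lam]])

      def taylor_step(x, h, order=10):
          # x(t+h) = sum_k h^k/k! * A^k x(t), built term by term
          term, acc = x.copy(), x.copy()
          for k in range(1, order + 1):
              term = (h / k) * (A @ term)
              acc += term
          return acc

      x = np.array([1.0, beta / (lam * Lam)])            # start at precursor equilibrium
      for _ in range(100):                               # integrate 1 s in 10 ms steps
          x = taylor_step(x, h=0.01)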

  7. Change Analysis in Structural Laser Scanning Point Clouds: The Baseline Method.

    Science.gov (United States)

    Shen, Yueqian; Lindenbergh, Roderik; Wang, Jinhu

    2016-12-24

    A method is introduced for detecting changes from point clouds that avoids registration. For many applications, changes are detected between two scans of the same scene obtained at different times. Traditionally, these scans are aligned to a common coordinate system, with the disadvantage that this registration step introduces additional errors. In addition, registration requires stable targets or features. To avoid these issues, we propose a change detection method based on so-called baselines. Baselines connect feature points within one scan. To analyze changes, baselines connecting corresponding points in two scans are compared. As feature points, either targets or virtual points corresponding to some reconstructable feature in the scene are used. The new method is implemented on two scans sampling a masonry laboratory building before and after seismic testing, which resulted in damage on the order of several centimetres. The centres of the bricks of the laboratory building are automatically extracted to serve as virtual points. Baselines connecting virtual points and/or target points are extracted and compared with respect to a suitable structural coordinate system. Changes detected from the baseline analysis are compared to a traditional cloud-to-cloud change analysis, demonstrating the potential of the new method for structural analysis.
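
    Because baseline lengths are invariant under rigid motion, comparing them needs no registration. A small sketch on synthetic data follows; in practice the feature points would be extracted brick centres or targets, and the 5 mm threshold is a made-up tolerance.

      import numpy as np

      def baseline_lengths(points, pairs):
          # lengths of baselines (segments between feature points) within one scan
          return np.array([np.linalg.norm(points[i] - points[j]) for i, j in pairs])

      rng = np.random.default_rng(0)
      p1 = rng.random((20, 3)) * 10.0                  # epoch-1 feature points, scanner frame 1
      p2 = p1 + 0.001 * rng.standard_normal((20, 3))   # epoch-2 points, a different frame is fine
      pairs = [(i, j) for i in range(20) for j in range(i + 1, 20)]
      change = baseline_lengths(p2, pairs) - baseline_lengths(p1, pairs)
      moved = np.abs(change) > 0.005                   # flag baselines exceeding the tolerance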

  8. Change Analysis in Structural Laser Scanning Point Clouds: The Baseline Method

    Directory of Open Access Journals (Sweden)

    Yueqian Shen

    2016-12-01

    Full Text Available A method is introduced for detecting changes from point clouds that avoids registration. For many applications, changes are detected between two scans of the same scene obtained at different times. Traditionally, these scans are aligned to a common coordinate system, with the disadvantage that this registration step introduces additional errors. In addition, registration requires stable targets or features. To avoid these issues, we propose a change detection method based on so-called baselines. Baselines connect feature points within one scan. To analyze changes, baselines connecting corresponding points in two scans are compared. As feature points, either targets or virtual points corresponding to some reconstructable feature in the scene are used. The new method is implemented on two scans sampling a masonry laboratory building before and after seismic testing, which resulted in damage on the order of several centimetres. The centres of the bricks of the laboratory building are automatically extracted to serve as virtual points. Baselines connecting virtual points and/or target points are extracted and compared with respect to a suitable structural coordinate system. Changes detected from the baseline analysis are compared to a traditional cloud-to-cloud change analysis, demonstrating the potential of the new method for structural analysis.

  9. New methods of subcooled water recognition in dew point hygrometers

    Science.gov (United States)

    Weremczuk, Jerzy; Jachowicz, Ryszard

    2001-08-01

    Two new methods of sub-cooled water recognition in dew point hygrometers are presented in this paper. The first, an impedance method, uses a new semiconductor mirror in which the dew point detector, the thermometer, and the heaters are all integrated. The second, an optical method based on a multi-section optical detector, is also discussed. Experimental results of both methods are shown. New types of dew point hygrometers able to recognize sub-cooled water are proposed.

  10. Word Length Selection Method for Controller Implementation on FPGAs Using the VHDL-2008 Fixed-Point and Floating-Point Packages

    Directory of Open Access Journals (Sweden)

    Urriza I

    2010-01-01

    Full Text Available Abstract This paper presents a word length selection method for the implementation of digital controllers in both fixed-point and floating-point hardware on FPGAs. The method uses the new types defined in the VHDL-2008 fixed-point and floating-point packages. These packages allow customizing the word length of fixed- and floating-point representations and shorten the design cycle by simplifying the design of arithmetic operations. The method performs bit-true simulations to determine the word length needed to represent the constant coefficients and the internal signals of the digital controller while maintaining the control system specifications. A mixed-signal simulation tool is used to simulate the closed-loop system as a whole in order to analyze the impact of quantization effects and loop delays on control system performance. The method is applied to implement a digital controller for a switching power converter. The digital circuit is implemented on an FPGA, and the simulations are experimentally verified.
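
    The word-length sweep itself can be prototyped outside VHDL. The Python stand-in below sketches the bit-true part of the flow on a toy first-order controller with a hypothetical error specification; the paper's actual flow uses the VHDL-2008 packages and a mixed-signal simulator.

      import numpy as np

      def quantize(x, frac_bits):
          # round to a fixed-point grid with the given number of fractional bits
          q = 2.0 ** -frac_bits
          return np.round(x / q) * q

      def controller_error(frac_bits, n_steps=200):
          # bit-true simulation of y[k] = b0*e[k] + b1*y[k-1] versus the
          # double-precision reference, returning the worst-case deviation
          b0, b1 = quantize(0.2, frac_bits), quantize(0.3, frac_bits)
          e = np.sin(0.1 * np.arange(n_steps))     # stand-in error signal
          y_q = y_r = worst = 0.0
          for ek in e:
              y_r = 0.2 * ek + 0.3 * y_r
              y_q = quantize(b0 * ek + b1 * y_q, frac_bits)
              worst = max(worst, abs(y_q - y_r))
          return worst

      # sweep word lengths until the quantization error meets the (made-up) spec
      for bits in range(6, 20):
          if controller_error(bits) < 1e-3:
              print("smallest fractional word length:", bits)
              break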

  11. Point kernels and superposition methods for scatter dose calculations in brachytherapy

    International Nuclear Information System (INIS)

    Carlsson, A.K.

    2000-01-01

    Point kernels have been generated and applied for calculation of scatter dose distributions around monoenergetic point sources for photon energies ranging from 28 to 662 keV. Three different approaches for dose calculations have been compared: a single-kernel superposition method, a single-kernel superposition method where the point kernels are approximated as isotropic and a novel 'successive-scattering' superposition method for improved modelling of the dose from multiply scattered photons. An extended version of the EGS4 Monte Carlo code was used for generating the kernels and for benchmarking the absorbed dose distributions calculated with the superposition methods. It is shown that dose calculation by superposition at and below 100 keV can be simplified by using isotropic point kernels. Compared to the assumption of full in-scattering made by algorithms currently in clinical use, the single-kernel superposition method improves dose calculations in a half-phantom consisting of air and water. Further improvements are obtained using the successive-scattering superposition method, which reduces the overestimates of dose close to the phantom surface usually associated with kernel superposition methods at brachytherapy photon energies. It is also shown that scatter dose point kernels can be parametrized to biexponential functions, making them suitable for use with an effective implementation of the collapsed cone superposition algorithm. (author)
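
    A minimal sketch of the superposition step follows, assuming an isotropic kernel parametrized biexponentially as the abstract suggests; the coefficients here are invented for illustration, whereas real kernels are generated by Monte Carlo.

      import numpy as np

      def scatter_kernel(r, A=0.9, a=0.15, B=0.1, b=0.02):
          # hypothetical biexponential scatter-dose point kernel, ~k(r)/r^2
          return (A * np.exp(-a * r) + B * np.exp(-b * r)) / np.maximum(r, 0.5) ** 2

      # superpose the kernel over point sources on a 2D water grid (mm)
      n = 101
      yy, xx = np.mgrid[0:n, 0:n].astype(float)
      sources = [(50, 50, 1.0), (30, 70, 0.5)]     # (row, col, strength)
      dose = np.zeros((n, n))
      for si, sj, w in sources:
          r = np.hypot(xx - sj, yy - si)
          dose += w * scatter_kernel(r)            # single-kernel superposition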

  12. A Fast Multi-layer Subnetwork Connection Method for Time Series InSAR Technique

    Directory of Open Access Journals (Sweden)

    WU Hong'an

    2016-10-01

    Full Text Available Nowadays, the time series interferometric synthetic aperture radar (InSAR) technique is widely used in ground deformation monitoring, especially in urban areas where many stable point targets can be detected. However, in the standard time series InSAR technique, affected by the atmospheric correlation distance and the threshold on linear model coherence, the Delaunay triangulation connecting point targets can easily be separated into many discontinuous subnetworks, making it difficult to retrieve ground deformation in non-urban areas. In order to monitor ground deformation over large areas efficiently, a novel multi-layer subnetwork connection (MLSC) method is proposed for connecting all subnetworks. The advantage of the method is that it quickly reduces the number of subnetworks with valid edges layer by layer. The method is compared with the existing complex network connection method. The experimental results demonstrate that the data processing time of the proposed method is only 32.56% of the latter's.
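
    A rough sketch of the subnetwork bookkeeping is given below, with scipy and synthetic point targets; the correlation distance is an assumed value, and the real MLSC edge-validation criteria (coherence thresholds, layered candidate edges) are not reproduced.

      import numpy as np
      from scipy.spatial import Delaunay, cKDTree

      rng = np.random.default_rng(0)
      pts = rng.random((500, 2)) * 100.0
      tri = Delaunay(pts)

      # Delaunay edges, dropping arcs longer than an assumed correlation distance
      edges = set()
      for s in tri.simplices:
          for a in range(3):
              i, j = sorted((s[a], s[(a + 1) % 3]))
              if np.linalg.norm(pts[i] - pts[j]) < 5.0:
                  edges.add((i, j))

      def components(n, edges):
          # union-find labelling of the discontinuous subnetworks
          parent = list(range(n))
          def find(i):
              while parent[i] != i:
                  parent[i] = parent[parent[i]]
                  i = parent[i]
              return i
          for i, j in edges:
              parent[find(i)] = find(j)
          return np.array([find(i) for i in range(n)])

      # each pass bridges every subnetwork to its nearest foreign point;
      # repeating until one component remains mimics the layer-by-layer merging
      labels = components(len(pts), edges)
      while len(np.unique(labels)) > 1:
          for lab in np.unique(labels):
              ins = np.flatnonzero(labels == lab)
              outs = np.flatnonzero(labels != lab)
              d, k = cKDTree(pts[outs]).query(pts[ins])
              a = int(np.argmin(d))                # shortest bridging edge
              edges.add(tuple(sorted((int(ins[a]), int(outs[k[a]])))))
          labels = components(len(pts), edges)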

  13. Application of the nudged elastic band method to the point-to-point radio wave ray tracing in IRI modeled ionosphere

    Science.gov (United States)

    Nosikov, I. A.; Klimenko, M. V.; Bessarab, P. F.; Zhbankov, G. A.

    2017-07-01

    Point-to-point ray tracing is an important problem in many fields of science. While direct variational methods, where some trajectory is transformed into an optimal one, are routinely used in calculations of pathways of seismic waves, chemical reactions, diffusion processes, etc., this approach is not widely known in ionospheric point-to-point ray tracing. We apply the Nudged Elastic Band (NEB) method to a radio wave propagation problem. In the NEB method, a chain of points which gives a discrete representation of the radio wave ray is adjusted iteratively to an optimal configuration satisfying Fermat's principle, while the endpoints of the trajectory are kept fixed according to the boundary conditions. Transverse displacements define the radio ray trajectory, while springs between the points control their distribution along the ray. The method is applied to a study of point-to-point ionospheric ray tracing, where the propagation medium is obtained with the International Reference Ionosphere model taking into account traveling ionospheric disturbances. A two-dimensional representation of the optical path functional is developed and used to gain insight into the fundamental difference between high and low rays. We conclude that high and low rays are minima and saddle points of the optical path functional, respectively.

  14. Four points function fitted and first derivative procedure for determining the end points in potentiometric titration curves: statistical analysis and method comparison.

    Science.gov (United States)

    Kholeif, S A

    2001-06-01

    A new method belonging to the differential category for determining end points from potentiometric titration curves is presented. It uses a preprocess to find first-derivative values by fitting four data points in and around the region of inflection to a non-linear function, and then locates the end point, usually as a maximum or minimum, using an inverse parabolic interpolation procedure that has an analytical solution. The behavior and accuracy of the sigmoid and cumulative non-linear functions used are investigated against three factors. A statistical evaluation of the new method using linear least-squares validation and multifactor data analysis is covered. The new method is generally applicable to symmetrical and unsymmetrical potentiometric titration curves, and the end point is calculated using numerical procedures only. It outperforms the 'parent' regular differential method at almost all factor levels and gives accurate results comparable to the true or estimated true end points. Calculated end points from selected experimental titration curves are also compared between the new method and methods of the equivalence point category, such as Gran or Fortuin.
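
    The inverse parabolic interpolation step has a closed form. A sketch on a synthetic sigmoid curve follows; it uses a plain numerical derivative in place of the paper's four-point fitted preprocessing, so it only illustrates the interpolation idea.

      import numpy as np

      def end_point(v, e):
          # locate the end point as the extremum of dE/dV via three-point
          # inverse parabolic interpolation around the steepest sample
          dv = np.gradient(e, v)
          k = int(np.argmax(np.abs(dv)))
          x0, x1, x2 = v[k - 1], v[k], v[k + 1]
          y0, y1, y2 = dv[k - 1], dv[k], dv[k + 1]
          num = (x1 - x0) ** 2 * (y1 - y2) - (x1 - x2) ** 2 * (y1 - y0)
          den = (x1 - x0) * (y1 - y2) - (x1 - x2) * (y1 - y0)
          return x1 - 0.5 * num / den              # vertex of the fitted parabola

      v = np.linspace(0.0, 20.0, 201)                     # titrant volume, mL
      e = 200.0 + 150.0 * np.tanh((v - 10.2) / 0.4)       # synthetic titration curve, mV
      print(end_point(v, e))                              # close to 10.2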

  15. Reconstruction of measurable three-dimensional point cloud model based on large-scene archaeological excavation sites

    Science.gov (United States)

    Zhang, Chun-Sen; Zhang, Meng-Meng; Zhang, Wei-Xing

    2017-01-01

    This paper outlines a low-cost, user-friendly photogrammetric technique using non-metric cameras to obtain digital sequence images of excavation sites, based on photogrammetry and computer vision. Digital camera calibration, automatic aerial triangulation, image feature extraction, image sequence matching, and dense digital differential rectification are used, combined with a number of global control points on the excavation site, to reconstruct high-precision, measurable three-dimensional (3-D) models. Using the acrobatic figurines in the Qin Shi Huang mausoleum excavation as an example, our method solves the problems of small base-to-height ratios, high inclination, unstable altitudes, and significant ground elevation changes affecting image matching. Compared to 3-D laser scanning, the 3-D color point cloud obtained by this method maintains the same visual quality and has the advantages of low project cost, simple data processing, and high accuracy. Structure-from-motion (SfM) is often used to reconstruct 3-D models of large scenes but has lower accuracy when reconstructing a small scene at close range. Results indicate that this method quickly achieves 3-D reconstruction of large archaeological sites and produces orthophotos of the heritage site distribution, providing a scientific basis for accurate location of cultural relics, archaeological excavation, investigation, and site protection planning. The proposed method has broad application value.

  16. Interior Point Methods for Large-Scale Nonlinear Programming

    Czech Academy of Sciences Publication Activity Database

    Lukšan, Ladislav; Matonoha, Ctirad; Vlček, Jan

    2005-01-01

    Roč. 20, č. 4-5 (2005), s. 569-582 ISSN 1055-6788 R&D Projects: GA AV ČR IAA1030405 Institutional research plan: CEZ:AV0Z10300504 Keywords: nonlinear programming * interior point methods * KKT systems * indefinite preconditioners * filter methods * algorithms Subject RIV: BA - General Mathematics Impact factor: 0.477, year: 2005

  17. Health, utilisation of health services, 'core' information, and reasons for non-participation: a triangulation study amongst non-respondents.

    Science.gov (United States)

    Näslindh-Ylispangar, Anita; Sihvonen, Marja; Kekki, Pertti

    2008-11-01

    To explore health, use of health services, 'core' information, and reasons for non-participation amongst males. Gender may provide an explanation for non-participation in the healthcare system. A growing body of research suggests that males are less likely than females to seek help from health professionals for their problems. The current research had its beginnings in the low response rate to a prior voluntary survey and health examination for Finnish males born in 1961. Data triangulation among 28 non-respondent middle-aged males in Helsinki was used. The methods involved structured and in-depth interviews and health measurements to explore these males' views concerning their health-related behaviours and use of health services. Non-respondent males seldom used healthcare services. Despite clinical risk factors (e.g. obesity and blood pressure) and various symptoms, the males perceived their health status as good. Work was widely experienced as excessively demanding, causing insomnia and other stress symptoms. Males expressed sensitive messages when a session was ending and the participant was close to the door and about to leave the room. This 'core' information included major causes of concern, anxiety, fears, and loneliness. This triangulation study showed that by using in-depth interviews as one research strategy, more sensitive 'feminist' expressions of health and ill-health were obtained from the men. The results emphasise a male's self-perception of his masculinity, which may have relevance to the health experience of the male population. Nurses and physicians need to pay special attention to the requirements of gender-specific healthcare to be most effective in the delivery of healthcare to males.

  18. Synthesis of Numerical Methods for Modeling Wave Energy Converter-Point Absorbers: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Li, Y.; Yu, Y. H.

    2012-05-01

    During the past few decades, wave energy has received significant attention among all forms of ocean energy. Industry has proposed hundreds of prototypes, such as oscillating water columns, point absorbers, overtopping systems, and bottom-hinged systems. In particular, many researchers have focused on modeling the floating-point absorber as the technology to extract wave energy. Several modeling methods have been used, such as the analytical method, the boundary-integral equation method, the Navier-Stokes equations method, and the empirical method. However, no standardized method has been agreed upon. To assist the development of wave energy conversion technologies, this report reviews the methods for modeling the floating-point absorber.

  19. Novel TPPO Based Maximum Power Point Method for Photovoltaic System

    Directory of Open Access Journals (Sweden)

    ABBASI, M. A.

    2017-08-01

    Full Text Available Photovoltaic (PV) systems have great potential and are nowadays installed more than other renewable energy sources. However, a PV system cannot perform optimally due to its strong dependence on climate conditions; because of this dependency, the PV system does not always operate at its maximum power point (MPP). Many MPP tracking methods have been proposed for this purpose. One of these is the Perturb and Observe (P&O) method, which is the most popular due to its simplicity, low cost, and fast tracking, but it deviates from the MPP under continuously changing weather conditions, especially under rapidly changing irradiance. A new Maximum Power Point Tracking (MPPT) method, Tetra Point Perturb and Observe (TPPO), has been proposed to improve PV system performance under changing irradiance conditions, and the effects of varying irradiance on the characteristic curves of the PV array module are delineated. The proposed MPPT method has shown better results in increasing the efficiency of a PV system.
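
    The abstract does not spell out the TPPO update rule, so the sketch below shows only the baseline P&O hill climbing that it modifies, on a toy power curve with a single maximum.

      def perturb_and_observe(measure, v0=30.0, dv=0.2, steps=200):
          # keep perturbing the operating voltage in the direction that
          # last increased the output power; reverse when power falls
          v, step = v0, dv
          p_prev = measure(v)
          for _ in range(steps):
              v += step
              p = measure(v)
              if p < p_prev:
                  step = -step
              p_prev = p
          return v

      # toy PV power curve with its maximum near 34 V
      pv_power = lambda v: max(0.0, -0.5 * (v - 34.0) ** 2 + 180.0)
      print(perturb_and_observe(pv_power))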

  20. Evaluation of the H-point standard additions method (HPSAM) and the generalized H-point standard additions method (GHPSAM) for the UV-analysis of two-component mixtures.

    Science.gov (United States)

    Hund, E; Massart, D L; Smeyers-Verbeke, J

    1999-10-01

    The H-point standard additions method (HPSAM) and two versions of the generalized H-point standard additions method (GHPSAM) are evaluated for the UV-analysis of two-component mixtures. Synthetic mixtures of anhydrous caffeine and phenazone as well as of atovaquone and proguanil hydrochloride were used. Furthermore, the method was applied to pharmaceutical formulations that contain these compounds as active drug substances. This paper shows both the difficulties that are related to the methods and the conditions by which acceptable results can be obtained.

  1. A multi points ultrasonic detection method for material flow of belt conveyor

    Science.gov (United States)

    Zhang, Li; He, Rongjun

    2018-03-01

    Because single-point ultrasonic ranging gives large detection errors when measuring the material flow of a belt conveyor carrying unevenly distributed or large coal, a material flow detection method based on multi-point ultrasonic ranging is designed. The method calculates the approximate cross-sectional area of the material by locating multiple points on the surfaces of the material and the belt, and then obtains the material flow from the running speed of the belt conveyor. The test results show that the method has a smaller detection error than single-point ultrasonic ranging under the condition of large, unevenly distributed coal.
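
    A sketch of the cross-section reconstruction under an assumed sensor geometry: ranges to the empty belt (calibration) and to the loaded surface at several lateral points, integrated with the trapezoidal rule. All numbers are illustrative.

      import numpy as np

      x = np.linspace(-0.4, 0.4, 9)                  # lateral measurement points across the belt, m
      h_empty = 1.00 - 0.30 * x**2                   # calibrated sensor-to-belt range when empty
      pile = np.maximum(0.0, 0.25 * (1.0 - (x / 0.30) ** 2))
      h_load = h_empty - pile                        # measured sensor-to-material range

      depth = h_empty - h_load                       # material depth at each point
      area = np.sum(0.5 * (depth[1:] + depth[:-1]) * np.diff(x))   # trapezoidal rule, m^2
      flow = area * 2.5                              # volumetric flow at a 2.5 m/s belt speed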

  2. Bloch Modes and Evanescent Modes of Photonic Crystals: Weak Form Solutions Based on Accurate Interface Triangulation

    Directory of Open Access Journals (Sweden)

    Matthias Saba

    2015-01-01

    Full Text Available We propose a new approach to calculate the complex photonic band structure, both purely dispersive and evanescent Bloch modes of finite range, of arbitrary three-dimensional photonic crystals. Our method, based on a well-established plane wave expansion and the weak form solution of Maxwell's equations, computes the Fourier components of periodic structures composed of distinct homogeneous material domains from a triangulated mesh representation of the inter-material interfaces; this allows substantially more accurate representations of the geometry of complex photonic crystals than the conventional representation by a cubic voxel grid. Our method works for general two-phase composite materials, consisting of bi-anisotropic materials with tensor-valued dielectric permittivity ε, magnetic permeability μ, and coupling matrices ς. We demonstrate for the Bragg mirror and a simple cubic crystal closely related to the Kelvin foam that relatively small numbers of Fourier components are sufficient to yield good convergence of the eigenvalues, making this method viable despite its computational complexity. As an application, we use the single gyroid crystal to demonstrate that the consideration of both conventional and evanescent Bloch modes is necessary to predict the key features of the reflectance spectrum by analysis of the band structure, in particular for light incident along the cubic [111] direction.

  3. New method of three-dimensional reconstruction from two-dimensional MR data sets

    International Nuclear Information System (INIS)

    Wrazidlo, W.; Schneider, S.; Brambs, H.J.; Richter, G.M.; Kauffmann, G.W.; Geiger, B.; Fischer, C.

    1989-01-01

    In medical diagnosis and therapy, cross-sectional images are obtained by means of US, CT, or MR imaging. The authors propose a new solution to the problem of constructing a shape over a set of cross-sectional contours from two-dimensional (2D) MR data sets. The method reduces the problem of constructing a shape over the cross sections to one of constructing a sequence of partial shapes, each connecting two cross sections lying on adjacent planes. The solution makes use of the Delaunay triangulation, which is isomorphic in this specific situation. The authors compute this Delaunay triangulation. Shape reconstruction is then achieved section by section by pruning the Delaunay triangulations.

  4. Near-point string: Simple method to demonstrate anticipated near point for multifocal and accommodating intraocular lenses.

    Science.gov (United States)

    George, Monica C; Lazer, Zane P; George, David S

    2016-05-01

    We present a technique that uses a near-point string to demonstrate the anticipated near point of multifocal and accommodating intraocular lenses (IOLs). Beads are placed on the string at distances corresponding to the near points for diffractive and accommodating IOLs. The string is held up to the patient's eye to demonstrate where each of the IOLs is likely to provide the best near vision. None of the authors has a financial or proprietary interest in any material or method mentioned. Copyright © 2016 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.

  5. A Review on the Modified Finite Point Method

    Directory of Open Access Journals (Sweden)

    Nan-Jing Wu

    2014-01-01

    Full Text Available The objective of this paper is to review recent advancements of the modified finite point method (MFPM). The MFPM is developed for solving general partial differential equations. Benchmark examples of employing this method to solve the Laplace, Poisson, convection-diffusion, Helmholtz, mild-slope, and extended mild-slope equations are verified and then illustrated in fluid flow problems. Application of the MFPM to the numerical generation of orthogonal grids, which is governed by the Laplace equation, is also demonstrated.

  6. Comparison of methods for accurate end-point detection of potentiometric titrations

    Science.gov (United States)

    Villela, R. L. A.; Borges, P. P.; Vyskočil, L.

    2015-01-01

    Detection of the end point in potentiometric titrations has wide application in experiments that demand very low measurement uncertainties, mainly for certifying reference materials. Simulations of experimental coulometric titration data and consequent error analysis of the end-point values were conducted using a programming code. These simulations revealed that the Levenberg-Marquardt method is in general more accurate than the traditional second-derivative technique currently used for end-point detection in potentiometric titrations. The performance of the methods is compared and presented in this paper.
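
    A sketch of the Levenberg-Marquardt variant using scipy's curve_fit (which defaults to LM for unconstrained problems): a sigmoid is fitted to synthetic noisy data, and its inflection parameter is read off as the end point. The sigmoid model and all constants are assumptions for illustration.

      import numpy as np
      from scipy.optimize import curve_fit

      def sigmoid(v, e0, de, v_eq, s):
          # idealized titration curve; v_eq is the inflection, i.e. the end point
          return e0 + de / (1.0 + np.exp(-(v - v_eq) / s))

      rng = np.random.default_rng(1)
      v = np.linspace(0.0, 20.0, 101)                       # titrant volume
      e = sigmoid(v, 200.0, 300.0, 10.2, 0.35) + rng.normal(0.0, 0.5, v.size)

      p0 = (e.min(), np.ptp(e), v[np.argmax(np.gradient(e))], 1.0)
      popt, _ = curve_fit(sigmoid, v, e, p0=p0)             # Levenberg-Marquardt fit
      print("end point:", popt[2])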

  7. Comparison of methods for accurate end-point detection of potentiometric titrations

    International Nuclear Information System (INIS)

    Villela, R L A; Borges, P P; Vyskočil, L

    2015-01-01

    Detection of the end point in potentiometric titrations has wide application in experiments that demand very low measurement uncertainties, mainly for certifying reference materials. Simulations of experimental coulometric titration data and consequent error analysis of the end-point values were conducted using a programming code. These simulations revealed that the Levenberg-Marquardt method is in general more accurate than the traditional second-derivative technique currently used for end-point detection in potentiometric titrations. The performance of the methods is compared and presented in this paper.

  8. Multi-scale calculation based on dual domain material point method combined with molecular dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Dhakal, Tilak Raj [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-02-27

    This dissertation combines the dual domain material point method (DDMP) with molecular dynamics (MD) in an attempt to create a multi-scale numerical method to simulate materials undergoing large deformations at high strain rates. In these types of problems, the material is often in a thermodynamically non-equilibrium state, and conventional constitutive relations are often not available. In this method, the closure quantities, such as stress, at each material point are calculated from an MD simulation of a group of atoms surrounding the material point. Rather than restricting the multi-scale simulation to a small spatial region, such as phase interfaces or crack tips, this multi-scale method can be used to consider non-equilibrium thermodynamic effects in a macroscopic domain. The method takes advantage of the fact that the material points only communicate with mesh nodes, not among themselves; therefore MD simulations for material points can be performed independently in parallel. First, using a one-dimensional shock problem as an example, the numerical properties of the original material point method (MPM), the generalized interpolation material point (GIMP) method, the convected particle domain interpolation (CPDI) method, and the DDMP method are investigated. Among these methods, only the DDMP method converges as the number of particles increases, but the large number of particles needed for convergence makes the method very expensive, especially in this multi-scale method where the stress at each material point is calculated using an MD simulation. To improve DDMP, the sub-point method is introduced in this dissertation, which provides high-quality numerical solutions with a very small number of particles. The multi-scale method based on DDMP with sub-points is successfully implemented for a one-dimensional problem of shock wave propagation in a cerium crystal. The MD simulation to calculate stress in each material point is performed on GPU using CUDA to accelerate the

  9. Thermodynamic free energy methods to investigate shape transitions in bilayer membranes.

    Science.gov (United States)

    Ramakrishnan, N; Tourdot, Richard W; Radhakrishnan, Ravi

    2016-06-01

    The conformational free energy landscape of a system is a fundamental thermodynamic quantity, of importance particularly in the study of soft matter and biological systems, in which entropic contributions play a dominant role. While computational methods to delineate the free energy landscape are routinely used to analyze the relative stability of conformational states, to determine phase boundaries, and to compute ligand-receptor binding energies, their use in problems involving the cell membrane is limited. Here, we present an overview of four different free energy methods to study morphological transitions in bilayer membranes, induced either by the action of curvature-remodeling proteins or by the application of external forces. Using a triangulated surface as a model for the cell membrane and the framework of dynamical triangulation Monte Carlo, we have focused on the methods of Widom insertion, thermodynamic integration, the Bennett acceptance scheme, and umbrella sampling with weighted histogram analysis (WHAM). We demonstrate how these methods can be employed in a variety of problems involving the cell membrane. Specifically, we show that the chemical potential, computed using Widom insertion, and the relative free energies, computed using thermodynamic integration and the Bennett acceptance method, are excellent measures for studying the transition from curvature-sensing to curvature-inducing behavior of membrane-associated proteins. Umbrella sampling and WHAM have been used to study the thermodynamics of tether formation in cell membranes, and the quantitative predictions of the computational model are in excellent agreement with experimental measurements. Furthermore, we also present a method based on WHAM and thermodynamic integration to handle problems related to the end-point catastrophe that are common in most free energy methods.

  10. Big Bang as a Critical Point

    Directory of Open Access Journals (Sweden)

    Jakub Mielczarek

    2017-01-01

    Full Text Available This article addresses the issue of possible gravitational phase transitions in the early universe. We suggest that a second-order phase transition observed in the Causal Dynamical Triangulations approach to quantum gravity may have a cosmological relevance. The phase transition interpolates between a nongeometric crumpled phase of gravity and an extended phase with classical properties. Transition of this kind has been postulated earlier in the context of geometrogenesis in the Quantum Graphity approach to quantum gravity. We show that critical behavior may also be associated with a signature change in Loop Quantum Cosmology, which occurs as a result of quantum deformation of the hypersurface deformation algebra. In the considered cases, classical space-time originates at the critical point associated with a second-order phase transition. Relation between the gravitational phase transitions and the corresponding change of symmetry is underlined.

  11. Development of the delayed-neutron triangulation technique for locating failed fuel in LMFBR

    International Nuclear Information System (INIS)

    Kryter, R.C.

    1975-01-01

    Two major accomplishments of the ORNL delayed neutron triangulation program are (1) an analysis of anticipated detector counting rates and sensitivities to unclad fuel and erosion types of pin failure, and (2) an experimental assessment of the accuracy with which the position of failed fuel can be determined in the FFTF (this was performed in a quarter-scale water mockup of realistic outlet plenum geometry using electrolyte injections and conductivity cells to simulate delayed-neutron precursor releases and detections, respectively). The major results and conclusions from these studies are presented, along with plans for further DNT development work at ORNL for the FFTF and CRBR. (author)

  12. The structure of chromatic polynomials of planar triangulations and implications for chromatic zeros and asymptotic limiting quantities

    International Nuclear Information System (INIS)

    Shrock, Robert; Xu Yan

    2012-01-01

    We present an analysis of the structure and properties of chromatic polynomials P(G_{pt,m⃗}, q) of one-parameter and multi-parameter families of planar triangulation graphs G_{pt,m⃗}, where m⃗ = (m_1, ..., m_p) is a vector of integer parameters. We use these to study the ratio of |P(G_{pt,m⃗}, τ+1)| to the Tutte upper bound (τ−1)^(n−5), where τ = (1+√5)/2 and n is the number of vertices in G_{pt,m⃗}. In particular, we calculate limiting values of this ratio as n → ∞ for various families of planar triangulations. We also use our calculations to analyze zeros of these chromatic polynomials. We study a large class of families G_{pt,m⃗} with p = 1 and p = 2 and show that these have a structure of the form P(G_{pt,m}, q) = c_{G_pt,1} λ_1^m + c_{G_pt,2} λ_2^m + c_{G_pt,3} λ_3^m for p = 1, where λ_1 = q−2, λ_2 = q−3, and λ_3 = −1, and P(G_{pt,m⃗}, q) = Σ_{i_1=1}^{3} Σ_{i_2=1}^{3} c_{G_pt,i_1 i_2} λ_{i_1}^{m_1} λ_{i_2}^{m_2} for p = 2. We derive properties of the coefficients c_{G_pt,i⃗} and show that P(G_{pt,m⃗}, q) has a real chromatic zero that approaches (1/2)(3+√5) as one or more of the m_i → ∞. The generalization to p ⩾ 3 is given. Further, we present a one-parameter family of planar triangulations with real zeros that approach 3 from below as m → ∞. Implications for the ground-state entropy of the Potts antiferromagnet are discussed. (paper)

  13. Assessment of behavioral changes associated with oral meloxicam administration at time of dehorning in calves using a remote triangulation device and accelerometers

    Directory of Open Access Journals (Sweden)

    Theurer Miles E

    2012-04-01

    Full Text Available Abstract Background Dehorning is common in the cattle industry, and there is a need for research evaluating pain mitigation techniques. The objective of this study was to determine the effects of oral meloxicam, a non-steroidal anti-inflammatory, on cattle behavior post-dehorning by monitoring the percent of time spent standing, walking, and lying in specific locations within the pen using accelerometers and a remote triangulation device. Twelve calves approximately ten weeks of age were randomized into two treatment groups (meloxicam or control) in a complete block design by body weight. Six calves were orally administered 0.5 mg/kg meloxicam at the time of dehorning and six calves served as negative controls. All calves were dehorned using thermocautery, and the behavior of each calf was continuously monitored for 7 days after dehorning using accelerometers and a remote triangulation device. Accelerometers monitored lying behavior, and the remote triangulation device was used to monitor each calf's movement within the pen. Results Analysis of behavioral data revealed significant interactions between treatment (meloxicam vs. control) and the number of days post-dehorning. Calves that received meloxicam spent more time at the grain bunk on trial days 2 and 6 post-dehorning; spent more time lying down on days 1, 2, 3, and 4; and spent less time at the hay feeder on days 0 and 1 compared to the control group. Meloxicam calves tended to walk more at the beginning and end of the trial compared to the control group. By day 5, the meloxicam and control groups exhibited similar behaviors. Conclusions The noted behavioral changes provide evidence of differences associated with meloxicam administration. More studies need to be performed to evaluate the relationship between behavior monitoring and post-operative pain. To our knowledge this is the first published report demonstrating behavioral changes following dehorning using a remote triangulation device in conjunction

  14. TRIANGULATION OF THE INTERSTELLAR MAGNETIC FIELD

    Energy Technology Data Exchange (ETDEWEB)

    Schwadron, N. A.; Moebius, E. [University of New Hampshire, Durham, NH 03824 (United States); Richardson, J. D. [Massachusetts Institute of Technology, Cambridge, MA 02139 (United States); Burlaga, L. F. [Goddard Space Flight Center, Greenbelt, MD 20771 (United States); McComas, D. J. [Southwest Research Institute, San Antonio, TX 78228 (United States)

    2015-11-01

    Determining the direction of the local interstellar magnetic field (LISMF) is important for understanding the heliosphere's global structure, the properties of the interstellar medium, and the propagation of cosmic rays in the local galactic medium. Measurements of interstellar neutral atoms by Ulysses for He and by SOHO/SWAN for H provided some of the first observational insights into the LISMF direction. Because secondary neutral H is partially deflected by the interstellar flow in the outer heliosheath and this deflection is influenced by the LISMF, the relative deflection of H versus He provides a plane—the so-called B–V plane—in which the LISMF direction should lie. The Interstellar Boundary Explorer (IBEX) subsequently discovered a ribbon, the center of which is conjectured to be the LISMF direction. The most recent He velocity measurements from IBEX and those from Ulysses yield a B–V plane with uncertainty limits that contain the centers of the IBEX ribbon at 0.7–2.7 keV. The possibility that Voyager 1 has moved into the outer heliosheath now suggests that Voyager 1's direct observations provide another independent determination of the LISMF. We show that the LISMF direction measured by Voyager 1 is >40° off from the IBEX ribbon center and the B–V plane. Taking into account the temporal gradient of the field direction measured by Voyager 1, we extrapolate to a field direction that passes directly through the IBEX ribbon center (0.7–2.7 keV) and the B–V plane, allowing us to triangulate the LISMF direction and estimate the gradient scale size of the magnetic field.

  15. Entropy Based Test Point Evaluation and Selection Method for Analog Circuit Fault Diagnosis

    Directory of Open Access Journals (Sweden)

    Yuan Gao

    2014-01-01

    Full Text Available By simplifying the tolerance problem and treating faulty voltages on different test points as independent variables, the integer-coded table technique has been proposed to simplify the test point selection process. However, simplifying the tolerance problem may induce a wrong solution, while the independence assumption leads to overly conservative results. To address these problems, the tolerance problem is thoroughly considered in this paper, and the dependency relationship between different test points is considered at the same time. A heuristic graph search method is proposed to facilitate the test point selection process. First, the information-theoretic concept of entropy is used to evaluate the optimality of a test point. The entropy is calculated from the ambiguity sets and the faulty voltage distribution determined by component tolerances. Second, the selected optimal test point is used to expand the current graph node using the dependence relationship between the test point and the graph node. Simulated results indicate that the proposed method finds the optimal set of test points more accurately than other methods; it is therefore a good solution for minimizing the size of the test point set. To simplify and clarify the proposed method, only catastrophic and some specific parametric faults are discussed in this paper.
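
    The entropy evaluation reduces to the distribution of faults across a test point's ambiguity sets. A toy sketch with a made-up integer-coded table follows; the paper's tolerance-weighted probabilities are replaced here by plain counts.

      import numpy as np

      def test_point_entropy(codes):
          # faults sharing a code are indistinguishable at this test point;
          # a more even partition carries more information (higher entropy)
          _, counts = np.unique(codes, return_counts=True)
          p = counts / counts.sum()
          return float(-np.sum(p * np.log2(p)))

      # rows = candidate test points, columns = fault classes (hypothetical codes)
      table = np.array([[0, 0, 1, 1, 2],
                        [0, 1, 1, 2, 2],
                        [0, 0, 0, 1, 1]])
      scores = [test_point_entropy(row) for row in table]
      print("best test point:", int(np.argmax(scores)))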

  16. Accuracy of multi-point boundary crossing time analysis

    Directory of Open Access Journals (Sweden)

    J. Vogt

    2011-12-01

    Full Text Available Recent multi-spacecraft studies of solar wind discontinuity crossings using the timing (boundary plane triangulation) method gave boundary parameter estimates that are significantly different from those of the well-established single-spacecraft minimum variance analysis (MVA) technique. A large survey of directional discontinuities in Cluster data turned out to be particularly inconsistent in the sense that multi-point timing analyses did not identify any rotational discontinuities (RDs), whereas the MVA results of the individual spacecraft suggested that RDs form the majority of events. To make multi-spacecraft studies of discontinuity crossings more conclusive, the present report addresses the accuracy of the timing approach to boundary parameter estimation. Our error analysis is based on the reciprocal vector formalism and takes into account uncertainties both in crossing times and in the spacecraft positions. A rigorous error estimation scheme is presented for the general case of correlated crossing time errors and arbitrary spacecraft configurations. Crossing time error covariances are determined through cross-correlation analyses of the residuals. The principal influence of the spacecraft array geometry on the accuracy of the timing method is illustrated using error formulas for the simplified case of mutually uncorrelated and identical errors at different spacecraft. The full error analysis procedure is demonstrated for a solar wind discontinuity as observed by the Cluster FGM instrument.
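
    The timing estimate itself is a small linear solve. A sketch for four spacecraft and a planar boundary follows, with invented positions and crossing times; the paper's error covariance propagation is not reproduced.

      import numpy as np

      r = np.array([[0.0,   0.0,   0.0],        # spacecraft positions, km
                    [100.0, 0.0,   0.0],
                    [0.0,   100.0, 0.0],
                    [0.0,   0.0,   100.0]])
      t = np.array([0.0, 1.2, 0.4, 0.9])        # boundary crossing times, s

      # planar boundary moving at speed V along its unit normal n:
      # m . (r_i - r_0) = t_i - t_0, with slowness vector m = n / V
      A = r[1:] - r[0]
      b = t[1:] - t[0]
      m, *_ = np.linalg.lstsq(A, b, rcond=None)   # exact for 4 s/c, least squares for more
      V = 1.0 / np.linalg.norm(m)                 # boundary speed, km/s
      n = m * V                                   # boundary unit normal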

  17. Theory, Method, and Triangulation in the Study of Street Children.

    Science.gov (United States)

    Lucchini, Riccardo

    1996-01-01

    Describes how a comparative study of street children in Montevideo (Uruguay), Rio de Janeiro, and Mexico City contributes to a synergism between theory and method. Notes how theoretical approaches of symbolic interactionism, genetic structuralism, and habitus theory complement interview, participant observation, and content analysis methods;…

  18. AN IMPROVEMENT ON GEOMETRY-BASED METHODS FOR GENERATION OF NETWORK PATHS FROM POINTS

    Directory of Open Access Journals (Sweden)

    Z. Akbari

    2014-10-01

    Full Text Available Determining network paths is important for different purposes, such as determining road traffic, the average speed of vehicles, and other network analyses. One of the required inputs is information about the network path. Nevertheless, the data collected by positioning systems often consist of discrete points. Converting these points to a network path has become a challenge for which different researchers have presented many solutions. This study investigates geometry-based methods for estimating network paths from the obtained points and improves an existing point-to-curve method. To this end, several geometry-based methods are studied, and an improved method is proposed by applying conditions to the best of them after describing and illustrating their weaknesses.
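
    The basic point-to-curve step projects each observed point onto its nearest road segment. A minimal sketch follows; the paper's added conditions (e.g. continuity along the network) are omitted.

      import numpy as np

      def project_to_segment(p, a, b):
          # closest point to p on segment ab, plus the distance to it
          ab = b - a
          t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
          q = a + t * ab
          return q, np.linalg.norm(p - q)

      def snap_to_polyline(p, poly):
          # point-to-curve matching: snap p onto its closest polyline segment
          best = min((project_to_segment(p, poly[i], poly[i + 1])
                      for i in range(len(poly) - 1)), key=lambda qd: qd[1])
          return best[0]

      road = np.array([[0.0, 0.0], [10.0, 0.5], [20.0, 3.0]])   # toy road polyline
      print(snap_to_polyline(np.array([9.0, 1.5]), road))       # matched position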

  19. Modeling of Landslides with the Material Point Method

    DEFF Research Database (Denmark)

    Andersen, Søren Mikkel; Andersen, Lars

    2008-01-01

    A numerical model for studying the dynamic evolution of landslides is presented. The numerical model is based on the Generalized Interpolation Material Point Method. A simplified slope with a house placed on top is analysed. An elasto-plastic material model based on the Mohr-Coulomb yield criterion...

  20. Modelling of Landslides with the Material-point Method

    DEFF Research Database (Denmark)

    Andersen, Søren; Andersen, Lars

    2009-01-01

    A numerical model for studying the dynamic evolution of landslides is presented. The numerical model is based on the Generalized Interpolation Material Point Method. A simplified slope with a house placed on top is analysed. An elasto-plastic material model based on the Mohr-Coulomb yield criterion...

  1. Analysis and research on Maximum Power Point Tracking of Photovoltaic Array with Fuzzy Logic Control and Three-point Weight Comparison Method

    Institute of Scientific and Technical Information of China (English)

    LIN; Kuang-Jang; LIN; Chii-Ruey

    2010-01-01

    The photovoltaic array has an optimal operating point at which it delivers maximum power. However, this optimal operating point shifts with the strength and angle of solar radiation and with changes in environment and load. Due to the constant changes in these conditions, it is very difficult to locate the optimal operating point by following a mathematical model. Therefore, this study focuses on applying Fuzzy Logic Control theory and the Three-point Weight Comparison Method to locate the optimal operating point of a solar panel and achieve maximum efficiency in power generation. The Three-point Weight Comparison Method compares points on the characteristic curve of photovoltaic array voltage versus output power; it is a rather simple way to track the maximum power. Fuzzy Logic Control, on the other hand, can be used to solve problems that cannot be effectively dealt with by calculation rules, such as concepts, contemplation, deductive reasoning, and identification. This paper therefore applies these two methods in successive simulations. The simulation results show that the three-point comparison method is more effective in environments with frequent changes of solar radiation, while Fuzzy Logic Control has better tracking efficiency in environments with violent changes of solar radiation.

  2. Moments, Mixed Methods, and Paradigm Dialogs

    Science.gov (United States)

    Denzin, Norman K.

    2010-01-01

    I reread the 50-year-old history of the qualitative inquiry that calls for triangulation and mixed methods. I briefly visit the disputes within the mixed methods community asking how did we get to where we are today, the period of mixed-multiple-methods advocacy, and Teddlie and Tashakkori's third methodological moment. (Contains 10 notes.)

  3. Primal-Dual Interior Point Multigrid Method for Topology Optimization

    Czech Academy of Sciences Publication Activity Database

    Kočvara, Michal; Mohammed, S.

    2016-01-01

    Roč. 38, č. 5 (2016), B685-B709 ISSN 1064-8275 Grant - others: European Commission - EC(XE) 313781 Institutional support: RVO:67985556 Keywords: topology optimization * multigrid methods * interior point methods Subject RIV: BA - General Mathematics Impact factor: 2.195, year: 2016 http://library.utia.cas.cz/separaty/2016/MTR/kocvara-0462418.pdf

  4. A Classification-oriented Method of Feature Image Generation for Vehicle-borne Laser Scanning Point Clouds

    Directory of Open Access Journals (Sweden)

    YANG Bisheng

    2016-02-01

    Full Text Available An efficient method of feature image generation for point clouds is proposed to automatically classify dense point clouds into different categories, such as terrain points and building points. The method first sorts points into different grids using planar projection, then calculates the weights and feature values of the grids according to the distribution of the laser scanning points, and finally generates the feature image of the point cloud. Based on the generated image, the method adopts contour extraction and tracing to extract the boundaries and point clouds of man-made objects (e.g. buildings and trees) in 3D. Experiments show that the proposed method provides a promising solution for classifying and extracting man-made objects from vehicle-borne laser scanning point clouds.
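
    A compact sketch of the planar-projection step, using the height range per grid cell as a stand-in feature value; the paper's actual weight and feature definitions are not specified in the abstract, so everything here is an assumption.

      import numpy as np

      def feature_image(points, cell=0.5):
          # sort points into planar grid cells and store one feature per cell
          ij = np.floor(points[:, :2] / cell).astype(int)
          ij -= ij.min(axis=0)
          shape = tuple(ij.max(axis=0) + 1)
          zmin = np.full(shape, np.inf)
          zmax = np.full(shape, -np.inf)
          for (i, j), z in zip(ij, points[:, 2]):
              zmin[i, j] = min(zmin[i, j], z)
              zmax[i, j] = max(zmax[i, j], z)
          img = np.zeros(shape)
          filled = zmax >= zmin                    # cells that received any points
          img[filled] = (zmax - zmin)[filled]      # feature: local height range
          return img

      rng = np.random.default_rng(0)
      cloud = rng.random((10000, 3)) * [50.0, 50.0, 8.0]   # synthetic point cloud
      img = feature_image(cloud)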

  5. Summing Feynman graphs by Monte Carlo: Planar φ3-theory and dynamically triangulated random surfaces

    International Nuclear Information System (INIS)

    Boulatov, D.V.

    1988-01-01

    New combinatorial identities are suggested relating the ratio of the (n−1)-th and n-th orders of (planar) perturbation expansion for any quantity to some average over the ensemble of all planar graphs of the n-th order. These identities are used for Monte Carlo calculation of the critical exponent γ_str (string susceptibility) in planar φ³-theory and in the dynamically triangulated random surface (DTRS) model near the convergence circle for various dimensions. In the solvable case D = 1 the exact critical properties of the theory are reproduced numerically. (orig.)

  6. Methods for registration laser scanner point clouds in forest stands

    International Nuclear Information System (INIS)

    Bienert, A.; Pech, K.; Maas, H.-G.

    2011-01-01

    Laser scanning is a fast and efficient 3-D measurement technique for capturing surface points that describe the geometry of a complex object in an accurate and reliable way. Besides airborne laser scanning, terrestrial laser scanning is finding growing interest for forestry applications. These two recording platforms show large differences in resolution, recording area, and scan viewing direction. Using both datasets in a combined point cloud analysis may yield advantages because of their largely complementary information. In this paper, methods are presented to automatically register airborne and terrestrial laser scanner point clouds of a forest stand. In a first step, tree detection is performed in both datasets in an automatic manner. In a second step, corresponding tree positions are determined using RANSAC. Finally, the geometric transformation is performed, divided into a coarse and a fine registration. After the coarse registration, the fine registration is done in an iterative manner (ICP) using the point clouds themselves. The methods are tested and validated with a dataset of a forest stand. The presented registration results provide accuracies which fulfill forestry requirements.
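
    Once corresponding tree positions are found (via RANSAC, as in the paper), the coarse registration is a closed-form rigid fit. The sketch below shows that step with the standard Kabsch/Procrustes solution on synthetic correspondences; the RANSAC search and the ICP fine registration are omitted.

      import numpy as np

      def rigid_transform(P, Q):
          # least-squares rotation R and translation t with Q ~ R P + t
          cP, cQ = P.mean(axis=0), Q.mean(axis=0)
          H = (P - cP).T @ (Q - cQ)
          U, _, Vt = np.linalg.svd(H)
          D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
          R = Vt.T @ D @ U.T                       # proper rotation (det = +1)
          return R, cQ - R @ cP

      # matched tree positions: terrestrial (P) and airborne (Q) coordinates
      rng = np.random.default_rng(0)
      P = rng.random((10, 3)) * 50.0
      true_R = np.array([[0.0, -1.0, 0.0],
                         [1.0,  0.0, 0.0],
                         [0.0,  0.0, 1.0]])
      Q = P @ true_R.T + np.array([5.0, -3.0, 0.2])
      R, t = rigid_transform(P, Q)                 # recovers true_R and the offset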

  7. Integral staggered point-matching method for millimeter-wave reflective diffraction gratings on electron cyclotron heating systems

    International Nuclear Information System (INIS)

    Xia, Donghui; Huang, Mei; Wang, Zhijiang; Zhang, Feng; Zhuang, Ge

    2016-01-01

    Highlights: • The integral staggered point-matching method for design of polarizers on the ECH systems is presented. • The availability of the integral staggered point-matching method is checked by numerical calculations. • Two polarizers are designed with the integral staggered point-matching method and the experimental results are given. - Abstract: The reflective diffraction gratings are widely used in the high power electron cyclotron heating systems for polarization strategy. This paper presents a method which we call “the integral staggered point-matching method” for design of reflective diffraction gratings. This method is based on the integral point-matching method. However, it effectively removes the convergence problems and tedious calculations of the integral point-matching method, making it easier to be used for a beginner. A code is developed based on this method. The calculation results of the integral staggered point-matching method are compared with the integral point-matching method, the coordinate transformation method and the low power measurement results. It indicates that the integral staggered point-matching method can be used as an optional method for the design of reflective diffraction gratings in electron cyclotron heating systems.

  8. Integral staggered point-matching method for millimeter-wave reflective diffraction gratings on electron cyclotron heating systems

    Energy Technology Data Exchange (ETDEWEB)

    Xia, Donghui [State Key Laboratory of Advanced Electromagnetic Engineering and Technology, Huazhong University of Science and Technology, 430074 Wuhan (China); Huang, Mei [Southwestern Institute of Physics, 610041 Chengdu (China); Wang, Zhijiang, E-mail: wangzj@hust.edu.cn [State Key Laboratory of Advanced Electromagnetic Engineering and Technology, Huazhong University of Science and Technology, 430074 Wuhan (China); Zhang, Feng [Southwestern Institute of Physics, 610041 Chengdu (China); Zhuang, Ge [State Key Laboratory of Advanced Electromagnetic Engineering and Technology, Huazhong University of Science and Technology, 430074 Wuhan (China)

    2016-10-15

    Highlights: • The integral staggered point-matching method for design of polarizers on the ECH systems is presented. • The availability of the integral staggered point-matching method is checked by numerical calculations. • Two polarizers are designed with the integral staggered point-matching method and the experimental results are given. - Abstract: The reflective diffraction gratings are widely used in the high power electron cyclotron heating systems for polarization strategy. This paper presents a method which we call “the integral staggered point-matching method” for design of reflective diffraction gratings. This method is based on the integral point-matching method. However, it effectively removes the convergence problems and tedious calculations of the integral point-matching method, making it easier to be used for a beginner. A code is developed based on this method. The calculation results of the integral staggered point-matching method are compared with the integral point-matching method, the coordinate transformation method and the low power measurement results. It indicates that the integral staggered point-matching method can be used as an optional method for the design of reflective diffraction gratings in electron cyclotron heating systems.

  9. Evaluation of the point-centred-quarter method of sampling ...

    African Journals Online (AJOL)

    -quarter method. The parameter which was most efficiently sampled was species composition (relative density), with 90% replicate similarity being achieved with 100 point-centred-quarters. However, this technique cannot be recommended, even ...

  10. The Closest Point Method and Multigrid Solvers for Elliptic Equations on Surfaces

    KAUST Repository

    Chen, Yujia

    2015-01-01

    © 2015 Society for Industrial and Applied Mathematics. Elliptic partial differential equations are important from both application and analysis points of view. In this paper we apply the closest point method to solve elliptic equations on general curved surfaces. Based on the closest point representation of the underlying surface, we formulate an embedding equation for the surface elliptic problem, then discretize it using standard finite differences and interpolation schemes on banded but uniform Cartesian grids. We prove the convergence of the difference scheme for Poisson's equation on a smooth closed curve. In order to solve the resulting large sparse linear systems, we propose a specific geometric multigrid method in the setting of the closest point method. Convergence studies in both the accuracy of the difference scheme and the speed of the multigrid algorithm show that our approaches are effective.

  11. A Hybrid Maximum Power Point Tracking Method for Automobile Exhaust Thermoelectric Generator

    Science.gov (United States)

    Quan, Rui; Zhou, Wei; Yang, Guangyou; Quan, Shuhai

    2017-05-01

    To make full use of the maximum output power of an automobile exhaust thermoelectric generator (AETEG) based on Bi2Te3 thermoelectric modules (TEMs), taking into account the advantages and disadvantages of existing maximum power point tracking methods, and according to the output characteristics of TEMs, a hybrid maximum power point tracking method combining the perturb and observe (P&O) algorithm, quadratic interpolation and constant voltage tracking is put forward in this paper. First, it searches for the maximum power point with the P&O algorithm and a quadratic interpolation method; then it forces the AETEG to work at its maximum power point with constant voltage tracking. A synchronous buck converter and controller were implemented on the electrical bus of the AETEG applied in a military sports utility vehicle, and the whole system was modeled and simulated in a MATLAB/Simulink environment. Simulation results demonstrate that the maximum output power of the AETEG based on the proposed hybrid method is increased by about 3.0% and 3.7% compared with that using only the P&O algorithm and the quadratic interpolation method, respectively. The tracking time is only 1.4 s, half that of the P&O algorithm and of the quadratic interpolation method. The experimental results demonstrate that the maximum power tracked with the proposed hybrid method is approximately equal to the real value; the hybrid method deals better with the voltage fluctuation of the AETEG than the P&O algorithm alone, and it resolves the issue that the working point can barely be adjusted with constant voltage tracking alone when the operating conditions change.
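
    A hedged sketch of the hybrid logic described above, on a linear source model of the TEG (open-circuit voltage and internal resistance are invented numbers, not the paper's device data): P&O perturbs the operating voltage until the peak is bracketed, a quadratic fit through the last three samples jumps close to the peak, and constant-voltage tracking would then hold it.

        import numpy as np

        VOC, RINT = 24.0, 1.2          # assumed open-circuit voltage and internal resistance

        def power(v):
            """TEG output power at terminal voltage v for a linear source model."""
            i = (VOC - v) / RINT
            return v * i

        v, step = 6.0, 0.5             # initial operating voltage and P&O step size
        history = [(v, power(v))]

        # Phase 1: perturb and observe until the power drops, i.e. the peak is bracketed.
        while True:
            v_new = v + step
            p_new = power(v_new)
            history.append((v_new, p_new))
            if p_new < history[-2][1]:
                break
            v = v_new

        # Phase 2: quadratic interpolation through the last three samples; the vertex of
        # the parabola approximates the maximum power point (exact for this linear model).
        (v1, p1), (v2, p2), (v3, p3) = history[-3:]
        coeffs = np.polyfit([v1, v2, v3], [p1, p2, p3], 2)
        v_mpp = -coeffs[1] / (2.0 * coeffs[0])

        # Phase 3: constant-voltage tracking would now regulate the converter at v_mpp.
        print(f"tracked MPP: V = {v_mpp:.2f} V, P = {power(v_mpp):.2f} W (ideal: {VOC/2:.2f} V)")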

  12. Gender preference between traditional and PowerPoint methods of teaching gross anatomy.

    Science.gov (United States)

    Nuhu, Saleh; Adamu, Lawan Hassan; Buba, Mohammed Alhaji; Garba, Sani Hyedima; Dalori, Babagana Mohammed; Yusuf, Ashiru Hassan

    2018-01-01

    The teaching and learning process is increasingly metamorphosing from the traditional chalk and talk to the modern dynamism of information and communication technology. Medical education is no exception to this dynamism, especially in the teaching of gross anatomy, which serves as one of the bases of understanding the human structure. This study was conducted to determine the gender preference of preclinical medical students regarding the use of traditional (chalk and talk) and PowerPoint presentation in the teaching of gross anatomy. This was a cross-sectional, prospective study conducted among preclinical medical students at the University of Maiduguri, Nigeria. Using simple random techniques, a questionnaire was circulated among 280 medical students, of whom 247 filled in the questionnaire appropriately. The data obtained were analyzed using SPSS version 20 (IBM Corporation, Armonk, NY, USA) to find, among other things, the method preferred by the students. The majority of the preclinical medical students at the University of Maiduguri preferred the PowerPoint method in the teaching of gross anatomy over the conventional method. A Cronbach alpha value of 0.76 was obtained, which is an acceptable level of internal consistency. A statistically significant association was found between gender and preferred method of lecture delivery with respect to the clarity of lecture content: females preferred the conventional method of lecture delivery, whereas males preferred the PowerPoint method. On the reproducibility of text and diagrams, females preferred the PowerPoint method of teaching gross anatomy, while males preferred the conventional method. There are gender preferences with regard to clarity of lecture contents and reproducibility of text and diagrams. It was also revealed from this study that the majority of the preclinical medical students at the University of Maiduguri prefer PowerPoint presentation over the traditional chalk and talk method in most of the ...

  13. A Reference Point Construction Method Using Mobile Terminals and the Indoor Localization Evaluation in the Centroid Method

    Directory of Open Access Journals (Sweden)

    Takahiro Yamaguchi

    2015-05-01

    Full Text Available As smartphones become widespread, a variety of smartphone applications are being developed. This paper proposes a method for indoor localization (i.e., positioning) that uses only smartphones, which are general-purpose mobile terminals, as reference point devices. This method has the following features: (a) the localization system is built with smartphones whose movements are confined to respective limited areas; no fixed reference point devices are used; (b) the method does not depend on the wireless performance of smartphones and does not require information about the propagation characteristics of the radio waves sent from reference point devices; and (c) the method determines the location at the application layer, at which location information can be easily incorporated into high-level services. We have evaluated the level of localization accuracy of the proposed method by building a software emulator that modeled an underground shopping mall. We have confirmed that the determined location is within a small area in which the user can find target objects visually.
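
    A minimal sketch of the centroid method evaluated in the paper (the reference point layout and detection lists are invented): the terminal's position is estimated as the centroid of the known positions of the reference points whose signals it currently receives.

        from statistics import mean

        reference_points = {            # reference terminal id -> known (x, y) in metres
            "rp1": (0.0, 0.0),
            "rp2": (10.0, 0.0),
            "rp3": (10.0, 8.0),
            "rp4": (0.0, 8.0),
        }

        def centroid_estimate(heard_ids):
            """Estimate (x, y) as the centroid of the detected reference points."""
            pts = [reference_points[i] for i in heard_ids if i in reference_points]
            if not pts:
                raise ValueError("no reference points detected")
            return (mean(p[0] for p in pts), mean(p[1] for p in pts))

        print(centroid_estimate(["rp2", "rp3"]))   # -> (10.0, 4.0)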

  14. Improved DEA Cross Efficiency Evaluation Method Based on Ideal and Anti-Ideal Points

    Directory of Open Access Journals (Sweden)

    Qiang Hou

    2018-01-01

    Full Text Available A new model is introduced in the process of evaluating the efficiency value of decision making units (DMUs) through the data envelopment analysis (DEA) method. Two virtual DMUs, called the ideal point DMU and the anti-ideal point DMU, are combined to form a comprehensive model based on the DEA method. The ideal point DMU adopts a self-assessment system according to the efficiency concept. The anti-ideal point DMU adopts an other-assessment system according to the fairness concept. The two distinctive ideal point models are introduced into the DEA method and combined by means of a variance ratio. From the new model, a reasonable result can be obtained. Numerical examples are provided to illustrate the newly constructed model and to certify its rationality through comparative analysis with the traditional DEA model.
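
    For background, a hedged sketch of the standard input-oriented CCR efficiency score from classical DEA, on which self- and cross-evaluation schemes such as the one above are built; this is not the paper's new model, and the input/output data are invented.

        import numpy as np
        from scipy.optimize import linprog

        X = np.array([[2.0, 3.0], [4.0, 1.0], [3.0, 2.0]])   # inputs, one row per DMU
        Y = np.array([[1.0], [1.0], [1.5]])                  # outputs, one row per DMU

        def ccr_efficiency(o):
            """Multiplier-form CCR: max u.y_o  s.t.  v.x_o = 1,  u.y_j <= v.x_j for all j."""
            m, s = X.shape[1], Y.shape[1]
            c = np.concatenate([np.zeros(m), -Y[o]])          # minimize -u.y_o
            A_ub = np.hstack([-X, Y])                         # u.y_j - v.x_j <= 0
            b_ub = np.zeros(X.shape[0])
            A_eq = np.concatenate([X[o], np.zeros(s)]).reshape(1, -1)
            res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0], bounds=(0, None))
            return -res.fun

        for o in range(X.shape[0]):
            print(f"DMU {o}: efficiency = {ccr_efficiency(o):.3f}")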

  15. An Efficient Mesh Generation Method for Fractured Network System Based on Dynamic Grid Deformation

    Directory of Open Access Journals (Sweden)

    Shuli Sun

    2013-01-01

    Full Text Available The meshing quality of the discrete model influences the accuracy, convergence, and efficiency of the solution for fractured network systems in geological problems. However, modeling and meshing of such a fractured network system are usually tedious and difficult due to the geometric complexity of the computational domain induced by the existence and extension of fractures. The traditional meshing method to deal with fractures usually involves a boundary recovery operation based on topological transformation, which relies on many complicated techniques and skills. This paper presents an alternative and efficient approach for meshing fractured network systems. The method first presets points on the fractures and then performs Delaunay triangulation to obtain a preliminary mesh by a point-by-point centroid insertion algorithm. The fractures are then exactly recovered by local correction with a revised dynamic grid deformation approach. A smoothing algorithm is finally applied to improve the quality of the mesh. The proposed approach is efficient, easy to implement, and applicable both to initially existing fractures and to the extension of fractures. The method is successfully applied to the modeling of two- and three-dimensional discrete fractured network (DFN) systems in geological problems to demonstrate its effectiveness and high efficiency.
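
    A minimal sketch of the first stage described above (the domain and fracture geometry are invented, and the paper's local correction and smoothing stages are not reproduced): points are preset along the fracture segments, merged with boundary points, and handed to a Delaunay triangulation.

        import numpy as np
        from scipy.spatial import Delaunay

        def preset_points(p0, p1, n):
            """Equally spaced points along a fracture segment from p0 to p1."""
            t = np.linspace(0.0, 1.0, n)[:, None]
            return (1.0 - t) * np.asarray(p0) + t * np.asarray(p1)

        # unit-square domain boundary, sampled
        b = np.linspace(0.0, 1.0, 11)
        boundary = np.vstack([np.c_[b, 0*b], np.c_[b, 0*b + 1], np.c_[0*b, b], np.c_[0*b + 1, b]])

        # two intersecting fractures inside the domain
        fractures = [((0.2, 0.2), (0.8, 0.9)), ((0.1, 0.7), (0.9, 0.3))]
        frac_pts = np.vstack([preset_points(p0, p1, 15) for p0, p1 in fractures])

        pts = np.unique(np.vstack([boundary, frac_pts]).round(9), axis=0)
        tri = Delaunay(pts)
        print(f"{len(pts)} points, {len(tri.simplices)} triangles")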

  16. Evaluation of a 3D point cloud tetrahedral tomographic reconstruction method

    Science.gov (United States)

    Pereira, N. F.; Sitek, A.

    2010-09-01

    Tomographic reconstruction on an irregular grid may be superior to reconstruction on a regular grid. This is achieved through an appropriate choice of the image space model, the selection of an optimal set of points and the use of any available prior information during the reconstruction process. Accordingly, a number of reconstruction-related parameters must be optimized for best performance. In this work, a 3D point cloud tetrahedral mesh reconstruction method is evaluated for quantitative tasks. A linear image model is employed to obtain the reconstruction system matrix and five point generation strategies are studied. The evaluation is performed using the recovery coefficient, as well as voxel- and template-based estimates of bias and variance measures, computed over specific regions in the reconstructed image. A similar analysis is performed for regular grid reconstructions that use voxel basis functions. The maximum likelihood expectation maximization reconstruction algorithm is used. For the tetrahedral reconstructions, of the five point generation methods that are evaluated, three use image priors. For evaluation purposes, an object consisting of overlapping spheres with varying activity is simulated. The exact parallel projection data of this object are obtained analytically using a parallel projector, and multiple Poisson noise realizations of these exact data are generated and reconstructed using the different point generation strategies. The unconstrained nature of point placement in some of the irregular mesh-based reconstruction strategies yields superior activity recovery for small, low-contrast image regions. The results show that, with an appropriately generated set of mesh points, the irregular grid reconstruction methods can outperform reconstructions on a regular grid for mathematical phantoms, in terms of the performance measures evaluated.
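
    For reference, a hedged sketch of the generic maximum likelihood expectation maximization (MLEM) update that such reconstructions rely on; the tiny system matrix below is invented and stands in for either voxel or tetrahedral-mesh basis functions.

        import numpy as np

        rng = np.random.default_rng(0)
        A = rng.uniform(0.0, 1.0, size=(40, 16))      # invented system matrix
        x_true = rng.uniform(0.5, 2.0, size=16)
        y = rng.poisson(A @ x_true)                   # Poisson noise realization of the data

        x = np.ones(16)                               # uniform non-negative start
        sens = A.sum(axis=0)                          # sensitivity image A^T 1
        for _ in range(200):
            ratio = y / np.maximum(A @ x, 1e-12)      # measured / forward-projected
            x *= (A.T @ ratio) / sens                 # multiplicative MLEM update

        print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))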

  17. Evaluation of a 3D point cloud tetrahedral tomographic reconstruction method

    International Nuclear Information System (INIS)

    Pereira, N F; Sitek, A

    2010-01-01

    Tomographic reconstruction on an irregular grid may be superior to reconstruction on a regular grid. This is achieved through an appropriate choice of the image space model, the selection of an optimal set of points and the use of any available prior information during the reconstruction process. Accordingly, a number of reconstruction-related parameters must be optimized for best performance. In this work, a 3D point cloud tetrahedral mesh reconstruction method is evaluated for quantitative tasks. A linear image model is employed to obtain the reconstruction system matrix and five point generation strategies are studied. The evaluation is performed using the recovery coefficient, as well as voxel- and template-based estimates of bias and variance measures, computed over specific regions in the reconstructed image. A similar analysis is performed for regular grid reconstructions that use voxel basis functions. The maximum likelihood expectation maximization reconstruction algorithm is used. For the tetrahedral reconstructions, of the five point generation methods that are evaluated, three use image priors. For evaluation purposes, an object consisting of overlapping spheres with varying activity is simulated. The exact parallel projection data of this object are obtained analytically using a parallel projector, and multiple Poisson noise realizations of these exact data are generated and reconstructed using the different point generation strategies. The unconstrained nature of point placement in some of the irregular mesh-based reconstruction strategies yields superior activity recovery for small, low-contrast image regions. The results show that, with an appropriately generated set of mesh points, the irregular grid reconstruction methods can outperform reconstructions on a regular grid for mathematical phantoms, in terms of the performance measures evaluated.

  18. Evaluation of a 3D point cloud tetrahedral tomographic reconstruction method

    Energy Technology Data Exchange (ETDEWEB)

    Pereira, N F; Sitek, A, E-mail: nfp4@bwh.harvard.ed, E-mail: asitek@bwh.harvard.ed [Department of Radiology, Brigham and Women's Hospital-Harvard Medical School, Boston, MA (United States)

    2010-09-21

    Tomographic reconstruction on an irregular grid may be superior to reconstruction on a regular grid. This is achieved through an appropriate choice of the image space model, the selection of an optimal set of points and the use of any available prior information during the reconstruction process. Accordingly, a number of reconstruction-related parameters must be optimized for best performance. In this work, a 3D point cloud tetrahedral mesh reconstruction method is evaluated for quantitative tasks. A linear image model is employed to obtain the reconstruction system matrix and five point generation strategies are studied. The evaluation is performed using the recovery coefficient, as well as voxel- and template-based estimates of bias and variance measures, computed over specific regions in the reconstructed image. A similar analysis is performed for regular grid reconstructions that use voxel basis functions. The maximum likelihood expectation maximization reconstruction algorithm is used. For the tetrahedral reconstructions, of the five point generation methods that are evaluated, three use image priors. For evaluation purposes, an object consisting of overlapping spheres with varying activity is simulated. The exact parallel projection data of this object are obtained analytically using a parallel projector, and multiple Poisson noise realizations of these exact data are generated and reconstructed using the different point generation strategies. The unconstrained nature of point placement in some of the irregular mesh-based reconstruction strategies yields superior activity recovery for small, low-contrast image regions. The results show that, with an appropriately generated set of mesh points, the irregular grid reconstruction methods can outperform reconstructions on a regular grid for mathematical phantoms, in terms of the performance measures evaluated.

  19. Comparative analysis among several methods used to solve the point kinetic equations

    International Nuclear Information System (INIS)

    Nunes, Anderson L.; Goncalves, Alessandro da C.; Martinez, Aquilino S.; Silva, Fernando Carvalho da

    2007-01-01

    The main objective of this work is to develop a methodology for the comparison of several methods for solving the point kinetics equations. The evaluated methods are: the finite differences method, the stiffness confinement method, the improved stiffness confinement method and the piecewise constant approximations method. These methods were implemented and compared through a systematic analysis that consists basically of determining which of the methods consumes the smallest computational time with the highest precision. A relative performance factor, whose function is to combine both criteria, was calculated in order to reach this goal. Through the analysis of the performance factor it is possible to choose the best method for the solution of the point kinetics equations. (author)
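
    For reference, a hedged sketch of the point kinetics equations the compared methods solve, here with one effective delayed neutron group and a step reactivity insertion, integrated with a stiff library solver rather than any of the schemes compared in the paper; all parameter values are invented.

        import numpy as np
        from scipy.integrate import solve_ivp

        beta, lam, Lambda = 0.0065, 0.08, 1.0e-4      # delayed fraction, decay const, gen. time
        rho = 0.5 * beta                              # step reactivity insertion (0.5 $)

        def point_kinetics(t, u):
            n, c = u                                  # neutron density, precursor concentration
            dn = (rho - beta) / Lambda * n + lam * c
            dc = beta / Lambda * n - lam * c
            return [dn, dc]

        n0 = 1.0
        c0 = beta / (lam * Lambda) * n0               # equilibrium precursors at t = 0
        sol = solve_ivp(point_kinetics, (0.0, 10.0), [n0, c0], method="Radau", rtol=1e-8)
        print(f"n(10 s) / n(0) = {sol.y[0, -1]:.3f}")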

  20. Comparative analysis among several methods used to solve the point kinetic equations

    Energy Technology Data Exchange (ETDEWEB)

    Nunes, Anderson L.; Goncalves, Alessandro da C.; Martinez, Aquilino S.; Silva, Fernando Carvalho da [Universidade Federal, Rio de Janeiro, RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia. Programa de Engenharia Nuclear; E-mails: alupo@if.ufrj.br; agoncalves@con.ufrj.br; aquilino@lmp.ufrj.br; fernando@con.ufrj.br

    2007-07-01

    The main objective of this work is to develop a methodology for the comparison of several methods for solving the point kinetics equations. The evaluated methods are: the finite differences method, the stiffness confinement method, the improved stiffness confinement method and the piecewise constant approximations method. These methods were implemented and compared through a systematic analysis that consists basically of determining which of the methods consumes the smallest computational time with the highest precision. A relative performance factor, whose function is to combine both criteria, was calculated in order to reach this goal. Through the analysis of the performance factor it is possible to choose the best method for the solution of the point kinetics equations. (author)

  1. Using Photogrammetric UAV Measurements as Support for Classical Topographical Measurements in Order to Obtain the Topographic Plan for Urban Areas

    Directory of Open Access Journals (Sweden)

    Elemer Emanuel SUBA

    2017-11-01

    Full Text Available This article aims to highlight the benefits of UAV photogrammetric measurements in addition to classical ones. It also deals with the processing and integration of the point cloud, respectively the digital elevation model, in topo-cadastral works. The main purpose of this paper is to compare the results obtained using UAV photogrammetric measurements with the results obtained by classical methods. The classical measurements made with the total station are briefly presented. In the present project, a closed-loop traverse and a traverse supported at both ends were run using points with known coordinates. The coordinates of the points used for the traverses were determined by GNSS methods. The area on which the measurements were made is 67,942 m² and is covered by 31 determined station points. Of these points, 13 were used as ground control points, respectively as components of the aerotriangulation network, and 17 points were used to control the obtained results by comparing their coordinates obtained by classical methods with those obtained by the UAV photogrammetric method. The constraint points of the aerotriangulation were intended to be uniformly distributed over the studied surface.
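
    A small sketch of the comparison step described above (the coordinates below are invented, not the survey's data): per-axis RMSE of the UAV-derived coordinates against the classical coordinates at the control points.

        import numpy as np

        classical = np.array([[4500.12, 3200.45], [4510.33, 3210.02], [4498.77, 3225.61]])
        uav       = np.array([[4500.15, 3200.41], [4510.28, 3210.08], [4498.81, 3225.55]])

        diff = uav - classical
        rmse = np.sqrt((diff ** 2).mean(axis=0))
        print(f"RMSE x = {rmse[0]*100:.1f} cm, RMSE y = {rmse[1]*100:.1f} cm")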

  2. Interior-Point Method for Non-Linear Non-Convex Optimization

    Czech Academy of Sciences Publication Activity Database

    Lukšan, Ladislav; Matonoha, Ctirad; Vlček, Jan

    2004-01-01

    Roč. 11, č. 5-6 (2004), s. 431-453 ISSN 1070-5325 R&D Projects: GA AV ČR IAA1030103 Institutional research plan: CEZ:AV0Z1030915 Keywords : non-linear programming * interior point methods * indefinite systems * indefinite preconditioners * preconditioned conjugate gradient method * merit functions * algorithms * computational experiments Subject RIV: BA - General Mathematics Impact factor: 0.727, year: 2004

  3. Primal Interior-Point Method for Large Sparse Minimax Optimization

    Czech Academy of Sciences Publication Activity Database

    Lukšan, Ladislav; Matonoha, Ctirad; Vlček, Jan

    2009-01-01

    Roč. 45, č. 5 (2009), s. 841-864 ISSN 0023-5954 R&D Projects: GA AV ČR IAA1030405; GA ČR GP201/06/P397 Institutional research plan: CEZ:AV0Z10300504 Keywords : unconstrained optimization * large-scale optimization * minimax optimization * nonsmooth optimization * interior-point methods * modified Newton methods * variable metric methods * computational experiments Subject RIV: BA - General Mathematics Impact factor: 0.445, year: 2009 http://dml.cz/handle/10338.dmlcz/140034

  4. Branches of Triangulated Origami Near the Unfolded State

    Directory of Open Access Journals (Sweden)

    Bryan Gin-ge Chen

    2018-02-01

    Full Text Available Origami structures are characterized by a network of folds and vertices joining unbendable plates. For applications to mechanical design and self-folding structures, it is essential to understand the interplay between the set of folds in the unfolded origami and the possible 3D folded configurations. When deforming a structure that has been folded, one can often linearize the geometric constraints, but the degeneracy of the unfolded state makes a linear approach impossible there. We derive a theory for the second-order infinitesimal rigidity of an initially unfolded triangulated origami structure and use it to study the set of nearly unfolded configurations of origami with four boundary vertices. We find that locally, this set consists of a number of distinct “branches” which intersect at the unfolded state, and that the number of these branches is exponential in the number of vertices. We find numerical and analytical evidence that suggests that the branches are characterized by choosing each internal vertex to either “pop up” or “pop down.” The large number of pathways along which one can fold an initially unfolded origami structure strongly indicates that a generic structure is likely to become trapped in a “misfolded” state. Thus, new techniques for creating self-folding origami are likely necessary; controlling the popping state of the vertices may be one possibility.

  5. Branches of Triangulated Origami Near the Unfolded State

    Science.gov (United States)

    Chen, Bryan Gin-ge; Santangelo, Christian D.

    2018-01-01

    Origami structures are characterized by a network of folds and vertices joining unbendable plates. For applications to mechanical design and self-folding structures, it is essential to understand the interplay between the set of folds in the unfolded origami and the possible 3D folded configurations. When deforming a structure that has been folded, one can often linearize the geometric constraints, but the degeneracy of the unfolded state makes a linear approach impossible there. We derive a theory for the second-order infinitesimal rigidity of an initially unfolded triangulated origami structure and use it to study the set of nearly unfolded configurations of origami with four boundary vertices. We find that locally, this set consists of a number of distinct "branches" which intersect at the unfolded state, and that the number of these branches is exponential in the number of vertices. We find numerical and analytical evidence that suggests that the branches are characterized by choosing each internal vertex to either "pop up" or "pop down." The large number of pathways along which one can fold an initially unfolded origami structure strongly indicates that a generic structure is likely to become trapped in a "misfolded" state. Thus, new techniques for creating self-folding origami are likely necessary; controlling the popping state of the vertices may be one possibility.

  6. Rainfall Deduction Method for Estimating Non-Point Source Pollution Load for Watershed

    OpenAIRE

    Cai, Ming; Li, Huai-en; KAWAKAMI, Yoji

    2004-01-01

    Water pollution can be divided into point source pollution (PSP) and non-point source pollution (NSP). Since point source pollution has been controlled, non-point source pollution is becoming the main pollution source. The prediction of NSP load is becoming increasingly important in water pollution control and planning in watersheds. Considering the shortage of NSP monitoring data in China, a practical estimation method of non-point source pollution load --- the rainfall deduction method ...

  7. Evaluation of null-point detection methods on simulation data

    Science.gov (United States)

    Olshevsky, Vyacheslav; Fu, Huishan; Vaivads, Andris; Khotyaintsev, Yuri; Lapenta, Giovanni; Markidis, Stefano

    2014-05-01

    We model the measurements of artificial spacecraft that resemble the configuration of CLUSTER propagating in a particle-in-cell simulation of turbulent magnetic reconnection. The simulation domain contains multiple isolated X-type null-points, but the majority are O-type null-points. Simulations show that current pinches surrounded by twisted fields, analogous to laboratory pinches, are formed along the sequences of O-type nulls. In the simulation, the magnetic reconnection is mainly driven by the kinking of the pinches, at spatial scales of several ion inertial lengths. We compute the locations of magnetic null-points and detect their type. When the satellites are separated by fractions of an ion inertial length, as they are for CLUSTER, they are able to locate both the isolated null-points and the pinches. We apply the method to real CLUSTER data and speculate how common pinches are in the magnetosphere, and whether they play a dominant role in the dissipation of magnetic energy.
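
    A hedged 2D illustration of null-point detection and typing (the paper works in a 3D PIC simulation; the analytic field below is invented with known X and O nulls): a grid cell is flagged when both Bx and By change sign inside it, and the null type follows from the Jacobian of the linearized field, det J < 0 giving an X-point and det J > 0 an O-point for a divergence-free 2D field.

        import numpy as np

        n = 400
        x = np.linspace(-2.0, 2.0, n)
        X, Y = np.meshgrid(x, x, indexing="ij")
        Bx, By = Y, X * (X - 1.0) * (X + 1.0)        # nulls at (-1,0), (0,0), (1,0)

        def sign_change(F):
            s = np.sign(F)
            return (s[:-1, :-1] * s[1:, :-1] < 0) | (s[:-1, :-1] * s[:-1, 1:] < 0)

        cells = sign_change(Bx) & sign_change(By)    # candidate cells containing a null
        h = x[1] - x[0]
        for i, j in zip(*np.where(cells)):
            # Jacobian of (Bx, By) by one-sided differences at the cell corner
            J = np.array([
                [(Bx[i+1, j] - Bx[i, j]) / h, (Bx[i, j+1] - Bx[i, j]) / h],
                [(By[i+1, j] - By[i, j]) / h, (By[i, j+1] - By[i, j]) / h],
            ])
            kind = "X" if np.linalg.det(J) < 0 else "O"
            print(f"{kind}-type null near ({x[i]:+.2f}, {x[j]:+.2f})")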

  8. Revisiting Individual Creativity Assessment: Triangulation in Subjective and Objective Assessment Methods

    Science.gov (United States)

    Park, Namgyoo K.; Chun, Monica Youngshin; Lee, Jinju

    2016-01-01

    Compared to the significant development of creativity studies, individual creativity research has not reached a meaningful consensus regarding the most valid and reliable method for assessing individual creativity. This study revisited 2 of the most popular methods for assessing individual creativity: subjective and objective methods. This study…

  9. A Bayesian MCMC method for point process models with intractable normalising constants

    DEFF Research Database (Denmark)

    Berthelsen, Kasper Klitgaard; Møller, Jesper

    2004-01-01

    to simulate from the "unknown distribution", perfect simulation algorithms become useful. We illustrate the method in cases where the likelihood is given by a Markov point process model. Particularly, we consider semi-parametric Bayesian inference in connection to both inhomogeneous Markov point process models ... and pairwise interaction point processes ...

  10. Fast calculation method of computer-generated hologram using a depth camera with point cloud gridding

    Science.gov (United States)

    Zhao, Yu; Shi, Chen-Xiao; Kwon, Ki-Chul; Piao, Yan-Ling; Piao, Mei-Lan; Kim, Nam

    2018-03-01

    We propose a fast calculation method for a computer-generated hologram (CGH) of real objects that uses a point cloud gridding method. The depth information of the scene is acquired using a depth camera and the point cloud model is reconstructed virtually. Because each point of the point cloud is distributed precisely to the exact coordinates of each layer, each point of the point cloud can be classified into grids according to its depth. A diffraction calculation is performed on the grids using a fast Fourier transform (FFT) to obtain a CGH. The computational complexity is reduced dramatically in comparison with conventional methods. The feasibility of the proposed method was confirmed by numerical and optical experiments.
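
    A hedged sketch of the point-cloud-gridding idea (all optical parameters are invented, and an angular-spectrum propagator is assumed where the paper may use a different diffraction kernel): points are binned into depth layers, each layer is rendered as a complex field, propagated to the hologram plane with one FFT pair per layer rather than per point, and the contributions are summed.

        import numpy as np

        N, pitch, wl = 512, 8e-6, 532e-9              # grid size, pixel pitch, wavelength
        fx = np.fft.fftfreq(N, d=pitch)
        FX, FY = np.meshgrid(fx, fx, indexing="ij")

        def propagate(field, z):
            """Angular spectrum propagation of a sampled field over distance z."""
            arg = 1.0 - (wl * FX) ** 2 - (wl * FY) ** 2
            kz = 2.0 * np.pi / wl * np.sqrt(np.maximum(arg, 0.0))
            return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * z))

        rng = np.random.default_rng(1)
        pts = np.c_[rng.integers(128, 384, (300, 2)), rng.uniform(0.05, 0.10, (300, 1))]

        layers = np.linspace(0.05, 0.10, 8)           # depth grid from the depth camera range
        hologram = np.zeros((N, N), dtype=complex)
        for z in layers:
            mask = np.abs(pts[:, 2] - z) <= (layers[1] - layers[0]) / 2
            if not mask.any():
                continue
            field = np.zeros((N, N), dtype=complex)   # one complex layer per depth bin
            ix, iy = pts[mask, 0].astype(int), pts[mask, 1].astype(int)
            field[ix, iy] = 1.0
            hologram += propagate(field, z)           # one FFT pair per layer, not per point

        print("hologram computed; fringe energy:", float(np.abs(hologram).sum()))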

  11. Method to minimize the low-frequency neutral-point voltage oscillations with time-offset injection for neutral-point-clamped inverters

    DEFF Research Database (Denmark)

    Choi, Uimin; Lee, Kyo-Beum; Blaabjerg, Frede

    2013-01-01

    This paper proposes a method to reduce the low-frequency neutral-point voltage oscillations. The neutral-point voltage oscillations are considerably reduced by adding a time-offset to the three phase turn-on times. The proper time-offset is simply calculated considering the phase currents and dwell...

  12. On the reconstruction of a typology of adolescents' media use following the guiding model of triangulation

    Directory of Open Access Journals (Sweden)

    Klaus Peter Treumann

    2017-09-01

    Full Text Available The results presented below were produced within the DFG-funded research project "An investigation of the media usage behaviour of 12- to 20-year-olds and of the development of media competence in adolescence", led jointly by Klaus Peter Treumann, Uwe Sander and Dorothee Meister. The project examines adolescents' media use with regard to both new and old media. On the one hand, we ask how media competence manifests itself in its various dimensions; on the other hand, we concentrate on developing an empirically grounded typology of adolescents' media use. Methodologically, the study follows the guiding model of triangulation and combines qualitative and quantitative approaches to the research field in the form of group discussions, guideline-based individual interviews and a representative survey.

  13. Two-point method uncertainty during control and measurement of cylindrical element diameters

    Science.gov (United States)

    Glukhov, V. I.; Shalay, V. V.; Radev, H.

    2018-04-01

    The article is devoted to the urgent problem of the reliability of measurements of the geometric specifications of technical products. The purpose of the article is to improve the quality of control of the linear sizes of parts by the two-point measurement method. The task of the article is to investigate methodical extended uncertainties in measuring the linear sizes of cylindrical elements. The investigation method is geometric modeling of the shape and location deviations of the element surfaces in a rectangular coordinate system. The studies were carried out for elements of various service use, taking into account their informativeness, corresponding to the classes of kinematic pairs in theoretical mechanics and to the number of constrained degrees of freedom in the datum element function. Cylindrical elements with informativeness of 4, 2, 1 and 0 (zero) were investigated. The uncertainties in two-point measurements were estimated by comparing the results of linear dimension measurements with the maximum and minimum functional diameters of the element material. Methodical uncertainty arises when cylindrical elements with maximum informativeness have shape deviations of the cut and curvature types; it also arises when measuring the average size of an element, for all types of shape deviations. The two-point measurement method cannot take into account the location deviations of a dimensional element, so its use for elements with informativeness less than the maximum creates unacceptable methodical uncertainties in measurements of the maximum, minimum and average linear dimensions. Similar methodical uncertainties also exist in the arbitration control of the linear dimensions of cylindrical elements by limiting two-point gauges.

  14. Interior Point Methods on GPU with application to Model Predictive Control

    DEFF Research Database (Denmark)

    Gade-Nielsen, Nicolai Fog

    The goal of this thesis is to investigate the application of interior point methods to solve dynamical optimization problems, using a graphical processing unit (GPU), with a focus on problems arising in Model Predictive Control (MPC). Multi-core processors have been available for over ten years now... software package called GPUOPT, available under the non-restrictive MIT license. GPUOPT includes a primal-dual interior-point method, which supports both the CPU and the GPU. It is implemented as multiple components, where the matrix operations and the solver for the Newton directions are separated ...

  15. Using triangulation to assess continuing education teacher needs in Madrid (Spain)

    Directory of Open Access Journals (Sweden)

    Coral González

    2009-01-01

    Full Text Available This article aims to highlight the importance of triangulation as a tool to compare and validate information obtained from different sources and methods. It draws on the results of a study carried out in the Community of Madrid to determine the needs that teachers express with respect to the continuing education currently offered to them. These results are the product of using different modes of information collection as well as different data-analysis techniques, which makes them richer and more complex. Starting with a short introduction to the triangulation technique, we present the methods, sources and data analyses carried out, together with the main results and conclusions of the study.

  16. A Gross Error Elimination Method for Point Cloud Data Based on Kd-Tree

    Science.gov (United States)

    Kang, Q.; Huang, G.; Yang, S.

    2018-04-01

    Point cloud data has become one of the most widely used data sources in the field of remote sensing. Key steps of point cloud data pre-processing focus on gross error elimination and quality control. Owing to the volume of point cloud data, existing gross error elimination methods consume massive amounts of memory, in both space and time. This paper employs a new method which uses a Kd-tree algorithm for construction, a k-nearest-neighbor algorithm for searching, and an appropriately set threshold to judge whether a target point is an outlier. Experimental results show that the proposed algorithm helps to delete gross errors in point cloud data, decreases memory consumption and improves efficiency.
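
    A hedged sketch of the Kd-tree based elimination step (the threshold rule is assumed to be the common mean-plus-k-sigma criterion on the mean neighbor distance; the cloud and injected errors are invented):

        import numpy as np
        from scipy.spatial import cKDTree

        rng = np.random.default_rng(2)
        cloud = rng.normal(0.0, 1.0, size=(5000, 3))          # dense surface-like cluster
        gross = rng.uniform(-15.0, 15.0, size=(50, 3))        # injected gross errors
        pts = np.vstack([cloud, gross])

        k = 8
        tree = cKDTree(pts)                                   # Kd-tree construction
        dist, _ = tree.query(pts, k=k + 1)                    # k nearest neighbors (+ self)
        mean_d = dist[:, 1:].mean(axis=1)                     # mean distance, self excluded

        threshold = mean_d.mean() + 3.0 * mean_d.std()        # assumed threshold rule
        keep = mean_d <= threshold
        print(f"kept {keep.sum()} of {len(pts)} points, removed {(~keep).sum()} outliers")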

  17. A GROSS ERROR ELIMINATION METHOD FOR POINT CLOUD DATA BASED ON KD-TREE

    Directory of Open Access Journals (Sweden)

    Q. Kang

    2018-04-01

    Full Text Available Point cloud data has become one of the most widely used data sources in the field of remote sensing. Key steps of point cloud data pre-processing focus on gross error elimination and quality control. Owing to the volume of point cloud data, existing gross error elimination methods consume massive amounts of memory, in both space and time. This paper employs a new method which uses a Kd-tree algorithm for construction, a k-nearest-neighbor algorithm for searching, and an appropriately set threshold to judge whether a target point is an outlier. Experimental results show that the proposed algorithm helps to delete gross errors in point cloud data, decreases memory consumption and improves efficiency.

  18. CREPT-MCNP code for efficiency calibration of HPGe detectors with the representative point method.

    Science.gov (United States)

    Saegusa, Jun

    2008-01-01

    The representative point method for the efficiency calibration of volume samples has been previously proposed. To smoothly implement the method, a calculation code named CREPT-MCNP has been developed. The code estimates the position of a representative point which is intrinsic to each shape of volume sample. Self-absorption correction factors are also given to correct the efficiencies measured at the representative point with a standard point source. Features of the CREPT-MCNP code are presented.

  19. Detection of Dew-Point by substantial Raman Band Frequency Jumps (A new Method)

    DEFF Research Database (Denmark)

    Hansen, Susanne Brunsgaard; Berg, Rolf W.; Stenby, Erling Halfdan

    Detection of Dew-Point by substantial Raman Band Frequency Jumps (A new Method). See poster at http://www.kemi.dtu.dk/~ajo/rolf/jumps.pdf

  20. The Closest Point Method and Multigrid Solvers for Elliptic Equations on Surfaces

    KAUST Repository

    Chen, Yujia; Macdonald, Colin B.

    2015-01-01

    © 2015 Society for Industrial and Applied Mathematics. Elliptic partial differential equations are important from both application and analysis points of view. In this paper we apply the closest point method to solve elliptic equations on general

  1. Analysis of Spatial Interpolation in the Material-Point Method

    DEFF Research Database (Denmark)

    Andersen, Søren; Andersen, Lars

    2010-01-01

    are obtained using quadratic elements. It is shown that for more complex problems, the use of partially negative shape functions is inconsistent with the material-point method in its current form, necessitating other types of interpolation such as cubic splines in order to obtain smoother representations...
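
    A small sketch of the interpolation issue discussed above: the corner-node quadratic element shape function dips negative inside the element, while a cubic B-spline basis of the kind suggested as an alternative stays non-negative everywhere (the sampling below is illustrative only).

        import numpy as np

        xi = np.linspace(-1.0, 1.0, 201)
        n_corner = 0.5 * xi * (xi - 1.0)          # quadratic Lagrange shape function, node -1

        def cubic_bspline(r):
            """Centered cubic B-spline, support |r| < 2, non-negative by construction."""
            r = np.abs(r)
            out = np.where(r < 1.0, 2.0/3.0 - r**2 + 0.5*r**3, 0.0)
            return np.where((r >= 1.0) & (r < 2.0), (2.0 - r)**3 / 6.0, out)

        print("min quadratic shape value :", n_corner.min())          # negative (-0.125)
        print("min cubic B-spline value  :", cubic_bspline(np.linspace(-2, 2, 401)).min())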

  2. New methods to interpolate large volume of data from points or particles (Mesh-Free) methods application for its scientific visualization

    International Nuclear Information System (INIS)

    Reyes Lopez, Y.; Yervilla Herrera, H.; Viamontes Esquivel, A.; Recarey Morfa, C. A.

    2009-01-01

    In this paper we develop a new method to interpolate large volumes of scattered data, focused mainly on the results of applying mesh-free methods, point methods and particle methods. We use local radial basis functions as interpolating functions. We also use an octree as the data structure, which accelerates the localization of the data that influence the interpolated value at a new point, speeding up the application of scientific visualization techniques to generate images from the large data volumes produced by the application of mesh-free, point and particle methods in the resolution of diverse physical-mathematical models. As an example, the results obtained after applying this method using the local interpolation functions of Shepard are shown. (Author) 22 refs
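
    A hedged sketch of local Shepard (inverse-distance-weighted) interpolation, with a Kd-tree standing in for the octree spatial index used in the paper; the scattered data and parameters are invented.

        import numpy as np
        from scipy.spatial import cKDTree

        rng = np.random.default_rng(3)
        pts = rng.uniform(0.0, 1.0, size=(2000, 3))           # scattered particle data
        vals = np.sin(4.0 * pts[:, 0]) * pts[:, 1]            # sampled field values

        tree = cKDTree(pts)

        def shepard(q, k=12, p=2.0):
            """Local Shepard interpolation at query point q using the k nearest samples."""
            d, idx = tree.query(q, k=k)
            if d[0] < 1e-12:                                   # exact hit on a data point
                return vals[idx[0]]
            w = 1.0 / d ** p                                   # inverse-distance weights
            return float(np.sum(w * vals[idx]) / np.sum(w))

        q = np.array([0.5, 0.5, 0.5])
        print("interpolated:", shepard(q), " exact:", np.sin(2.0) * 0.5)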

  3. Barriers to energy efficiency in shipping: A triangulated approach to investigate the principal agent problem

    International Nuclear Information System (INIS)

    Rehmatulla, Nishatabbas; Smith, Tristan

    2015-01-01

    Energy efficiency is a key policy strategy to meet some of the challenges being faced today and to plan for a sustainable future. Numerous empirical studies in various sectors suggest that there are cost-effective measures that are available but not always implemented due to the existence of barriers to energy efficiency. Several cost-effective energy efficient options (technologies for new and existing ships and operations) have also been identified for improving the energy efficiency of ships. This paper is one of the first to empirically investigate barriers to energy efficiency in the shipping industry, using a novel framework and multidisciplinary methods to gauge the implementation of cost-effective measures, perceptions of barriers and observations of barriers. It draws on the findings of a survey of shipping companies, content analysis of shipping contracts and analysis of energy efficiency data. Initial results from these methods suggest the existence of the principal agent problem and other market failures and barriers that have also been suggested in other sectors and industries. Given this finding, policies to improve the implementation of energy efficiency in shipping need to be carefully considered to improve their efficacy and avoid unintended consequences. -- Highlights: • We provide the first analysis of the principal agent problem in shipping. • We develop a framework that incorporates methodological triangulation. • Our results show the extent to which this barrier is observed and perceived. • The presence of the barrier has implications for the policy most suited to shipping

  4. TLE Triangulation Campaign by Japanese High School Students as a Space Educational Project of the SSH Consortium Kochi

    Science.gov (United States)

    Yamamoto, Masa-Yuki; Okamoto, Sumito; Miyoshi, Terunori; Takamura, Yuzaburo; Aoshima, Akira; Hinokuchi, Jin

    As one of the space educational projects in Japan, a triangulation observation project of TLE (Transient Luminous Events: sprites, elves, blue jets, etc.) has been carried out since 2006 in collaboration between 29 Super Science High-schools (SSH) and Kochi University of Technology (KUT). Following the previous success of sprite observations by "Astro High-school" since 2004, the SSH consortium Kochi was established as a national space educational project supported by the Japan Science and Technology Agency (JST). A high-sensitivity CCD camera (Watec, Neptune-100) with a 6 mm F/1.4 C-mount lens (Fujinon) and motion-detection software (UFOCapture, SonotaCo) were given to each participating team in order to monitor the northern night sky of Japan with almost full coverage. During each school year (from April to March in Japan) since 2006, thousands of TLE images were taken by many student teams, with considerably large numbers of successful triangulations, i.e., (school year, number of TLE observations, number of triangulations) = (2006, 43, 3), (2007, 441, 95), (2008, 734, 115), and (2009, 337, 78). Note that the school year in Japan begins on April 1 and ends on March 31; the observation campaign began in December 2006, and the numbers are as of Feb. 28, 2010. Recently, some high schools have started wide field observations using multiple cameras, and others have started VLF observations using handmade loop antennae and amplifiers. Information exchange within the SSH consortium Kochi is frequent, with scientific discussion via KUT's mailing lists. Also, interactions with amateur observers in Japan are made through the internet forum of "SonotaCo Network Japan" (http://sonotaco.jp). Not only as an educational project but also as a scientific one, the project is a success. In February 2008, simultaneous observations of elves were obtained; in November 2009 a giant "graft-shaped" sprite driven by jets was clearly imaged with VLF signals. Most recently, observations of elves ...
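
    A hedged sketch of the geometric core of such a triangulation (station layout and angles are invented, and a flat-Earth local frame is assumed for brevity, whereas real campaigns use geodetic coordinates): each station contributes a ray from its position along the observed direction, and the event is estimated at the midpoint of the shortest segment joining the two skew rays.

        import numpy as np

        def ray(az_deg, el_deg):
            """Unit direction vector in a local east-north-up frame."""
            az, el = np.radians(az_deg), np.radians(el_deg)
            return np.array([np.cos(el)*np.sin(az), np.cos(el)*np.cos(az), np.sin(el)])

        p1, d1 = np.array([0.0, 0.0, 0.0]),   ray(20.0, 35.0)    # station 1 (km)
        p2, d2 = np.array([150.0, 0.0, 0.0]), ray(-30.0, 40.0)   # station 2 (km)

        # Closest points of the two rays p_i + t_i d_i (standard skew-line formulas).
        r = p1 - p2
        a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
        d1r, d2r = d1 @ r, d2 @ r
        den = a*c - b*b
        t1 = (b*d2r - c*d1r) / den
        t2 = (a*d2r - b*d1r) / den
        event = 0.5 * ((p1 + t1*d1) + (p2 + t2*d2))
        print(f"triangulated position (km): {event.round(1)}, altitude = {event[2]:.1f} km")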

  5. Provisional-Ideal-Point-Based Multi-objective Optimization Method for Drone Delivery Problem

    Science.gov (United States)

    Omagari, Hiroki; Higashino, Shin-Ichiro

    2018-04-01

    In this paper, we propose a new evolutionary multi-objective optimization method for solving drone delivery problems (DDP), which can be formulated as constrained multi-objective optimization problems. In our previous research, we proposed the "aspiration-point-based method" to solve multi-objective optimization problems. However, that method needs to calculate the optimal value of each objective function in advance. Moreover, it does not consider constraint conditions other than the objective functions. Therefore, it cannot be applied to DDP, which has many constraint conditions. To solve these issues, we propose the "provisional-ideal-point-based method". The proposed method defines a "penalty value" to search for feasible solutions. It also defines a new reference solution, named the "provisional ideal point", to search for the solution preferred by a decision maker. In this way, we can eliminate the preliminary calculations and the limited application scope. Results on benchmark test problems show that the proposed method can generate the preferred solution efficiently. The usefulness of the proposed method is also demonstrated by applying it to DDP. As a result, the delivery path combining one drone and one truck drastically reduces the traveling distance and the delivery time compared with the case of using only one truck.

  6. A FAST METHOD FOR MEASURING THE SIMILARITY BETWEEN 3D MODEL AND 3D POINT CLOUD

    Directory of Open Access Journals (Sweden)

    Z. Zhang

    2016-06-01

    Full Text Available This paper proposes a fast method for measuring the partial Similarity between 3D Model and 3D point Cloud (SimMC). It is crucial to measure SimMC for many point cloud-related applications, such as 3D object retrieval and inverse procedural modelling. In our proposed method, the surface area of the model and the Distance from Model to point Cloud (DistMC) are exploited as measurements to calculate SimMC. Here, DistMC is defined as the weighted average of the distances between points sampled from the model and the point cloud. Similarly, the Distance from point Cloud to Model (DistCM) is defined as the average of the distances between points in the point cloud and the model. In order to reduce the huge computational burden brought by the calculation of DistCM in some traditional methods, we define SimMC as the ratio of the weighted surface area of the model to DistMC. Compared to traditional SimMC measuring methods that are only able to measure global similarity, our method is capable of measuring partial similarity by employing a distance-weighted strategy. Moreover, our method is faster than other partial similarity assessment methods. We demonstrate the superiority of our method on both synthetic data and laser scanning data.
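
    A hedged sketch of the distance ingredients (the mesh and cloud below are invented, and the paper's weighting strategy is not reproduced): a DistCM-style cloud-to-model distance approximated by densely sampling the model surface and averaging nearest-neighbor distances from cloud points to those samples.

        import numpy as np
        from scipy.spatial import cKDTree

        rng = np.random.default_rng(4)

        # model: unit square split into two triangles, sampled uniformly by area
        tris = np.array([[[0,0,0],[1,0,0],[1,1,0]], [[0,0,0],[1,1,0],[0,1,0]]], float)
        u, v = rng.uniform(size=(2, 4000))
        flip = u + v > 1.0
        u[flip], v[flip] = 1.0 - u[flip], 1.0 - v[flip]       # fold to barycentric coords
        t = tris[rng.integers(0, 2, 4000)]
        samples = t[:,0] + u[:,None]*(t[:,1]-t[:,0]) + v[:,None]*(t[:,2]-t[:,0])

        # point cloud hovering ~0.05 above the square
        cloud = np.c_[rng.uniform(size=(1000, 2)), np.full(1000, 0.05)]

        dist, _ = cKDTree(samples).query(cloud)
        print(f"DistCM-style average cloud-to-model distance: {dist.mean():.4f}")   # ~0.05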

  7. Source parameters for the 1952 Kern County earthquake, California: A joint inversion of leveling and triangulation observations

    OpenAIRE

    Bawden, Gerald W.

    2001-01-01

    Coseismic leveling and triangulation observations are used to determine the faulting geometry and slip distribution of the July 21, 1952, Mw 7.3 Kern County earthquake on the White Wolf fault. A singular value decomposition inversion is used to assess the ability of the geodetic network to resolve slip along a multisegment fault and shows that the network is sufficient to resolve slip along the surface rupture to a depth of 10 km. Below 10 km, the network can only resolve dip slip near the fa...

  8. Five-point form of the nodal diffusion method and comparison with finite-difference

    International Nuclear Information System (INIS)

    Azmy, Y.Y.

    1988-01-01

    Nodal Methods have been derived, implemented and numerically tested for several problems in physics and engineering. In the field of nuclear engineering, many nodal formalisms have been used for the neutron diffusion equation, all yielding results which were far more computationally efficient than conventional Finite Difference (FD) and Finite Element (FE) methods. However, not much effort has been devoted to theoretically comparing nodal and FD methods in order to explain the very high accuracy of the former. In this summary we outline the derivation of a simple five-point form for the lowest order nodal method and compare it to the traditional five-point, edge-centered FD scheme. The effect of the observed differences on the accuracy of the respective methods is established by considering a simple test problem. It must be emphasized that the nodal five-point scheme derived here is mathematically equivalent to previously derived lowest order nodal methods. 7 refs., 1 tab
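
    For concreteness, a hedged sketch of the traditional five-point, edge-centered FD discretization that the nodal scheme is compared against: one-group diffusion, -D lap(phi) + Sigma_a phi = S, on a square with zero-flux boundaries (all data invented), assembled with the standard five-point stencil and solved directly.

        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import spsolve

        n, L = 50, 100.0                      # interior mesh points per side, cm
        h = L / (n + 1)
        D, sig_a, S = 1.2, 0.03, 1.0          # diffusion coef., absorption, uniform source

        main = (4.0 * D / h**2 + sig_a) * np.ones(n * n)
        side = -D / h**2 * np.ones(n * n - 1)
        side[np.arange(1, n * n) % n == 0] = 0.0      # no coupling across row boundaries
        updown = -D / h**2 * np.ones(n * n - n)
        A = sp.diags([main, side, side, updown, updown], [0, 1, -1, n, -n], format="csc")

        phi = spsolve(A, np.full(n * n, S)).reshape(n, n)
        print(f"peak flux {phi.max():.2f} at the center; edge flux {phi[0, n//2]:.2f}")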

  9. A Multi-Point Method Considering the Maximum Power Point Tracking Dynamic Process for Aerodynamic Optimization of Variable-Speed Wind Turbine Blades

    Directory of Open Access Journals (Sweden)

    Zhiqiang Yang

    2016-05-01

    Full Text Available Due to the dynamic process of maximum power point tracking (MPPT caused by turbulence and large rotor inertia, variable-speed wind turbines (VSWTs cannot maintain the optimal tip speed ratio (TSR from cut-in wind speed up to the rated speed. Therefore, in order to increase the total captured wind energy, the existing aerodynamic design for VSWT blades, which only focuses on performance improvement at a single TSR, needs to be improved to a multi-point design. In this paper, based on a closed-loop system of VSWTs, including turbulent wind, rotor, drive train and MPPT controller, the distribution of operational TSR and its description based on inflow wind energy are investigated. Moreover, a multi-point method considering the MPPT dynamic process for the aerodynamic optimization of VSWT blades is proposed. In the proposed method, the distribution of operational TSR is obtained through a dynamic simulation of the closed-loop system under a specific turbulent wind, and accordingly the multiple design TSRs and the corresponding weighting coefficients in the objective function are determined. Finally, using the blade of a National Renewable Energy Laboratory (NREL 1.5 MW wind turbine as the baseline, the proposed method is compared with the conventional single-point optimization method using the commercial software Bladed. Simulation results verify the effectiveness of the proposed method.
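
    A hedged sketch of turning an operational TSR distribution into multi-point design weights (the paper derives the distribution from closed-loop turbulent simulations; the TSR series and the inflow-energy weighting below are invented):

        import numpy as np

        rng = np.random.default_rng(8)
        tsr = rng.normal(7.5, 1.2, 20_000)            # operational TSR samples
        wind = rng.weibull(2.0, 20_000) * 8.0         # coincident wind speeds, m/s
        energy = wind ** 3                            # inflow wind energy weighting

        edges = np.arange(4.0, 11.5, 1.5)             # candidate design-TSR bins
        w, _ = np.histogram(tsr, bins=edges, weights=energy)
        w = w / w.sum()                               # weighting coefficients, sum to 1
        design_tsr = 0.5 * (edges[:-1] + edges[1:])
        for t, wi in zip(design_tsr, w):
            print(f"design TSR {t:5.2f}: weight {wi:.3f}")

    The multi-point objective would then be a weighted sum of the power coefficient over the design TSRs, sum_i w_i * Cp(design_tsr_i).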

  10. Primal Interior Point Method for Minimization of Generalized Minimax Functions

    Czech Academy of Sciences Publication Activity Database

    Lukšan, Ladislav; Matonoha, Ctirad; Vlček, Jan

    2010-01-01

    Roč. 46, č. 4 (2010), s. 697-721 ISSN 0023-5954 R&D Projects: GA ČR GA201/09/1957 Institutional research plan: CEZ:AV0Z10300504 Keywords : unconstrained optimization * large-scale optimization * nonsmooth optimization * generalized minimax optimization * interior-point methods * modified Newton methods * variable metric methods * global convergence * computational experiments Subject RIV: BA - General Mathematics Impact factor: 0.461, year: 2010 http://dml.cz/handle/10338.dmlcz/140779

  11. Efficient 3D Volume Reconstruction from a Point Cloud Using a Phase-Field Method

    Directory of Open Access Journals (Sweden)

    Darae Jeong

    2018-01-01

    Full Text Available We propose an explicit hybrid numerical method for the efficient 3D volume reconstruction from unorganized point clouds using a phase-field method. The proposed three-dimensional volume reconstruction algorithm is based on the 3D binary image segmentation method. First, we define a narrow band domain embedding the unorganized point cloud and an edge indicating function. Second, we define a good initial phase-field function which speeds up the computation significantly. Third, we use a recently developed explicit hybrid numerical method for solving the three-dimensional image segmentation model to obtain efficient volume reconstruction from point cloud data. In order to demonstrate the practical applicability of the proposed method, we perform various numerical experiments.

  12. Data-Driven Method for Wind Turbine Yaw Angle Sensor Zero-Point Shifting Fault Detection

    Directory of Open Access Journals (Sweden)

    Yan Pei

    2018-03-01

    Full Text Available Wind turbine yaw control plays an important role in increasing wind turbine production and also in protecting the wind turbine. Accurate measurement of the yaw angle is the basis of an effective wind turbine yaw controller. The accuracy of yaw angle measurement is affected significantly by the problem of zero-point shifting. Hence, it is essential to evaluate the zero-point shifting error of wind turbines online in order to improve the reliability of yaw angle measurement in real time. In particular, qualitative evaluation of the zero-point shifting error can help wind farm operators to carry out prompt and cost-effective maintenance of yaw angle sensors. With the aim of qualitatively evaluating the zero-point shifting error, the yaw angle sensor zero-point shifting fault is first defined in this paper. A data-driven method is then proposed to detect the zero-point shifting fault based on Supervisory Control and Data Acquisition (SCADA) data. The zero-point shifting fault is detected in the proposed method by analyzing the power performance under different yaw angles. The SCADA data are partitioned into different bins according to both wind speed and yaw angle in order to evaluate the power performance in depth. An indicator is proposed in this method for power performance evaluation under each yaw angle. The yaw angle with the largest indicator is considered to be the yaw angle measurement error in our work. A zero-point shifting fault triggers an alarm if the error is larger than a predefined threshold. Case studies from several actual wind farms proved the effectiveness of the proposed method in detecting the zero-point shifting fault and in improving wind turbine performance. The results of the proposed method can help wind farm operators to make prompt adjustments when there is a large error in the yaw angle measurement.
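
    A hedged sketch of the binned power-performance analysis described above (the indicator is simplified to mean power per yaw-angle bin, averaged over wind-speed bins; the SCADA columns, power model and thresholds are all invented):

        import numpy as np
        import pandas as pd

        rng = np.random.default_rng(5)
        true_shift = 8.0                                       # deg, injected sensor error
        n = 50_000
        ws = rng.uniform(5.0, 9.0, n)                          # wind speed, m/s
        yaw = rng.normal(true_shift, 6.0, n)                   # measured yaw misalignment
        power = 0.4 * ws**3 * np.cos(np.radians(yaw - true_shift))**3
        df = pd.DataFrame({"ws": ws, "yaw": yaw, "p": power + rng.normal(0, 5, n)})

        # partition into bins by both wind speed and yaw angle
        df["ws_bin"] = pd.cut(df.ws, np.arange(5.0, 9.5, 1.0))
        df["yaw_bin"] = pd.cut(df.yaw, np.arange(-20.0, 21.0, 2.0))
        perf = df.groupby(["ws_bin", "yaw_bin"], observed=True).p.mean().unstack()
        indicator = perf.mean(axis=0)                          # average over wind-speed bins

        est = indicator.idxmax().mid                           # yaw bin with best performance
        print(f"estimated zero-point shift: {est:.1f} deg (alarm if |shift| > 4 deg)")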

  13. Interior Point Method for Solving Fuzzy Number Linear Programming Problems Using Linear Ranking Function

    Directory of Open Access Journals (Sweden)

    Yi-hua Zhong

    2013-01-01

    Full Text Available Recently, various methods have been developed for solving linear programming problems with fuzzy numbers, such as the simplex method and the dual simplex method. But their computational complexities are exponential, which is not satisfactory for solving large-scale fuzzy linear programming problems, especially in the engineering field. A new method which can solve large-scale fuzzy number linear programming problems is presented in this paper, called the revised interior point method. Its idea is similar to that of the interior point method used for solving linear programming problems in a crisp environment, but its feasible direction and step size are chosen using trapezoidal fuzzy numbers, a linear ranking function, fuzzy vectors and their operations, and its stopping condition involves the linear ranking function. Their correctness and rationality are proved. Moreover, the choice of the initial interior point and some factors influencing the results of this method are also discussed and analyzed. Algorithm analysis and an example study show that a proper safety factor parameter, accuracy parameter and initial interior point may reduce the number of iterations, and that they can be selected easily according to actual needs. Finally, the method proposed in this paper is an alternative method for solving fuzzy number linear programming problems.
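
    For background, a hedged sketch of a crisp-LP interior point iteration of the affine-scaling type, showing the role of the feasible direction and the step-size safety factor discussed above; the fuzzy numbers and ranking function of the paper's revised method are not reproduced, and the toy problem is invented.

        import numpy as np

        A = np.array([[1.0, 1.0, 1.0]])               # toy problem: x1 + x2 + x3 = 1
        b = np.array([1.0])
        c = np.array([1.0, 2.0, 3.0])                 # optimum at x = (1, 0, 0)

        x = np.array([1/3, 1/3, 1/3])                 # strictly feasible initial interior point
        gamma, tol = 0.9, 1e-9                        # safety factor on the step to the boundary

        for it in range(100):
            X2 = np.diag(x * x)                       # affine scaling by the current iterate
            y = np.linalg.solve(A @ X2 @ A.T, A @ X2 @ c)
            dx = -X2 @ (c - A.T @ y)                  # feasible descent direction (A dx = 0)
            if np.linalg.norm(dx) < tol:
                break
            neg = dx < 0
            alpha = gamma * np.min(-x[neg] / dx[neg]) if neg.any() else 1.0
            x = x + alpha * dx                        # stay strictly inside x > 0
        print(f"x = {x.round(6)}, objective = {c @ x:.6f} after {it+1} iterations")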

  14. Stakeholder management in the local government decision-making area: evidences from a triangulation study with the English local government

    Directory of Open Access Journals (Sweden)

    Ricardo Corrêa Gomes

    2006-01-01

    Full Text Available The stakeholder theory has been on the management agenda for about thirty years, and reservations about its acceptance as a comprehensive theory still remain. It was introduced as a managerial issue by the Labour Party in 1997, aiming to make public management more inclusive. This article aims to contribute to the stakeholder theory by adding descriptive issues to its theoretical basis. The findings are derived from an inductive investigation carried out with English local authorities, which will most likely be reproduced in other contexts. Data collection and analysis are based on a data triangulation method that involves case studies, validation interviews and analysis of documents. The investigation proposes a model for representing the nature of the relationships between stakeholders and the decision-making process of such organizations. The decision-making of local government organizations is in fact a stakeholder-based process in which stakeholders are empowered to exert influence due to their power over, and interest in, the organization's operations and outcomes.

  15. PKI, Gamma Radiation Reactor Shielding Calculation by Point-Kernel Method

    International Nuclear Information System (INIS)

    Li Chunhuai; Zhang Liwu; Zhang Yuqin; Zhang Chuanxu; Niu Xihua

    1990-01-01

    1 - Description of program or function: This code calculates gamma-ray radiation shielding problems in geometric space. 2 - Method of solution: PKI uses a point kernel integration technique; it describes the radiation shielding geometry using a geometric space configuration method and coordinate conversion, and makes use of the calculation results of the reactor primary shielding and of the coolant flow regularity in the loop system.
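
    A hedged sketch of the point kernel integration technique itself (the source, shield and buildup parameters are invented): the volume source is discretized into point kernels, and the detector response is the sum of attenuated, geometrically diluted contributions with a simple linear buildup factor.

        import numpy as np

        mu = 0.06                                     # cm^-1, total attenuation coefficient
        det = np.array([100.0, 0.0, 0.0])             # detector position, cm

        # cubic volume source, 20x20x20 cm, uniform specific activity (photons/s/cm^3)
        g = np.linspace(-10.0, 10.0, 21)
        src = np.array(np.meshgrid(g, g, g, indexing="ij")).reshape(3, -1).T
        s_v = 1.0e6
        dV = (g[1] - g[0]) ** 3

        r = np.linalg.norm(det - src, axis=1)         # kernel-to-detector distances
        mfp = mu * r                                  # path length in mean free paths
        B = 1.0 + 0.8 * mfp                           # assumed linear buildup factor
        flux = np.sum(s_v * dV * B * np.exp(-mfp) / (4.0 * np.pi * r**2))
        print(f"flux at detector (with buildup): {flux:.3e} photons/cm^2/s")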

  16. Optical profilometer using laser based conical triangulation for inspection of inner geometry of corroded pipes in cylindrical coordinates

    Science.gov (United States)

    Buschinelli, Pedro D. V.; Melo, João. Ricardo C.; Albertazzi, Armando; Santos, João. M. C.; Camerini, Claudio S.

    2013-04-01

    An axis-symmetrical optical laser triangulation system was developed by the authors to measure the inner geometry of long pipes used in the oil industry. It has a special optical configuration able to acquire shape information of the inner geometry of a section of a pipe from a single image frame. A collimated laser beam is pointed at the tip of a 45° conical mirror. The laser light is reflected in such a way that a radial light sheet is formed, intercepting the inner surface and forming a bright laser line on a section of the inspected pipe. A camera acquires the image of the laser line through a wide angle lens. An odometer-based triggering system is used to trigger the camera to acquire a set of equally spaced images at high speed while the device is moved along the pipe's axis. Image processing is done in real time (between image acquisitions) thanks to the use of parallel computing technology. The measured geometry is analyzed to identify corrosion damage. The measured geometry and results are graphically presented using virtual reality techniques and devices such as 3D glasses and head-mounted displays. The paper describes the measurement principles, calibration strategies and laboratory evaluation of the developed device, as well as a practical example of a corroded pipe used in an industrial gas production plant.

  17. Methods of fast, multiple-point in vivo T1 determination

    International Nuclear Information System (INIS)

    Zhang, Y.; Spigarelli, M.; Fencil, L.E.; Yeung, H.N.

    1989-01-01

    Two methods of rapid, multiple-point determination of T1 in vivo have been evaluated with a phantom consisting of vials of gel in different Mn++ concentrations. The first method was an inversion-recovery-on-the-fly technique, and the second method used a variable-tip-angle (α) progressive saturation with two subsequences of different repetition times. In the first method, 1/T1 was evaluated by an exponential fit. In the second method, 1/T1 was obtained iteratively with a linear fit and then readjusted together with α to a model equation until self-consistency was reached
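
    As a rough illustration of the first technique, an inversion-recovery T1 estimate reduces to a nonlinear exponential fit. The sketch below is a generic fit, not the authors' on-the-fly variant; the signal model, inversion times and noise level are assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit

def ir_signal(ti, s0, t1):
    """Ideal inversion-recovery magnitude signal: |S0 (1 - 2 exp(-TI/T1))|."""
    return s0 * np.abs(1.0 - 2.0 * np.exp(-ti / t1))

ti = np.array([50.0, 100.0, 200.0, 400.0, 800.0, 1600.0])   # inversion times (ms)
rng = np.random.default_rng(0)
sig = ir_signal(ti, 1000.0, 300.0) + rng.normal(0.0, 5.0, ti.size)  # simulated data

(s0_fit, t1_fit), _ = curve_fit(ir_signal, ti, sig, p0=(sig.max(), 200.0))
print(f"fitted T1 = {t1_fit:.1f} ms")
```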

  18. A new method to identify the location of the kick point during the golf swing.

    Science.gov (United States)

    Joyce, Christopher; Burnett, Angus; Matthews, Miccal

    2013-12-01

    No method currently exists to determine the location of the kick point during the golf swing. This study consisted of two phases. In the first phase, the static kick point of 10 drivers (having identical grip and head but fitted with shafts of differing mass and stiffness) was determined by two methods: (1) a visual method used by professional club fitters and (2) an algorithm using 3D locations of markers positioned on the golf club. Using level of agreement statistics, we showed the latter technique was a valid method to determine the location of the static kick point. In phase two, the validated method was used to determine the dynamic kick point during the golf swing. Twelve elite male golfers had three shots analyzed for two drivers fitted with stiff shafts of differing mass (56 g and 78 g). Excellent between-trial reliability was found for dynamic kick point location. Differences were found for dynamic kick point location when compared with static kick point location, as well as between-shaft and within-shaft. These findings have implications for future investigations examining the bending behavior of golf clubs, as well as being useful to examine relationships between properties of the shaft and launch parameters.

  19. Statistical methods for change-point detection in surface temperature records

    Science.gov (United States)

    Pintar, A. L.; Possolo, A.; Zhang, N. F.

    2013-09-01

    We describe several statistical methods to detect possible change-points in a time series of values of surface temperature measured at a meteorological station, and to assess the statistical significance of such changes, taking into account the natural variability of the measured values, and the autocorrelations between them. These methods serve to determine whether the record may suffer from biases unrelated to the climate signal, hence whether there may be a need for adjustments as considered by M. J. Menne and C. N. Williams (2009) "Homogenization of Temperature Series via Pairwise Comparisons", Journal of Climate 22 (7), 1700-1717. We also review methods to characterize patterns of seasonality (seasonal decomposition using monthly medians or robust local regression), and explain the role they play in the imputation of missing values, and in enabling robust decompositions of the measured values into a seasonal component, a possible climate signal, and a station-specific remainder. The methods for change-point detection that we describe include statistical process control, wavelet multi-resolution analysis, adaptive weights smoothing, and a Bayesian procedure, all of which are applicable to single station records.
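
    Of the methods listed, statistical process control is the simplest to illustrate. The following sketch applies a two-sided tabular CUSUM to a synthetic temperature series with a step change; the parameters k and h are conventional illustrative choices, and the autocorrelation handling discussed in the record is deliberately omitted:

```python
import numpy as np

def cusum_alarm(x, k=0.5, h=5.0):
    """Two-sided tabular CUSUM on standardized values.

    k : allowance (drift tolerated before accumulating), in sigma units
    h : decision threshold, in sigma units
    Returns the index of the first alarm, or None if no alarm is raised.
    """
    z = (x - x.mean()) / x.std(ddof=1)
    hi = lo = 0.0
    for i, zi in enumerate(z):
        hi = max(0.0, hi + zi - k)   # accumulates upward shifts
        lo = max(0.0, lo - zi - k)   # accumulates downward shifts
        if hi > h or lo > h:
            return i
    return None

rng = np.random.default_rng(1)
temps = np.concatenate([rng.normal(0.0, 1.0, 200),    # homogeneous segment
                        rng.normal(1.0, 1.0, 100)])   # +1 sigma shift at i = 200
print("first alarm at index", cusum_alarm(temps))
```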

  20. 3D change detection at street level using mobile laser scanning point clouds and terrestrial images

    Science.gov (United States)

    Qin, Rongjun; Gruen, Armin

    2014-04-01

    Automatic change detection and geo-database updating in the urban environment are difficult tasks. There has been much research on detecting changes with satellite and aerial images, but studies have rarely been performed at the street level, which is complex in its 3D geometry. Contemporary geo-databases include 3D street-level objects, which demand frequent data updating. Terrestrial images provide rich texture information for change detection, but change detection with terrestrial images from different epochs sometimes faces problems with illumination changes, perspective distortions and unreliable 3D geometry caused by the limited performance of automatic image matchers, while mobile laser scanning (MLS) data acquired from different epochs provide accurate 3D geometry for change detection, but are very expensive for periodical acquisition. This paper proposes a new method for change detection at street level by using a combination of MLS point clouds and terrestrial images: the accurate but expensive MLS data acquired at an early epoch serve as the reference, and terrestrial images or photogrammetric images captured from an image-based mobile mapping system (MMS) at a later epoch are used to detect the geometrical changes between epochs. The method automatically marks the possible changes in each view, which provides a cost-efficient means of frequent data updating. The methodology is divided into several steps. In the first step, the point clouds are recorded by the MLS system and processed, with data cleaned and classified by semi-automatic means. In the second step, terrestrial images or mobile mapping images at a later epoch are taken and registered to the point cloud, and the point clouds are then projected onto each image by a weighted window-based z-buffering method for view-dependent 2D triangulation. In the next step, stereo pairs of the terrestrial images are rectified and re-projected between each other to check the geometrical

  1. Multiscale Modeling using Molecular Dynamics and Dual Domain Material Point Method

    Energy Technology Data Exchange (ETDEWEB)

    Dhakal, Tilak Raj [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Theoretical Division. Fluid Dynamics and Solid Mechanics Group, T-3; Rice Univ., Houston, TX (United States)

    2016-07-07

    For problems involving a large material deformation rate, the material deformation time scale can be shorter than the time the material takes to reach thermodynamic equilibrium. For such problems it is difficult to obtain a constitutive relation, and history dependency becomes important because of the thermodynamic non-equilibrium. Our goal is to build a multi-scale numerical method which can bypass the need for a constitutive relation. In conclusion, a multi-scale simulation method is developed based on the dual domain material point (DDMP) method. Molecular dynamics (MD) simulation is performed to calculate the stress. Since communication among material points is not necessary, the computation can be done in an embarrassingly parallel manner on a CPU-GPU platform.

  2. Coordinate alignment of combined measurement systems using a modified common points method

    Science.gov (United States)

    Zhao, G.; Zhang, P.; Xiao, W.

    2018-03-01

    Coordinate metrology has been extensively researched for its outstanding advantages in measurement range and accuracy. The alignment of different measurement systems is usually achieved by integrating local coordinates via common points before measurement. The alignment errors accumulate and can significantly reduce the global accuracy, and thus need to be minimized. In this paper, a modified common points method (MCPM) is proposed to combine the different traceable system errors of the cooperating machines and to optimize the global accuracy by introducing mutual geometric constraints. The geometric constraints, obtained by measuring the common points in the individual local coordinate systems, make it possible to reduce the local measuring uncertainty and thereby enhance the global measuring certainty. A simulation system is developed in Matlab to analyze the features of MCPM using the Monte Carlo method. An exemplary setup is constructed to verify the feasibility and efficiency of the proposed method with laser tracker and indoor iGPS systems. Experimental results show that MCPM can significantly improve the alignment accuracy.
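
    At the core of any common-points alignment is the least-squares rigid transformation between two instrument frames. The sketch below shows the standard SVD-based (Kabsch) solution on a set of common points; it is a baseline illustration on synthetic data, not MCPM itself:

```python
import numpy as np

def rigid_align(p, q):
    """Least-squares rotation R and translation t such that R @ p_i + t ~ q_i,
    computed from centred common points via the SVD (Kabsch) solution."""
    pc, qc = p.mean(axis=0), q.mean(axis=0)
    H = (p - pc).T @ (q - qc)                    # cross-covariance matrix
    u, _, vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(vt.T @ u.T))       # guard against reflections
    R = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    return R, qc - R @ pc

# Common points seen by two instruments (second frame rotated and shifted)
p = np.random.default_rng(2).uniform(-1.0, 1.0, (6, 3))
theta = np.deg2rad(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
q = p @ R_true.T + np.array([0.5, -0.2, 1.0])
R, t = rigid_align(p, q)
print(np.allclose(R, R_true, atol=1e-6), t)
```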

  3. Acceleration of Meshfree Radial Point Interpolation Method on Graphics Hardware

    International Nuclear Information System (INIS)

    Nakata, Susumu

    2008-01-01

    This article describes a parallel computational technique to accelerate the radial point interpolation method (RPIM), a meshfree method, using graphics hardware. RPIM is one of the meshfree partial differential equation solvers that do not require a mesh structure on the analysis targets. In the presented technique, the computation process is divided into small processes suitable for processing on the parallel architecture of the graphics hardware in a single-instruction multiple-data manner.

  4. Triangulation and Gender Perspectives in ‘Falling Man’ by Don DeLillo

    Directory of Open Access Journals (Sweden)

    Noemi Abe

    2011-09-01

    Susannah Radstone argues that the rhetorical response to 9/11 by the Bush administration is based on the opposition of two father figures: "the 'chastened' but powerful 'good' patriarchal father" vs. "the 'bad' archaic father". She explains: "In this Manichean fantasy can be glimpsed the continuing battle between competing versions of masculinity" (2002: 459), which leaves women on the margins. The battle of the fathers of Bush's rhetoric is counterposed in Falling Man by a battle between two men that stands for an unaccomplished fatherhood. Furthermore, the dualistic vision engendered by post-9/11 rhetoric and reflected in the novel should be evaluated in a trilateral dimension, given that at its core lies a triangulation built upon three stereotypical representations: the white middle-class man; the Arab terrorist; and a composite character in the middle, the woman, who shifts from ally, to victim, to a plausible supporter of the enemy.

  5. Datum Feature Extraction and Deformation Analysis Method Based on Normal Vector of Point Cloud

    Science.gov (United States)

    Sun, W.; Wang, J.; Jin, F.; Liang, Z.; Yang, Y.

    2018-04-01

    To address the lack of an applicable analysis method when applying three-dimensional laser scanning technology to deformation monitoring, an efficient method for extracting datum features and analysing deformation based on the normal vectors of a point cloud is proposed. Firstly, a kd-tree is used to establish the topological relations. Datum points are detected by tracking the point-cloud normal vectors, which are determined from the normal vectors of local planes. Then, cubic B-spline curve fitting is performed on the datum points. Finally, the datum elevation and the inclination angle of each radial point are calculated according to the fitted curve, and the deformation information is analysed. The proposed approach was verified on a real large-scale tank dataset captured with a terrestrial laser scanner in a chemical plant. The results show that the method can obtain the entire information of the monitored object quickly and comprehensively, and accurately reflect the deformation of the datum features.
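
    The normal-vector computation on which the method rests can be sketched compactly: for each point, the normal of the local plane is the eigenvector of the neighbourhood covariance with the smallest eigenvalue. A minimal version using a kd-tree, with an assumed neighbourhood size k, might look like this:

```python
import numpy as np
from scipy.spatial import cKDTree

def estimate_normals(points, k=12):
    """Normal of each point = eigenvector for the smallest eigenvalue of the
    covariance of its k nearest neighbours (a local plane fit by PCA)."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    normals = np.empty_like(points)
    for i, nb in enumerate(idx):
        q = points[nb] - points[nb].mean(axis=0)   # centred neighbourhood
        w, v = np.linalg.eigh(q.T @ q)             # ascending eigenvalues
        normals[i] = v[:, 0]                       # smallest -> plane normal
    return normals

# Noisy plane z ~ 0: estimated normals should be close to +/-(0, 0, 1)
rng = np.random.default_rng(3)
pts = np.c_[rng.uniform(0.0, 1.0, (500, 2)), rng.normal(0.0, 0.005, 500)]
n = estimate_normals(pts)
print(np.abs(n[:, 2]).mean())                      # close to 1.0
```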

  6. A portable low-cost 3D point cloud acquiring method based on structure light

    Science.gov (United States)

    Gui, Li; Zheng, Shunyi; Huang, Xia; Zhao, Like; Ma, Hao; Ge, Chao; Tang, Qiuxia

    2018-03-01

    A fast and low-cost method of acquiring 3D point cloud data is proposed in this paper, which can solve the problems of missing texture information and low efficiency of point cloud acquisition with only one pair of cheap cameras and a projector. Firstly, we put forward a scene-adaptive design method for a random encoding pattern; that is, a coding pattern is projected onto the target surface in order to form texture information that is favourable for image matching. Subsequently, we design an efficient dense matching algorithm that fits the projected texture. After optimization of the global algorithm and multi-kernel parallel development with the fusion of hardware and software, a fast point-cloud acquisition system is accomplished. The evaluation of point cloud accuracy shows that the point cloud acquired by the proposed method has higher precision. Moreover, the scanning speed meets the demands of dynamic scenes, giving the method good practical application value.

  7. A primal-dual interior point method for large-scale free material optimization

    DEFF Research Database (Denmark)

    Weldeyesus, Alemseged Gebrehiwot; Stolpe, Mathias

    2015-01-01

    Free Material Optimization (FMO) is a branch of structural optimization in which the design variable is the elastic material tensor that is allowed to vary over the design domain. The requirements are that the material tensor is symmetric positive semidefinite with bounded trace. The resulting...... optimization problem is a nonlinear semidefinite program with many small matrix inequalities for which a special-purpose optimization method should be developed. The objective of this article is to propose an efficient primal-dual interior point method for FMO that can robustly and accurately solve large...... of iterations the interior point method requires is modest and increases only marginally with problem size. The computed optimal solutions obtain a higher precision than other available special-purpose methods for FMO. The efficiency and robustness of the method is demonstrated by numerical experiments on a set...

  8. Some error estimates for the lumped mass finite element method for a parabolic problem

    KAUST Repository

    Chatzipantelidis, P.; Lazarov, R. D.; Thomé e, V.

    2012-01-01

    for the standard Galerkin method carry over to the lumped mass method whereas nonsmooth initial data estimates require special assumptions on the triangulation. We also discuss the application to time discretization by the backward Euler and Crank-Nicolson methods

  9. SOFTWARE MODULE FOR CONSTRUCTING THE INTERSECTION OF TRIANGULATED SURFACES

    Directory of Open Access Journals (Sweden)

    Vladimir V. Kurgansky

    2018-03-01

    Full Text Available An effective algorithm and its software implementation are proposed for performing Boolean operations over triangulated surfaces, namely disjunction, conjunction and Boolean difference. The idea is as follows. The first step is to determine the pairs of intersecting triangles, localizing the intersection of the two surfaces by means of bounding parallelepipeds and testing them for intersection. The second step is to construct an intersection line for each pair of triangles: a pair of intersecting triangles is selected, and the segment along which they intersect is constructed. Then, thanks to the data structure introduced, "adjacent" triangles are selected, and among them those that form an intersecting pair. The process described above continues as long as such triangles can be detected. After that, the triangles involved in the intersection are retriangulated. For each triangle, all the edges along which it intersects triangles from the other surface are known; these edges act as constraint edges in the constrained triangulation problem for the given triangle. The third step is to combine all the surfaces into one surface. Subsurfaces are then constructed along the intersection loops, bounded by the loops that were found. Since the intersection line of the surfaces was constructed in sequence, the direction of each edge can be specified. An edge from the intersection line is selected, and the triangle that contains this edge with the same orientation as the edge direction is added to the subsurface under construction. The previously selected edge is deleted from the intersection line, while two new edges are added, namely the remaining edges of the added triangle.

  10. Kinetic and dynamic Delaunay tetrahedralizations in three dimensions

    Science.gov (United States)

    Schaller, Gernot; Meyer-Hermann, Michael

    2004-09-01

    We describe algorithms to implement fully dynamic and kinetic three-dimensional unconstrained Delaunay triangulations, where the time evolution of the triangulation is not only governed by moving vertices but also by a changing number of vertices. We use three-dimensional simplex flip algorithms, a stochastic visibility walk algorithm for point location and in addition, we propose a new simple method of deleting vertices from an existing three-dimensional Delaunay triangulation while maintaining the Delaunay property. As an example, we analyse the performance in various cases of practical relevance. The dual Dirichlet tessellation can be used to solve differential equations on an irregular grid, to define partitions in cell tissue simulations, for collision detection etc.
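
    The stochastic visibility walk mentioned above is short enough to sketch. Assuming a 2D Delaunay triangulation (here built with SciPy rather than the authors' 3D code), the walk repeatedly crosses an edge whose supporting line separates the current triangle from the query point, choosing among candidate edges in random order:

```python
import numpy as np
from scipy.spatial import Delaunay

def orient(a, b, p):
    """Twice the signed area of triangle (a, b, p); the sign gives the side."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def visibility_walk(tri, q, start=0, seed=0):
    """Walk from simplex `start` towards the triangle containing q, crossing
    any edge whose line separates the current triangle from q. Randomizing
    the edge order avoids cycling in a Delaunay triangulation."""
    rng = np.random.default_rng(seed)
    s = start
    while True:
        verts = tri.points[tri.simplices[s]]
        for i in rng.permutation(3):
            a, b = verts[(i + 1) % 3], verts[(i + 2) % 3]  # edge opposite vertex i
            if orient(a, b, q) * orient(a, b, verts[i]) < 0:
                s = tri.neighbors[s][i]      # neighbour opposite vertex i
                if s == -1:
                    return -1                # q lies outside the triangulation
                break
        else:
            return s                         # no separating edge: q is inside

pts = np.random.default_rng(4).random((100, 2))
tri = Delaunay(pts)
q = np.array([0.5, 0.5])
print(visibility_walk(tri, q) == tri.find_simplex(q))
```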

  11. The complexity of interior point methods for solving discounted turn-based stochastic games

    DEFF Research Database (Denmark)

    Hansen, Thomas Dueholm; Ibsen-Jensen, Rasmus

    2013-01-01

    for general 2TBSGs. This implies that a number of interior point methods can be used to solve 2TBSGs. We consider two such algorithms: the unified interior point method of Kojima, Megiddo, Noma, and Yoshise, and the interior point potential reduction algorithm of Kojima, Megiddo, and Ye. The algorithms run...... states and discount factor γ we get κ = Θ(n/(1−γ)²), −δ = Θ(n/√(1−γ)), and 1/θ = Θ(n/(1−γ)²) in the worst case. The lower bounds for κ, −δ, and 1/θ are all obtained using the same family of deterministic games.

  12. A feature point identification method for positron emission particle tracking with multiple tracers

    Energy Technology Data Exchange (ETDEWEB)

    Wiggins, Cody, E-mail: cwiggin2@vols.utk.edu [University of Tennessee-Knoxville, Department of Physics and Astronomy, 1408 Circle Drive, Knoxville, TN 37996 (United States); Santos, Roque [University of Tennessee-Knoxville, Department of Nuclear Engineering (United States); Escuela Politécnica Nacional, Departamento de Ciencias Nucleares (Ecuador); Ruggles, Arthur [University of Tennessee-Knoxville, Department of Nuclear Engineering (United States)

    2017-01-21

    A novel detection algorithm for Positron Emission Particle Tracking (PEPT) with multiple tracers based on optical feature point identification (FPI) methods is presented. This new method, the FPI method, is compared to a previous multiple PEPT method via analyses of experimental and simulated data. The FPI method outperforms the older method in cases of large particle numbers and fine time resolution. Simulated data show the FPI method to be capable of identifying 100 particles at 0.5 mm average spatial error. Detection error is seen to vary with the inverse square root of the number of lines of response (LORs) used for detection and increases as particle separation decreases. - Highlights: • A new approach to positron emission particle tracking is presented. • Using optical feature point identification analogs, multiple particle tracking is achieved. • Method is compared to previous multiple particle method. • Accuracy and applicability of method is explored.

  13. Hybrid kriging methods for interpolating sparse river bathymetry point data

    Directory of Open Access Journals (Sweden)

    Pedro Velloso Gomes Batista

    Full Text Available Terrain models that represent riverbed topography are used for analyzing geomorphologic changes, calculating water storage capacity, and making hydrologic simulations. These models are generated by interpolating bathymetry points. River bathymetry is usually surveyed through cross-sections, which may lead to a sparse sampling pattern. Hybrid kriging methods, such as regression kriging (RK) and co-kriging (CK), employ the correlation with auxiliary predictors, as well as inter-variable correlation, to improve the predictions of the target variable. In this study, we use the orthogonal distance of an (x, y) point to the river centerline as a covariate for RK and CK. Given that riverbed elevation varies abruptly transversely to the flow direction, it is expected that the greater the Euclidean distance of a point to the thalweg, the greater the bed elevation will be. The aim of this study was to evaluate whether the use of the proposed covariate improves the spatial prediction of riverbed topography. In order to assess this premise, we perform an external validation. Transversal cross-sections are used to make the spatial predictions, and the point data surveyed between sections are used for testing. We compare the results from CK and RK to those obtained from ordinary kriging (OK). The validation indicates that RK yields the lowest RMSE among the interpolators. RK predictions represent the thalweg between cross-sections, whereas the other methods under-predict the river thalweg depth. Therefore, we conclude that RK provides a simple approach for enhancing the quality of the spatial prediction from sparse bathymetry data.

  14. Comparison On Matching Methods Used In Pose Tracking For 3D Shape Representation

    Directory of Open Access Journals (Sweden)

    Khin Kyu Kyu Win

    2017-01-01

    Full Text Available In this work, three different algorithms - brute force, Delaunay triangulation, and k-d tree - are analyzed and compared for matching in 3D shape representation. The work is intended for developing the pose tracking of moving objects in video surveillance. To determine the 3D pose of moving objects, a tracking system may require full 3D pose estimation of arbitrarily shaped objects in real time. In order to perform 3D pose estimation in real time, each step in the tracking algorithm must be computationally efficient. This paper presents a comparison of methods for the computationally efficient registration of 3D shapes, including free-form surfaces. Matching of free-form surfaces is carried out by using the geometric point matching algorithm ICP. Several aspects of the ICP algorithm are investigated and analyzed by using a specified surface setup. The surface setup processed in this system is represented by simple geometric primitives dealing with objects of free-form shape. The considered representation is a cloud of points.

  15. A point-value enhanced finite volume method based on approximate delta functions

    Science.gov (United States)

    Xuan, Li-Jun; Majdalani, Joseph

    2018-02-01

    We revisit the concept of an approximate delta function (ADF), introduced by Huynh (2011) [1], in the form of a finite-order polynomial that holds identical integral properties to the Dirac delta function when used in conjunction with a finite-order polynomial integrand over a finite domain. We show that the use of generic ADF polynomials can be effective at recovering and generalizing several high-order methods, including Taylor-based and nodal-based Discontinuous Galerkin methods, as well as the Correction Procedure via Reconstruction. Based on the ADF concept, we then proceed to formulate a Point-value enhanced Finite Volume (PFV) method, which stores and updates the cell-averaged values inside each element as well as the unknown quantities and, if needed, their derivatives on nodal points. The sharing of nodal information with surrounding elements saves the number of degrees of freedom compared to other compact methods at the same order. To ensure conservation, cell-averaged values are updated using an identical approach to that adopted in the finite volume method. Here, the updating of nodal values and their derivatives is achieved through an ADF concept that leverages all of the elements within the domain of integration that share the same nodal point. The resulting scheme is shown to be very stable at successively increasing orders. Both accuracy and stability of the PFV method are verified using a Fourier analysis and through applications to the linear wave and nonlinear Burgers' equations in one-dimensional space.

  16. RO1 Funding for Mixed Methods Research: Lessons learned from the Mixed-Method Analysis of Japanese Depression Project

    OpenAIRE

    Arnault, Denise Saint; Fetters, Michael D.

    2011-01-01

    Mixed methods research has made significant inroads in the effort to examine complex health-related phenomena. However, little has been published on the funding of mixed methods research projects. This paper addresses that gap by presenting an example of an NIMH-funded project using a mixed methods QUAL-QUAN triangulation design entitled “The Mixed-Method Analysis of Japanese Depression.” We present the Cultural Determinants of Health Seeking model that framed the study, the specific aims, ...

  17. Comparison of Optimization and Two-point Methods in Estimation of Soil Water Retention Curve

    Science.gov (United States)

    Ghanbarian-Alavijeh, B.; Liaghat, A. M.; Huang, G.

    2009-04-01

    The soil water retention curve (SWRC) is one of the soil hydraulic properties whose direct measurement is time consuming and expensive. Since its measurement is unavoidable in environmental studies, e.g. the investigation of unsaturated hydraulic conductivity and solute transport, this study attempts to predict the soil water retention curve from two measured points. Using the Cresswell and Paydar (1996) method (two-point method) and an optimization method developed in this study on the basis of two points of the SWRC, the parameters of the Tyler and Wheatcraft (1990) model (fractal dimension and air-entry value) were estimated; water contents at different matric potentials were then estimated and compared with their measured values (n = 180). For each method, we used both 3 and 1500 kPa (case 1) and 33 and 1500 kPa (case 2) as the two points of the SWRC. The calculated RMSE values showed that for the Cresswell and Paydar (1996) method there is no significant difference between case 1 and case 2, although the RMSE value in case 2 (2.35) was slightly less than in case 1 (2.37). The results also showed that the optimization method developed in this study had significantly lower RMSE values for cases 1 (1.63) and 2 (1.33) than the Cresswell and Paydar (1996) method.
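
    Fitting the Tyler and Wheatcraft (1990) model through two measured points reduces to two algebraic equations. The sketch below assumes the commonly quoted form Se = (psi_a/psi)**(3 - D) for the effective saturation; the water contents used are invented for illustration and the residual water content is taken as zero:

```python
import numpy as np

def tyler_wheatcraft_from_two_points(psi, theta, theta_s, theta_r=0.0):
    """Fit Se = (psi_a / psi)**(3 - D) through two (psi, theta) points.
    Returns the fractal dimension D and the air-entry value psi_a."""
    se = (np.asarray(theta, dtype=float) - theta_r) / (theta_s - theta_r)
    slope = np.log(se[0] / se[1]) / np.log(psi[1] / psi[0])   # equals 3 - D
    D = 3.0 - slope
    psi_a = psi[0] * se[0] ** (1.0 / slope)                   # invert Se at point 1
    return D, psi_a

# e.g. water contents measured at 33 and 1500 kPa (illustrative values)
D, psi_a = tyler_wheatcraft_from_two_points([33.0, 1500.0], [0.25, 0.10],
                                            theta_s=0.45)
print(f"D = {D:.2f}, air-entry value = {psi_a:.2f} kPa")
```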

  18. Dual reference point temperature interrogating method for distributed temperature sensor

    International Nuclear Information System (INIS)

    Ma, Xin; Ju, Fang; Chang, Jun; Wang, Weijie; Wang, Zongliang

    2013-01-01

    A novel method based on dual temperature reference points is presented to interrogate the temperature in a distributed temperature sensing (DTS) system. This new method is suited to overcoming deficiencies due to the impact of DC offsets and the gain difference between the two signal channels of the sensing system during temperature interrogation. Moreover, this method can in most cases avoid the need to calibrate the gain and DC offsets in the receiver, data acquisition and conversion. An improved temperature interrogation formula is presented, and the experimental results show that this method can efficiently estimate the channel amplification and system DC offset, thus improving the system accuracy. (letter)

  19. RESEARCH ON FEATURE POINTS EXTRACTION METHOD FOR BINARY MULTISCALE AND ROTATION INVARIANT LOCAL FEATURE DESCRIPTOR

    Directory of Open Access Journals (Sweden)

    Hongwei Ying

    2014-08-01

    Full Text Available A scale-space extreme point extraction method for a binary multiscale and rotation invariant local feature descriptor is studied in this paper, in order to obtain a robust and fast local image feature descriptor. Classic local feature description algorithms often select the neighborhood information of feature points which are extrema of the image scale space, obtained by constructing an image pyramid using some signal transform method. But building the image pyramid always consumes a large amount of computing and storage resources and is not conducive to the development of practical applications. This paper presents a dual multiscale FAST algorithm that does not need to build the image pyramid but can quickly extract feature points that are scale extrema. Feature points extracted by the proposed method are multiscale and rotation invariant and are suitable for constructing the local feature descriptor.
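
    The paper's dual multiscale FAST algorithm is not reproduced here, but the flavour of a pyramid-free multiscale detector can be suggested with plain OpenCV FAST run at two resolutions and merged. This is a simplified stand-in, not the published algorithm, and "input.png" is a hypothetical test image:

```python
import cv2

def dual_scale_fast(gray, threshold=25):
    """Illustrative stand-in for a pyramid-free multiscale detector:
    run FAST at full and half resolution and merge the keypoints."""
    fast = cv2.FastFeatureDetector_create(threshold=threshold)
    kps = list(fast.detect(gray, None))
    half = cv2.resize(gray, None, fx=0.5, fy=0.5, interpolation=cv2.INTER_AREA)
    for kp in fast.detect(half, None):
        # map half-resolution detections back to full-image coordinates
        kps.append(cv2.KeyPoint(kp.pt[0] * 2.0, kp.pt[1] * 2.0, kp.size * 2.0))
    return kps

img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)  # any grayscale test image
if img is not None:
    print(len(dual_scale_fast(img)), "keypoints")
```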

  20. A Mixed Methods Portrait of Urban Instrumental Music Teaching

    Science.gov (United States)

    Fitzpatrick, Kate R.

    2011-01-01

    The purpose of this mixed methods study was to learn about the ways that instrumental music teachers in Chicago navigated the urban landscape. The design of the study most closely resembles Creswell and Plano Clark's (2007) two-part Triangulation Convergence Mixed Methods Design, with the addition of an initial exploratory focus group component.…

  1. Towards Automatic Testing of Reference Point Based Interactive Methods

    OpenAIRE

    Ojalehto, Vesa; Podkopaev, Dmitry; Miettinen, Kaisa

    2016-01-01

    In order to understand strengths and weaknesses of optimization algorithms, it is important to have access to different types of test problems, well defined performance indicators and analysis tools. Such tools are widely available for testing evolutionary multiobjective optimization algorithms. To our knowledge, there do not exist tools for analyzing the performance of interactive multiobjective optimization methods based on the reference point approach to communicating ...

  2. Solution of Dendritic Growth in Steel by the Novel Point Automata Method

    International Nuclear Information System (INIS)

    Lorbiecka, A Z; Šarler, B

    2012-01-01

    The aim of this paper is the simulation of dendritic growth in steel in two dimensions by a coupled deterministic continuum mechanics heat and species transfer model and a stochastic localized phase change kinetics model taking into account the undercooling, curvature, kinetic, and thermodynamic anisotropy. The stochastic model receives temperature and concentration information from the deterministic model, and the deterministic heat and species diffusion equations receive the solid fraction information from the stochastic model. The heat and species transfer models are solved on a regular grid by the standard explicit Finite Difference Method (FDM). The phase-change kinetics model is solved by a novel Point Automata (PA) approach. The PA method was developed [1] in order to circumvent the mesh anisotropy problem associated with the classical Cellular Automata (CA) method. The PA approach is established on randomly distributed points and a neighbourhood configuration, similar to those appearing in meshless methods. A comparison of the PA and CA methods is shown. It is demonstrated that the results with the new PA method are not sensitive to the crystallographic orientations of the dendrite.

  3. Method of Fusion Diagnosis for Dam Service Status Based on Joint Distribution Function of Multiple Points

    Directory of Open Access Journals (Sweden)

    Zhenxiang Jiang

    2016-01-01

    Full Text Available The traditional methods of diagnosing dam service status are generally suited to a single measuring point. These methods reflect the local status of dams without merging multisource data effectively, and are therefore not suitable for diagnosing the overall service status. This study proposes a new multiple-point method to diagnose dam service status based on a joint distribution function. The function, covering the monitoring data of multiple points, can be established with the t-copula function. The possibility, an important fused value for different measuring combinations, can then be calculated, and the corresponding diagnosis criterion is established with classical small probability theory. An engineering case study indicates that the fusion diagnosis method can be conducted in real time and that abnormal points can be detected, thereby providing a new early warning method for engineering safety.

  4. An unsteady point vortex method for coupled fluid-solid problems

    Energy Technology Data Exchange (ETDEWEB)

    Michelin, Sebastien [Jacobs School of Engineering, UCSD, Department of Mechanical and Aerospace Engineering, La Jolla, CA (United States); Ecole Nationale Superieure des Mines de Paris, Paris (France); Llewellyn Smith, Stefan G. [Jacobs School of Engineering, UCSD, Department of Mechanical and Aerospace Engineering, La Jolla, CA (United States)

    2009-06-15

    A method is proposed for the study of the two-dimensional coupled motion of a general sharp-edged solid body and a surrounding inviscid flow. The formation of vorticity at the body's edges is accounted for by the shedding at each corner of point vortices whose intensity is adjusted at each time step to satisfy the regularity condition on the flow at the generating corner. The irreversible nature of vortex shedding is included in the model by requiring the vortices' intensity to vary monotonically in time. A conservation of linear momentum argument is provided for the equation of motion of these point vortices (Brown-Michael equation). The forces and torques applied on the solid body are computed as explicit functions of the solid body velocity and the vortices' position and intensity, thereby providing an explicit formulation of the vortex-solid coupled problem as a set of non-linear ordinary differential equations. The example of a falling card in a fluid initially at rest is then studied using this method. The stability of broadside-on fall is analysed and the shedding of vorticity from both plate edges is shown to destabilize this position, consistent with experimental studies and numerical simulations of this problem. The reduced-order representation of the fluid motion in terms of point vortices is used to understand the physical origin of this destabilization. (orig.)

  5. A new integral method for solving the point reactor neutron kinetics equations

    International Nuclear Information System (INIS)

    Li Haofeng; Chen Wenzhen; Luo Lei; Zhu Qian

    2009-01-01

    A numerical integral method that efficiently provides the solution of the point kinetics equations by using the better basis function (BBF) for the approximation of the neutron density in one time step integrations is described and investigated. The approach is based on an exact analytic integration of the neutron density equation, where the stiffness of the equations is overcome by the fully implicit formulation. The procedure is tested by using a variety of reactivity functions, including step reactivity insertion, ramp input and oscillatory reactivity changes. The solution of the better basis function method is compared to other analytical and numerical solutions of the point reactor kinetics equations. The results show that selecting a better basis function can improve the efficiency and accuracy of this integral method. The better basis function method can be used in real time forecasting for power reactors in order to prevent reactivity accidents.
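
    For context, the stiff system being integrated is the standard point kinetics model. The sketch below solves a one-delayed-group version with a step reactivity insertion using an off-the-shelf implicit integrator (SciPy's BDF) rather than the better-basis-function scheme of the record; all parameter values are illustrative:

```python
import numpy as np
from scipy.integrate import solve_ivp

# One delayed-neutron group point kinetics; parameter values are illustrative.
beta, lam, Lambda = 0.0065, 0.08, 1e-4   # delayed fraction, decay const (1/s), generation time (s)
rho = lambda t: 0.003 if t >= 0.1 else 0.0   # step reactivity insertion at t = 0.1 s

def rhs(t, y):
    n, c = y                                  # relative power, precursor concentration
    dn = (rho(t) - beta) / Lambda * n + lam * c
    dc = beta / Lambda * n - lam * c
    return [dn, dc]

y0 = [1.0, beta / (lam * Lambda)]             # steady-state precursor level
sol = solve_ivp(rhs, (0.0, 1.0), y0, method="BDF", max_step=1e-3)  # implicit: handles stiffness
print("relative power at t = 1 s:", sol.y[0, -1])
```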

  6. Fixed Point Methods in the Stability of the Cauchy Functional Equations

    Directory of Open Access Journals (Sweden)

    Z. Dehvari

    2013-03-01

    Full Text Available By using fixed point methods, we prove some generalized Hyers-Ulam stability results for homomorphisms for the Cauchy and Cauchy-Jensen functional equations on product algebras and on triple systems.

  7. Limiting Accuracy of Segregated Solution Methods for Nonsymmetric Saddle Point Problems

    Czech Academy of Sciences Publication Activity Database

    Jiránek, P.; Rozložník, Miroslav

    Vol. 215, No. 1 (2008), pp. 28-37 ISSN 0377-0427 R&D Projects: GA MŠk 1M0554; GA AV ČR 1ET400300415 Institutional research plan: CEZ:AV0Z10300504 Keywords: saddle point problems * Schur complement reduction method * null-space projection method * rounding error analysis Subject RIV: BA - General Mathematics Impact factor: 1.048, year: 2008

  8. Invalid-point removal based on epipolar constraint in the structured-light method

    Science.gov (United States)

    Qi, Zhaoshuai; Wang, Zhao; Huang, Junhui; Xing, Chao; Gao, Jianmin

    2018-06-01

    In structured-light measurement, there unavoidably exist many invalid points caused by shadows, image noise and ambient light. According to the property of the epipolar constraint, because the retrieved phase of the invalid point is inaccurate, the corresponding projector image coordinate (PIC) will not satisfy the epipolar constraint. Based on this fact, a new invalid-point removal method based on the epipolar constraint is proposed in this paper. First, the fundamental matrix of the measurement system is calculated, which will be used for calculating the epipolar line. Then, according to the retrieved phase map of the captured fringes, the PICs of each pixel are retrieved. Subsequently, the epipolar line in the projector image plane of each pixel is obtained using the fundamental matrix. The distance between the corresponding PIC and the epipolar line of a pixel is defined as the invalidation criterion, which quantifies the satisfaction degree of the epipolar constraint. Finally, all pixels with a distance larger than a certain threshold are removed as invalid points. Experiments verified that the method is easy to implement and demonstrates better performance than state-of-the-art measurement systems.
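
    The invalidation criterion itself is a point-to-line distance. Given a fundamental matrix F mapping camera pixels to epipolar lines in the projector image, the distance of each retrieved projector image coordinate to its epipolar line can be computed as below (F and the pixel data here are made up for illustration):

```python
import numpy as np

def epipolar_distances(F, cam_pts, proj_pts):
    """Distance of each projector image coordinate (PIC) to the epipolar
    line F @ x of its camera pixel; large distances flag invalid points."""
    x = np.c_[cam_pts, np.ones(len(cam_pts))]      # homogeneous camera pixels
    lines = x @ F.T                                # epipolar lines (a, b, c)
    xp = np.c_[proj_pts, np.ones(len(proj_pts))]   # homogeneous PICs
    num = np.abs(np.sum(lines * xp, axis=1))       # |a u + b v + c|
    return num / np.hypot(lines[:, 0], lines[:, 1])

F = np.array([[0.0, -1e-6, 1e-3],                  # made-up fundamental matrix
              [1e-6, 0.0, -2e-3],
              [-1e-3, 2e-3, 1.0]])
cam = np.array([[100.0, 200.0], [321.0, 240.0]])
proj = np.array([[110.0, 195.0], [400.0, 400.0]])
d = epipolar_distances(F, cam, proj)
print(d > 1.5)       # e.g. a 1.5 px threshold marks invalid points
```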

  9. Using the method of ideal point to solve dual-objective problem for production scheduling

    Directory of Open Access Journals (Sweden)

    Mariia Marko

    2016-07-01

    Full Text Available In practice, there are often problems in which several criteria must be optimized simultaneously; these are so-called multi-objective optimization problems. In this article, we consider the use of the ideal point method to solve a two-objective optimization problem of production planning. The process of finding a solution consists of a series of steps: using the simplex method, we find the ideal point, and then, to solve the resulting scalar problems, we use the method of Lagrange multipliers.
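
    A compact numerical illustration: the ideal point is found by optimizing each objective separately, and a compromise solution is then obtained from a scalarized problem. The sketch below uses linear programming throughout, replacing the record's Lagrange-multiplier step with an equivalent Chebyshev-distance LP; the two cost vectors and the constraint are invented:

```python
import numpy as np
from scipy.optimize import linprog

# Two cost vectors over the same feasible set {x >= 0, A_ub @ x <= b_ub}
c1, c2 = np.array([1.0, 2.0]), np.array([3.0, 1.0])
A_ub = np.array([[-1.0, -1.0]])          # encodes x1 + x2 >= 4
b_ub = np.array([-4.0])

# Step 1: ideal point = each objective optimized separately (simplex-style LPs)
z_star = [linprog(c, A_ub=A_ub, b_ub=b_ub).fun for c in (c1, c2)]

# Step 2: scalarized problem over variables (x1, x2, t): minimize the
# Chebyshev distance t to the ideal point, i.e. ci @ x - zi* <= t
A = np.vstack([np.c_[A_ub, 0.0],
               np.r_[c1, -1.0][None, :],
               np.r_[c2, -1.0][None, :]])
b = np.r_[b_ub, z_star]
res = linprog(np.array([0.0, 0.0, 1.0]), A_ub=A, b_ub=b)
print("ideal point:", z_star, "compromise x:", res.x[:2])
```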

  10. Multiperiod hydrothermal economic dispatch by an interior point method

    Directory of Open Access Journals (Sweden)

    Kimball L. M.

    2002-01-01

    Full Text Available This paper presents an interior point algorithm to solve the multiperiod hydrothermal economic dispatch (HTED. The multiperiod HTED is a large scale nonlinear programming problem. Various optimization methods have been applied to the multiperiod HTED, but most neglect important network characteristics or require decomposition into thermal and hydro subproblems. The algorithm described here exploits the special bordered block diagonal structure and sparsity of the Newton system for the first order necessary conditions to result in a fast efficient algorithm that can account for all network aspects. Applying this new algorithm challenges a conventional method for the use of available hydro resources known as the peak shaving heuristic.

  11. The quadrant method measuring four points is as reliable and accurate as the quadrant method in the evaluation after anatomical double-bundle ACL reconstruction.

    Science.gov (United States)

    Mochizuki, Yuta; Kaneko, Takao; Kawahara, Keisuke; Toyoda, Shinya; Kono, Norihiko; Hada, Masaru; Ikegami, Hiroyasu; Musha, Yoshiro

    2017-11-20

    The quadrant method was described by Bernard et al., and it has been widely used for the postoperative evaluation of anterior cruciate ligament (ACL) reconstruction. The purpose of this research is to further develop the quadrant method by measuring four points, which we named the four-point quadrant method, and to compare it with the quadrant method. Three-dimensional computed tomography (3D-CT) analyses were performed in 25 patients who underwent double-bundle ACL reconstruction using the outside-in technique. The four points in this study's quadrant method were defined as point 1 (highest), point 2 (deepest), point 3 (lowest), and point 4 (shallowest) in the femoral tunnel position. The value of depth and height at each point was measured. The antero-medial (AM) tunnel is (depth1, height2) and the postero-lateral (PL) tunnel is (depth3, height4) in this four-point quadrant method. The 3D-CT images were evaluated independently by two orthopaedic surgeons. A second measurement was performed by both observers after a 4-week interval. Intra- and inter-observer reliability was calculated by means of the intra-class correlation coefficient (ICC). The accuracy of the method was also evaluated against the quadrant method. Intra-observer reliability was almost perfect for both the AM and PL tunnels (ICC > 0.81). Inter-observer reliability of the AM tunnel was substantial (ICC > 0.61) and that of the PL tunnel was almost perfect (ICC > 0.81). The AM tunnel position was 0.13% deep, 0.58% high and the PL tunnel position was 0.01% shallow, 0.13% low compared to the quadrant method. The four-point quadrant method was found to have high intra- and inter-observer reliability and accuracy. This method can evaluate the tunnel position regardless of the shape and morphology of the bone tunnel aperture, for use in comparisons, and can provide measurements that can be compared with various reconstruction methods. The four-point quadrant method of this study is considered to have clinical relevance in that it is a detailed and accurate tool for

  12. Introductory review on `Flying Triangulation': a motion-robust optical 3D measurement principle

    Science.gov (United States)

    Ettl, Svenja

    2015-04-01

    'Flying Triangulation' (FlyTri) is a recently developed principle which allows for a motion-robust optical 3D measurement of rough surfaces. It combines a simple sensor with sophisticated algorithms: a single-shot sensor acquires 2D camera images. From each camera image, a 3D profile is generated. The series of 3D profiles generated are aligned to one another by algorithms, without relying on any external tracking device. It delivers real-time feedback of the measurement process which enables an all-around measurement of objects. The principle has great potential for small-space acquisition environments, such as the measurement of the interior of a car, and motion-sensitive measurement tasks, such as the intraoral measurement of teeth. This article gives an overview of the basic ideas and applications of FlyTri. The main challenges and their solutions are discussed. Measurement examples are also given to demonstrate the potential of the measurement principle.

  13. Collective mass and zero-point energy in the generator-coordinate method

    International Nuclear Information System (INIS)

    Fiolhais, C.

    1982-01-01

    The aim of the present thesis is the study of the collective mass parameters and the zero-point energies in the GCM framework with special regard to the fission process. After the derivation of the collective Schroedinger equation in the framework of the Gaussian overlap approximation, the inertia parameters are compared with those of the adiabatic time-dependent Hartree-Fock method. Then the kinetic and the potential zero-point energies occurring in this formulation are studied. Thereafter the practical application of the described formalism is discussed. Finally, a numerical calculation of the GCM mass parameter and the zero-point energy for the fission process on the basis of a two-center shell model with a pairing force in the BCS approximation is presented. (HSI) [de]

  14. Method of nuclear reactor control using a variable temperature load dependent set point

    International Nuclear Information System (INIS)

    Kelly, J.J.; Rambo, G.E.

    1982-01-01

    A method and apparatus for controlling a nuclear reactor in response to a variable average reactor coolant temperature set point is disclosed. The set point is dependent upon the percentage of full-power load demand. A manually actuated 'droop mode' of control is provided whereby the reactor coolant temperature is allowed to drop a predetermined amount below the set point temperature, at which point control is switched from the reactor control rods exclusively to feedwater flow

  15. Five-point Element Scheme of Finite Analytic Method for Unsteady Groundwater Flow

    Institute of Scientific and Technical Information of China (English)

    Xiang Bo; Mi Xiao; Ji Changming; Luo Qingsong

    2007-01-01

    In order to improve the adaptability of the finite analytic method to irregular elements, this paper establishes a five-point element scheme of the finite analytic method by using a coordinate rotation technique. It not only solves the unsteady groundwater flow equation but also handles the boundary conditions. The method can be used to calculate the three typical groundwater problems. Compared with previously computed results, the results of this method are more satisfactory.
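
    The finite analytic scheme itself derives local analytic solutions on each element and is not reproduced here. To make the five-point stencil concrete, the sketch below instead advances the same 2D unsteady groundwater flow equation with the classical explicit five-point finite-difference scheme (all physical parameters are illustrative):

```python
import numpy as np

# Explicit five-point scheme for 2D unsteady groundwater flow,
# S dh/dt = T (d2h/dx2 + d2h/dy2); parameter values are illustrative.
T, S, dx, dt = 10.0, 1e-3, 10.0, 0.002    # transmissivity, storativity, grid, step
alpha = T * dt / (S * dx * dx)
assert alpha <= 0.25, "explicit stability limit for the 2D five-point scheme"

h = np.zeros((50, 50))
h[:, 0] = 5.0                             # fixed-head boundary on the left edge

for _ in range(2000):
    # five-point stencil: centre plus its four axis-aligned neighbours
    inner = h[1:-1, 1:-1] + alpha * (h[:-2, 1:-1] + h[2:, 1:-1] +
                                     h[1:-1, :-2] + h[1:-1, 2:] -
                                     4.0 * h[1:-1, 1:-1])
    h[1:-1, 1:-1] = inner                 # Dirichlet boundaries stay fixed
print(h[25, :5])                          # head diffusing in from the boundary
```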

  16. Improved fixed point iterative method for blade element momentum computations

    DEFF Research Database (Denmark)

    Sun, Zhenye; Shen, Wen Zhong; Chen, Jin

    2017-01-01

    The blade element momentum (BEM) theory is widely used in aerodynamic performance calculations and optimization applications for wind turbines. The fixed point iterative method is the most commonly utilized technique to solve the BEM equations. However, this method sometimes does not converge...... are addressed through both theoretical analysis and numerical tests. A term in the BEM equations that equals zero at a critical inflow angle is the source of the convergence problems. When the initial inflow angle is set larger than the critical inflow angle and the relaxation methodology is adopted

  17. Thermal Entanglement and Critical Behavior of Magnetic Properties on a Triangulated Kagomé Lattice

    Directory of Open Access Journals (Sweden)

    N. Ananikian

    2011-01-01

    Full Text Available The equilibrium magnetic and entanglement properties of a spin-1/2 Ising-Heisenberg model on a triangulated Kagomé lattice are analyzed by means of the effective field for the Gibbs-Bogoliubov inequality. The calculation is reduced to decoupled individual (cluster) trimers due to the separable character of the Ising-type exchange interactions between the Heisenberg trimers. The concurrence in terms of the three-qubit isotropic Heisenberg model in the effective Ising field in the absence of a magnetic field is non-zero. The magnetic and entanglement properties exhibit common (plateau, peak) features driven by a magnetic field and (antiferromagnetic) exchange interaction. The (quantum) entangled and non-entangled phases can be exploited as a useful tool for signalling the quantum phase transitions and crossovers at finite temperatures. The critical temperature of order-disorder coincides with the threshold temperature of thermal entanglement.

  18. Material-Point-Method Analysis of Collapsing Slopes

    DEFF Research Database (Denmark)

    Andersen, Søren; Andersen, Lars

    2009-01-01

    To understand the dynamic evolution of landslides and predict their physical extent, a computational model is required that is capable of analysing complex material behaviour as well as large strains and deformations. Here, a model is presented based on the so-called generalised-interpolation material-point method......, a deformed material description is introduced, based on time integration of the deformation gradient and utilising Gauss quadrature over the volume associated with each material point. The method has been implemented in a Fortran code and employed for the analysis of a landslide that took place during

  19. A topology based approach to categorization of fingerprint images

    DEFF Research Database (Denmark)

    Aabrandt, A.; Olsen, M. A.; Busch, C.

    2012-01-01

    , an image is viewed as a triangulated point cloud and the topology associated with this construct is summarized using its first Betti number - a number that indicates the number of distinct cycles in the triangulation associated with the particular image. This number is then compared against the first Betti...... numbers of “n” prototype images in order to perform classification (“fingerprint” vs “non-fingerprint”). The proposed method is compared against SIVV (a tool provided by NIST). Experimental results on fingerprint and iris databases demonstrate the potential of the scheme....

  20. Methods and considerations to determine sphere center from terrestrial laser scanner point cloud data

    International Nuclear Information System (INIS)

    Rachakonda, Prem; Muralikrishnan, Bala; Lee, Vincent; Shilling, Meghan; Sawyer, Daniel; Cournoyer, Luc; Cheok, Geraldine

    2017-01-01

    The Dimensional Metrology Group at the National Institute of Standards and Technology is performing research to support the development of documentary standards within the ASTM E57 committee. This committee is addressing the point-to-point performance evaluation of a subclass of 3D imaging systems called terrestrial laser scanners (TLSs), which are laser-based and use a spherical coordinate system. This paper discusses the usage of sphere targets for this effort, and methods to minimize the errors due to the determination of their centers. The key contributions of this paper include methods to segment sphere data from a TLS point cloud, and the study of some of the factors that influence the determination of sphere centers. (paper)
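
    A common building block in such work is the least-squares estimate of a sphere center from segmented points. The algebraic fit below is a textbook formulation, not necessarily the one used by the authors; it linearizes |p - c|^2 = r^2 and solves for the center and radius, and the simulated cap mimics a scanner seeing only part of the sphere:

```python
import numpy as np

def fit_sphere(pts):
    """Algebraic least-squares sphere fit: |p|^2 = 2 c.p + (r^2 - |c|^2),
    which is linear in the centre c and the scalar d = r^2 - |c|^2."""
    A = np.c_[2.0 * pts, np.ones(len(pts))]
    b = (pts ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    c, d = sol[:3], sol[3]
    return c, np.sqrt(d + c @ c)

# Noisy partial cap of a sphere centred at (1, 2, 3) m with r = 0.075 m
rng = np.random.default_rng(5)
phi = rng.uniform(0.0, np.pi / 3.0, 2000)   # only a cap is visible to a TLS
th = rng.uniform(0.0, 2.0 * np.pi, 2000)
r = 0.075
pts = np.c_[r * np.sin(phi) * np.cos(th),
            r * np.sin(phi) * np.sin(th),
            r * np.cos(phi)]
pts += np.array([1.0, 2.0, 3.0]) + rng.normal(0.0, 1e-4, pts.shape)
print(fit_sphere(pts))                      # centre ~ (1, 2, 3), radius ~ 0.075
```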

  1. The Oil Point Method - A tool for indicative environmental evaluation in material and process selection

    DEFF Research Database (Denmark)

    Bey, Niki

    2000-01-01

    to three essential assessment steps, the method enables rough environmental evaluations and supports in this way material- and process-related decision-making in the early stages of design. In its overall structure, the Oil Point Method is related to Life Cycle Assessment - except for two main differences...... of environmental evaluation and only approximate information about the product and its life cycle. This dissertation addresses this challenge in presenting a method, which is tailored to these requirements of designers - the Oil Point Method (OPM). In providing environmental key information and confining itself...

  2. A complete solution of cartographic displacement based on elastic beams model and Delaunay triangulation

    Science.gov (United States)

    Liu, Y.; Guo, Q.; Sun, Y.

    2014-04-01

    In map production and generalization, spatial conflicts inevitably arise, and their detection and resolution still require manual operation. This has become a bottleneck hindering the development of automated cartographic generalization. Displacement is the most useful contextual operator, often used to resolve conflicts arising between two or more map objects. Automated generalization research has reported many displacement approaches, including sequential approaches and optimization approaches. As an effective optimization approach based on energy minimization principles, the elastic beams model has been used several times to resolve the displacement problem for roads and buildings. However, to realize a complete displacement solution, techniques for conflict detection and spatial context analysis should also be taken into consideration. We therefore propose in this paper a complete displacement solution based on the combined use of the elastic beams model and a constrained Delaunay triangulation (CDT). The solution is designed as a cyclic and iterative process containing two phases: a detection phase and a displacement phase. In the detection phase, the CDT of the map is used to detect proximity conflicts, identify spatial relationships and structures, and construct an auxiliary structure, so as to support the beams-based displacement phase. In addition, to improve the displacement algorithm, a method for adaptive parameter setting and a new iterative strategy are put forward. Finally, we implemented our solution on a map generalization testing platform and successfully tested it against two hand-generated test datasets of roads and buildings, respectively.

  3. A robust real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Wenyang [Department of Bioengineering, University of California, Los Angeles, Los Angeles, California 90095 (United States); Cheung, Yam [Department of Radiation Oncology, University of Texas Southwestern, Dallas, Texas 75390 (United States); Sawant, Amit [Department of Radiation Oncology, University of Texas Southwestern, Dallas, Texas, 75390 and Department of Radiation Oncology, University of Maryland, College Park, Maryland 20742 (United States); Ruan, Dan, E-mail: druan@mednet.ucla.edu [Department of Bioengineering, University of California, Los Angeles, Los Angeles, California 90095 and Department of Radiation Oncology, University of California, Los Angeles, Los Angeles, California 90095 (United States)

    2016-05-15

    Purpose: To develop a robust and real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system. Methods: The authors have developed a robust and fast surface reconstruction method on point clouds acquired by the photogrammetry system, without explicitly solving the partial differential equation required by a typical variational approach. Taking advantage of the overcomplete nature of the acquired point clouds, their method solves and propagates a sparse linear relationship from the point cloud manifold to the surface manifold, assuming both manifolds share similar local geometry. With relatively consistent point cloud acquisitions, the authors propose a sparse regression (SR) model to directly approximate the target point cloud as a sparse linear combination from the training set, assuming that the point correspondences built by the iterative closest point (ICP) are reasonably accurate and have residual errors following a Gaussian distribution. To accommodate changing noise levels and/or the presence of inconsistent occlusions during the acquisition, the authors further propose a modified sparse regression (MSR) model to model the potentially large and sparse error built by ICP with a Laplacian prior. The authors evaluated the proposed method on both clinical point clouds acquired under consistent acquisition conditions and on point clouds with inconsistent occlusions. The authors quantitatively evaluated the reconstruction performance with respect to root-mean-squared-error, by comparing its reconstruction results against those from the variational method. Results: On clinical point clouds, both the SR and MSR models have achieved sub-millimeter reconstruction accuracy and reduced the reconstruction time by two orders of magnitude to a subsecond reconstruction time. On point clouds with inconsistent occlusions, the MSR model has demonstrated its advantage in achieving consistent and robust performance despite the introduced

  4. A robust real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system

    International Nuclear Information System (INIS)

    Liu, Wenyang; Cheung, Yam; Sawant, Amit; Ruan, Dan

    2016-01-01

    Purpose: To develop a robust and real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system. Methods: The authors have developed a robust and fast surface reconstruction method on point clouds acquired by the photogrammetry system, without explicitly solving the partial differential equation required by a typical variational approach. Taking advantage of the overcomplete nature of the acquired point clouds, their method solves and propagates a sparse linear relationship from the point cloud manifold to the surface manifold, assuming both manifolds share similar local geometry. With relatively consistent point cloud acquisitions, the authors propose a sparse regression (SR) model to directly approximate the target point cloud as a sparse linear combination from the training set, assuming that the point correspondences built by the iterative closest point (ICP) are reasonably accurate and have residual errors following a Gaussian distribution. To accommodate changing noise levels and/or the presence of inconsistent occlusions during the acquisition, the authors further propose a modified sparse regression (MSR) model to model the potentially large and sparse error built by ICP with a Laplacian prior. The authors evaluated the proposed method on both clinical point clouds acquired under consistent acquisition conditions and on point clouds with inconsistent occlusions. The authors quantitatively evaluated the reconstruction performance with respect to root-mean-squared-error, by comparing its reconstruction results against those from the variational method. Results: On clinical point clouds, both the SR and MSR models have achieved sub-millimeter reconstruction accuracy and reduced the reconstruction time by two orders of magnitude to a subsecond reconstruction time. On point clouds with inconsistent occlusions, the MSR model has demonstrated its advantage in achieving consistent and robust performance despite the introduced

  5. A RECOGNITION METHOD FOR AIRPLANE TARGETS USING 3D POINT CLOUD DATA

    Directory of Open Access Journals (Sweden)

    M. Zhou

    2012-07-01

    Full Text Available LiDAR is capable of obtaining three-dimensional coordinates of the terrain and targets directly and is widely applied in digital city construction, emergency disaster mitigation and environmental monitoring. In particular, because of its ability to penetrate low-density vegetation and canopy, the LiDAR technique has superior advantages in the detection and recognition of hidden and camouflaged targets. Based on the multi-echo data of LiDAR, and combining the invariant moment theory, this paper presents a recognition method for classic airplanes (even hidden targets mainly under the cover of canopy) using KD-tree-segmented point cloud data. The proposed algorithm first uses a KD-tree to organize and manage the point cloud data and makes use of a clustering method to segment objects, and then prior knowledge and invariant recognition moments are utilized to recognise airplanes. The outcomes of this test verified the practicality and feasibility of the method derived in this paper, which could be applied to target measuring and modelling in subsequent data processing.
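
    As an illustration of the organize-then-segment step, here is a hedged Python sketch of greedy Euclidean clustering over a KD-tree; the paper's later stages (prior knowledge and invariant moments) are omitted, and the radius and minimum cluster size are made-up parameters, not values from the paper.

        import numpy as np
        from scipy.spatial import cKDTree

        def euclidean_cluster(points, radius=1.0, min_size=30):
            # greedy region growing over KD-tree neighbourhoods
            tree = cKDTree(points)
            unvisited = set(range(len(points)))
            clusters = []
            while unvisited:
                seed = unvisited.pop()
                frontier, cluster = [seed], [seed]
                while frontier:
                    idx = frontier.pop()
                    for nb in tree.query_ball_point(points[idx], r=radius):
                        if nb in unvisited:
                            unvisited.remove(nb)
                            frontier.append(nb)
                            cluster.append(nb)
                if len(cluster) >= min_size:
                    clusters.append(np.array(cluster))
            return clusters   # each entry indexes one candidate object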

  6. Infeasible Interior-Point Methods for Linear Optimization Based on Large Neighborhood

    NARCIS (Netherlands)

    Asadi, A.R.; Roos, C.

    2015-01-01

    In this paper, we design a class of infeasible interior-point methods for linear optimization based on large neighborhood. The algorithm is inspired by a full-Newton step infeasible algorithm with a linear convergence rate in problem dimension that was recently proposed by the second author.

  7. Method and apparatus for continuously detecting and monitoring the hydrocarbon dew-point of gas

    Energy Technology Data Exchange (ETDEWEB)

    Boyle, G.J.; Pritchard, F.R.

    1987-08-04

    This patent describes a method and apparatus for continuously detecting and monitoring the hydrocarbon dew-point of a gas. A gas sample is supplied to a dew-point detector and the temperature of a portion of the sample gas stream to be investigated is lowered progressively prior to detection until the dew-point is reached. The presence of condensate within the flowing gas is detected and subsequently the supply gas sample is heated to above the dew-point. The procedure of cooling and heating the gas stream continuously in a cyclical manner is repeated.

  8. An adaptive Monte Carlo method under emission point as sampling station for deep penetration calculation

    International Nuclear Information System (INIS)

    Wang, Ruihong; Yang, Shulin; Pei, Lucheng

    2011-01-01

    The deep penetration problem has been one of the difficult problems in shielding calculations with the Monte Carlo method for several decades. In this paper, an adaptive technique with the emission point as a sampling station is presented. The main advantage is to choose the most suitable sampling number from the emission point station so as to minimize the total cost in the process of the random walk. Further, the related importance sampling method is also derived. The main principle is to define the importance function of the response due to the particle state and to ensure that the sampling number of the emission particles is proportional to the importance function. The numerical results show that the adaptive method with the emission point as a station can overcome, to some degree, the difficulty of underestimating the result, and the related importance sampling method gives satisfactory results as well. (author)
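
    The abstract does not spell out the algorithm, but the underlying importance-sampling idea can be shown on a toy deep-penetration problem: estimating the probability that a particle crosses a slab 20 mean free paths thick. Analog sampling essentially never scores; biasing the emission density toward long paths and re-weighting recovers the answer. The biasing parameter below is a heuristic assumption, not the paper's choice.

        import numpy as np

        rng = np.random.default_rng(1)
        a, n = 20.0, 100_000       # slab thickness (mean free paths), sample size

        # analog Monte Carlo: the penetration event is far too rare to score
        x = rng.exponential(1.0, n)
        print("analog estimate   :", np.mean(x > a))         # almost surely 0.0

        # importance sampling from q(x) = b*exp(-b*x) with b < 1 (heuristic b = 1/a)
        b = 1.0 / a
        x = rng.exponential(1.0 / b, n)
        w = np.exp(-x) / (b * np.exp(-b * x))                # weight p(x)/q(x)
        print("biased estimate   :", np.mean(w * (x > a)))   # ~ exp(-a)
        print("exact probability :", np.exp(-a))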

  9. Multiomics Data Triangulation for Asthma Candidate Biomarkers and Precision Medicine.

    Science.gov (United States)

    Pecak, Matija; Korošec, Peter; Kunej, Tanja

    2018-06-01

    Asthma is a common complex disorder and has been subject to intensive omics research for disease susceptibility and therapeutic innovation. Candidate biomarkers of asthma and its precision treatment demand that they stand the test of multiomics data triangulation before they can be prioritized for clinical applications. We classified the biomarkers of asthma after a search of the literature, based on whether or not a given biomarker candidate is reported in multiple omics platforms and methodologies. Using PubMed and Web of Science, we identified omics studies of asthma conducted on diverse platforms, using keywords such as asthma, genomics, metabolomics, and epigenomics. We extracted data about asthma candidate biomarkers from 73 articles and developed a catalog of 190 potential asthma biomarkers (167 human, 23 animal), comprising DNA loci, transcripts, proteins, metabolites, epimutations, and noncoding RNAs. The data were sorted according to 13 omics types: genomics, epigenomics, transcriptomics, proteomics, interactomics, metabolomics, ncRNAomics, glycomics, lipidomics, environmental omics, pharmacogenomics, phenomics, and integrative omics. Importantly, we found that 10 candidate biomarkers were apparent at two or more omics levels, thus promising potential for further biomarker research and development and precision medicine applications. This multiomics catalog, reported herein for the first time, contributes to future decision-making on the prioritization of biomarkers and validation efforts for precision medicine in asthma. The findings may also facilitate meta-analyses and integrative omics studies in the future.

  10. A travel time forecasting model based on change-point detection method

    Science.gov (United States)

    LI, Shupeng; GUANG, Xiaoping; QIAN, Yongsheng; ZENG, Junwei

    2017-06-01

    Travel time parameters obtained from road traffic sensor data play an important role in traffic management practice. In this paper, a travel time forecasting model is proposed for urban road traffic sensor data based on the method of change-point detection. A first-order differencing operation is used to preprocess the actual loop data; a change-point detection algorithm is designed to classify the long sequence of travel time data items into several patterns; then a travel time forecasting model is established based on the autoregressive integrated moving average (ARIMA) model. By computer simulation, different control parameters are chosen for the adaptive change-point search over the travel time series, which is divided into several sections of similar state. Then a linear weight function is used to fit the travel time sequence and to forecast travel time. The results show that the model has high accuracy in travel time forecasting.
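
    The abstract leaves the change-point detector unspecified; a generic stand-in is a two-sided CUSUM on the first-differenced series, sketched below (the slack and threshold values are illustrative, not the paper's). Each detected segment could then be fitted with its own ARIMA model.

        import numpy as np

        def cusum_change_points(series, k=0.5, h=5.0):
            # two-sided CUSUM on the standardized first differences;
            # k = slack per step, h = decision threshold (illustrative)
            d = np.diff(series)
            d = (d - d.mean()) / d.std()
            pos = neg = 0.0
            changes = []
            for i, v in enumerate(d):
                pos = max(0.0, pos + v - k)
                neg = min(0.0, neg + v + k)
                if pos > h or neg < -h:
                    changes.append(i + 1)   # +1: differencing shifts indices
                    pos = neg = 0.0
            return changes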

  11. Dew Point Calibration System Using a Quartz Crystal Sensor with a Differential Frequency Method.

    Science.gov (United States)

    Lin, Ningning; Meng, Xiaofeng; Nie, Jing

    2016-11-18

    In this paper, the influence of temperature on quartz crystal microbalance (QCM) sensor response during dew point calibration is investigated. The aim is to present a compensation method to eliminate the impact of temperature on frequency acquisition. A new sensitive structure is proposed with double QCMs: one is kept in contact with the environment, whereas the other is not exposed to the atmosphere. There is a thermally conductive silicone pad between each crystal and a refrigeration device to maintain a uniform temperature condition. A differential frequency method is described in detail and is applied to calibrate the frequency characteristics of the QCM at a dew point of -3.75 °C. It is worth noting that the frequency changes of the two QCMs were approximately opposite when the temperature conditions were changed simultaneously. The results from continuous experiments show that the frequencies of the two QCMs at the moment the dew point is reached have strong consistency and high repeatability, leading to the conclusion that the sensitive structure can calibrate dew points with high reliability.
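
    The compensation reduces to subtracting the sealed crystal's frequency track from the exposed one, cancelling the common temperature-induced drift; a toy numerical sketch follows, with all numbers invented for illustration.

        import numpy as np

        t = np.linspace(0, 60, 601)                  # time (s)
        drift = 2.0 * np.sin(0.1 * t)                # common thermal drift (made up)
        f_exposed = 5e6 + drift - 40.0 * (t > 30)    # dew at t = 30 s loads the crystal
        f_sealed = 5e6 + drift                       # reference sees only the drift

        f_diff = f_exposed - f_sealed                # differential signal: drift cancels
        dew_idx = int(np.argmax(np.abs(f_diff) > 10.0))
        print("dew detected at t =", t[dew_idx], "s")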

  12. Data governance requirements for distributed clinical research networks: triangulating perspectives of diverse stakeholders.

    Science.gov (United States)

    Kim, Katherine K; Browe, Dennis K; Logan, Holly C; Holm, Roberta; Hack, Lori; Ohno-Machado, Lucila

    2014-01-01

    There is currently limited information on best practices for the development of governance requirements for distributed research networks (DRNs), an emerging model that promotes clinical data reuse and improves timeliness of comparative effectiveness research. Much of the existing information is based on a single type of stakeholder such as researchers or administrators. This paper reports on a triangulated approach to developing DRN data governance requirements based on a combination of policy analysis with experts, interviews with institutional leaders, and patient focus groups. This approach is illustrated with an example from the Scalable National Network for Effectiveness Research, which resulted in 91 requirements. These requirements were analyzed against the Fair Information Practice Principles (FIPPs) and Health Insurance Portability and Accountability Act (HIPAA) protected versus non-protected health information. The requirements addressed all FIPPs, showing how a DRN's technical infrastructure is able to fulfill HIPAA regulations, protect privacy, and provide a trustworthy platform for research.

  13. A continuous surface reconstruction method on point cloud captured from a 3D surface photogrammetry system

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Wenyang [Department of Bioengineering, University of California, Los Angeles, California 90095 (United States); Cheung, Yam; Sabouri, Pouya; Arai, Tatsuya J.; Sawant, Amit [Department of Radiation Oncology, University of Texas Southwestern, Dallas, Texas 75390 (United States); Ruan, Dan, E-mail: druan@mednet.ucla.edu [Department of Bioengineering, University of California, Los Angeles, California 90095 and Department of Radiation Oncology, University of California, Los Angeles, California 90095 (United States)

    2015-11-15

    Purpose: To accurately and efficiently reconstruct a continuous surface from noisy point clouds captured by a surface photogrammetry system (VisionRT). Methods: The authors have developed a level-set based surface reconstruction method on point clouds captured by a surface photogrammetry system (VisionRT). The proposed method reconstructs an implicit and continuous representation of the underlying patient surface by optimizing a regularized fitting energy, offering extra robustness to noise and missing measurements. By contrast to explicit/discrete meshing-type schemes, their continuous representation is particularly advantageous for subsequent surface registration and motion tracking by eliminating the need for maintaining explicit point correspondences as in discrete models. The authors solve the proposed method with an efficient narrowband evolving scheme. The authors evaluated the proposed method on both phantom and human subject data with two sets of complementary experiments. In the first set of experiment, the authors generated a series of surfaces each with different black patches placed on one chest phantom. The resulting VisionRT measurements from the patched area had different degree of noise and missing levels, since VisionRT has difficulties in detecting dark surfaces. The authors applied the proposed method to point clouds acquired under these different configurations, and quantitatively evaluated reconstructed surfaces by comparing against a high-quality reference surface with respect to root mean squared error (RMSE). In the second set of experiment, the authors applied their method to 100 clinical point clouds acquired from one human subject. In the absence of ground-truth, the authors qualitatively validated reconstructed surfaces by comparing the local geometry, specifically mean curvature distributions, against that of the surface extracted from a high-quality CT obtained from the same patient. Results: On phantom point clouds, their method

  14. An Improved Computational Method for the Calculation of Mixture Liquid-Vapor Critical Points

    Science.gov (United States)

    Dimitrakopoulos, Panagiotis; Jia, Wenlong; Li, Changjun

    2014-05-01

    Knowledge of critical points is important to determine the phase behavior of a mixture. This work proposes a reliable and accurate method to locate the liquid-vapor critical point of a given mixture. The theoretical model is developed from the rigorous definition of critical points, based on the SRK equation of state (SRK EoS) or, alternatively, on the PR EoS. In order to solve the resulting system of nonlinear equations, an improved method is introduced into an existing Newton-Raphson algorithm, which can calculate all the variables simultaneously in each iteration step. The improvements mainly focus on the derivatives of the Jacobian matrix, on the convergence criteria, and on the damping coefficient. All equations and related conditions required for the computation scheme are illustrated in this paper. Finally, experimental data for the critical points of 44 mixtures are adopted in order to validate the method. For the SRK EoS, the average absolute errors of the predicted critical-pressure and critical-temperature values are 123.82 kPa and 3.11 K, respectively, whereas the commercial software package Calsep PVTSIM's prediction errors are 131.02 kPa and 3.24 K. For the PR EoS, the two above-mentioned average absolute errors are 129.32 kPa and 2.45 K, while PVTSIM's errors are 137.24 kPa and 2.55 K, respectively.
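
    The improvements listed (Jacobian derivatives, convergence criteria, damping coefficient) suggest a damped Newton-Raphson iteration; here is a generic sketch with the EoS-specific residual functions left abstract and a toy system standing in for the criticality conditions. None of it is the paper's code.

        import numpy as np

        def damped_newton(F, J, x0, tol=1e-10, max_iter=50):
            # Newton steps, halving the step until the residual norm decreases
            x = np.asarray(x0, dtype=float)
            for _ in range(max_iter):
                f = F(x)
                if np.linalg.norm(f) < tol:
                    return x
                dx = np.linalg.solve(J(x), -f)
                lam = 1.0   # damping coefficient
                while np.linalg.norm(F(x + lam * dx)) >= np.linalg.norm(f) and lam > 1e-4:
                    lam *= 0.5
                x = x + lam * dx
            raise RuntimeError("no convergence")

        # toy stand-in for the criticality residuals
        F = lambda x: np.array([x[0]**2 + x[1]**2 - 4.0, x[0] - x[1]])
        J = lambda x: np.array([[2 * x[0], 2 * x[1]], [1.0, -1.0]])
        print(damped_newton(F, J, [3.0, 1.0]))   # -> [sqrt(2), sqrt(2)]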

  15. A robust real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system.

    Science.gov (United States)

    Liu, Wenyang; Cheung, Yam; Sawant, Amit; Ruan, Dan

    2016-05-01

    To develop a robust and real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system. The authors have developed a robust and fast surface reconstruction method on point clouds acquired by the photogrammetry system, without explicitly solving the partial differential equation required by a typical variational approach. Taking advantage of the overcomplete nature of the acquired point clouds, their method solves and propagates a sparse linear relationship from the point cloud manifold to the surface manifold, assuming both manifolds share similar local geometry. With relatively consistent point cloud acquisitions, the authors propose a sparse regression (SR) model to directly approximate the target point cloud as a sparse linear combination from the training set, assuming that the point correspondences built by the iterative closest point (ICP) algorithm are reasonably accurate and have residual errors following a Gaussian distribution. To accommodate changing noise levels and/or the presence of inconsistent occlusions during the acquisition, the authors further propose a modified sparse regression (MSR) model that models the potentially large and sparse error introduced by ICP with a Laplacian prior. The authors evaluated the proposed method on both clinical point clouds acquired under consistent acquisition conditions and on point clouds with inconsistent occlusions. The authors quantitatively evaluated the reconstruction performance with respect to root-mean-squared error, comparing the reconstruction results against those from the variational method. On clinical point clouds, both the SR and MSR models achieved sub-millimeter reconstruction accuracy and reduced the reconstruction time by two orders of magnitude, to a subsecond reconstruction time. On point clouds with inconsistent occlusions, the MSR model demonstrated its advantage in achieving consistent and robust performance despite the introduced occlusions. The authors have

  16. A continuous surface reconstruction method on point cloud captured from a 3D surface photogrammetry system.

    Science.gov (United States)

    Liu, Wenyang; Cheung, Yam; Sabouri, Pouya; Arai, Tatsuya J; Sawant, Amit; Ruan, Dan

    2015-11-01

    To accurately and efficiently reconstruct a continuous surface from noisy point clouds captured by a surface photogrammetry system (VisionRT). The authors have developed a level-set based surface reconstruction method on point clouds captured by a surface photogrammetry system (VisionRT). The proposed method reconstructs an implicit and continuous representation of the underlying patient surface by optimizing a regularized fitting energy, offering extra robustness to noise and missing measurements. By contrast to explicit/discrete meshing-type schemes, their continuous representation is particularly advantageous for subsequent surface registration and motion tracking by eliminating the need for maintaining explicit point correspondences as in discrete models. The authors solve the proposed method with an efficient narrowband evolving scheme. The authors evaluated the proposed method on both phantom and human subject data with two sets of complementary experiments. In the first set of experiment, the authors generated a series of surfaces each with different black patches placed on one chest phantom. The resulting VisionRT measurements from the patched area had different degree of noise and missing levels, since VisionRT has difficulties in detecting dark surfaces. The authors applied the proposed method to point clouds acquired under these different configurations, and quantitatively evaluated reconstructed surfaces by comparing against a high-quality reference surface with respect to root mean squared error (RMSE). In the second set of experiment, the authors applied their method to 100 clinical point clouds acquired from one human subject. In the absence of ground-truth, the authors qualitatively validated reconstructed surfaces by comparing the local geometry, specifically mean curvature distributions, against that of the surface extracted from a high-quality CT obtained from the same patient. On phantom point clouds, their method achieved submillimeter

  17. Apparatus and method for implementing power saving techniques when processing floating point values

    Science.gov (United States)

    Kim, Young Moon; Park, Sang Phill

    2017-10-03

    An apparatus and method are described for reducing power when reading and writing graphics data. For example, one embodiment of an apparatus comprises: a graphics processor unit (GPU) to process graphics data including floating point data; a set of registers, at least one of the registers of the set partitioned to store the floating point data; and encode/decode logic to reduce a number of binary 1 values being read from the at least one register by causing a specified set of bit positions within the floating point data to be read out as 0s rather than 1s.

  18. A comparative study of the maximum power point tracking methods for PV systems

    International Nuclear Information System (INIS)

    Liu, Yali; Li, Ming; Ji, Xu; Luo, Xi; Wang, Meidi; Zhang, Ying

    2014-01-01

    Highlights: • An improved maximum power point tracking method for PV systems is proposed. • The theoretical derivation procedure of the proposed method is provided. • Simulation models of MPPT trackers were established in MATLAB/Simulink. • Experiments were conducted to verify the effectiveness of the proposed MPPT method. - Abstract: Maximum power point tracking (MPPT) algorithms play an important role in optimizing the power and efficiency of a photovoltaic (PV) generation system. To address the trade-off in the classical Perturb and Observe (P&Oa) method between response speed and steady-state tracking accuracy, an improved P&O (P&Ob) method is put forward in this paper using the Aitken interpolation algorithm. To validate the correctness and performance of the proposed method, simulation and experimental studies were carried out. Simulation models of the classical P&Oa method and the improved P&Ob method were established in MATLAB/Simulink to analyze each technique under varying solar irradiation and temperature. The experimental results show that the tracking efficiency of the P&Ob method averages 93%, compared to 72% for the P&Oa method; this conclusion basically agrees with the simulation study. Finally, the applicable conditions and scope of these MPPT methods in practical applications are proposed.
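
    For reference, the classical fixed-step P&O loop that both variants build on can be sketched in a few lines; the paper's improvement replaces the fixed step with an Aitken-interpolation-based update, which is not reproduced here, and all names and values below are illustrative.

        def perturb_and_observe(read_pv, set_duty, d0=0.5, step=0.01, n_steps=1000):
            # classical fixed-step P&O MPPT;
            # read_pv() -> (voltage, current), set_duty(d) applies duty cycle d
            d = d0
            set_duty(d)
            v, i = read_pv()
            p_prev, direction = v * i, +1
            for _ in range(n_steps):
                d = min(max(d + direction * step, 0.0), 1.0)
                set_duty(d)
                v, i = read_pv()
                p = v * i
                if p < p_prev:          # power fell: stepped past the MPP, reverse
                    direction = -direction
                p_prev = p
            return d

        # toy stand-in for the hardware callbacks: power peaks at d = 0.62
        state = {"d": 0.5}
        set_duty = lambda d: state.update(d=d)
        read_pv = lambda: (30.0 * (1.0 - (state["d"] - 0.62) ** 2), 1.0)
        print(perturb_and_observe(read_pv, set_duty))   # settles near 0.62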

  19. Time discretization of the point kinetic equations using matrix exponential method and First-Order Hold

    International Nuclear Information System (INIS)

    Park, Yujin; Kazantzis, Nikolaos; Parlos, Alexander G.; Chong, Kil To

    2013-01-01

    Highlights: • Numerical solution of stiff differential equations using the matrix exponential method. • The approximation is based on a First-Order Hold assumption. • Various input examples applied to the point kinetics equations. • The method proves useful and effective. - Abstract: A system of nonlinear differential equations is derived to model the dynamics of neutron density and the delayed neutron precursors within a point kinetics equation modeling framework for a nuclear reactor. The point kinetic equations are mathematically characterized as stiff, occasionally nonlinear, ordinary differential equations, posing significant challenges when numerical solutions are sought and traditionally resulting in the need for smaller time step intervals within various computational schemes. In light of the above realization, the present paper proposes a new discretization method inspired by system-theoretic notions and technically based on a combination of the matrix exponential method (MEM) and the First-Order Hold (FOH) assumption. Under the proposed time discretization structure, the sampled-data representation of the nonlinear point kinetic system of equations is derived. The performance of the proposed time discretization procedure is evaluated using several case studies with sinusoidal reactivity profiles and multiple input examples (reactivity and neutron source function). It is shown that, by applying the proposed method under a First-Order Hold for the neutron density and the precursor concentrations at each time step interval, the stiffness problem associated with the point kinetic equations can be adequately addressed and resolved. Finally, as evidenced by the aforementioned detailed simulation studies, the proposed method retains its validity and accuracy for a wide range of reactor operating conditions, including large sampling periods dictated by physical and/or technical limitations associated with the current state of sensor and
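
    scipy exposes a matrix-exponential discretization with a first-order hold for linear state-space systems; the sketch below applies it to a generic stiff two-state system as a stand-in for the point kinetics equations, which additionally couple nonlinearly through reactivity. Matrices and step size are invented for illustration, not taken from the paper.

        import numpy as np
        from scipy.signal import cont2discrete

        A = np.array([[-100.0, 1.0],      # stiff linear stand-in: x' = A x + B u
                      [0.08, -0.08]])
        B = np.array([[1.0], [0.0]])
        C, D = np.eye(2), np.zeros((2, 1))

        dt = 0.1                          # a "large" step for this stiffness
        Ad, Bd, Cd, Dd, _ = cont2discrete((A, B, C, D), dt, method="foh")

        x = np.array([1.0, 1.0])          # march the sampled-data model
        for k in range(10):
            u = np.array([np.sin(0.5 * k * dt)])   # a reactivity-like input
            x = Ad @ x + Bd @ u
        print(x)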

  20. A Novel Complementary Method for the Point-Scan Nondestructive Tests Based on Lamb Waves

    Directory of Open Access Journals (Sweden)

    Rahim Gorgin

    2014-01-01

    Full Text Available This study presents a novel area-scan damage identification method based on Lamb waves which can be used as a complementary method for point-scan nondestructive techniques. The proposed technique is able to identify the most probable locations of damage prior to the point-scan test, which decreases the time and cost of inspection. The test-piece surface was partitioned into smaller areas and the probability of damage presence in each area was evaluated. The A0 mode of the Lamb wave was generated and collected using a mobile handmade transducer set at each area. Subsequently, a damage presence probability index (DPPI) based on the energy of the captured responses was defined for each area. The area with the highest DPPI value highlights the most probable locations of damage in the test-piece. Point-scan nondestructive methods can then be used once these areas are found to identify the damage in detail. The approach was validated by predicting the most probable locations of representative damage, including a through-thickness hole and a crack in aluminum plates. The experimental results demonstrated the high potential of the developed method in defining the most probable locations of damage in structures.

  1. The Oblique Basis Method from an Engineering Point of View

    International Nuclear Information System (INIS)

    Gueorguiev, V G

    2012-01-01

    The oblique basis method is reviewed from an engineering point of view related to vibration and control theory. Examples are used to demonstrate and relate the oblique basis in nuclear physics to equivalent mathematical problems in vibration theory. The mathematical techniques, such as principal coordinates and root locus, used by vibration and control theory engineers are shown to be relevant to the Richardson-Gaudin pairing-like problems in nuclear physics.

  2. Penyelesaian Numerik Persamaan Advection Dengan Radial Point Interpolation Method dan Integrasi Waktu Dengan Discontinuous Galerkin Method

    Directory of Open Access Journals (Sweden)

    Kresno Wikan Sadono

    2016-12-01

    Full Text Available Differential equations are widely used to describe a variety of phenomena in science and engineering. Many complex problems in everyday life can be modeled with differential equations and solved by numerical methods. Among the numerical methods, the meshfree or meshless methods have developed in recent years, dispensing with the generation of elements on the domain. This research combines a meshless method, the radial point interpolation method (RPIM), with the discontinuous Galerkin method (DGM) for time integration; the combined method is called RPIM-DGM. The RPIM-DGM is applied to the advection equation in one dimension. The RPIM uses the multiquadratic function (MQ) as its basis function, and the time integration is derived for both linear-DGM and quadratic-DGM. The simulation results show that the method approximates the analytical results well. The numerical results with RPIM-DGM show that more nodes and a smaller time increment give more accurate numerical results. The results also show that, for a given time increment and number of nodes, numerical time integration with quadratic-DGM further improves accuracy compared with linear-DGM. [Title: Numerical solution of the advection equation with the radial point interpolation method and the discontinuous Galerkin method for time integration]

  3. Realist explanatory theory building method for social epidemiology: a protocol for a mixed method multilevel study of neighbourhood context and postnatal depression.

    Science.gov (United States)

    Eastwood, John G; Jalaludin, Bin B; Kemp, Lynn A

    2014-01-01

    A recent criticism of social epidemiological studies, and multi-level studies in particular, has been a paucity of theory. We present here the protocol for a study that aims to build a theory of the social epidemiology of maternal depression. We use a critical realist approach which is trans-disciplinary, encompassing both quantitative and qualitative traditions, and that assumes both ontological and hierarchical stratification of reality. We describe a critical realist Explanatory Theory Building Method comprising: 1) an emergent phase, 2) a construction phase, and 3) a confirmatory phase. A concurrent triangulated mixed method multilevel cross-sectional study design is described. The Emergent Phase uses interviews, focus groups, exploratory data analysis, exploratory factor analysis, regression, and multilevel Bayesian spatial data analysis to detect and describe phenomena. Abductive and retroductive reasoning will be applied to categorical principal component analysis, exploratory factor analysis, regression, coding of concepts and categories, constant comparative analysis, drawing of conceptual networks, and situational analysis to generate theoretical concepts. The Theory Construction Phase will include: 1) defining stratified levels; 2) analytic resolution; 3) abductive reasoning; 4) comparative analysis (triangulation); 5) retroduction; 6) postulate and proposition development; 7) comparison and assessment of theories; and 8) conceptual framework and model development. The strength of the critical realist methodology described is the extent to which this paradigm is able to support the epistemological, ontological, axiological, methodological and rhetorical positions of both quantitative and qualitative research in the field of social epidemiology. The extensive multilevel Bayesian studies, intensive qualitative studies, latent variable theory, abductive triangulation, and Inference to Best Explanation provide a strong foundation for Theory

  4. Higher-order triangular spectral element method with optimized cubature points for seismic wavefield modeling

    Science.gov (United States)

    Liu, Youshan; Teng, Jiwen; Xu, Tao; Badal, José

    2017-05-01

    The mass-lumped method avoids the cost of inverting the mass matrix and simultaneously maintains spatial accuracy by adopting additional interior integration points, known as cubature points. To date, such points are only known analytically in tensor domains, such as quadrilateral or hexahedral elements. Thus, the diagonal-mass-matrix spectral element method (SEM) in non-tensor domains always relies on numerically computed interpolation points or quadrature points. However, only the cubature points for degrees 1 to 6 are known, which is the reason that we have developed a p-norm-based optimization algorithm to obtain higher-order cubature points. In this way, we obtain and tabulate new cubature points with all positive integration weights for degrees 7 to 9. The dispersion analysis illustrates that the dispersion relation determined from the new optimized cubature points is comparable to that of the mass and stiffness matrices obtained by exact integration. Simultaneously, the Lebesgue constant for the new optimized cubature points indicates its surprisingly good interpolation properties. As a result, such points provide both good interpolation properties and integration accuracy. The Courant-Friedrichs-Lewy (CFL) numbers are tabulated for the conventional Fekete-based triangular spectral element (TSEM), the TSEM with exact integration, and the optimized cubature-based TSEM (OTSEM). A complementary study demonstrates the spectral convergence of the OTSEM. A numerical example conducted on a half-space model demonstrates that the OTSEM improves the accuracy by approximately one order of magnitude compared to the conventional Fekete-based TSEM. In particular, the accuracy of the 7th-order OTSEM is even higher than that of the 14th-order Fekete-based TSEM. Furthermore, the OTSEM produces a result that can compete in accuracy with the quadrilateral SEM (QSEM). The high accuracy of the OTSEM is also tested with a non-flat topography model. In terms of computational

  5. A Registration Method Based on Contour Point Cloud for 3D Whole-Body PET and CT Images

    Directory of Open Access Journals (Sweden)

    Zhiying Song

    2017-01-01

    Full Text Available The PET and CT fusion image, combining anatomical and functional information, has important clinical meaning. An effective registration of PET and CT images is the basis of image fusion. This paper presents a multithreaded registration method based on contour point clouds for 3D whole-body PET and CT images. Firstly, a geometric feature-based segmentation (GFS) method and a dynamic threshold denoising (DTD) method are proposed to preprocess the CT and PET images, respectively. Next, a new automated trunk slice extraction method is presented for extracting feature point clouds. Finally, the multithreaded Iterative Closest Point (ICP) algorithm is adopted to drive an affine transform. We compare our method with a multiresolution registration method based on Mattes Mutual Information on 13 pairs (246~286 slices per pair) of 3D whole-body PET and CT data. Experimental results demonstrate the registration effectiveness of our method, with lower negative normalization correlation (NC = -0.933) on feature images and smaller Euclidean distance error (ED = 2.826) on landmark points, outperforming the source data (NC = -0.496, ED = 25.847) and the compared method (NC = -0.614, ED = 16.085). Moreover, our method is about ten times faster than the compared one.
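
    A bare-bones rigid ICP loop (nearest neighbours plus a Kabsch rotation fit) illustrates the final registration step; note that the paper's version is multithreaded, runs on the feature point clouds produced by its GFS/DTD preprocessing, and drives an affine rather than rigid transform, so the sketch below is a simplified stand-in.

        import numpy as np
        from scipy.spatial import cKDTree

        def icp_rigid(src, dst, n_iter=30):
            # minimal rigid ICP: returns R, t aligning src to dst
            R, t = np.eye(3), np.zeros(3)
            tree = cKDTree(dst)
            for _ in range(n_iter):
                moved = src @ R.T + t
                _, idx = tree.query(moved)           # closest-point correspondences
                p, q = moved, dst[idx]
                pc, qc = p - p.mean(0), q - q.mean(0)
                U, _, Vt = np.linalg.svd(pc.T @ qc)  # Kabsch: optimal rotation
                d = np.sign(np.linalg.det(Vt.T @ U.T))
                R_step = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
                t_step = q.mean(0) - R_step @ p.mean(0)
                R, t = R_step @ R, R_step @ t + t_step
            return R, t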

  6. Linear time algorithm for finding the convex ropes between two vertices of a simple polygon without triangulation

    International Nuclear Information System (INIS)

    Phan Thanh An

    2008-06-01

    The convex rope problem, posed by Peshkin and Sanderson in IEEE J. Robotics Automat, 2 (1986) pp. 53-58, is to find the counterclockwise and clockwise convex ropes starting at the vertex a and ending at the vertex b of a simple polygon, where a is on the boundary of the convex hull of the polygon and b is visible from infinity. In this paper, we present a linear time algorithm for solving this problem without resorting to a linear-time triangulation algorithm and without resorting to a convex hull algorithm for the polygon. The counterclockwise (clockwise, respectively) convex rope consists of two polylines obtained by a basic incremental strategy, described in convex hull algorithms, applied to the polylines forming the polygon from a to b. (author)

  7. Factors influencing superimposition error of 3D cephalometric landmarks by plane orientation method using 4 reference points: 4 point superimposition error regression model.

    Science.gov (United States)

    Hwang, Jae Joon; Kim, Kee-Deog; Park, Hyok; Park, Chang Seo; Jeong, Ho-Gul

    2014-01-01

    Superimposition has been used as a method to evaluate the changes of orthodontic or orthopedic treatment in the dental field. With the introduction of cone beam CT (CBCT), evaluating 3-dimensional changes after treatment became possible by superimposition. 4-point plane orientation is one of the simplest ways to achieve superimposition of 3-dimensional images. To find factors influencing the superimposition error of cephalometric landmarks by the 4-point plane orientation method and to evaluate the reproducibility of cephalometric landmarks for analyzing superimposition error, 20 patients were analyzed who had normal skeletal and occlusal relationships and underwent CBCT for diagnosis of temporomandibular disorder. The nasion, sella turcica, basion and the midpoint between the left and right most posterior points of the lesser wing of the sphenoid bone were used to define a three-dimensional (3D) anatomical reference co-ordinate system. Another 15 reference cephalometric points were also determined three times in the same image. The reorientation error of each landmark could be explained substantially (23%) by a linear regression model consisting of 3 factors describing the position of each landmark with respect to the reference axes and the locating error. The 4-point plane orientation system may produce an amount of reorientation error that varies according to the perpendicular distance between the landmark and the x-axis; the reorientation error also increases as the locating error and the shift of the reference axes viewed from each landmark increase. Therefore, in order to reduce the reorientation error, the accuracy of all landmarks, including the reference points, is important. Construction of the regression model using reference points of greater precision is required for the clinical application of this model.

  8. The shooting method and multiple solutions of two/multi-point BVPs of second-order ODE

    Directory of Open Access Journals (Sweden)

    Man Kam Kwong

    2006-06-01

    Full Text Available Within the last decade, there has been growing interest in the study of multiple solutions of two- and multi-point boundary value problems of nonlinear ordinary differential equations as fixed points of a cone mapping. Undeniably many good results have emerged. The purpose of this paper is to point out that, in the special case of second-order equations, the shooting method can be an effective tool, sometimes yielding better results than those obtainable via fixed point techniques.
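
    A minimal shooting-method sketch for a second-order two-point BVP: integrate the initial value problem for a trial slope and root-find on the boundary mismatch. Bracketing different slope intervals is precisely what exposes multiple solutions; the example y'' = -|y|, y(0) = 0, y(4) = -2 below is a classic with two solutions (the brackets are chosen by inspection and are assumptions of this sketch).

        import numpy as np
        from scipy.integrate import solve_ivp
        from scipy.optimize import brentq

        def rhs(x, u):
            y, yp = u
            return [yp, -abs(y)]

        def boundary_mismatch(slope):
            # integrate the IVP with trial initial slope, compare y(4) with -2
            sol = solve_ivp(rhs, (0.0, 4.0), [0.0, slope], rtol=1e-8, atol=1e-10)
            return sol.y[0, -1] - (-2.0)

        # two sign-change brackets -> two distinct solutions of the same BVP
        for lo, hi in [(-1.0, -0.01), (1.0, 4.0)]:
            print("initial slope:", brentq(boundary_mismatch, lo, hi))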

  9. Dew Point Calibration System Using a Quartz Crystal Sensor with a Differential Frequency Method

    Directory of Open Access Journals (Sweden)

    Ningning Lin

    2016-11-01

    Full Text Available In this paper, the influence of temperature on quartz crystal microbalance (QCM) sensor response during dew point calibration is investigated. The aim is to present a compensation method to eliminate the impact of temperature on frequency acquisition. A new sensitive structure is proposed with double QCMs: one is kept in contact with the environment, whereas the other is not exposed to the atmosphere. There is a thermally conductive silicone pad between each crystal and a refrigeration device to maintain a uniform temperature condition. A differential frequency method is described in detail and is applied to calibrate the frequency characteristics of the QCM at a dew point of −3.75 °C. It is worth noting that the frequency changes of the two QCMs were approximately opposite when the temperature conditions were changed simultaneously. The results from continuous experiments show that the frequencies of the two QCMs at the moment the dew point is reached have strong consistency and high repeatability, leading to the conclusion that the sensitive structure can calibrate dew points with high reliability.

  10. A Robust Shape Reconstruction Method for Facial Feature Point Detection

    Directory of Open Access Journals (Sweden)

    Shuqiu Tan

    2017-01-01

    Full Text Available Facial feature point detection has received great research advances in recent years. Numerous methods have been developed and applied in practical face analysis systems. However, it is still a quite challenging task because of the large variability in expressions and gestures and the existence of occlusions in real-world photo shoots. In this paper, we present a robust sparse reconstruction method for face alignment problems. Instead of a direct regression between the feature space and the shape space, the concept of shape increment reconstruction is introduced. Moreover, a set of coupled overcomplete dictionaries, termed the shape increment dictionary and the local appearance dictionary, are learned in a regressive manner to select robust features and fit shape increments. Additionally, to make the learned model more generalized, we select the best matched parameter set through extensive validation tests. Experimental results on three public datasets demonstrate that the proposed method achieves better robustness than the state-of-the-art methods.

  11. Estimation Methods of the Point Spread Function Axial Position: A Comparative Computational Study

    Directory of Open Access Journals (Sweden)

    Javier Eduardo Diaz Zamboni

    2017-01-01

    Full Text Available The precise knowledge of the point spread function is central for any imaging system characterization. In fluorescence microscopy, point spread function (PSF) determination has become a common and obligatory task for each new experimental device, mainly due to its strong dependence on acquisition conditions. During the last decade, algorithms have been developed for the precise calculation of the PSF, which fit model parameters that describe image formation on the microscope to experimental data. In order to contribute to this subject, a comparative study of three parameter estimation methods is reported, namely: I-divergence minimization (MIDIV), maximum likelihood (ML) and non-linear least squares (LSQR). They were applied to the estimation of the point source position on the optical axis, using a physical model. The methods' performance was evaluated under different conditions and noise levels using synthetic images and considering success percentage, iteration number, computation time, accuracy and precision. The main results showed that the axial position estimation requires a high SNR to achieve an acceptable success level, and a higher one still to be close to the estimation error lower bound. ML achieved a higher success percentage at lower SNR compared to MIDIV and LSQR with an intrinsic noise source. Only the ML and MIDIV methods achieved the error lower bound, but only with data belonging to the optical axis and high SNR. Extrinsic noise sources worsened the success percentage, but no difference was found between noise sources for the same method for all methods studied.

  12. A New Iterative Method for Equilibrium Problems and Fixed Point Problems

    Directory of Open Access Journals (Sweden)

    Abdul Latif

    2013-01-01

    Full Text Available Introducing a new iterative method, we study the existence of a common element of the set of solutions of equilibrium problems for a family of monotone, Lipschitz-type continuous mappings and the sets of fixed points of two nonexpansive semigroups in a real Hilbert space. We establish strong convergence theorems of the new iterative method for the solution of the variational inequality problem, which is the optimality condition for the minimization problem. Our results improve and generalize the corresponding recent results of Anh (2012), Cianciaruso et al. (2010), and many others.

  13. A Meshfree Cell-based Smoothed Point Interpolation Method for Solid Mechanics Problems

    International Nuclear Information System (INIS)

    Zhang Guiyong; Liu Guirong

    2010-01-01

    In the framework of a weakened weak (W^2) formulation using a generalized gradient smoothing operation, this paper introduces a novel meshfree cell-based smoothed point interpolation method (CS-PIM) for solid mechanics problems. The W^2 formulation seeks solutions from a normed G space which includes both continuous and discontinuous functions and allows the use of many more types of methods to create shape functions for numerical methods. When PIM shape functions are used, the functions constructed are in general not continuous over the entire problem domain and hence are not compatible. Such an interpolation is not in a traditional H^1 space, but in a G^1 space. By introducing the generalized gradient smoothing operation properly, the requirement on the function is further weakened beyond the already weakened requirement for functions in an H^1 space, and a G^1 space can be viewed as a space of functions with a weakened weak (W^2) requirement on continuity. The cell-based smoothed point interpolation method (CS-PIM) is formulated based on the W^2 formulation, in which the displacement field is approximated using the PIM shape functions, which possess the Kronecker delta property facilitating the enforcement of essential boundary conditions [3]. The gradient (strain) field is constructed by the generalized gradient smoothing operation within the cell-based smoothing domains, which are exactly the triangular background cells. A W^2 formulation of the generalized smoothed Galerkin (GS-Galerkin) weak form is used to derive the discretized system equations. It was found that the CS-PIM possesses the following attractive properties: (1) it is very easy to implement and works well with the simplest linear triangular mesh without introducing additional degrees of freedom; (2) it is at least linearly conforming; (3) the method is temporally stable and works well for dynamic analysis; (4) it possesses a close-to-exact stiffness, which is much softer than the overly-stiff FEM model and

  14. Three-Dimensional TIN Algorithm for Digital Terrain Modeling

    Institute of Scientific and Technical Information of China (English)

    朱庆; 张叶廷; 李逢春

    2008-01-01

    The problem of taking an unorganized point cloud in 3D space and fitting a polyhedral surface to those points is both important and difficult. Aiming at the increasing applications of full three-dimensional digital terrain surface modeling, a new algorithm for the automatic generation of a three-dimensional triangulated irregular network from a point cloud is proposed. Based on a local topological consistency test, a combined algorithm of constrained 3D Delaunay triangulation and region-growing is extended to ensure topologically correct reconstruction. This paper also introduces an efficient neighboring triangle location method that makes full use of the surface normal information. Experimental results prove that this algorithm can efficiently obtain the most reasonable reconstructed mesh surface with arbitrary topology, wherein the automatically reconstructed surface has only small topological differences from the true surface. The algorithm has potential applications in virtual environments, computer vision, and so on.

  15. Phase-integral method allowing nearlying transition points

    CERN Document Server

    Fröman, Nanny

    1996-01-01

    The efficiency of the phase-integral method developed by the present authors has been shown both analytically and numerically in many publications. With the inclusion of supplementary quantities, closely related to new Stokes constants and obtained with the aid of comparison equation technique, important classes of problems in which transition points may approach each other become accessible to accurate analytical treatment. The exposition in this monograph is of a mathematical nature but has important physical applications, some examples of which are found in the adjoined papers. Thus, we would like to emphasize that, although we aim at mathematical rigor, our treatment is made primarily with physical needs in mind. To introduce the reader into the background of this book, we start by describing the phase-integral approximation of arbitrary order generated from an unspecified base function. This is done in Chapter 1, which is reprinted, after minor changes, from a review article. Chapter 2 is the re...

  16. Zero point and zero suffix methods with robust ranking for solving fully fuzzy transportation problems

    Science.gov (United States)

    Ngastiti, P. T. B.; Surarso, Bayu; Sutimin

    2018-05-01

    The transportation issue in distribution problems, such as moving commodities or goods from supply to demand, is to minimize the transportation costs. A fuzzy transportation problem is one in which the transport costs, supply and demand are fuzzy quantities. In a case study at CV. Bintang Anugerah Elektrik, a company engaged in the manufacture of gensets that has more than one distributor, we use the zero point and zero suffix methods to investigate the minimum transportation cost. In implementing both methods, we use robust ranking techniques for the defuzzification process. The study results show that the zero suffix method requires fewer iterations than the zero point method.
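
    For a triangular fuzzy number (a1, a2, a3) the robust (Yager) ranking integral R = int_0^1 0.5*(a_alpha^L + a_alpha^U) d(alpha) collapses to (a1 + 2*a2 + a3)/4, so defuzzifying a fuzzy cost table is a one-liner. A sketch under that assumption follows; the numbers are invented, not the case-study data, and the zero point / zero suffix steps themselves are not reproduced.

        import numpy as np

        def robust_rank(tfn):
            # Yager ranking of a triangular fuzzy number (a1, a2, a3)
            a1, a2, a3 = tfn
            return (a1 + 2.0 * a2 + a3) / 4.0

        fuzzy_costs = [[(1, 2, 3), (4, 5, 6)],
                       [(2, 4, 6), (1, 3, 5)]]
        crisp = np.array([[robust_rank(c) for c in row] for row in fuzzy_costs])
        print(crisp)   # crisp cost table ready for the zero point / zero suffix steps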

  17. A fast point-cloud computing method based on spatial symmetry of Fresnel field

    Science.gov (United States)

    Wang, Xiangxiang; Zhang, Kai; Shen, Chuan; Zhu, Wenliang; Wei, Sui

    2017-10-01

    Computer Generated Holograms (CGHs) pose a great challenge for real-time holographic video display systems, due to the high spatial-bandwidth product (SBP) they require. This paper is based on the point-cloud method and takes advantage of the propagation reversibility of Fresnel diffraction along the propagating direction and of the spatial symmetry of the fringe pattern of a point source, known as a Gabor zone plate, which can be used as a basis for fast calculation of the diffraction field in CGH. A fast Fresnel CGH method based on the novel look-up table (N-LUT) method is proposed: first, the principal fringe patterns (PFPs) at the virtual plane are pre-calculated by the acceleration algorithm and stored; second, the Fresnel diffraction fringe pattern at the dummy plane is obtained; finally, the field is propagated from the dummy plane to the hologram plane. Simulation experiments and optical experiments based on Liquid Crystal On Silicon (LCOS) are set up to demonstrate the validity of the proposed method. Under the premise of ensuring the quality of 3D reconstruction, the method proposed in the paper can be applied to shorten the computation time and improve computational efficiency.
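
    The core of the N-LUT idea is to precompute one principal fringe pattern (a Gabor zone plate) per depth and reuse it, shifted, for every object point at that depth, instead of evaluating the Fresnel kernel per point and per pixel. A toy numpy sketch follows; the wavelength, pitch, sizes and point positions are all invented, and the acceleration and plane-to-plane propagation steps of the paper are omitted.

        import numpy as np

        lam, z = 532e-9, 0.2              # wavelength (m), point depth (m)
        pitch, N = 8e-6, 512              # pixel pitch (m), hologram size (px)

        ax = (np.arange(2 * N) - N) * pitch                    # oversized grid for shifting
        X, Y = np.meshgrid(ax, ax)
        pfp = np.exp(1j * np.pi * (X**2 + Y**2) / (lam * z))   # principal fringe pattern

        def add_point(holo, ix, iy, amp=1.0):
            # accumulate one object point by cropping the shifted precomputed PFP
            holo += amp * pfp[N - iy:2 * N - iy, N - ix:2 * N - ix]

        holo = np.zeros((N, N), dtype=complex)
        for ix, iy in [(100, 200), (300, 150), (256, 256)]:    # points at depth z
            add_point(holo, ix, iy)
        fringe = np.real(holo)            # pattern that would be encoded for the LCOS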

  18. On the Convergence of the Iteration Sequence in Primal-Dual Interior-Point Methods

    National Research Council Canada - National Science Library

    Tapia, Richard A; Zhang, Yin; Ye, Yinyu

    1993-01-01

    Recently, numerous research efforts, most of them concerned with superlinear convergence of the duality gap sequence to zero in the Kojima-Mizuno-Yoshise primal-dual interior-point method for linear...

  19. Domain Discretization and Circle Packings

    DEFF Research Database (Denmark)

    Dias, Kealey

    A circle packing is a configuration of circles which are tangent with one another in a prescribed pattern determined by a combinatorial triangulation, where the configuration fills a planar domain or a two-dimensional surface. The vertices in the triangulation correspond to centers of circles...... to domain discretization problems such as triangulation and unstructured mesh generation techniques. We wish to ask ourselves the question: given a cloud of points in the plane (we restrict ourselves to planar domains), is it possible to construct a circle packing preserving the positions of the vertices...... and constrained meshes having predefined vertices as constraints. A standard method of two-dimensional mesh generation involves conformal mapping of the surface or domain to standardized shapes, such as a disk. Since circle packing is a new technique for constructing discrete conformal mappings, it is possible...

  20. A New Approximation Method for Solving Variational Inequalities and Fixed Points of Nonexpansive Mappings

    Directory of Open Access Journals (Sweden)

    Klin-eam Chakkrid

    2009-01-01

    Full Text Available Abstract A new approximation method for solving variational inequalities and fixed points of nonexpansive mappings is introduced and studied. We prove strong convergence theorem of the new iterative scheme to a common element of the set of fixed points of nonexpansive mapping and the set of solutions of the variational inequality for the inverse-strongly monotone mapping which solves some variational inequalities. Moreover, we apply our main result to obtain strong convergence to a common fixed point of nonexpansive mapping and strictly pseudocontractive mapping in a Hilbert space.

  1. An improved maximum power point tracking method for a photovoltaic system

    Science.gov (United States)

    Ouoba, David; Fakkar, Abderrahim; El Kouari, Youssef; Dkhichi, Fayrouz; Oukarfi, Benyounes

    2016-06-01

    In this paper, an improved auto-scaling variable step-size Maximum Power Point Tracking (MPPT) method for photovoltaic (PV) systems is proposed. To achieve simultaneously a fast dynamic response and stable steady-state power, a first improvement was made to the step-size scaling function of the duty cycle that controls the converter. An algorithm was then proposed to address the wrong decision that may be made at an abrupt change of irradiation. The proposed auto-scaling variable step-size approach was compared with various other approaches from the literature, such as the classical fixed step-size, variable step-size and a recent auto-scaling variable step-size MPPT approach. The simulation results obtained with MATLAB/SIMULINK are given and discussed for validation.

  2. The challenge of conceptual stretching in multi-method research

    OpenAIRE

    Ahram, Ariel

    2009-01-01

    Multi-method research (MMR) has gained enthusiastic support among political scientists in recent years. Much of the impetus for MMR has been based on the seemingly intuitive logic of convergent triangulation: two tests are better than one, since a hypothesis that has survived a series of tests with different methods would be regarded as more valid than a hypothesis tested with only a single method. In their seminal Designing Social Inquiry, King, Keohane, and Verba (1994) argue that combining qu...

  3. Behavioral Changes Based on a Course in Agroecology: A Mixed Methods Study

    Science.gov (United States)

    Harms, Kristyn; King, James; Francis, Charles

    2009-01-01

    This study evaluated and described student perceptions of a course in agroecology to determine if participants experienced changed perceptions and behaviors resulting from the Agroecosystems Analysis course. A triangulation validating quantitative data mixed methods approach included a written survey comprising both quantitative and open-ended…

  4. Higher-order triangular spectral element method with optimized cubature points for seismic wavefield modeling

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Youshan, E-mail: ysliu@mail.iggcas.ac.cn [State Key Laboratory of Lithospheric Evolution, Institute of Geology and Geophysics, Chinese Academy of Sciences, Beijing, 100029 (China); Teng, Jiwen, E-mail: jwteng@mail.iggcas.ac.cn [State Key Laboratory of Lithospheric Evolution, Institute of Geology and Geophysics, Chinese Academy of Sciences, Beijing, 100029 (China); Xu, Tao, E-mail: xutao@mail.iggcas.ac.cn [State Key Laboratory of Lithospheric Evolution, Institute of Geology and Geophysics, Chinese Academy of Sciences, Beijing, 100029 (China); CAS Center for Excellence in Tibetan Plateau Earth Sciences, Beijing, 100101 (China); Badal, José, E-mail: badal@unizar.es [Physics of the Earth, Sciences B, University of Zaragoza, Pedro Cerbuna 12, 50009 Zaragoza (Spain)

    2017-05-01

    The mass-lumped method avoids the cost of inverting the mass matrix and simultaneously maintains spatial accuracy by adopting additional interior integration points, known as cubature points. To date, such points are only known analytically in tensor domains, such as quadrilateral or hexahedral elements. Thus, the diagonal-mass-matrix spectral element method (SEM) in non-tensor domains always relies on numerically computed interpolation points or quadrature points. However, only the cubature points for degrees 1 to 6 are known, which is the reason that we have developed a p-norm-based optimization algorithm to obtain higher-order cubature points. In this way, we obtain and tabulate new cubature points with all positive integration weights for degrees 7 to 9. The dispersion analysis illustrates that the dispersion relation determined from the new optimized cubature points is comparable to that of the mass and stiffness matrices obtained by exact integration. Simultaneously, the Lebesgue constant for the new optimized cubature points indicates its surprisingly good interpolation properties. As a result, such points provide both good interpolation properties and integration accuracy. The Courant–Friedrichs–Lewy (CFL) numbers are tabulated for the conventional Fekete-based triangular spectral element (TSEM), the TSEM with exact integration, and the optimized cubature-based TSEM (OTSEM). A complementary study demonstrates the spectral convergence of the OTSEM. A numerical example conducted on a half-space model demonstrates that the OTSEM improves the accuracy by approximately one order of magnitude compared to the conventional Fekete-based TSEM. In particular, the accuracy of the 7th-order OTSEM is even higher than that of the 14th-order Fekete-based TSEM. Furthermore, the OTSEM produces a result that can compete in accuracy with the quadrilateral SEM (QSEM). The high accuracy of the OTSEM is also tested with a non-flat topography model. In terms of computational

  5. Higher-order triangular spectral element method with optimized cubature points for seismic wavefield modeling

    International Nuclear Information System (INIS)

    Liu, Youshan; Teng, Jiwen; Xu, Tao; Badal, José

    2017-01-01

    The mass-lumped method avoids the cost of inverting the mass matrix and simultaneously maintains spatial accuracy by adopting additional interior integration points, known as cubature points. To date, such points are only known analytically in tensor domains, such as quadrilateral or hexahedral elements. Thus, the diagonal-mass-matrix spectral element method (SEM) in non-tensor domains always relies on numerically computed interpolation points or quadrature points. However, only the cubature points for degrees 1 to 6 are known, which is the reason that we have developed a p-norm-based optimization algorithm to obtain higher-order cubature points. In this way, we obtain and tabulate new cubature points with all positive integration weights for degrees 7 to 9. The dispersion analysis illustrates that the dispersion relation determined from the new optimized cubature points is comparable to that of the mass and stiffness matrices obtained by exact integration. Simultaneously, the Lebesgue constant for the new optimized cubature points indicates its surprisingly good interpolation properties. As a result, such points provide both good interpolation properties and integration accuracy. The Courant–Friedrichs–Lewy (CFL) numbers are tabulated for the conventional Fekete-based triangular spectral element (TSEM), the TSEM with exact integration, and the optimized cubature-based TSEM (OTSEM). A complementary study demonstrates the spectral convergence of the OTSEM. A numerical example conducted on a half-space model demonstrates that the OTSEM improves the accuracy by approximately one order of magnitude compared to the conventional Fekete-based TSEM. In particular, the accuracy of the 7th-order OTSEM is even higher than that of the 14th-order Fekete-based TSEM. Furthermore, the OTSEM produces a result that can compete in accuracy with the quadrilateral SEM (QSEM). The high accuracy of the OTSEM is also tested with a non-flat topography model. In terms of computational

  6. Analysis of tree stand horizontal structure using random point field methods

    Directory of Open Access Journals (Sweden)

    O. P. Sekretenko

    2015-06-01

    Full Text Available This paper uses a model-based approach to analyze the horizontal structure of forest stands. The main types of random point field models and the statistical procedures that can be used to analyze the spatial patterns of trees in uneven- and even-aged stands are described. We show how modern methods of spatial statistics can be used to address one of the objectives of forestry – to clarify the laws of natural thinning of forest stands and the corresponding changes in their spatial structure over time. Studying natural forest thinning, we describe the consecutive stages of modeling: selection of an appropriate parametric model, parameter estimation and generation of point patterns in accordance with the selected model, selection of statistical functions to describe the horizontal structure of forest stands, and testing of statistical hypotheses. We show the capabilities of a specialized software package, spatstat, which is designed to meet the challenges of spatial statistics and provides software support for modern methods of spatial data analysis. We show that a model of stand thinning that does not consider inter-tree interaction can project the size distribution of the trees properly, but the spatial pattern of the modeled stand is not quite consistent with observed data. Using data from three even-aged pine stands of 25, 55 and 90 years of age, we demonstrate that spatial point process models are useful for combining measurements from forest stands of different ages to study natural stand thinning.
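
    The spatstat package mentioned above is R software. As a rough Python analogue of the simplest kind of horizontal-structure statistic (the stand data and plot size below are illustrative assumptions, not the study's), the Clark-Evans nearest-neighbour index contrasts the observed pattern with complete spatial randomness:

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def clark_evans_index(points, area):
        """Clark-Evans aggregation index R for a 2D point pattern.

        R < 1 suggests clustering, R around 1 complete spatial randomness,
        R > 1 regularity (e.g., inhibition between neighbouring trees).
        Edge effects are ignored in this minimal sketch.
        """
        tree = cKDTree(points)
        dists, _ = tree.query(points, k=2)   # k=2: the first hit is the point itself
        observed = dists[:, 1].mean()
        density = len(points) / area
        expected = 0.5 / np.sqrt(density)    # mean NN distance under randomness
        return observed / expected

    # Toy stand: 200 stems scattered over a 100 m x 100 m plot.
    rng = np.random.default_rng(0)
    stems = rng.uniform(0.0, 100.0, size=(200, 2))
    print(clark_evans_index(stems, area=100.0 * 100.0))
    ```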

  7. Reliability of an experimental method to analyse the impact point on a golf ball during putting.

    Science.gov (United States)

    Richardson, Ashley K; Mitchell, Andrew C S; Hughes, Gerwyn

    2015-06-01

    This study aimed to examine the reliability of an experimental method for identifying the location of the impact point on a golf ball during putting. Forty trials were completed using a mechanical putting robot set to reproduce a putt of 3.2 m, with four different putter-ball combinations. After locating the centre of the dimple pattern (centroid), the following variables were tested: the distance of the impact point from the centroid, the angle of the impact point from the centroid, and the distance of the impact point from the centroid derived from the X, Y coordinates. Good to excellent reliability was demonstrated for all impact variables, reflected in very strong relative (ICC = 0.98-1.00) and absolute reliability (SEM% = 0.9-4.3%). The highest SEM% observed was 7%, for the angle of the impact point from the centroid. In conclusion, the experimental method was shown to be reliable at locating the centroid of a golf ball, therefore allowing identification of the point of impact with the putter head, and is suitable for use in subsequent studies.

  8. Fixed point theorems in locally convex spaces—the Schauder mapping method

    Directory of Open Access Journals (Sweden)

    S. Cobzaş

    2006-03-01

    Full Text Available In the appendix to the book by F. F. Bonsall, Lectures on Some Fixed Point Theorems of Functional Analysis (Tata Institute, Bombay, 1962), a proof by Singbal of the Schauder-Tychonoff fixed point theorem, based on a locally convex variant of the Schauder mapping method, is included. The aim of this note is to show that this method can be adapted to yield a proof of the Kakutani fixed point theorem in the locally convex case. For the sake of completeness we also include the proof of the Schauder-Tychonoff theorem based on this method. As applications, one proves a theorem of von Neumann and a minimax result in game theory.

  9. Basin boundaries and focal points in a map coming from Bairstow's method.

    Science.gov (United States)

    Gardini, Laura; Bischi, Gian-Italo; Fournier-Prunaret, Daniele

    1999-06-01

    This paper is devoted to the study of the global dynamical properties of a two-dimensional noninvertible map, with a denominator which can vanish, obtained by applying Bairstow's method to a cubic polynomial. It is shown that the complicated structure of the basins of attraction of the fixed points is due to the existence of singularities such as sets of nondefinition, focal points, and prefocal curves, which are specific to maps with a vanishing denominator and have recently been introduced in the literature. Some global bifurcations that change the qualitative structure of the basin boundaries are explained in terms of contacts among these singularities. The techniques used in this paper reveal some new dynamic behaviors and bifurcations which are peculiar to maps with a vanishing denominator; hence they can be applied to the analysis of other classes of maps arising from iterative algorithms (based on Newton's method, or others). (c) 1999 American Institute of Physics.
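
    The abstract does not reproduce the map itself, but the standard Bairstow construction behind it is a Newton iteration on the coefficients (u, v) of a trial quadratic factor x^2 + u*x + v. A minimal sketch, with a finite-difference Jacobian in place of the classical recurrences, and an illustrative cubic and starting point:

    ```python
    import numpy as np

    def remainder(p, u, v):
        """Coefficients (r1, r0) of p(x) mod (x^2 + u*x + v)."""
        _, r = np.polydiv(p, [1.0, u, v])
        r = np.atleast_1d(r)
        return np.concatenate([np.zeros(2 - len(r)), r])   # pad to length 2

    def bairstow_step(p, u, v, h=1e-7):
        """One application of the two-dimensional Bairstow map (u, v) -> (u', v')."""
        r = remainder(p, u, v)
        # Finite-difference Jacobian of the remainder with respect to (u, v).
        J = np.column_stack([(remainder(p, u + h, v) - r) / h,
                             (remainder(p, u, v + h) - r) / h])
        du, dv = np.linalg.solve(J, -r)
        return u + du, v + dv

    # Cubic x^3 - 6x^2 + 11x - 6 = (x - 1)(x - 2)(x - 3).
    p = np.array([1.0, -6.0, 11.0, -6.0])
    u, v = -4.5, 5.5            # start near the factor x^2 - 5x + 6
    for _ in range(20):
        u, v = bairstow_step(p, u, v)
    print(u, v)                 # basin-dependent; here close to (-5, 6)
    ```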

  10. A MODELING METHOD OF FLUTTERING LEAVES BASED ON POINT CLOUD

    OpenAIRE

    J. Tang; Y. Wang; Y. Zhao; Y. Zhao; W. Hao; X. Ning; K. Lv; Z. Shi; M. Zhao

    2017-01-01

    Leaves falling gently or fluttering are a common phenomenon in nature scenes. The authenticity of falling leaves plays an important part in the dynamic modeling of natural scenes, and the falling-leaves model has wide applications in the fields of animation and virtual reality. We propose a novel modeling method of fluttering leaves based on point cloud in this paper. According to the shape, the weight of the leaves and the wind speed, three basic trajectories of leaves falling are defined, which ar...

  11. Validation of non-rigid point-set registration methods using a porcine bladder pelvic phantom

    Science.gov (United States)

    Zakariaee, Roja; Hamarneh, Ghassan; Brown, Colin J.; Spadinger, Ingrid

    2016-01-01

    The problem of accurate dose accumulation in fractionated radiotherapy treatment for highly deformable organs, such as the bladder, has garnered increasing interest over the past few years. However, more research is required in order to find a robust and efficient solution and to increase the accuracy over current methods. The purpose of this study was to evaluate the feasibility and accuracy of utilizing non-rigid (affine or deformable) point-set registration in accumulating dose in bladders of different sizes and shapes. A pelvic phantom was built to house an ex vivo porcine bladder with fiducial landmarks adhered onto its surface. Four different volume fillings of the bladder were used (90, 180, 360 and 480 cc). The performance of MATLAB implementations of five different methods was compared in aligning the bladder contour point-sets. The approaches evaluated were coherent point drift (CPD), Gaussian mixture model, shape context, thin-plate spline robust point matching (TPS-RPM) and finite iterative closest point (ICP-finite). The evaluation metrics included registration runtime, target registration error (TRE), root-mean-square error (RMS) and Hausdorff distance (HD). The reference (source) dataset was alternated through all four point-sets, in order to study the effect of reference volume on the registration outcomes. While all deformable algorithms provided reasonable registration results, CPD provided the best TRE values (6.4 mm), and TPS-RPM yielded the best mean RMS and HD values (1.4 and 6.8 mm, respectively). ICP-finite was the fastest technique and TPS-RPM, the slowest.
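
    Of the five approaches, the iterative-closest-point family is the easiest to sketch. Below is a generic rigid point-to-point ICP in Python (a simplification: the paper's ICP-finite is a MATLAB implementation and the other methods are non-rigid), using the SVD-based Kabsch solve for the per-iteration alignment:

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def best_rigid_transform(src, dst):
        """Least-squares rotation R and translation t mapping src onto dst (Kabsch)."""
        sc, dc = src.mean(axis=0), dst.mean(axis=0)
        H = (src - sc).T @ (dst - dc)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:       # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        return R, dc - R @ sc

    def icp(source, target, iters=50):
        """Minimal point-to-point ICP; returns the aligned copy of source."""
        tree = cKDTree(target)
        moved = source.copy()
        for _ in range(iters):
            _, idx = tree.query(moved)             # nearest-neighbour matches
            R, t = best_rigid_transform(moved, target[idx])
            moved = moved @ R.T + t
        return moved
    ```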

  12. Validation of non-rigid point-set registration methods using a porcine bladder pelvic phantom

    International Nuclear Information System (INIS)

    Zakariaee, Roja; Hamarneh, Ghassan; Brown, Colin J; Spadinger, Ingrid

    2016-01-01

    The problem of accurate dose accumulation in fractionated radiotherapy treatment for highly deformable organs, such as the bladder, has garnered increasing interest over the past few years. However, more research is required in order to find a robust and efficient solution and to increase the accuracy over current methods. The purpose of this study was to evaluate the feasibility and accuracy of utilizing non-rigid (affine or deformable) point-set registration in accumulating dose in bladders of different sizes and shapes. A pelvic phantom was built to house an ex vivo porcine bladder with fiducial landmarks adhered onto its surface. Four different volume fillings of the bladder were used (90, 180, 360 and 480 cc). The performance of MATLAB implementations of five different methods was compared in aligning the bladder contour point-sets. The approaches evaluated were coherent point drift (CPD), Gaussian mixture model, shape context, thin-plate spline robust point matching (TPS-RPM) and finite iterative closest point (ICP-finite). The evaluation metrics included registration runtime, target registration error (TRE), root-mean-square error (RMS) and Hausdorff distance (HD). The reference (source) dataset was alternated through all four point-sets, in order to study the effect of reference volume on the registration outcomes. While all deformable algorithms provided reasonable registration results, CPD provided the best TRE values (6.4 mm), and TPS-RPM yielded the best mean RMS and HD values (1.4 and 6.8 mm, respectively). ICP-finite was the fastest technique and TPS-RPM, the slowest. (paper)

  13. Generic primal-dual interior point methods based on a new kernel function

    NARCIS (Netherlands)

    EL Ghami, M.; Roos, C.

    2008-01-01

    In this paper we present a generic primal-dual interior-point method (IPM) for linear optimization in which the search direction depends on a univariate kernel function that is also used as a proximity measure in the analysis of the algorithm. The proposed kernel function does not satisfy all the

  14. Methods for solving the stochastic point reactor kinetic equations

    International Nuclear Information System (INIS)

    Quabili, E.R.; Karasulu, M.

    1979-01-01

    Two new methods are presented for analyzing the statistical properties of the nonlinear outputs of a point reactor subject to stochastic non-white reactivity inputs: Bourret's approximation and logarithmic linearization. The results have been compared with the exact results previously obtained in the case of Gaussian white reactivity input. It was found that when the reactivity noise has a short correlation time, Bourret's approximation should be recommended because it yields results superior to those yielded by logarithmic linearization. When the correlation time is long, Bourret's approximation is not valid, but in that case, if one can assume the reactivity noise to be Gaussian, one may use the logarithmic linearization. (author)

  15. Modular correction method of bending elastic modulus based on sliding behavior of contact point

    International Nuclear Information System (INIS)

    Ma, Zhichao; Zhao, Hongwei; Zhang, Qixun; Liu, Changyi

    2015-01-01

    During the three-point bending test, sliding of the contact point between the specimen and the supports was observed; this sliding was verified to affect the measurements of both deflection and span length, which directly affect the calculation of the bending elastic modulus. Based on the Hertz formula for the elastic contact deformation and a theoretical calculation of the sliding behavior of the contact point, a theoretical model that precisely describes the deflection and span length as functions of the bending load was established. Moreover, a modular correction method for the bending elastic modulus was proposed. Via comparison between the corrected elastic moduli of three materials (H63 copper–zinc alloy, AZ31B magnesium alloy and 2026 aluminum alloy) and the standard moduli obtained from standard uniaxial tensile tests, the universal feasibility of the proposed correction method was verified. Also, the ratio of corrected to raw elastic modulus presented a monotonically decreasing tendency as the raw elastic modulus of the material increased. (technical note)

  16. A simple method for determining the critical point of the soil water retention curve

    DEFF Research Database (Denmark)

    Chen, Chong; Hu, Kelin; Ren, Tusheng

    2017-01-01

    The transition point between capillary water and adsorbed water, which is the critical point Pc [defined by the critical matric potential (ψc) and the critical water content (θc)] of the soil water retention curve (SWRC), demarcates the energy and water content region where flow is dominated… a fixed tangent line method was developed to estimate Pc as an alternative to the commonly used flexible tangent line method. The relationships between Pc and particle-size distribution and specific surface area (SSA) were analyzed. For 27 soils with various textures, the mean RMSE of water content from… the fixed tangent line method was 0.007 g g⁻¹, which was slightly better than that of the flexible tangent line method. With increasing clay content or SSA, ψc was more negative initially but became less negative at clay contents above ∼30%. Increasing silt content resulted in more negative ψc values…

  17. Distance-based microfluidic quantitative detection methods for point-of-care testing.

    Science.gov (United States)

    Tian, Tian; Li, Jiuxing; Song, Yanling; Zhou, Leiji; Zhu, Zhi; Yang, Chaoyong James

    2016-04-07

    Equipment-free devices with quantitative readout are of great significance to point-of-care testing (POCT), which provides real-time readout to users and is especially important in low-resource settings. Among various equipment-free approaches, distance-based visual quantitative detection methods rely on reading the visual signal length for corresponding target concentrations, thus eliminating the need for sophisticated instruments. The distance-based methods are low-cost, user-friendly and can be integrated into portable analytical devices. Moreover, such methods enable quantitative detection of various targets by the naked eye. In this review, we first introduce the concept and history of distance-based visual quantitative detection methods. Then, we summarize the main methods for translation of molecular signals to distance-based readout and discuss different microfluidic platforms (glass, PDMS, paper and thread) in terms of applications in biomedical diagnostics, food safety monitoring, and environmental analysis. Finally, the potential and future perspectives are discussed.

  18. Quadtree of TIN: a new algorithm of dynamic LOD

    Science.gov (United States)

    Zhang, Junfeng; Fei, Lifan; Chen, Zhen

    2009-10-01

    Currently, real-time visualization of large-scale digital elevation models mainly employs either the regular GRID structure based on a quadtree or triangle-simplification methods based on the triangulated irregular network (TIN). Compared with GRID, TIN is a refined means of expressing the terrain surface in computer science. However, the data structure of the TIN model is complex, and it is difficult to realize view-dependent level-of-detail (LOD) representation quickly. GRID is a simple way to realize the LOD of terrain, but it produces a higher triangle count. A new algorithm, which takes full advantage of the merits of the two methods, is presented in this paper. This algorithm combines TIN with a quadtree structure to realize view-dependent LOD control over irregular sampling point sets, and it preserves detail according to the distance from the viewpoint and the geometric error of the terrain. Experiments indicate that this approach can generate an efficient quadtree triangulation hierarchy over any irregular sampling point set and achieve dynamic, visual multi-resolution performance for large-scale terrain in real time.

  19. Modelo digital do terreno através de diferentes interpolações do programa Surfer 12 | Digital terrain model through different interpolations in the Surfer 12 software

    Directory of Open Access Journals (Sweden)

    José Machado

    2016-04-01

    Generating a digital terrain model (DTM) requires the interpolation of the measured points. The use of DTMs, 3D surfaces and contours in fast computer programs can raise problems, such as the choice of the interpolation method. This work analyses the interpolation methods applied to surveyed points of an irregular geometric figure in the Surfer 12 program. The 12 available interpolators (Data Metrics, Inverse Distance, Kriging, Local Polynomial, Minimum Curvature, Modified Shepard's Method, Moving Average, Natural Neighbor, Nearest Neighbor, Polynomial Regression, Radial Function and Triangulation with Linear Interpolation) were used and the generated topographic maps were analysed. The graphical representation of the relief was generated via the DTM. Ratings of excellent, great, good, average, regular and bad were assigned to the relief representations and discussed with respect to the reference geometric image. Data Metrics, Polynomial Regression, Moving Average and Local Polynomial gave bad representations; Moving Average and Modified Shepard's Method, regular; Nearest Neighbor, average; Inverse Distance, good; Kriging and Radial Function, great; and Triangulation with Linear Interpolation and Natural Neighbor, excellent.
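
    Surfer's twelve interpolators are proprietary, but some have common open analogues. A small SciPy sketch (synthetic survey points, not the paper's data) compares nearest neighbour, triangulation with linear interpolation and a smoother cubic fit on the same grid:

    ```python
    import numpy as np
    from scipy.interpolate import griddata

    rng = np.random.default_rng(1)
    pts = rng.uniform(0, 100, size=(150, 2))           # surveyed (x, y) positions
    z = np.sin(pts[:, 0] / 15.0) + 0.01 * pts[:, 1]    # synthetic elevations

    # Regular grid on which the DTM is evaluated.
    gx, gy = np.meshgrid(np.linspace(0, 100, 201), np.linspace(0, 100, 201))

    # 'linear' interpolates over the Delaunay triangulation of the points;
    # cells outside the convex hull come back as NaN for linear/cubic.
    dtm = {m: griddata(pts, z, (gx, gy), method=m)
           for m in ("nearest", "linear", "cubic")}
    ```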

  20. a Modeling Method of Fluttering Leaves Based on Point Cloud

    Science.gov (United States)

    Tang, J.; Wang, Y.; Zhao, Y.; Hao, W.; Ning, X.; Lv, K.; Shi, Z.; Zhao, M.

    2017-09-01

    Leaves falling gently or fluttering are a common phenomenon in nature scenes. The authenticity of falling leaves plays an important part in the dynamic modeling of natural scenes, and the falling-leaves model has wide applications in the fields of animation and virtual reality. We propose a novel modeling method of fluttering leaves based on point cloud in this paper. According to the shape, the weight of the leaves and the wind speed, three basic trajectories of leaves falling are defined: the rotation falling, the roll falling and the screw roll falling. At the same time, a parallel algorithm based on OpenMP is implemented to satisfy real-time needs in practical applications. Experimental results demonstrate that the proposed method is amenable to the incorporation of a variety of desirable effects.
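
    The abstract names the three trajectory families but gives no equations, so the parametric curves below are purely illustrative assumptions of what rotation, roll and screw-roll descents could look like:

    ```python
    import numpy as np

    t = np.linspace(0.0, 4.0, 400)            # seconds
    descent = 1.2                             # assumed mean fall speed, m/s

    # Rotation falling: pendulum-like lateral swing while descending.
    rotation = np.stack([0.3 * np.sin(4 * t), np.zeros_like(t), -descent * t], axis=1)

    # Roll falling: steady sideways drift with a tumbling wobble.
    roll = np.stack([0.15 * t + 0.1 * np.sin(6 * t), np.zeros_like(t), -descent * t], axis=1)

    # Screw roll falling: helical descent.
    screw = np.stack([0.25 * np.cos(5 * t), 0.25 * np.sin(5 * t), -descent * t], axis=1)
    # Each array holds (x, y, z) leaf positions over time, one row per sample.
    ```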

  1. Inversion of Gravity Anomalies Using Primal-Dual Interior Point Methods

    Directory of Open Access Journals (Sweden)

    Aaron A. Velasco

    2016-06-01

    Full Text Available Structural inversion of gravity datasets based on the use of density anomalies to derive robust images of the subsurface (delineating lithologies and their boundaries) constitutes a fundamental non-invasive tool for geological exploration. The use of experimental techniques in geophysics to estimate and interpret differences in the substructure based on its density properties has proven efficient; however, the inherent non-uniqueness associated with most geophysical datasets makes this the ideal scenario for the use of recently developed robust constrained optimization techniques. We present a constrained optimization approach for a least squares inversion problem aimed to characterize 2-dimensional Earth density structure models based on Bouguer gravity anomalies. The proposed formulation is solved with a Primal-Dual Interior-Point method including equality and inequality physical and structural constraints. We validate our results using synthetic density crustal structure models with varying complexity and illustrate the behavior of the algorithm using different initial density structure models and increasing noise levels in the observations. Based on these implementations, we conclude that the algorithm using Primal-Dual Interior-Point methods is robust, and its results always honor the geophysical constraints. Some of the advantages of using this approach for structural inversion of gravity data are the incorporation of a priori information related to the model parameters (coming from actual physical properties of the subsurface) and the reduction of the solution space contingent on these boundary conditions.

  2. Improved Full-Newton Step O(nL) Infeasible Interior-Point Method for Linear Optimization

    NARCIS (Netherlands)

    Gu, G.; Mansouri, H.; Zangiabadi, M.; Bai, Y.Q.; Roos, C.

    2009-01-01

    We present several improvements of the full-Newton step infeasible interior-point method for linear optimization introduced by Roos (SIAM J. Optim. 16(4):1110–1136, 2006). Each main step of the method consists of a feasibility step and several centering steps. We use a more natural feasibility step,

  3. Numerical methods for polyline-to-point-cloud registration with applications to patient-specific stent reconstruction.

    Science.gov (United States)

    Lin, Claire Yilin; Veneziani, Alessandro; Ruthotto, Lars

    2018-03-01

    We present novel numerical methods for polyline-to-point-cloud registration and their application to patient-specific modeling of deployed coronary artery stents from image data. Patient-specific coronary stent reconstruction is an important challenge in computational hemodynamics and relevant to the design and improvement of the prostheses. It is an invaluable tool in large-scale clinical trials that computationally investigate the effect of new generations of stents on hemodynamics and eventually tissue remodeling. Given a point cloud of strut positions, which can be extracted from images, our stent reconstruction method aims at finding a geometrical transformation that aligns a model of the undeployed stent to the point cloud. Mathematically, we describe the undeployed stent as a polyline, which is a piecewise linear object defined by its vertices and edges. We formulate the nonlinear registration as an optimization problem whose objective function consists of a similarity measure, quantifying the distance between the polyline and the point cloud, and a regularization functional, penalizing undesired transformations. Using projections of points onto the polyline structure, we derive novel distance measures. Our formulation supports most commonly used transformation models, including very flexible nonlinear deformations. We also propose two regularization approaches ensuring the smoothness of the estimated nonlinear transformation. We demonstrate the potential of our methods using an academic 2D example and a real-life 3D bioabsorbable stent reconstruction problem. Our results show that the registration problem can be solved to sufficient accuracy within seconds using only a few Gauss-Newton iterations. Copyright © 2017 John Wiley & Sons, Ltd.
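
    The distance measures are built from projections of cloud points onto the polyline. A minimal sketch of that building block (it assumes consecutive vertices are distinct; the paper's full similarity measure and Gauss-Newton solver are richer):

    ```python
    import numpy as np

    def point_to_polyline_distances(points, vertices):
        """Distance from each point to the polyline through consecutive vertices.

        Each point is projected onto every segment, with the projection
        parameter clamped to [0, 1], and the smallest distance is kept.
        """
        a, b = vertices[:-1], vertices[1:]       # segment endpoints, shape (S, dim)
        ab = b - a
        denom = (ab ** 2).sum(axis=1)            # squared segment lengths (> 0)
        best = np.empty(len(points))
        for i, p in enumerate(points):
            t = np.clip(((p - a) * ab).sum(axis=1) / denom, 0.0, 1.0)
            proj = a + t[:, None] * ab
            best[i] = np.sqrt(((p - proj) ** 2).sum(axis=1).min())
        return best
    ```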

  4. Nonparametric Change Point Diagnosis Method of Concrete Dam Crack Behavior Abnormality

    OpenAIRE

    Li, Zhanchao; Gu, Chongshi; Wu, Zhongru

    2013-01-01

    The diagnosis of concrete dam crack behavior abnormality has always been a hot spot and a difficulty in the safety monitoring of hydraulic structures. Based on the performance of concrete dam crack behavior abnormality in parametric and nonparametric statistical models, the internal relation between concrete dam crack behavior abnormality and statistical change point theory is deeply analyzed from the model structure instability of the parametric statistical model ...

  5. Curvature computation in volume-of-fluid method based on point-cloud sampling

    Science.gov (United States)

    Kassar, Bruno B. M.; Carneiro, João N. E.; Nieckele, Angela O.

    2018-01-01

    This work proposes a novel approach to computing interface curvature in multiphase flow simulations based on the Volume of Fluid (VOF) method. It is well documented in the literature that curvature and normal vector computation in VOF may lack accuracy, mainly due to abrupt changes in the volume fraction field across the interfaces. This may cause deterioration of the interface tension force estimates, often resulting in inaccurate results for interface-tension-dominated flows. Many techniques have been presented over recent years to enhance the accuracy of normal vector and curvature estimates, including height functions, parabolic fitting of the volume fraction, reconstructed distance functions, coupling the Level Set method with VOF, and convolving the volume fraction field with smoothing kernels, among others. We propose a novel technique based on a representation of the interface by a cloud of points. The curvatures and the interface normal vectors are computed geometrically at each point of the cloud and projected onto the Eulerian grid in a Front-Tracking manner. Results are compared to benchmark data, and a significant reduction in spurious currents as well as an improvement in the pressure jump are observed. The method was developed in the open source suite OpenFOAM® by extending its standard VOF implementation, the interFoam solver.

  6. A Hybrid Maximum Power Point Search Method Using Temperature Measurements in Partial Shading Conditions

    Directory of Open Access Journals (Sweden)

    Mroczka Janusz

    2014-12-01

    Full Text Available Photovoltaic panels have non-linear current-voltage characteristics and produce maximum power at only one point, called the maximum power point. In the case of uniform illumination, a single solar panel shows only one power maximum, which is also the global maximum power point. In the case of an irregularly illuminated photovoltaic panel, many local maxima can be observed on the power-voltage curve, and only one of them is the global maximum. The proposed algorithm detects whether a solar panel is under uniform insolation conditions. An appropriate strategy for tracking the maximum power point is then chosen by a decision algorithm. The proposed method is simulated in an environment created by the authors, which allows photovoltaic panels to be simulated under real conditions of lighting, temperature and shading.

  7. Change-Point Detection Method for Clinical Decision Support System Rule Monitoring.

    Science.gov (United States)

    Liu, Siqi; Wright, Adam; Hauskrecht, Milos

    2017-06-01

    A clinical decision support system (CDSS) and its components can malfunction for various reasons. Monitoring the system and detecting its malfunctions can help one avoid potential mistakes and their associated costs. In this paper, we investigate the problem of detecting changes in CDSS operation, in particular its monitoring and alerting subsystem, by monitoring its rule firing counts. The detection should be performed online; that is, whenever a new datum arrives, we want a score indicating how likely it is that there is a change in the system. We develop a new method based on Seasonal-Trend decomposition and likelihood ratio statistics to detect the changes. Experiments on real and simulated data show that our method has a lower detection delay than existing change-point detection methods.
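
    As a rough sketch of the idea, not the authors' exact statistic: detrend and deseasonalize the daily rule-firing counts with Seasonal-Trend decomposition, then score a candidate mean shift in the residuals with a Gaussian likelihood ratio (the series, period and scan margins below are illustrative):

    ```python
    import numpy as np
    from statsmodels.tsa.seasonal import STL

    def change_score(counts, period=7):
        """Score the evidence for a mean shift in a series of rule-firing counts."""
        resid = STL(counts, period=period).fit().resid
        n = len(resid)
        var0 = resid.var()                      # one-mean model of the residuals
        best = 0.0
        for k in range(period, n - period):     # leave margins at both ends
            left, right = resid[:k], resid[k:]
            var1 = (len(left) * left.var() + len(right) * right.var()) / n
            best = max(best, 0.5 * n * np.log(var0 / var1))
        return best

    rng = np.random.default_rng(0)
    counts = rng.poisson(20, 200).astype(float)
    counts[120:] += 8.0                         # injected malfunction-like shift
    print(change_score(counts))                 # large score flags a likely change
    ```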

  8. A Lightweight Surface Reconstruction Method for Online 3D Scanning Point Cloud Data Oriented toward 3D Printing

    Directory of Open Access Journals (Sweden)

    Buyun Sheng

    2018-01-01

    Full Text Available The existing surface reconstruction algorithms currently reconstruct large amounts of mesh data. Consequently, many of these algorithms cannot meet the efficiency requirements of real-time data transmission in a web environment. This paper proposes a lightweight surface reconstruction method for online 3D scanned point cloud data oriented toward 3D printing. The proposed online lightweight surface reconstruction algorithm is composed of a point cloud update algorithm (PCU), a rapid iterative closest point algorithm (RICP), and an improved Poisson surface reconstruction algorithm (IPSR). The generated lightweight point cloud data are pretreated using an updating and rapid registration method. The Poisson surface reconstruction is also accomplished by a pretreatment to recompute the point cloud normal vectors; this approach is based on a least squares method, and the postprocessing of the PDE patch generation was based on biharmonic-like fourth-order PDEs, which effectively reduces the amount of reconstructed mesh data and improves the efficiency of the algorithm. This method was verified using an online personalized customization system that was developed with WebGL and oriented toward 3D printing. The experimental results indicate that this method can generate a lightweight 3D scanning mesh rapidly and efficiently in a web environment.

  9. A MODELING METHOD OF FLUTTERING LEAVES BASED ON POINT CLOUD

    Directory of Open Access Journals (Sweden)

    J. Tang

    2017-09-01

    Full Text Available Leaves falling gently or fluttering are a common phenomenon in nature scenes. The authenticity of falling leaves plays an important part in the dynamic modeling of natural scenes, and the falling-leaves model has wide applications in the fields of animation and virtual reality. We propose a novel modeling method of fluttering leaves based on point cloud in this paper. According to the shape, the weight of the leaves and the wind speed, three basic trajectories of leaves falling are defined: the rotation falling, the roll falling and the screw roll falling. At the same time, a parallel algorithm based on OpenMP is implemented to satisfy real-time needs in practical applications. Experimental results demonstrate that the proposed method is amenable to the incorporation of a variety of desirable effects.

  10. Numerical analysis for multi-group neutron-diffusion equation using Radial Point Interpolation Method (RPIM)

    International Nuclear Information System (INIS)

    Kim, Kyung-O; Jeong, Hae Sun; Jo, Daeseong

    2017-01-01

    Highlights: • Employing the Radial Point Interpolation Method (RPIM) in the numerical analysis of the multi-group neutron-diffusion equation. • Establishing the mathematical formulation of the modified multi-group neutron-diffusion equation by RPIM. • Performing the numerical analysis for a 2D critical problem. - Abstract: A mesh-free method is introduced to overcome the drawbacks (e.g., mesh generation and connectivity definition between the meshes) of mesh-based (nodal) methods such as the finite-element method and finite-difference method. In particular, the Point Interpolation Method (PIM) using a radial basis function is employed in the numerical analysis of the multi-group neutron-diffusion equation. Benchmark calculations are performed for 2D homogeneous and heterogeneous problems, and the Multiquadrics (MQ) and Gaussian (EXP) functions are employed to analyze the effect of the radial basis function on the numerical solution. Additionally, the effect of the dimensionless shape parameter in those functions on the calculation accuracy is evaluated. According to the results, the radial PIM (RPIM) can provide a highly accurate solution for the multiplication eigenvalue and the neutron flux distribution, and the numerical solution with the MQ radial basis function exhibits stable accuracy with respect to the reference solutions compared with the other solution. The dimensionless shape parameter directly affects the calculation accuracy and computing time. Values between 1.87 and 3.0 for the benchmark problems considered in this study lead to the most accurate solution. The difference between the analytical and numerical results for the neutron flux increases significantly at the edge of the problem geometry, even though the maximum difference is lower than 4%. This phenomenon seems to arise from the derivative boundary condition at the (x,0) and (0,y) positions, and it may be necessary to introduce an additional strategy (e.g., the method using fictitious points and
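
    At the core of RPIM is radial-basis interpolation over scattered nodes. The sketch below builds a plain multiquadric interpolant with a dimensionless shape parameter (the full RPIM adds polynomial augmentation and the mesh-free weak form, both omitted; all data illustrative):

    ```python
    import numpy as np

    def mq_interpolant(nodes, values, shape=2.0):
        """Multiquadric radial point interpolation over scattered 2D nodes.

        `shape` plays the role of the dimensionless shape parameter: the
        kernel is phi(r) = sqrt(r^2 + (shape * d)^2), with d a mean node spacing.
        """
        d = np.mean(np.sqrt(((nodes[:, None] - nodes[None]) ** 2).sum(-1)))
        c2 = (shape * d) ** 2
        def phi(x, y):
            r2 = ((x[:, None] - y[None]) ** 2).sum(-1)
            return np.sqrt(r2 + c2)
        w = np.linalg.solve(phi(nodes, nodes), values)   # interpolation weights
        return lambda x: phi(np.atleast_2d(x), nodes) @ w

    rng = np.random.default_rng(2)
    nodes = rng.uniform(0, 1, size=(40, 2))
    f = np.sin(3 * nodes[:, 0]) * nodes[:, 1]
    u = mq_interpolant(nodes, f, shape=2.0)   # shape within the 1.87-3.0 range above
    print(u(np.array([[0.5, 0.5]])))
    ```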

  11. H-Point Standard Addition Method for Simultaneous Determination of Eosin and Erythrosine

    Directory of Open Access Journals (Sweden)

    Amandeep Kaur

    2011-01-01

    Full Text Available A new, simple, sensitive and selective H-point standard addition method (HPSAM) has been developed for resolving a binary mixture of the food colorants eosin and erythrosine, which show overlapped spectra. The method is based on the complexation of the food dyes eosin and erythrosine with an Fe(III) complexing reagent at pH 5.5 and solubilization of the complexes in Triton X-100 micellar media. Absorbances at the two pairs of wavelengths, 540 and 550 nm (when eosin acts as the analyte) or 518 and 542 nm (when erythrosine acts as the analyte), were monitored. This method has been satisfactorily applied to the determination of eosin and erythrosine dyes in synthetic mixtures and commercial products.

  12. Point-point and point-line moving-window correlation spectroscopy and its applications

    Science.gov (United States)

    Zhou, Qun; Sun, Suqin; Zhan, Daqi; Yu, Zhiwu

    2008-07-01

    In this paper, we present a new extension of generalized two-dimensional (2D) correlation spectroscopy. Two new algorithms, namely point-point (P-P) correlation and point-line (P-L) correlation, have been introduced for moving-window 2D correlation (MW2D) analysis. The new method has been applied to a spectral model consisting of two different processes. The results indicate that P-P correlation spectroscopy can unveil the details and reconstitute the entire process, whilst P-L correlation can provide the general features of the processes concerned. The phase transition behavior of dimyristoylphosphatidylethanolamine (DMPE) has been studied using MW2D correlation spectroscopy. The newly proposed method verifies that the phase transition temperature is 56 °C, the same as the result obtained from a differential scanning calorimeter. To illustrate the new method further, a lysine and lactose mixture has been studied under thermal perturbation. Using P-P MW2D, the Maillard reaction of the mixture was clearly monitored, which has been very difficult using the conventional display of FTIR spectra.

  13. A nodal method applied to a diffusion problem with generalized coefficients

    International Nuclear Information System (INIS)

    Laazizi, A.; Guessous, N.

    1999-01-01

    In this paper, we consider a second-order neutron diffusion problem with coefficients in L∞(Ω). A nodal method of the lowest order is applied to approximate the problem's solution. The approximation uses special basis functions in which the coefficients appear. The rate of convergence obtained is O(h²) in L²(Ω), with a free rectangular triangulation. (authors)

  14. Model reduction method using variable-separation for stochastic saddle point problems

    Science.gov (United States)

    Jiang, Lijian; Li, Qiuqi

    2018-02-01

    In this paper, we consider a variable-separation (VS) method to solve stochastic saddle point (SSP) problems. The VS method is applied to obtain the solution in tensor product structure for stochastic partial differential equations (SPDEs) in a mixed formulation. The aim of such a technique is to construct a reduced basis approximation of the solution of the SSP problems. The VS method attempts to obtain a low-rank separated representation of the solution for SSP in a systematic enrichment manner. No iteration is performed at each enrichment step. In order to satisfy the inf-sup condition in the mixed formulation, we enrich the separated terms for the primal system variable at each enrichment step. For SSP problems by regularization or penalty, we propose a more efficient variable-separation method, i.e., the variable-separation by penalty method. This can avoid further enrichment of the separated terms in the original mixed formulation. The computation of the variable-separation method decomposes into an offline phase and an online phase. A sparse low-rank tensor approximation method is used to significantly improve the online computation efficiency when the number of separated terms is large. For applications to SSP problems, we present three numerical examples to illustrate the performance of the proposed methods.

  15. Explaining Post-Communist Respect for Civil Liberty: A Multi-Methods Test

    DEFF Research Database (Denmark)

    Skaaning, Svend Erik

    2007-01-01

    This article explains the level of respect for civil liberty in post-communist countries. The methodological triangulation employs both QCA methods and OLS regression to test the influence of the structural conditions that the democratization literature emphasizes. The results show that the political leg… the results of the methods applied diverge; a lack of congruence is to be expected given their different assumptions and logics. As to the QCA methods specifically, they are apparently valuable supplements, and at times even plausible alternatives, to standard statistical tests.

  16. A postprocessing method based on chirp Z transform for FDTD calculation of point defect states in two-dimensional phononic crystals

    International Nuclear Information System (INIS)

    Su Xiaoxing; Wang Yuesheng

    2010-01-01

    In this paper, a new postprocessing method for the finite difference time domain (FDTD) calculation of the point defect states in two-dimensional (2D) phononic crystals (PNCs) is developed based on the chirp Z transform (CZT), one of the frequency zooming techniques. The numerical results for the defect states in 2D solid/liquid PNCs with single or double point defects show that compared with the fast Fourier transform (FFT)-based postprocessing method, the method can improve the estimation accuracy of the eigenfrequencies of the point defect states significantly when the FDTD calculation is run with relatively few iterations; and furthermore it can yield the point defect bands without calculating all eigenfrequencies outside the band gaps. The efficiency and accuracy of the FDTD method can be improved significantly with this new postprocessing method.
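
    The essence of the frequency-zooming postprocessing is evaluating the spectrum only inside a narrow band, so a short truncated record still resolves closely spaced eigenfrequencies. A direct zoom DFT conveys the idea; the CZT computes the same band-limited spectrum at FFT cost (recent SciPy releases expose scipy.signal.czt). The signal parameters below are illustrative:

    ```python
    import numpy as np

    def zoom_spectrum(x, dt, f_lo, f_hi, n_bins=2048):
        """Evaluate the DTFT of x on a fine frequency grid inside [f_lo, f_hi].

        Direct O(N * n_bins) evaluation; the chirp Z transform reaches the
        same band-limited zoom with FFT complexity for long records.
        """
        n = np.arange(len(x))
        freqs = np.linspace(f_lo, f_hi, n_bins)
        kernel = np.exp(-2j * np.pi * freqs[:, None] * n[None] * dt)
        return freqs, kernel @ x

    # A short record still yields a sharp peak inside the zoom band.
    dt = 1e-3
    t = np.arange(400) * dt
    sig = np.sin(2 * np.pi * 123.4 * t)
    f, X = zoom_spectrum(sig, dt, 100.0, 150.0)
    print(f[np.argmax(np.abs(X))])     # close to 123.4 Hz despite few samples
    ```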

  17. A postprocessing method based on chirp Z transform for FDTD calculation of point defect states in two-dimensional phononic crystals

    Energy Technology Data Exchange (ETDEWEB)

    Su Xiaoxing, E-mail: xxsu@bjtu.edu.c [School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing 100044 (China)]; Wang Yuesheng [Institute of Engineering Mechanics, Beijing Jiaotong University, Beijing 100044 (China)]

    2010-09-01

    In this paper, a new postprocessing method for the finite difference time domain (FDTD) calculation of the point defect states in two-dimensional (2D) phononic crystals (PNCs) is developed based on the chirp Z transform (CZT), one of the frequency zooming techniques. The numerical results for the defect states in 2D solid/liquid PNCs with single or double point defects show that compared with the fast Fourier transform (FFT)-based postprocessing method, the method can improve the estimation accuracy of the eigenfrequencies of the point defect states significantly when the FDTD calculation is run with relatively few iterations; and furthermore it can yield the point defect bands without calculating all eigenfrequencies outside the band gaps. The efficiency and accuracy of the FDTD method can be improved significantly with this new postprocessing method.

  18. Iterative method to compute the Fermat points and Fermat distances of multiquarks

    International Nuclear Information System (INIS)

    Bicudo, P.; Cardoso, M.

    2009-01-01

    The multiquark confining potential is proportional to the total distance of the fundamental strings linking the quarks and antiquarks. We address the computation of the total string distance and of the Fermat points where the different strings meet. For a meson the distance is trivially the quark-antiquark distance. For a baryon the problem was solved geometrically from the onset by Fermat and by Torricelli, it can be determined just with a rule and a compass, and we briefly review it. However we also show that for tetraquarks, pentaquarks, hexaquarks, etc., the geometrical solution is much more complicated. Here we provide an iterative method, converging fast to the correct Fermat points and the total distances, relevant for the multiquark potentials.
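
    For a single junction (the baryon case) the Fermat point is the geometric median of the quark positions, and Weiszfeld's fixed-point iteration is the classic fast scheme for it. A sketch (for multiquarks, where several junctions couple, one can alternately update each junction with its neighbours held fixed; that outer loop is not shown):

    ```python
    import numpy as np

    def fermat_point(points, iters=100, eps=1e-12):
        """Weiszfeld iteration for the point minimising the total distance to points."""
        y = points.mean(axis=0)                    # centroid as starting guess
        for _ in range(iters):
            d = np.linalg.norm(points - y, axis=1)
            w = 1.0 / np.maximum(d, eps)           # guard: iterate lands on a point
            y = (points * w[:, None]).sum(axis=0) / w.sum()
        return y

    quarks = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 0.9]])
    junction = fermat_point(quarks)
    print(junction, np.linalg.norm(quarks - junction, axis=1).sum())  # total string length
    ```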

  19. Iterative method to compute the Fermat points and Fermat distances of multiquarks

    Energy Technology Data Exchange (ETDEWEB)

    Bicudo, P. [CFTP, Departamento de Fisica, Instituto Superior Tecnico, Av. Rovisco Pais, 1049-001 Lisboa (Portugal)], E-mail: bicudo@ist.utl.pt; Cardoso, M. [CFTP, Departamento de Fisica, Instituto Superior Tecnico, Av. Rovisco Pais, 1049-001 Lisboa (Portugal)

    2009-04-13

    The multiquark confining potential is proportional to the total distance of the fundamental strings linking the quarks and antiquarks. We address the computation of the total string distance and of the Fermat points where the different strings meet. For a meson the distance is trivially the quark-antiquark distance. For a baryon the problem was solved geometrically from the onset by Fermat and by Torricelli, it can be determined just with a rule and a compass, and we briefly review it. However we also show that for tetraquarks, pentaquarks, hexaquarks, etc., the geometrical solution is much more complicated. Here we provide an iterative method, converging fast to the correct Fermat points and the total distances, relevant for the multiquark potentials.

  20. A Method for Extracting Road Boundary Information from Crowdsourcing Vehicle GPS Trajectories.

    Science.gov (United States)

    Yang, Wei; Ai, Tinghua; Lu, Wei

    2018-04-19

    Crowdsourcing trajectory data is an important approach for accessing and updating road information. In this paper, we present a novel approach for extracting road boundary information from crowdsourced vehicle traces based on Delaunay triangulation (DT). First, an optimization and interpolation method is proposed to filter abnormal trace segments from raw global positioning system (GPS) traces and to interpolate the optimized segments adaptively so as to ensure there are enough tracking points. Second, the DT and the Voronoi diagram are constructed within the interpolated tracking lines to calculate road boundary descriptors using the area of the Voronoi cell and the length of the triangle edge. Then, a road boundary detection model is established by integrating the boundary descriptors and trajectory movement features (e.g., direction) with the DT. Third, the detection model is used to detect the road boundary from the DT constructed from the trajectory lines, and a region-growing method based on seed polygons is proposed to extract the road boundary. Experiments were conducted using the GPS traces of taxis in Beijing, China, and the results show that the proposed method is suitable for extracting the road boundary from low-frequency GPS traces, multi-type road structures, and different time intervals. Compared with two existing methods, the automatically extracted boundary information was proved to be of higher quality.
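
    A hedged sketch of the two descriptors named above, using scipy.spatial (the trace filtering, detection model and region growing of the full pipeline are omitted):

    ```python
    import numpy as np
    from scipy.spatial import Delaunay, Voronoi

    def boundary_descriptors(track_pts):
        """Voronoi cell area per point and Delaunay edge lengths for GPS points.

        Large cell areas and long triangle edges tend to occur where the
        point set thins out, i.e. near road boundaries.
        """
        vor = Voronoi(track_pts)
        areas = np.full(len(track_pts), np.inf)
        for i, reg_idx in enumerate(vor.point_region):
            verts = vor.regions[reg_idx]
            if not verts or -1 in verts:          # unbounded cell: leave as inf
                continue
            poly = vor.vertices[verts]
            x, y = poly[:, 0], poly[:, 1]         # shoelace formula for the area
            areas[i] = 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

        tri = Delaunay(track_pts)
        edges = set()
        for s in tri.simplices:                   # collect unique triangle edges
            for a, b in ((0, 1), (1, 2), (2, 0)):
                edges.add(tuple(sorted((int(s[a]), int(s[b])))))
        lengths = {e: float(np.linalg.norm(track_pts[e[0]] - track_pts[e[1]]))
                   for e in edges}
        return areas, lengths
    ```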

  1. A Method for Extracting Road Boundary Information from Crowdsourcing Vehicle GPS Trajectories

    Directory of Open Access Journals (Sweden)

    Wei Yang

    2018-04-01

    Full Text Available Crowdsourcing trajectory data is an important approach for accessing and updating road information. In this paper, we present a novel approach for extracting road boundary information from crowdsourced vehicle traces based on Delaunay triangulation (DT). First, an optimization and interpolation method is proposed to filter abnormal trace segments from raw global positioning system (GPS) traces and to interpolate the optimized segments adaptively so as to ensure there are enough tracking points. Second, the DT and the Voronoi diagram are constructed within the interpolated tracking lines to calculate road boundary descriptors using the area of the Voronoi cell and the length of the triangle edge. Then, a road boundary detection model is established by integrating the boundary descriptors and trajectory movement features (e.g., direction) with the DT. Third, the detection model is used to detect the road boundary from the DT constructed from the trajectory lines, and a region-growing method based on seed polygons is proposed to extract the road boundary. Experiments were conducted using the GPS traces of taxis in Beijing, China, and the results show that the proposed method is suitable for extracting the road boundary from low-frequency GPS traces, multi-type road structures, and different time intervals. Compared with two existing methods, the automatically extracted boundary information was proved to be of higher quality.

  2. Evaluation of the 5 and 8 pH point titration methods for monitoring anaerobic digesters treating solid waste.

    Science.gov (United States)

    Vannecke, T P W; Lampens, D R A; Ekama, G A; Volcke, E I P

    2015-01-01

    Simple titration methods certainly deserve consideration for on-site routine monitoring of the volatile fatty acid (VFA) concentration and alkalinity during anaerobic digestion (AD), because of their simplicity, speed and cost-effectiveness. In this study, the 5 and 8 pH point titration methods for measuring the VFA concentration and the carbonate system alkalinity (H2CO3*-alkalinity) were assessed and compared. For this purpose, synthetic solutions with known H2CO3*-alkalinity and VFA concentration, as well as samples from anaerobic digesters treating three different kinds of solid waste, were analysed. The results of these two related titration methods were verified with photometric and high-pressure liquid chromatography measurements. It was shown that photometric measurements lead to overestimation of the VFA concentration in the case of coloured samples. In contrast, the 5 pH point titration method provides an accurate estimation of the VFA concentration, clearly corresponding with the true value. Concerning the H2CO3*-alkalinity, the most accurate and precise estimates, showing very similar results for repeated measurements, were obtained using the 8 pH point titration. Overall, it was concluded that the 5 pH point titration method is the preferred method for the practical monitoring of AD of solid wastes due to its robustness, cost efficiency and user-friendliness.

  3. Starting Point: Linking Methods and Materials for Introductory Geoscience Courses

    Science.gov (United States)

    Manduca, C. A.; MacDonald, R. H.; Merritts, D.; Savina, M.

    2004-12-01

    Introductory courses are one of the most challenging teaching environments for geoscience faculty. Courses are often large, students have a wide variety of backgrounds and skills, and student motivations can include completing a geoscience major, preparing for a career as a teacher, fulfilling a distribution requirement, and general interest. The Starting Point site (http://serc.carleton.edu/introgeo/index.html) provides help for faculty teaching introductory courses by linking together examples of different teaching methods that have been used in entry-level courses with information about how to use the methods and relevant references from the geoscience and education literature. Examples span the content of geoscience courses including the atmosphere, biosphere, climate, Earth surface, energy/material cycles, human dimensions/resources, hydrosphere/cryosphere, ocean, solar system, solid earth and geologic time/earth history. Methods include interactive lecture (e.g., think-pair-share, ConcepTests, and in-class activities and problems), investigative cases, peer review, role playing, Socratic questioning, games, and field labs. A special section of the site devoted to using an Earth System approach provides resources with content information about the various aspects of the Earth system linked to examples of teaching this content. Examples of courses incorporating Earth systems content, and strategies for designing an Earth system course, are also included. A similar section on teaching with an Earth History approach explores geologic history as a vehicle for teaching geoscience concepts and as a framework for course design. The Starting Point site has been authored and reviewed by faculty around the country. Evaluation indicates that faculty find the examples particularly helpful, both for direct implementation in their classes and for sparking ideas. The help provided for using different teaching methods makes the examples particularly useful. Examples are chosen from

  4. Cloud-point measurement for (sulphate salts + polyethylene glycol 15000 + water) systems by the particle counting method

    International Nuclear Information System (INIS)

    Imani, A.; Modarress, H.; Eliassi, A.; Abdous, M.

    2009-01-01

    The phase separation of (water + salt + polyethylene glycol 15000) systems was studied by cloud-point measurements using the particle counting method. The effects of the concentration of three kinds of sulphate salt (Na2SO4, K2SO4, (NH4)2SO4), the polyethylene glycol 15000 concentration, and the mass ratio of polymer to salt on the cloud-point temperature of these systems have been investigated. The results obtained indicate that the cloud-point temperatures decrease linearly with increasing polyethylene glycol concentration for the different salts. Also, the cloud points decrease with an increase in the mass ratio of salt to polymer.

  5. Point and interval forecasts of mortality rates and life expectancy: A comparison of ten principal component methods

    Directory of Open Access Journals (Sweden)

    Han Lin Shang

    2011-07-01

    Full Text Available Using the age- and sex-specific data of 14 developed countries, we compare the point and interval forecast accuracy and bias of ten principal component methods for forecasting mortality rates and life expectancy. The ten methods are variants and extensions of the Lee-Carter method. Based on one-step forecast errors, the weighted Hyndman-Ullah method provides the most accurate point forecasts of mortality rates and the Lee-Miller method is the least biased. For the accuracy and bias of life expectancy, the weighted Hyndman-Ullah method performs the best for female mortality and the Lee-Miller method for male mortality. While all methods underestimate variability in mortality rates, the more complex Hyndman-Ullah methods are more accurate than the simpler methods. The weighted Hyndman-Ullah method provides the most accurate interval forecasts for mortality rates, while the robust Hyndman-Ullah method provides the best interval forecast accuracy for life expectancy.
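
    All ten methods are variants of the Lee-Carter model, whose plain fit is a rank-one SVD of the centred log mortality surface plus a random-walk-with-drift forecast of the period index. A minimal sketch (illustrative; none of the weighted or robust refinements compared in the paper):

    ```python
    import numpy as np

    def lee_carter(log_m, horizon=20):
        """Fit log m(x,t) = a_x + b_x * k_t by SVD; forecast k_t by RW with drift.

        log_m is an (ages x years) array of log central death rates.
        Returns point forecasts of the log rates for `horizon` future years.
        """
        a = log_m.mean(axis=1)
        U, s, Vt = np.linalg.svd(log_m - a[:, None], full_matrices=False)
        b = U[:, 0] / U[:, 0].sum()               # usual normalisation sum(b) = 1
        k = s[0] * Vt[0] * U[:, 0].sum()
        drift = (k[-1] - k[0]) / (len(k) - 1)     # random walk with drift
        k_future = k[-1] + drift * np.arange(1, horizon + 1)
        return a[:, None] + b[:, None] * k_future[None]
    ```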

  6. Comparisons of adaptive TIN modelling filtering method and threshold segmentation filtering method of LiDAR point cloud

    International Nuclear Information System (INIS)

    Chen, Lin; Fan, Xiangtao; Du, Xiaoping

    2014-01-01

    Point cloud filtering is the basic and key step in LiDAR data processing. The Adaptive Triangle Irregular Network Modelling (ATINM) algorithm and the Threshold Segmentation on Elevation Statistics (TSES) algorithm are among the mature algorithms. However, little research has concentrated on the parameter selection of ATINM and the iteration condition of TSES, which can greatly affect the filtering results. The paper first presents these two key problems under two different terrain environments. For a flat area, a small height parameter and angle parameter perform well, while for areas with complex feature changes a large height parameter and angle parameter perform well. One-time segmentation is enough for flat areas, and repeated segmentations are essential for complex areas. The paper then compares and analyses the results of the two methods. ATINM has a larger type I error in both data sets, as it sometimes removes excessive points. TSES has a larger type II error in both data sets, as it ignores topological relations between points. ATINM performs well even for a large region with dramatic topology, while TSES is more suitable for a small region with flat topology. Different parameters and iterations can cause relatively large filtering differences

  7. Practical dose point-based methods to characterize dose distribution in a stationary elliptical body phantom for a cone-beam C-arm CT system

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Jang-Hwan, E-mail: jhchoi21@stanford.edu [Department of Radiology, Stanford University, Stanford, California 94305 and Department of Mechanical Engineering, Stanford University, Stanford, California 94305 (United States); Constantin, Dragos [Microwave Physics R&E, Varian Medical Systems, Palo Alto, California 94304 (United States); Ganguly, Arundhuti; Girard, Erin; Fahrig, Rebecca [Department of Radiology, Stanford University, Stanford, California 94305 (United States); Morin, Richard L. [Mayo Clinic Jacksonville, Jacksonville, Florida 32224 (United States); Dixon, Robert L. [Department of Radiology, Wake Forest University, Winston-Salem, North Carolina 27157 (United States)

    2015-08-15

    Purpose: To propose new dose point measurement-based metrics to characterize the dose distributions and the mean dose from a single partial rotation of an automatic exposure control-enabled, C-arm-based, wide cone angle computed tomography system over a stationary, large, body-shaped phantom. Methods: A small 0.6 cm³ ion chamber (IC) was used to measure the radiation dose in an elliptical body-shaped phantom made of tissue-equivalent material. The IC was placed at 23 well-distributed holes in the central and peripheral regions of the phantom and dose was recorded for six acquisition protocols with different combinations of minimum kVp (109 and 125 kVp) and z-collimator aperture (full: 22.2 cm; medium: 14.0 cm; small: 8.4 cm). Monte Carlo (MC) simulations were carried out to generate complete 2D dose distributions in the central plane (z = 0). The MC model was validated at the 23 dose points against IC experimental data. The planar dose distributions were then estimated using subsets of the point dose measurements using two proposed methods: (1) the proximity-based weighting method (method 1) and (2) the dose point surface fitting method (method 2). Twenty-eight different dose point distributions with six different point number cases (4, 5, 6, 7, 14, and 23 dose points) were evaluated to determine the optimal number of dose points and their placement in the phantom. The performances of the methods were determined by comparing their results with those of the validated MC simulations. The performances of the methods in the presence of measurement uncertainties were evaluated. Results: The 5-, 6-, and 7-point cases had differences below 2%, ranging from 1.0% to 1.7% for both methods, which is a performance comparable to that of the methods with a relatively large number of points, i.e., the 14- and 23-point cases. However, with the 4-point case, the performances of the two methods decreased sharply. Among the 4-, 5-, 6-, and 7-point cases, the 7-point case (1
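
    Reading the proximity-based weighting (method 1) as a generic inverse-distance weighting, which is an assumption since the abstract does not spell out the weights, estimating the planar dose from a handful of ion-chamber points looks roughly like this (coordinates and dose values hypothetical):

    ```python
    import numpy as np

    def idw_dose(sample_xy, sample_dose, grid_xy, power=2.0):
        """Inverse-distance-weighted planar dose estimate from point doses."""
        d = np.linalg.norm(grid_xy[:, None] - sample_xy[None], axis=-1)
        w = 1.0 / np.maximum(d, 1e-9) ** power
        return (w * sample_dose[None]).sum(axis=1) / w.sum(axis=1)

    # Seven measurement positions (the best small-count case reported above).
    pts = np.array([[0, 0], [8, 0], [-8, 0], [0, 5], [0, -5], [12, 3], [-12, -3.0]])
    dose = np.array([21.0, 18.5, 18.9, 19.7, 19.4, 16.8, 17.1])   # hypothetical mGy
    grid = np.stack(np.meshgrid(np.linspace(-15, 15, 61),
                                np.linspace(-8, 8, 33)), axis=-1).reshape(-1, 2)
    plane = idw_dose(pts, dose, grid)             # dose estimate over the slice
    ```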

  8. Point Measurements of Fermi Velocities by a Time-of-Flight Method

    DEFF Research Database (Denmark)

    Falk, David S.; Henningsen, J. O.; Skriver, Hans Lomholt

    1972-01-01

    The present paper describes in detail a new method of obtaining information about the Fermi velocity of electrons in metals, point by point, along certain contours on the Fermi surface. It is based on transmission of microwaves through thin metal slabs in the presence of a static magnetic field applied parallel to the surface. The electrons carry the signal across the slab and arrive at the second surface with a phase delay which is measured relative to a reference signal; the velocities are derived by analyzing the magnetic field dependence of the phase delay. For silver we have in this way obtained one component of the velocity along half the circumference of the centrally symmetric orbit for B ∥ [100]. The results are in agreement with current models for the Fermi surface. For B ∥ [011], the electrons involved are not moving in a symmetry plane of the Fermi surface. In such cases one cannot

  9. Some error estimates for the lumped mass finite element method for a parabolic problem

    KAUST Repository

    Chatzipantelidis, P.

    2012-01-01

    We study the spatially semidiscrete lumped mass method for the model homogeneous heat equation with homogeneous Dirichlet boundary conditions. Improving earlier results we show that known optimal order smooth initial data error estimates for the standard Galerkin method carry over to the lumped mass method whereas nonsmooth initial data estimates require special assumptions on the triangulation. We also discuss the application to time discretization by the backward Euler and Crank-Nicolson methods. © 2011 American Mathematical Society.
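
    For linear (P1) triangles the consistent element mass matrix and its row-sum lumping are standard, so the object of study is easy to reproduce:

    ```python
    import numpy as np

    def p1_mass_matrices(v0, v1, v2):
        """Consistent and row-sum-lumped mass matrices of a linear triangle."""
        (x0, y0), (x1, y1), (x2, y2) = v0, v1, v2
        area = 0.5 * abs((x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0))
        consistent = area / 12.0 * np.array([[2.0, 1.0, 1.0],
                                             [1.0, 2.0, 1.0],
                                             [1.0, 1.0, 2.0]])
        lumped = np.diag(consistent.sum(axis=1))   # each diagonal entry = area / 3
        return consistent, lumped
    ```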

  10. Beam-pointing error compensation method of phased array radar seeker with phantom-bit technology

    Directory of Open Access Journals (Sweden)

    Qiuqiu WEN

    2017-06-01

    Full Text Available A phased array radar seeker (PARS) must be able to effectively decouple body motion and accurately extract the line-of-sight (LOS) rate for target missile tracking. In this study, the real-time two-channel beam pointing error (BPE) compensation method of PARS for LOS rate extraction is designed. The PARS discrete beam motion principle is analyzed, and the mathematical model of beam scanning control is established. According to the principle of the antenna element shift phase, both the antenna element shift phase law and the causes of beam-pointing error under phantom-bit conditions are analyzed, and the effect of BPE caused by phantom-bit technology (PBT) on the extraction accuracy of the LOS rate is examined. A compensation method is given, which includes coordinate transforms, beam angle margin compensation, and detector dislocation angle calculation. When the method is used, the beam angle margin in the pitch and yaw directions is calculated to reduce the effect of the missile body disturbance and to improve LOS rate extraction precision by compensating for the detector dislocation angle. The simulation results validate the proposed method.

  11. Short run hydrothermal coordination with network constraints using an interior point method

    International Nuclear Information System (INIS)

    Lopez Lezama, Jesus Maria; Gallego Pareja, Luis Alfonso; Mejia Giraldo, Diego

    2008-01-01

    This paper presents a linear optimization model to solve the hydrothermal coordination problem. The main contribution of this work is the inclusion of the network constraints in the hydrothermal coordination problem and its solution using an interior point method. The proposed model allows working with a system that can be completely hydraulic, thermal or mixed. Results are presented on the IEEE 14-bus test system.
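
    A minimal sketch of the kind of linear program involved, assuming a toy two-unit system with a single network (line-flow) constraint; it is solved here with SciPy's interior-point-based HiGHS backend (method="highs-ipm", available in recent SciPy). All costs and limits are invented.

```python
# Toy dispatch LP: minimize generation cost subject to meeting demand
# and a line-flow limit, solved with an interior-point LP solver.
from scipy.optimize import linprog

c = [20.0, 35.0]                    # $/MWh for hydro and thermal units
A_eq = [[1.0, 1.0]]                 # power balance: g1 + g2 = demand
b_eq = [150.0]                      # demand in MW
A_ub = [[1.0, 0.0]]                 # network constraint on unit 1's line
b_ub = [100.0]                      # line limit in MW
bounds = [(0.0, 120.0), (0.0, 120.0)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=bounds, method="highs-ipm")
print(res.x, res.fun)               # [100., 50.] at cost 3750 $/h
```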

  12. Group vector space method for estimating enthalpy of vaporization of organic compounds at the normal boiling point.

    Science.gov (United States)

    Wenying, Wei; Jinyu, Han; Wen, Xu

    2004-01-01

    The specific position of a group in the molecule has been considered, and a group vector space method for estimating the enthalpy of vaporization at the normal boiling point of organic compounds has been developed. An expression for the enthalpy of vaporization Delta(vap)H(T(b)) has been established and numerical values of the relative group parameters obtained. The average percent deviation of the estimation of Delta(vap)H(T(b)) is 1.16, which shows that the present method demonstrates significant improvement in applicability for predicting the enthalpy of vaporization at the normal boiling point, compared with conventional group methods.
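
    The sketch below illustrates plain group-contribution summation, the starting point of such methods; the group parameters are placeholders, not the fitted values of the paper, and the positional (group vector space) weighting that is the paper's actual contribution is deliberately omitted.

```python
# Hedged sketch of a group-contribution estimate of Delta(vap)H(Tb):
# sum a parameter over each group occurrence in the molecule.
GROUP_PARAMS = {"CH3": 2.1, "CH2": 4.9, "OH": 24.0}   # kJ/mol, placeholder values

def dvap_h_tb(groups):
    """Estimate Delta(vap)H(Tb) from a {group: count} dict by summing parameters."""
    return sum(GROUP_PARAMS[g] * n for g, n in groups.items())

# 1-propanol: CH3-CH2-CH2-OH
print(dvap_h_tb({"CH3": 1, "CH2": 2, "OH": 1}), "kJ/mol (illustrative only)")
```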

  13. A Mixed-Methods Study Investigating the Relationship between Media Multitasking Orientation and Grade Point Average

    Science.gov (United States)

    Lee, Jennifer

    2012-01-01

    The intent of this study was to examine the relationship between media multitasking orientation and grade point average. The study utilized a mixed-methods approach to investigate the research questions. In the quantitative section of the study, the primary method of statistical analyses was multiple regression. The independent variables for the…

  14. Evaluating Point of Sale Tobacco Marketing Using Behavioral Laboratory Methods

    Science.gov (United States)

    Robinson, Jason D.; Drobes, David J.; Brandon, Thomas H.; Wetter, David W.; Cinciripini, Paul M.

    2018-01-01

    With passage of the 2009 Family Smoking Prevention and Tobacco Control Act, the FDA has authority to regulate tobacco advertising. As bans on traditional advertising venues and promotion of tobacco products have grown, a greater emphasis has been placed on brand exposure and price promotion in displays of products at the point-of-sale (POS). POS marketing seeks to influence attitudes and behavior towards tobacco products using a variety of explicit and implicit messaging approaches. Behavioral laboratory methods have the potential to provide the FDA with a strong scientific base for regulatory actions and a model for testing future manipulations of POS advertisements. We review aspects of POS marketing that potentially influence smoking behavior, including branding, price promotions, health claims, the marketing of emerging tobacco products, and tobacco counter-advertising. We conceptualize how POS marketing potentially influences individual attention, memory, implicit attitudes, and smoking behavior. Finally, we describe specific behavioral laboratory methods that can be adapted to measure the impact of POS marketing on these domains.

  15. TH-AB-202-08: A Robust Real-Time Surface Reconstruction Method On Point Clouds Captured From a 3D Surface Photogrammetry System

    International Nuclear Information System (INIS)

    Liu, W; Sawant, A; Ruan, D

    2016-01-01

    Purpose: Surface photogrammetry (e.g. VisionRT, C-Rad) provides a noninvasive way to obtain high-frequency measurement for patient motion monitoring in radiotherapy. This work aims to develop a real-time surface reconstruction method on the acquired point clouds, whose acquisitions are subject to noise and missing measurements. In contrast to existing surface reconstruction methods that are usually computationally expensive, the proposed method reconstructs continuous surfaces with comparable accuracy in real-time. Methods: The key idea in our method is to solve and propagate a sparse linear relationship from the point cloud (measurement) manifold to the surface (reconstruction) manifold, taking advantage of the similarity in local geometric topology in both manifolds. With consistent point cloud acquisition, we propose a sparse regression (SR) model to directly approximate the target point cloud as a sparse linear combination from the training set, building the point correspondences by the iterative closest point (ICP) method. To accommodate changing noise levels and/or presence of inconsistent occlusions, we further propose a modified sparse regression (MSR) model to account for the large and sparse error built by ICP, with a Laplacian prior. We evaluated our method on both clinically acquired point clouds under consistent conditions and simulated point clouds with inconsistent occlusions. The reconstruction accuracy was evaluated w.r.t. root-mean-squared-error, by comparing the reconstructed surfaces against those from the variational reconstruction method. Results: On clinical point clouds, both the SR and MSR models achieved sub-millimeter accuracy, with mean reconstruction time reduced from 82.23 seconds to 0.52 seconds and 0.94 seconds, respectively. On simulated point clouds with inconsistent occlusions, the MSR model has demonstrated its advantage in achieving consistent performance despite the introduced occlusions. Conclusion: We have developed a real
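
    A hedged sketch of the SR idea, using scikit-learn's Lasso as a stand-in for the paper's sparse solver: a target cloud is approximated as a sparse linear combination of pre-aligned training clouds. In the actual method the correspondences come from ICP and the MSR variant adds a Laplacian prior; neither is reproduced here, and all data are synthetic.

```python
# Approximate a new point cloud as a sparse linear combination of
# training clouds. Clouds are assumed pre-aligned arrays of shape
# (n_points, 3), flattened into columns of a dictionary matrix.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
train = [rng.normal(size=(500, 3)) for _ in range(20)]   # stand-in training clouds
target = 0.6 * train[2] + 0.4 * train[7] + 0.01 * rng.normal(size=(500, 3))

D = np.column_stack([c.ravel() for c in train])          # dictionary (1500 x 20)
model = Lasso(alpha=1e-3, fit_intercept=False)
model.fit(D, target.ravel())

recon = (D @ model.coef_).reshape(500, 3)                # reconstructed points
rmse = np.sqrt(np.mean((recon - target) ** 2))
print(np.nonzero(model.coef_)[0], f"RMSE = {rmse:.4f}")  # ~[2 7] recovered
```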

  16. Developing Common Set of Weights with Considering Nondiscretionary Inputs and Using Ideal Point Method

    Directory of Open Access Journals (Sweden)

    Reza Kiani Mavi

    2013-01-01

    Full Text Available Data envelopment analysis (DEA) is used to evaluate the performance of decision making units (DMUs) with multiple inputs and outputs in a homogeneous group. The acquired relative efficiency score for each decision making unit lies between zero and one, and a number of them may have an equal efficiency score of one. DEA successfully divides them into the two categories of efficient and inefficient DMUs. A ranking for inefficient DMUs is given, but DEA does not provide further information about the efficient DMUs. One of the popular methods for evaluating and ranking DMUs is the common set of weights (CSW) method. We develop a CSW model that considers nondiscretionary inputs beyond the control of DMUs, using the ideal point method. The main idea of this approach is to minimize the distance between the evaluated decision making unit and the ideal decision making unit (ideal point). Using an empirical example, we put our proposed model to the test by applying it to data from some 20 bank branches and ranking their efficient units.

  17. Terrain Simplification Research in Augmented Scene Modeling

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    environment. As one of the most important tasks in augmented scene modeling, terrain simplification research has gained more and more attention. In this paper, we mainly focus on the point selection problem in terrain simplification using a triangulated irregular network. Based on the analysis and comparison of traditional importance measures for each input point, we put forward a new importance measure based on local entropy. The results demonstrate that the local entropy criterion performs better than the traditional methods. In addition, it can effectively overcome the "short-sight" problem associated with the traditional methods.
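
    One possible reading of a local-entropy importance measure is sketched below under stated assumptions: the importance of a terrain point is the Shannon entropy of the elevation histogram in its neighbourhood, and high-entropy points are kept as candidate TIN vertices. The gridded DEM, window size, bin count and threshold are all invented for illustration.

```python
# Local-entropy importance for terrain points: histogram the elevations
# around each cell and compute Shannon entropy; rough, information-rich
# terrain scores high and is kept in the simplified TIN.
import numpy as np

def local_entropy(dem, i, j, win=3, bins=8):
    """Shannon entropy of the elevation histogram around cell (i, j)."""
    nb = dem[max(i - win, 0):i + win + 1, max(j - win, 0):j + win + 1]
    hist, _ = np.histogram(nb, bins=bins)
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(1)
dem = rng.normal(size=(50, 50)).cumsum(axis=0).cumsum(axis=1)  # synthetic terrain
importance = np.array([[local_entropy(dem, i, j)
                        for j in range(50)] for i in range(50)])
keep = importance > np.percentile(importance, 75)   # candidate TIN vertices
print(keep.sum(), "of", keep.size, "points selected")
```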

  18. Development of Quadratic Programming Algorithm Based on Interior Point Method with Estimation Mechanism of Active Constraints

    Science.gov (United States)

    Hashimoto, Hiroyuki; Takaguchi, Yusuke; Nakamura, Shizuka

    Instability of the calculation process and increases in calculation time caused by the growing size of continuous optimization problems remain the major issues to be solved in applying the technique to practical industrial systems. This paper proposes an enhanced quadratic programming algorithm based on the interior point method, aimed mainly at improving calculation stability. The proposed method has a dynamic estimation mechanism for active constraints on variables, which fixes the variables getting close to their upper/lower limits and afterwards releases the fixed ones as needed during the optimization process. It can be considered an algorithm-level integration of the solution strategy of the active-set method into the interior point method framework. We describe some numerical results on the commonly used benchmark problems called “CUTEr” to show the effectiveness of the proposed method. Furthermore, test results on a large-sized ELD problem (Economic Load Dispatch in electric power supply scheduling) are also described as a practical industrial application.

  19. Interesting Interest Points

    DEFF Research Database (Denmark)

    Aanæs, Henrik; Dahl, Anders Lindbjerg; Pedersen, Kim Steenstrup

    2012-01-01

    on spatial invariance of interest points under changing acquisition parameters by measuring the spatial recall rate. The scope of this paper is to investigate the performance of a number of existing well-established interest point detection methods. Automatic performance evaluation of interest points is hard......Not all interest points are equally interesting. The most valuable interest points lead to optimal performance of the computer vision method in which they are employed. But a measure of this kind will be dependent on the chosen vision application. We propose a more general performance measure based...... position. The LED illumination provides the option for artificially relighting the scene from a range of light directions. This data set has given us the ability to systematically evaluate the performance of a number of interest point detectors. The highlights of the conclusions are that the fixed scale...

  20. A Mixed-Methods Analysis in Assessing Students' Professional Development by Applying an Assessment for Learning Approach.

    Science.gov (United States)

    Peeters, Michael J; Vaidya, Varun A

    2016-06-25

    Objective. To describe an approach for assessing the Accreditation Council for Pharmacy Education's (ACPE) doctor of pharmacy (PharmD) Standard 4.4, which focuses on students' professional development. Methods. This investigation used mixed methods with triangulation of qualitative and quantitative data to assess professional development. Qualitative data came from an electronic developmental portfolio of professionalism and ethics, completed by PharmD students during their didactic studies. Quantitative confirmation came from the Defining Issues Test (DIT)-an assessment of pharmacists' professional development. Results. Qualitatively, students' development reflections described growth through this course series. Quantitatively, the 2015 PharmD class's DIT N2-scores illustrated positive development overall; the lower 50% had a large initial improvement compared to the upper 50%. Subsequently, the 2016 PharmD class confirmed these average initial improvements of students and also showed further substantial development among students thereafter. Conclusion. Applying an assessment for learning approach, triangulation of qualitative and quantitative assessments confirmed that PharmD students developed professionally during this course series.

  1. Development of modifications to the material point method for the simulation of thin membranes, compressible fluids, and their interactions

    Energy Technology Data Exchange (ETDEWEB)

    York, A.R. II [Sandia National Labs., Albuquerque, NM (United States). Engineering and Process Dept.

    1997-07-01

    The material point method (MPM) is an evolution of the particle in cell method where Lagrangian particles or material points are used to discretize the volume of a material. The particles carry properties such as mass, velocity, stress, and strain and move through a Eulerian or spatial mesh. The momentum equation is solved on the Eulerian mesh. Modifications to the material point method are developed that allow the simulation of thin membranes, compressible fluids, and their dynamic interactions. A single layer of material points through the thickness is used to represent a membrane. The constitutive equation for the membrane is applied in the local coordinate system of each material point. Validation problems are presented and numerical convergence is demonstrated. Fluid simulation is achieved by implementing a constitutive equation for a compressible, viscous, Newtonian fluid and by solution of the energy equation. The fluid formulation is validated by simulating a traveling shock wave in a compressible fluid. Interactions of the fluid and membrane are handled naturally with the method. The fluid and membrane communicate through the Eulerian grid on which forces are calculated due to the fluid and membrane stress states. Validation problems include simulating a projectile impacting an inflated airbag. In some impact simulations with the MPM, bodies may tend to stick together when separating. Several algorithms are proposed and tested that allow bodies to separate from each other after impact. In addition, several methods are investigated to determine the local coordinate system of a membrane material point without relying upon connectivity data.
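
    To make the particle/grid cycle concrete, here is a minimal 1D MPM step under simple assumptions: linear hat shape functions, explicit time stepping, a linear-elastic material of unit density, and no boundary conditions. It is a didactic skeleton of the particle-to-grid and grid-to-particle transfers, not the membrane or fluid formulation of the report.

```python
# Minimal 1D MPM: particles carry mass, velocity and stress; momentum is
# solved on a background grid each step, then mapped back to particles.
import numpy as np

L, ncell = 1.0, 20
dx = L / ncell
nodes = np.linspace(0.0, L, ncell + 1)
xp = np.linspace(0.05, 0.95, 40)                  # material point positions
mp = np.full(xp.size, L / xp.size)                # particle masses (rho = 1)
vp = 0.1 * np.sin(np.pi * xp)                     # initial velocity field
stress = np.zeros(xp.size)
E, dt = 100.0, 1e-4                               # Young's modulus, time step

def shape(x):
    """Node indices and linear hat weights for a particle at x."""
    i = min(int(x / dx), ncell - 1)
    w = (x - nodes[i]) / dx
    return (i, i + 1), (1.0 - w, w)

for _ in range(100):
    mg = np.zeros(ncell + 1); pg = np.zeros(ncell + 1); fg = np.zeros(ncell + 1)
    for p in range(xp.size):                      # particle -> grid (P2G)
        (i, j), (wi, wj) = shape(xp[p])
        mg[i] += wi * mp[p];  mg[j] += wj * mp[p]
        pg[i] += wi * mp[p] * vp[p];  pg[j] += wj * mp[p] * vp[p]
        f = stress[p] * mp[p]                     # volume = mass since rho = 1
        fg[i] += f / dx;  fg[j] -= f / dx         # internal force: -sigma*V*dN/dx
    vg = np.divide(pg + dt * fg, mg, out=np.zeros_like(mg), where=mg > 0)
    for p in range(xp.size):                      # grid -> particle (G2P)
        (i, j), (wi, wj) = shape(xp[p])
        vp[p] = wi * vg[i] + wj * vg[j]
        stress[p] += dt * E * (vg[j] - vg[i]) / dx  # linear-elastic update
        xp[p] += dt * vp[p]

print(f"kinetic energy after 100 steps: {0.5 * (mp * vp**2).sum():.6f}")
```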

  2. Evaluation methods in health promotion programmes: the description of a triangulation in Brazil Métodos em avaliação de programas de promoção da saúde: descrição de uma triangulação no Brasil

    Directory of Open Access Journals (Sweden)

    Elza Maria de Souza

    2010-08-01

    Full Text Available Evaluation is a key word in any health promotion programme. However, it is a challenge to choose the most appropriate method of evaluation. The purpose of this paper was to describe a triangulated methodology developed to evaluate a health promotion intervention based on intergenerational activities and to describe the theoretical framework, which guided the study. From February to December 2002 a triangulation involving a community controlled trial, focus groups technique and observation process was developed in Ceilândia, Federal District of Brazil. Samples of 253 students aged 12 to 18 years old from a secondary school and 266 elders aged 60 and over from the local area were randomly allocated to control and experimental groups. Over four months, 111 students and 32 elders had weekly meetings at school to share their life histories. Before and after the intervention, a questionnaire was administered to control and experimental groups including the outcome variables. Although the study had some limitations, it was valuable in showing for the first time that this method can be used in interventions of this kind and also to show the importance of developing a theoretical framework to understand possible mechanisms of interaction in intergenerational interventions. Evaluation is a key word in health promotion. The objective of this study was to describe a triangulation used to evaluate a health promotion programme based on intergenerational activities and to describe the theoretical model used for the investigation. From February to December 2002, a study was developed that included a community controlled trial, focus groups and process observation. Two randomized samples were selected: 253 students aged 12 to 18 years from a primary school and 266 elders aged 60 or over living near the school. The two samples were divided and allocated

  3. Formulations to overcome the divergence of iterative method of fixed-point in nonlinear equations solution

    Directory of Open Access Journals (Sweden)

    Wilson Rodríguez Calderón

    2015-04-01

    Full Text Available When we need to determine the solution of a nonlinear equation there are two options: closed methods, which use intervals that contain the root and naturally reduce their size during the iterative process, and open methods, which are an attractive option as they do not require an initial enclosing interval. In general, open methods are computationally more efficient, though they do not always converge. In this paper we present an analysis of a divergence case that arises when the fixed-point iteration method is used to find the normal height in a rectangular channel using the Manning equation. To solve this problem, we propose applying two strategies (developed by the authors) that allow modifying the iteration function by means of additional formulations of the traditional method and its convergence theorem. Although the Manning equation can be solved with other methods such as Newton's, when the fixed-point iteration method is used an interesting divergence situation is presented, which can be solved with higher-than-quadratic convergence over the initial iterations. The proposed strategies have been tested in two cases; a study of divergence of the square root of real numbers was made previously by the authors for testing. Results in both cases have been successful. We present comparisons because they are important for seeing the advantage of the proposed strategies versus the most representative open methods.
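
    A sketch of the problem being discussed: solving Manning's equation for the normal depth of a rectangular channel by fixed-point iteration. The rearrangement below is the classical contractive form y = (nQ/sqrt(S))^(3/5) * (b + 2y)^(2/5) / b; the paper's own strategies instead modify the iteration function of a divergent form, which is not reproduced here. Channel data are invented.

```python
# Fixed-point iteration for the normal depth y of a rectangular channel
# from Manning's equation Q = (1/n) * A * R^(2/3) * sqrt(S).
n, Q, S, b = 0.013, 5.0, 0.001, 3.0    # roughness, flow m^3/s, slope, width m

def g(y):
    """Contractive rearrangement: y = (nQ/sqrt(S))^0.6 * (b+2y)^0.4 / b."""
    return (n * Q / S**0.5) ** 0.6 * (b + 2.0 * y) ** 0.4 / b

y = 1.0                                 # initial guess [m]
for k in range(100):
    y_new = g(y)
    if abs(y_new - y) < 1e-10:
        break
    y = y_new
print(f"normal depth y = {y:.4f} m after {k} iterations")
```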

  4. A nine-point pH titration method to determine low-concentration VFA in municipal wastewater.

    Science.gov (United States)

    Ai, Hainan; Zhang, Daijun; Lu, Peili; He, Qiang

    2011-01-01

    Characterization of volatile fatty acid (VFA) in wastewater is significant for understanding the wastewater nature and the wastewater treatment process optimization based on the usage of Activated Sludge Models (ASMs). In this study, a nine-point pH titration method was developed for the determination of low-concentration VFA in municipal wastewater. The method was evaluated using synthetic wastewater containing VFA with the concentration of 10-50 mg/l and the possible interfering buffer systems of carbonate, phosphate and ammonium similar to those in real municipal wastewater. In addition, further evaluation was conducted through the assay of real wastewater using chromatography as reference. The results showed that the recovery of VFA in the synthetic wastewater was 92%-102% and the coefficient of variation (CV) of replicate measurements 1.68%-4.72%. The changing content of the buffering substances had little effect on the accuracy of the method. Moreover, the titration method agreed with chromatography in the determination of VFA in real municipal wastewater, with R² = 0.9987 and CV = 1.3%-1.7%. The nine-point pH titration method is capable of satisfactory determination of low-concentration VFA in municipal wastewater.

  5. An interior-point method for total variation regularized positron emission tomography image reconstruction

    Science.gov (United States)

    Bai, Bing

    2012-03-01

    There has been a lot of work on total variation (TV) regularized tomographic image reconstruction recently. Many of them use gradient-based optimization algorithms with a differentiable approximation of the TV functional. In this paper we apply TV regularization in Positron Emission Tomography (PET) image reconstruction. We reconstruct the PET image in a Bayesian framework, using Poisson noise model and TV prior functional. The original optimization problem is transformed to an equivalent problem with inequality constraints by adding auxiliary variables. Then we use an interior point method with logarithmic barrier functions to solve the constrained optimization problem. In this method, a series of points approaching the solution from inside the feasible region are found by solving a sequence of subproblems characterized by an increasing positive parameter. We use preconditioned conjugate gradient (PCG) algorithm to solve the subproblems directly. The nonnegativity constraint is enforced by bend line search. The exact expression of the TV functional is used in our calculations. Simulation results show that the algorithm converges fast and the convergence is insensitive to the values of the regularization and reconstruction parameters.
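
    The log-barrier loop itself is easy to sketch. Below, a stand-in quadratic objective replaces the Poisson log-likelihood and a generic Nelder-Mead solver replaces the paper's preconditioned conjugate gradient, but the structure (a sequence of unconstrained subproblems with a growing barrier parameter) is the same. Constraints and numbers are invented.

```python
# Log-barrier interior-point iteration: minimize f subject to inequality
# constraints via unconstrained subproblems with growing parameter t.
import numpy as np
from scipy.optimize import minimize

def f(x):                               # stand-in data-fit term
    return (x[0] - 1.5) ** 2 + (x[1] - 1.5) ** 2

def barrier(x, t):                      # constraints: x >= 0, x0 + x1 <= 2
    s = np.array([x[0], x[1], 2.0 - x[0] - x[1]])  # constraint slacks
    if np.any(s <= 0):
        return np.inf                   # outside the feasible region
    return f(x) - np.log(s).sum() / t

x = np.array([0.5, 0.5])                # strictly feasible start
for t in [1.0, 10.0, 100.0, 1000.0]:    # increasing barrier parameter
    x = minimize(barrier, x, args=(t,), method="Nelder-Mead").x
print(x)                                # approaches the constrained optimum (1, 1)
```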

  6. Evaluation of spray and point inoculation methods for the phenotyping of Puccinia striiformis on wheat

    DEFF Research Database (Denmark)

    Sørensen, Chris Khadgi; Thach, Tine; Hovmøller, Mogens Støvring

    2016-01-01

    flexible application procedure for spray inoculation and it gave highly reproducible results for virulence phenotyping. Six point inoculation methods were compared to find the most suitable for assessment of pathogen aggressiveness. The use of Novec 7100 and dry dilution with Lycopodium spores gave...... for the assessment of quantitative epidemiological parameters. New protocols for spray and point inoculation of P. striiformis on wheat are presented, along with the prospect for applying these in rust research and resistance breeding activities....

  7. Three-point method for measuring the geometric error components of linear and rotary axes based on sequential multilateration

    International Nuclear Information System (INIS)

    Zhang, Zhenjiu; Hu, Hong

    2013-01-01

    The linear and rotary axes are fundamental parts of multi-axis machine tools. The geometric error components of the axes must be measured for motion error compensation to improve the accuracy of the machine tools. In this paper, a simple method named the three-point method is proposed to measure the geometric errors of the linear and rotary axes of machine tools using a laser tracker. A sequential multilateration method, whose uncertainty is verified through simulation, is applied to measure the 3D coordinates. Three noncollinear points fixed on the stage of each axis are selected. The coordinates of these points are simultaneously measured using a laser tracker to obtain their volumetric errors by comparing these coordinates with ideal values. Numerous equations can be established using the geometric error models of each axis. The geometric error components can be obtained by solving these equations. The validity of the proposed method is verified through a series of experiments. The results indicate that the proposed method can measure the geometric errors of the axes to compensate for the errors in multi-axis machine tools.

  8. A modified likelihood-method to search for point-sources in the diffuse astrophysical neutrino-flux in IceCube

    Energy Technology Data Exchange (ETDEWEB)

    Reimann, Rene; Haack, Christian; Leuermann, Martin; Raedel, Leif; Schoenen, Sebastian; Schimp, Michael; Wiebusch, Christopher [III. Physikalisches Institut, RWTH Aachen (Germany); Collaboration: IceCube-Collaboration

    2015-07-01

    IceCube, a cubic-kilometer sized neutrino detector at the geographical South Pole, has recently measured a flux of high-energy astrophysical neutrinos. Although this flux has now been observed in multiple analyses, no point sources or source classes could be identified yet. Standard point source searches test many points in the sky for a point source of astrophysical neutrinos individually and therefore produce many trials. Our approach is to additionally use the measured diffuse spectrum to constrain the number of possible point sources and their properties. Initial studies of the method performance are shown.

  9. a Variant of Lsd-Slam Capable of Processing High-Speed Low-Framerate Monocular Datasets

    Science.gov (United States)

    Schmid, S.; Fritsch, D.

    2017-11-01

    We develop a new variant of LSD-SLAM, called C-LSD-SLAM, which is capable of performing monocular tracking and mapping in high-speed low-framerate situations such as those of the KITTI datasets. The methods used here are robust against the influence of erroneously triangulated points near the epipolar direction, which otherwise cause tracking divergence.
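
    For context, triangulating a point from two views is commonly done with the linear (DLT) construction sketched below; points near the epipolar direction make the linear system ill-conditioned, which is the failure mode the paper guards against. The cameras and the observed point are invented.

```python
# Linear (DLT) triangulation of a 3D point from two views: each image
# observation contributes two rows to A via x × (P X) = 0, and X is the
# null vector of A.
import numpy as np

def triangulate(P1, P2, x1, x2):
    """P1, P2: 3x4 projection matrices; x1, x2: pixel coords (u, v)."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]                 # dehomogenize

K = np.diag([800.0, 800.0, 1.0])        # toy intrinsics
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])  # 1 m baseline
Xtrue = np.array([0.3, -0.2, 5.0, 1.0])
x1 = (P1 @ Xtrue)[:2] / (P1 @ Xtrue)[2]
x2 = (P2 @ Xtrue)[:2] / (P2 @ Xtrue)[2]
print(triangulate(P1, P2, x1, x2))      # ~ [0.3, -0.2, 5.0]
```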

  10. Development of the H-point standard additions method for coupled liquid-chromatography and UV-visible spectrophotometry

    Energy Technology Data Exchange (ETDEWEB)

    Campins-Falco, Pilar; Bosch-Reig, Francisco; Herraez-Hernandez, Rosa; Sevillano-Cabeza, Adela (Universidad de Valencia (Spain). Facultad de Quimica, Departamento de Quimica Analitica)

    1992-02-10

    This work establishes the fundamentals of the H-point standard additions method for liquid chromatography for the simultaneous analysis of binary mixtures with overlapped chromatographic peaks. The method was compared with the deconvolution method of peak suppression and the second derivative of elution profiles. Different mixtures of diuretics were satisfactorily resolved. (author). 21 refs.; 9 figs.; 2 tabs.

  11. An improved local radial point interpolation method for transient heat conduction analysis

    Science.gov (United States)

    Wang, Feng; Lin, Gao; Zheng, Bao-Jing; Hu, Zhi-Qiang

    2013-06-01

    The smoothing thin plate spline (STPS) interpolation using the penalty function method according to the optimization theory is presented to deal with transient heat conduction problems. The smooth conditions of the shape functions and derivatives can be satisfied so that the distortions hardly occur. Local weak forms are developed using the weighted residual method locally from the partial differential equations of the transient heat conduction. Here the Heaviside step function is used as the test function in each sub-domain to avoid the need for a domain integral. Essential boundary conditions can be implemented like the finite element method (FEM) as the shape functions possess the Kronecker delta property. The traditional two-point difference method is selected for the time discretization scheme. Three selected numerical examples are presented in this paper to demonstrate the availability and accuracy of the present approach comparing with the traditional thin plate spline (TPS) radial basis functions.

  12. An improved local radial point interpolation method for transient heat conduction analysis

    International Nuclear Information System (INIS)

    Wang Feng; Lin Gao; Hu Zhi-Qiang; Zheng Bao-Jing

    2013-01-01

    The smoothing thin plate spline (STPS) interpolation using the penalty function method according to the optimization theory is presented to deal with transient heat conduction problems. The smooth conditions of the shape functions and derivatives can be satisfied so that the distortions hardly occur. Local weak forms are developed using the weighted residual method locally from the partial differential equations of the transient heat conduction. Here the Heaviside step function is used as the test function in each sub-domain to avoid the need for a domain integral. Essential boundary conditions can be implemented like the finite element method (FEM) as the shape functions possess the Kronecker delta property. The traditional two-point difference method is selected for the time discretization scheme. Three selected numerical examples are presented in this paper to demonstrate the availability and accuracy of the present approach comparing with the traditional thin plate spline (TPS) radial basis functions

  13. Automatic Registration Method for Fusion of ZY-1-02C Satellite Images

    Directory of Open Access Journals (Sweden)

    Qi Chen

    2013-12-01

    Full Text Available Automatic image registration (AIR) has been widely studied in the fields of medical imaging, computer vision, and remote sensing. In various cases, such as image fusion, high registration accuracy should be achieved to meet application requirements. For satellite images, the large image size and unstable positioning accuracy resulting from the limited manufacturing technology of charge-coupled devices, focal plane distortion, and unrecorded spacecraft jitter lead to difficulty in obtaining agreeable corresponding points for registration using only area-based matching or feature-based matching. In this situation, a coarse-to-fine matching strategy integrating two types of algorithms is proven feasible and effective. In this paper, an AIR method for application to the fusion of ZY-1-02C satellite imagery is proposed. First, the images are geometrically corrected. Coarse matching, based on scale invariant feature transform, is performed for the subsampled corrected images, and a rough global estimation is made with the matching results. Harris feature points are then extracted, and the coordinates of the corresponding points are calculated according to the global estimation results. Precise matching is conducted, based on normalized cross correlation and least squares matching. As complex image distortion cannot be precisely estimated, a local estimation using the structure of a triangulated irregular network is applied to eliminate the false matches. Finally, image resampling is conducted, based on local affine transformation, to achieve high-precision registration. Experiments with ZY-1-02C datasets demonstrate that the accuracy of the proposed method meets the requirements of fusion application, and its efficiency is also suitable for the commercial operation of the automatic satellite data process system.

  14. A Riccati-Based Interior Point Method for Efficient Model Predictive Control of SISO Systems

    DEFF Research Database (Denmark)

    Hagdrup, Morten; Johansson, Rolf; Bagterp Jørgensen, John

    2017-01-01

    model parts separate. The controller is designed based on the deterministic model, while the Kalman filter results from the stochastic part. The controller is implemented as a primal-dual interior point (IP) method using Riccati recursion and the computational savings possible for SISO systems...

  15. Numerical methods for finding periodic points in discrete maps. High order islands chains and noble barriers in a toroidal magnetic configuration

    Energy Technology Data Exchange (ETDEWEB)

    Steinbrecher, G. [Association Euratom-Nasti Romania, Dept. of Theoretical Physics, Physics Faculty, University of Craiova (Romania); Reuss, J.D.; Misguich, J.H. [Association Euratom-CEA Cadarache, 13 - Saint-Paul-lez-Durance (France). Dept. de Recherches sur la Fusion Controlee

    2001-11-01

    We first review the usual physical and mathematical concepts involved in the dynamics of Hamiltonian systems, namely in chaotic systems described by discrete 2D maps (representing the intersection points of toroidal magnetic lines in a poloidal plane in situations of incomplete magnetic chaos in Tokamaks). Finding the periodic points characterizing chains of magnetic islands is an essential step not only to determine the skeleton of the phase space picture, but also to determine the flux of magnetic lines across semi-permeable barriers like Cantori. We discuss here several computational methods used to determine periodic points in N dimensions, which amounts to solving a set of N nonlinear coupled equations: the Newton method, minimization techniques, the Laplace or steepest descent method, the conjugate direction method and the Fletcher-Reeves method. We have succeeded in improving this last method in an important way, without modifying its useful double-exponential convergence. This improved method has been tested and applied to finding periodic points of high order m in the 2D 'Tokamap' mapping, for values of m along rational chains of winding number n/m converging towards a noble value where a Cantorus exists. Such precise positions of periodic points have been used in the calculation of the flux across this Cantorus. (authors)
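
    The Newton variant is easy to illustrate: a period-m point solves T^m(z) = z, so one applies Newton's method to F(z) = T^m(z) - z with a finite-difference Jacobian. The Chirikov standard map below is a stand-in for the 'Tokamap' of the paper, and all parameters are invented.

```python
# Newton iteration for period-m points of a 2D map, using a
# finite-difference Jacobian of F(z) = T^m(z) - z.
import numpy as np

K = 0.9                                 # standard-map stochasticity parameter

def T(z):
    x, p = z
    p2 = p + K / (2 * np.pi) * np.sin(2 * np.pi * x)
    return np.array([(x + p2) % 1.0, p2])

def Tm(z, m):
    for _ in range(m):
        z = T(z)
    return z

def newton_periodic(z, m, tol=1e-12):
    for _ in range(50):
        F = Tm(z, m) - z
        if np.linalg.norm(F) < tol:
            break
        J = np.zeros((2, 2))            # finite-difference Jacobian of F
        for j in range(2):
            dz = np.zeros(2); dz[j] = 1e-7
            J[:, j] = (Tm(z + dz, m) - (z + dz) - F) / 1e-7
        z = z - np.linalg.solve(J, F)
    return z

fp = newton_periodic(np.array([0.51, 0.01]), m=1)
print(fp, np.linalg.norm(Tm(fp, 1) - fp))   # converges to the fixed point (0.5, 0)
```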

  16. Outcomes and impact of HIV prevention, ART and TB programs in Swaziland--early evidence from public health triangulation.

    Science.gov (United States)

    van Schalkwyk, Cari; Mndzebele, Sibongile; Hlophe, Thabo; Garcia Calleja, Jesus Maria; Korenromp, Eline L; Stoneburner, Rand; Pervilhac, Cyril

    2013-01-01

    Swaziland's severe HIV epidemic inspired an early national response since the late 1980s, and regular reporting of program outcomes since the onset of a national antiretroviral treatment (ART) program in 2004. We assessed effectiveness outcomes and mortality trends in relation to ART, HIV testing and counseling (HTC), tuberculosis (TB) and prevention of mother to child transmission (PMTCT). Data triangulated include intervention coverage and outcomes according to program registries (2001-2010), hospital admissions and deaths disaggregated by age and sex (2001-2010) and population mortality estimates from the 1997 and 2007 censuses and the 2007 demographic and health survey. By 2010, ART reached 70% of the estimated number of people living with HIV/AIDS with CD4 counts below the treatment eligibility threshold. Attributing impact to specific interventions (versus natural epidemic dynamics) will require additional data from future household surveys, and improved routine (program, surveillance, and hospital) data at district level.

  17. An optical method for measuring the thickness of a falling condensate in gravity assisted heat pipe

    Directory of Open Access Journals (Sweden)

    Kasanický Martin

    2015-01-01

    Full Text Available The large number of variables is the main problem in designing systems that use heat pipes, whether traditional (gravity) or advanced (capillary, pulsating) heat pipes. This article presents a methodology for measuring the thickness of the falling condensate in gravitational heat pipes using the optical triangulation method, together with an evaluation of the risks associated with this method.
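
    The geometry behind optical triangulation is compact enough to sketch: a laser spot viewed off-axis shifts laterally on the detector as the surface height changes, so the spot shift maps directly to a thickness change. The layout and all numbers below are invented for illustration, not taken from the article.

```python
# Simple direct-triangulation layout: range change equals the lateral
# spot shift in object space divided by tan(theta).
import math

theta = math.radians(30.0)        # angle between laser axis and camera axis
pixel_pitch = 5e-6                # m per pixel on the sensor
magnification = 0.5               # image/object lateral magnification

def surface_displacement(shift_px):
    """Convert spot shift on the sensor [pixels] to surface height change [m]."""
    lateral = shift_px * pixel_pitch / magnification   # shift in object space
    return lateral / math.tan(theta)

# a 12-pixel spot shift corresponds to the condensate thickening by:
print(f"{surface_displacement(12) * 1e6:.1f} micrometres")
```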

  18. High precision micro-scale Hall Effect characterization method using in-line micro four-point probes

    DEFF Research Database (Denmark)

    Petersen, Dirch Hjorth; Hansen, Ole; Lin, Rong

    2008-01-01

    Accurate characterization of ultra shallow junctions (USJ) is important in order to understand the principles of junction formation and to develop the appropriate implant and annealing technologies. We investigate the capabilities of a new micro-scale Hall effect measurement method where Hall...... effect is measured with collinear micro four-point probes (M4PP). We derive the sensitivity to electrode position errors and describe a position error suppression method to enable rapid reliable Hall effect measurements with just two measurement points. We show with both Monte Carlo simulations...... and experimental measurements, that the repeatability of a micro-scale Hall effect measurement is better than 1 %. We demonstrate the ability to spatially resolve Hall effect on micro-scale by characterization of an USJ with a single laser stripe anneal. The micro sheet resistance variations resulting from...

  19. Multi-point probe for testing electrical properties and a method of producing a multi-point probe

    DEFF Research Database (Denmark)

    2011-01-01

    A multi-point probe for testing electrical properties of a number of specific locations of a test sample comprises a supporting body defining a first surface, a first multitude of conductive probe arms (101-101'''), each of the probe arms defining a proximal end and a distal end. The probe arms...... of contact with the supporting body, and a maximum thickness perpendicular to its perpendicular bisector and its line of contact with the supporting body. Each of the probe arms has a specific area or point of contact (111-111''') at its distal end for contacting a specific location among the number...... of specific locations of the test sample. At least one of the probe arms has an extension defining a pointing distal end providing its specific area or point of contact located offset relative to its perpendicular bisector....

  20. An advanced analysis method of initial orbit determination with too short arc data

    Science.gov (United States)

    Li, Binzhe; Fang, Li

    2018-02-01

    This paper studies initial orbit determination (IOD) based on space-based angle measurements. Commonly, these space-based observations have short durations. As a result, classical initial orbit determination algorithms, such as the Laplace and Gauss methods, give poor results. In this paper, an advanced analysis method for initial orbit determination is developed for space-based observations. The admissible region and triangulation are introduced in the method. A genetic algorithm is also used to add constraints on the parameters. Simulation results show that the algorithm can successfully complete the initial orbit determination.

  1. Simulating Ice Shelf Response to Potential Triggers of Collapse Using the Material Point Method

    Science.gov (United States)

    Huth, A.; Smith, B. E.

    2017-12-01

    Weakening or collapse of an ice shelf can reduce the buttressing effect of the shelf on its upstream tributaries, resulting in sea level rise as the flux of grounded ice into the ocean increases. Here we aim to improve sea level rise projections by developing a prognostic 2D plan-view model that simulates the response of an ice sheet/ice shelf system to potential triggers of ice shelf weakening or collapse, such as calving events, thinning, and meltwater ponding. We present initial results for Larsen C. Changes in local ice shelf stresses can affect flow throughout the entire domain, so we place emphasis on calibrating our model to high-resolution data and precisely evolving fracture-weakening and ice geometry throughout the simulations. We primarily derive our initial ice geometry from CryoSat-2 data, and initialize the model by conducting a dual inversion for the ice viscosity parameter and basal friction coefficient that minimizes mismatch between modeled velocities and velocities derived from Landsat data. During simulations, we implement damage mechanics to represent fracture-weakening, and track ice thickness evolution, grounding line position, and ice front position. Since these processes are poorly represented by the Finite Element Method (FEM) due to mesh resolution issues and numerical diffusion, we instead implement the Material Point Method (MPM) for our simulations. In MPM, the ice domain is discretized into a finite set of Lagrangian material points that carry all variables and are tracked throughout the simulation. Each time step, information from the material points is projected to a Eulerian grid where the momentum balance equation (shallow shelf approximation) is solved similarly to FEM, but essentially treating the material points as integration points. The grid solution is then used to determine the new positions of the material points and update variables such as thickness and damage in a diffusion-free Lagrangian frame. The grid does not store

  2. Slicing Method for curved façade and window extraction from point clouds

    Science.gov (United States)

    Iman Zolanvari, S. M.; Laefer, Debra F.

    2016-09-01

    Laser scanning technology is a fast and reliable method to survey structures. However, the automatic conversion of such data into solid models for computation remains a major challenge, especially where non-rectilinear features are present. Since openings and the overall dimensions of the buildings are the most critical elements in computational models for structural analysis, this article introduces the Slicing Method as a new, computationally-efficient method for extracting overall façade and window boundary points for reconstructing a façade into a geometry compatible with computational modelling. After finding a principal plane, the technique slices a façade into limited portions, with each slice representing a unique, imaginary section passing through a building. This is done along a façade's principal axes to segregate window and door openings from structural portions of the load-bearing masonry walls. The method detects each opening area's boundaries, as well as the overall boundary of the façade, in part, by using a one-dimensional projection to accelerate processing. Slices were optimised as 14.3 slices per vertical metre of building and 25 slices per horizontal metre of building, irrespective of building configuration or complexity. The proposed procedure was validated by its application to three highly decorative, historic brick buildings. Accuracy in excess of 93% was achieved with no manual intervention on highly complex buildings and nearly 100% on simple ones. Furthermore, computational times were less than 3 sec for data sets up to 2.6 million points, while similar existing approaches required more than 16 hr for such datasets.
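
    A minimal sketch of the slicing idea under stated assumptions: take one horizontal slice of the façade cloud (already rotated into its principal plane), project it onto the width axis, and read gaps in the 1D occupancy histogram as candidate openings. The bin width, minimum gap and synthetic façade are invented parameters, not the paper's optimised values.

```python
# Detect openings in one facade slice via a 1D occupancy histogram.
import numpy as np

def opening_intervals(slice_x, xmin, xmax, bin_w=0.05, min_gap=0.4):
    """Return (start, end) ranges along x with no points (candidate openings)."""
    bins = np.arange(xmin, xmax + bin_w, bin_w)
    occupied, _ = np.histogram(slice_x, bins=bins)
    gaps, start = [], None
    for k, occ in enumerate(occupied):
        if occ == 0 and start is None:
            start = bins[k]                       # gap begins
        elif occ > 0 and start is not None:
            if bins[k] - start >= min_gap:        # ignore tiny scan gaps
                gaps.append((start, bins[k]))
            start = None
    return gaps

rng = np.random.default_rng(2)
x = rng.uniform(0, 10, 40000)
pts = x[(x < 3) | ((x > 4.5) & (x < 7)) | (x > 8.5)]   # wall with two openings
print(opening_intervals(pts, 0.0, 10.0))               # ~[(3.0, 4.5), (7.0, 8.5)]
```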

  3. Motion estimation using point cluster method and Kalman filter.

    Science.gov (United States)

    Senesh, M; Wolf, A

    2009-05-01

    The most frequently used method in three-dimensional human gait analysis involves placing markers on the skin of the analyzed segment. This introduces a significant artifact, which strongly influences the bone position and orientation and joint kinematic estimates. In this study, we tested and evaluated the effect of adding a Kalman filter procedure to the previously reported point cluster technique (PCT) in the estimation of a rigid body motion. We demonstrated the procedures by motion analysis of a compound planar pendulum from indirect opto-electronic measurements of markers attached to an elastic appendage that is restrained to slide along the rigid body long axis. The elastic frequency is close to the pendulum frequency, as in the biomechanical problem, where the soft tissue frequency content is similar to the actual movement of the bones. Comparison of the real pendulum angle to that obtained by several estimation procedures--PCT, Kalman filter followed by PCT, and low pass filter followed by PCT--enables evaluation of the accuracy of the procedures. When comparing the maximal amplitude, no effect was noted by adding the Kalman filter; however, a closer look at the signal revealed that the estimated angle based only on the PCT method was very noisy with fluctuation, while the estimated angle based on the Kalman filter followed by the PCT was a smooth signal. It was also noted that the instantaneous frequencies obtained from the estimated angle based on the PCT method are more dispersed than those obtained from the estimated angle based on Kalman filter followed by the PCT method. Addition of a Kalman filter to the PCT method in the estimation procedure of rigid body motion results in a smoother signal that better represents the real motion, with less signal distortion than when using a digital low pass filter. Furthermore, it can be concluded that adding a Kalman filter to the PCT procedure substantially reduces the dispersion of the maximal and minimal
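
    For reference, the smoothing stage can be sketched as a textbook constant-velocity Kalman filter run over a noisy marker coordinate before the cluster-based pose estimate; the noise levels and the test signal below are invented, not the study's pendulum data.

```python
# Constant-velocity Kalman filter over a noisy 1D marker trajectory.
import numpy as np

dt = 0.01
F = np.array([[1.0, dt], [0.0, 1.0]])     # state: [position, velocity]
H = np.array([[1.0, 0.0]])                # we observe position only
Q = 1e-4 * np.eye(2)                      # process noise covariance
R = np.array([[1e-2]])                    # measurement noise covariance

x = np.zeros(2)                           # state estimate
P = np.eye(2)                             # state covariance
rng = np.random.default_rng(3)
t = np.arange(0.0, 2.0, dt)
z = np.sin(2 * np.pi * t) + 0.1 * rng.normal(size=t.size)  # noisy marker signal

filtered = []
for zk in z:
    x = F @ x;  P = F @ P @ F.T + Q                        # predict
    S = H @ P @ H.T + R
    Kg = P @ H.T @ np.linalg.inv(S)                        # Kalman gain
    x = x + (Kg @ (zk - H @ x)).ravel()                    # update
    P = (np.eye(2) - Kg @ H) @ P
    filtered.append(x[0])
print(np.std(np.array(filtered)[50:] - np.sin(2 * np.pi * t[50:])))
```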

  4. Method of Check of Statistical Hypotheses for Revealing of “Fraud” Point of Sale

    Directory of Open Access Journals (Sweden)

    T. M. Bolotskaya

    2011-06-01

    Full Text Available The application of statistical hypothesis testing to reveal "fraud" points of sale that work with purchasing cards and are suspected of performing unauthorized operations is analyzed. On the basis of the results obtained, an algorithm is developed that provides an assessment of terminal operation in off-line mode.

  5. THREE-POINT BACKWARD FINITE DIFFERENCE METHOD FOR SOLVING A SYSTEM OF MIXED HYPERBOLIC-PARABOLIC PARTIAL DIFFERENTIAL EQUATIONS. (R825549C019)

    Science.gov (United States)

    A three-point backward finite-difference method has been derived for a system of mixed hyperbolic-parabolic (convection-diffusion) partial differential equations (mixed PDEs). The method resorts to the three-point backward differenci...
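
    The formula in question is the standard second-order backward difference u'(x_i) ≈ (3u_i - 4u_{i-1} + u_{i-2}) / (2*dx); the snippet below verifies its O(dx²) accuracy on a smooth test function (the test function and step size are arbitrary choices).

```python
# Verify the three-point backward difference on u(x) = exp(2x).
import numpy as np

dx = 0.01
x = np.arange(0.0, 1.0 + dx, dx)
u = np.exp(2 * x)                                   # test function, u' = 2e^{2x}

du = (3 * u[2:] - 4 * u[1:-1] + u[:-2]) / (2 * dx)  # derivative at x[2:]
err = np.max(np.abs(du - 2 * np.exp(2 * x[2:])))
print(f"max error = {err:.2e}")                     # shrinks as O(dx^2)
```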

  6. The Study Related to the Execution of a Triangulation Network in the Dump of Rovinari Pit, in Order to be Restored to the Economic Circuit

    Directory of Open Access Journals (Sweden)

    George Popescu

    2016-11-01

    Full Text Available The lignite mining extraction within the mining perimeter in Rovinari is carried out through open-pit mining works, using large equipment for the excavation, transport and storage of the mined material. These surfaces are currently being set up in the area of level two of the dump, in the west and north-west part of Rovinari pit. In order to carry out the set-up works and to follow up the stability of the pit levels, it is necessary to maintain the triangulation network.

  7. Barriers, facilitators and preferences for the physical activity of school children. Rationale and methods of a mixed study

    Directory of Open Access Journals (Sweden)

    Martínez-Andrés María

    2012-09-01

    Full Text Available Abstract Background Physical activity interventions in the school environment seem to have shown some effectiveness in the control of the current obesity epidemic in children. However, the complexity of behaviors and the diversity of influences related to this problem suggest that we urgently need new lines of insight about how to support comprehensive population strategies of intervention. The aim of this study was to know the perceptions of the children from Cuenca about their environmental barriers, facilitators and preferences for physical activity. Methods/Design We used a mixed-method design combining two qualitative methods (analysis of individual drawings and focus groups) together with the quantitative measurement of physical activity through accelerometers, in a theoretical sample of 121 children aged 9 and 11 years from schools in the province of Cuenca, Spain. Conclusions A mixed-method study is an appropriate strategy to know the perceptions of children about barriers and facilitators for physical activity, using qualitative methods for a deeper understanding of their points of view, and quantitative methods to triangulate the discourse of participants with empirical data. We consider this an innovative approach that could provide knowledge for the development of more effective interventions to prevent childhood overweight.

  8. Improved Full-Newton Step O(nL) Infeasible Interior-Point Method for Linear Optimization

    OpenAIRE

    Gu, G.; Mansouri, H.; Zangiabadi, M.; Bai, Y.Q.; Roos, C.

    2009-01-01

    We present several improvements of the full-Newton step infeasible interior-point method for linear optimization introduced by Roos (SIAM J. Optim. 16(4):1110–1136, 2006). Each main step of the method consists of a feasibility step and several centering steps. We use a more natural feasibility step, which targets the μ+-center of the next pair of perturbed problems. As for the centering steps, we apply a sharper quadratic convergence result, which leads to a slightly wider neighborhood for th...

  9. Automatic markerless registration of point clouds with semantic-keypoint-based 4-points congruent sets

    Science.gov (United States)

    Ge, Xuming

    2017-08-01

    The coarse registration of point clouds from urban building scenes has become a key topic in applications of terrestrial laser scanning technology. Sampling-based algorithms in the random sample consensus (RANSAC) model have emerged as mainstream solutions to address coarse registration problems. In this paper, we propose a novel combined solution to automatically align two markerless point clouds from building scenes. Firstly, the method segments non-ground points from ground points. Secondly, the proposed method detects feature points from each cross section and then obtains semantic keypoints by connecting feature points with specific rules. Finally, the detected semantic keypoints from two point clouds act as inputs to a modified 4PCS algorithm. Examples are presented and the results compared with those of K-4PCS to demonstrate the main contributions of the proposed method, which are the extension of the original 4PCS to handle heavy datasets and the use of semantic keypoints to improve K-4PCS in relation to registration accuracy and computational efficiency.

  10. Optimization of Control Points Number at Coordinate Measurements based on the Monte-Carlo Method

    Science.gov (United States)

    Korolev, A. A.; Kochetkov, A. V.; Zakharov, O. V.

    2018-01-01

    Improving the quality of products causes an increase in the requirements for the accuracy of the dimensions and shape of the surfaces of the workpieces. This, in turn, raises the requirements for the accuracy and productivity of measuring the workpieces. The use of coordinate measuring machines is currently the most effective measuring tool for solving similar problems. The article proposes a method for optimizing the number of control points using Monte Carlo simulation. Based on the measurement of a small sample from batches of workpieces, statistical modeling is performed, which allows one to obtain interval estimates of the measurement error. This approach is demonstrated by examples of applications for flatness, cylindricity and sphericity. Four options of uniform and uneven arrangement of control points are considered and their comparison is given. It is revealed that when the number of control points decreases, the arithmetic mean decreases, the standard deviation of the measurement error increases and the probability of the measurement α-error increases. In general, it has been established that it is possible to reduce the number of control points several times over while maintaining the required measurement accuracy.
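
    The Monte Carlo idea can be sketched as follows, assuming a flatness measurement with invented probe noise: repeatedly sample n noisy control points from an ideal plane, fit a plane, and watch the peak-to-valley flatness estimate change as points are removed. This reproduces the qualitative trend reported in the abstract (the arithmetic mean shrinks with fewer points), not the paper's numbers.

```python
# Monte-Carlo estimate of how flatness measurement depends on the
# number of control points.
import numpy as np

rng = np.random.default_rng(4)
sigma = 0.002                      # probe noise [mm]; the true surface is an ideal plane

def measured_flatness(n_points):
    """Fit a plane to n noisy points; return the peak-to-valley residual."""
    xy = rng.uniform(0, 100, size=(n_points, 2))       # probe positions [mm]
    z = sigma * rng.normal(size=n_points)              # deviations from the true plane
    A = np.column_stack([xy, np.ones(n_points)])
    coef, *_ = np.linalg.lstsq(A, z, rcond=None)
    resid = z - A @ coef
    return resid.max() - resid.min()

for n in (64, 16, 8, 4):
    trials = [measured_flatness(n) for _ in range(2000)]
    print(n, f"mean={np.mean(trials):.4f} mm  std={np.std(trials):.4f} mm")
```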

  11. Calculation and decomposition of spot price using interior point nonlinear optimisation methods

    International Nuclear Information System (INIS)

    Xie, K.; Song, Y.H.

    2004-01-01

    Optimal pricing for real and reactive power is a very important issue in a deregulated environment. This paper summarises the optimal pricing problem as an extended optimal power flow problem. Then, spot prices are decomposed into different components reflecting various ancillary services. The derivation of the proposed decomposition model is described in detail. A primal-dual interior point method is applied to avoid 'go' 'no go' gauge. In addition, the proposed approach can be extended to cater for other types of ancillary services. (author)

  12. TREEDE, Point Fluxes and Currents Based on Track Rotation Estimator by Monte-Carlo Method

    International Nuclear Information System (INIS)

    Dubi, A.

    1985-01-01

    1 - Description of problem or function: TREEDE is a Monte Carlo transport code based on the Track Rotation estimator, used, in general, to calculate fluxes and currents at a point. This code served as a test code in the development of the concept of the Track Rotation estimator, and therefore analogue Monte Carlo is used (i.e. no importance biasing). 2 - Method of solution: The basic idea is to follow the particle's track in the medium and then to rotate it such that it passes through the detector point. That is, rotational symmetry considerations (even in non-spherically symmetric configurations) are applied to every history, so that a very large fraction of the track histories can be rotated and made to pass through the point of interest; in this manner the 1/r² singularity in the un-collided flux estimator (next event estimator) is avoided. TREEDE, being a test code, is used to estimate leakage or in-medium fluxes at given points in a 3-dimensional finite box, where the source is an isotropic point source at the centre of the z = 0 surface. However, many of the constraints of geometry and source can be easily removed. The medium is assumed homogeneous with isotropic scattering, and one energy group only is considered. 3 - Restrictions on the complexity of the problem: One energy group, a homogeneous medium, isotropic scattering

  13. Gran method for end point anticipation in monosegmented flow titration

    Directory of Open Access Journals (Sweden)

    Aquino Emerson V

    2004-01-01

    Full Text Available An automatic potentiometric monosegmented flow titration procedure based on the Gran linearisation approach has been developed. The controlling program can estimate the end point of the titration after the addition of three or four aliquots of titrant. Alternatively, the end point can be determined by the second derivative procedure. In this case, additional volumes of titrant are added until the vicinity of the end point, and three points before and after the stoichiometric point are used for end point calculation. The performance of the system was assessed by the determination of chloride in isotonic beverages and parenteral solutions. The system employs a tubular Ag2S/AgCl indicator electrode. A typical titration, performed according to the IUPAC definition, requires only 60 mL of sample and about the same volume of titrant (AgNO3 solution). A complete titration can be carried out in 1 - 5 min. The accuracy and precision (relative standard deviation of ten replicates) are 2% and 1% for the Gran and 1% and 0.5% for the Gran/derivative end point determination procedures, respectively. The proposed system reduces the time needed to perform a titration, ensuring low sample and reagent consumption, and fully automatic sampling and titrant addition in a calibration-free titration protocol.
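
    A sketch of the Gran anticipation on simulated data: before the equivalence point the Gran function G(V) = (V0 + V) * 10^(-E/s) is linear in the titrant volume and extrapolates to zero at the equivalence volume, which is why three or four early additions suffice. All concentrations, volumes and the electrode offset below are invented.

```python
# Gran end-point anticipation for a simulated Cl-/AgNO3 titration.
import numpy as np

V0, C0, Ct = 1.0, 0.01, 0.01        # sample vol [mL], Cl- and AgNO3 conc [M]
s, E0 = 59.2, 250.0                 # Nernst slope [mV/decade], offset [mV]
Ve = C0 * V0 / Ct                   # true equivalence volume = 1.0 mL

V = np.array([0.2, 0.4, 0.6, 0.8])  # four early titrant additions [mL]
cl = (C0 * V0 - Ct * V) / (V0 + V)  # chloride remaining before equivalence
E = E0 - s * np.log10(cl)           # simulated electrode response [mV]

G = (V0 + V) * 10 ** (-E / s)       # Gran function, linear in V
slope, intercept = np.polyfit(V, G, 1)
print(f"estimated end point: {-intercept / slope:.3f} mL (true {Ve} mL)")
```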

  14. Understanding Black Male Student Athletes' Experiences at a Historically Black College/University: A Mixed Methods Approach

    Science.gov (United States)

    Cooper, Joseph N.; Hall, Jori

    2016-01-01

    The purpose of this article is to describe how a mixed methods approach was employed to acquire a better understanding of Black male student athletes' experiences at a historically Black college/university in the southeastern United States. A concurrent triangulation design was incorporated to allow different data sources to be collected and…

  15. Some recent developments of the immersed interface method for flow simulation

    Science.gov (United States)

    Xu, Sheng

    2017-11-01

    The immersed interface method is a general methodology for solving PDEs subject to interfaces. In this talk, I will give an overview of some recent developments of the method toward the enhancement of its robustness for flow simulation. In particular, I will present with numerical results how to capture boundary conditions on immersed rigid objects, how to adopt interface triangulation in the method, and how to parallelize the method for flow with moving objects. With these developments, the immersed interface method can achieve accurate and efficient simulation of a flow involving multiple moving complex objects. Thanks to NSF for the support of this work under Grant NSF DMS 1320317.

  16. A Field Evaluation of the Time-of-Detection Method to Estimate Population Size and Density for Aural Avian Point Counts

    Directory of Open Access Journals (Sweden)

    Mathew W. Alldredge

    2007-12-01

    Full Text Available The time-of-detection method for aural avian point counts is a new method of estimating abundance, allowing for uncertain probability of detection. The method has been specifically designed to allow for variation in singing rates of birds. It involves dividing the time interval of the point count into several subintervals and recording the detection history of the subintervals when each bird sings. The method can be viewed as generating data equivalent to closed capture-recapture information. The method is different from the distance and multiple-observer methods in that it is not required that all the birds sing during the point count. As this method is new and there is some concern as to how well individual birds can be followed, we carried out a field test of the method using simulated known populations of singing birds, using a laptop computer to send signals to audio stations distributed around a point. The system mimics actual aural avian point counts, but also allows us to know the size and spatial distribution of the populations we are sampling. Fifty 8-min point counts (broken into four 2-min intervals) using eight species of birds were simulated. The singing rate of an individual bird of a species was simulated following a Markovian process (singing bouts followed by periods of silence), which we felt was more realistic than a truly random process. The main emphasis of our paper is to compare results from species singing at (high and low) homogeneous rates per interval with those singing at (high and low) heterogeneous rates. Population size was estimated accurately for the species simulated, with a high homogeneous probability of singing. Populations of simulated species with lower but homogeneous singing probabilities were somewhat underestimated. Populations of species simulated with heterogeneous singing probabilities were substantially underestimated. Underestimation was caused by both the very low detection probabilities of all distant

  17. Understanding the Effects of Time on Collaborative Learning Processes in Problem Based Learning: A Mixed Methods Study

    Science.gov (United States)

    Hommes, J.; Van den Bossche, P.; de Grave, W.; Bos, G.; Schuwirth, L.; Scherpbier, A.

    2014-01-01

    Little is known how time influences collaborative learning groups in medical education. Therefore a thorough exploration of the development of learning processes over time was undertaken in an undergraduate PBL curriculum over 18 months. A mixed-methods triangulation design was used. First, the quantitative study measured how various learning…

  18. The Multiscale Material Point Method for Simulating Transient Responses

    Science.gov (United States)

    Chen, Zhen; Su, Yu-Chen; Zhang, Hetao; Jiang, Shan; Sewell, Thomas

    2015-06-01

    To effectively simulate multiscale transient responses such as impact and penetration without invoking master/slave treatment, the multiscale material point method (Multi-MPM) is being developed in which molecular dynamics at nanoscale and dissipative particle dynamics at mesoscale might be concurrently handled within the framework of the original MPM at microscale (continuum level). The proposed numerical scheme for concurrently linking different scales is described in this paper with simple examples for demonstration. It is shown from the preliminary study that the mapping and re-mapping procedure used in the original MPM could coarse-grain the information at fine scale and that the proposed interfacial scheme could provide a smooth link between different scales. Since the original MPM is an extension from computational fluid dynamics to solid dynamics, the proposed Multi-MPM might also become robust for dealing with multiphase interactions involving failure evolution. This work is supported in part by DTRA and NSFC.
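
    The mapping step mentioned above is the core of the original, continuum-level MPM. Below is a minimal 1-D sketch of the particle-to-grid mapping with linear (hat) shape functions; the grid spacing and particle data are illustrative, and the multiscale coupling of the paper is not shown.

      import numpy as np

      dx, n_nodes = 0.1, 11                     # grid spacing, node count
      xp = np.array([0.23, 0.27, 0.51])         # particle positions
      mp = np.array([1.0, 1.0, 2.0])            # particle masses
      vp = np.array([0.5, 0.4, -0.1])           # particle velocities

      mass = np.zeros(n_nodes)
      mom = np.zeros(n_nodes)
      for x, m, v in zip(xp, mp, vp):
          i = int(x / dx)                       # left node of the particle's cell
          w_right = x / dx - i                  # linear shape-function weight
          for node, w in ((i, 1.0 - w_right), (i + 1, w_right)):
              mass[node] += w * m               # map mass to the grid
              mom[node] += w * m * v            # map momentum to the grid

      vel = np.divide(mom, mass, out=np.zeros_like(mom), where=mass > 0)
      print(vel)   # grid velocities; the grid update and re-mapping follow

    The coarse-graining effect noted in the abstract is visible even here: the two nearby particles contribute to shared grid nodes, whose velocity is a mass-weighted average.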

  19. Hardware-accelerated Point Generation and Rendering of Point-based Impostors

    DEFF Research Database (Denmark)

    Bærentzen, Jakob Andreas

    2005-01-01

    This paper presents a novel scheme for generating points from triangle models. The method is fast and lends itself well to implementation using graphics hardware. The triangle-to-point conversion is done by rendering the models, and the rendering may be performed procedurally or by a black-box API. … I describe the technique in detail and discuss how the generated point sets can easily be used as impostors for the original triangle models used to create the points. Since the points reside solely in GPU memory, these impostors are fairly efficient. Source code is available online. …
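
    The paper performs the triangle-to-point conversion on the GPU by rendering. As a CPU-side illustration of the same conversion idea, a minimal sketch of uniform point sampling on a triangle via barycentric coordinates follows; this is a generic technique, not the paper's rendering-based one.

      import random

      def sample_point_on_triangle(a, b, c, rng=random):
          # Uniform sample via barycentric coordinates; the square root
          # re-warps the unit square so density is uniform over the area.
          r1, r2 = rng.random(), rng.random()
          s = r1 ** 0.5
          u, v, w = 1.0 - s, s * (1.0 - r2), s * r2
          return tuple(u * pa + v * pb + w * pc for pa, pb, pc in zip(a, b, c))

      tri = ((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))
      points = [sample_point_on_triangle(*tri) for _ in range(1000)]
      print(points[0])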

  20. A resilience perspective to water risk management: case-study application of the adaptation tipping point method

    Science.gov (United States)

    Gersonius, Berry; Ashley, Richard; Jeuken, Ad; Nasruddin, Fauzy; Pathirana, Assela; Zevenbergen, Chris

    2010-05-01

    In a context of high uncertainty about hydrological variables due to climate change and other factors, the development of updated risk management approaches is as important as—if not more important than—the provision of improved data and forecasts of the future. Traditional approaches to adaptation attempt to manage future water risks to cities with the use of the predict-then-adapt method. This method uses hydrological change projections as the starting point to identify adaptive strategies, which is followed by analysing the cause-effect chain based on some sort of Pressures-State-Impact-Response (PSIR) scheme. The predict-then-adapt method presumes that it is possible to define a singular (optimal) adaptive strategy according to a most likely or average projection of future change. A key shortcoming of the method is, however, that the planning of water management structures is typically decoupled from forecast uncertainties and is, as such, inherently inflexible. This means that there is an increased risk of under- or over-adaptation, resulting in either mal-functioning or unnecessary costs. Rather than taking a traditional approach, responsible water risk management requires an alternative approach to adaptation that recognises and cultivates resiliency for change. The concept of resiliency relates to the capability of complex socio-technical systems to make aspirational levels of functioning attainable despite the occurrence of possible changes. Focusing on resiliency does not attempt to reduce uncertainty associated with future change, but rather to develop better ways of managing it. This makes it a particularly relevant perspective for adaptation to long-term hydrological change. Although resiliency is becoming more refined as a theory, the application of the concept to water risk management is still in an initial phase. Different methods are used in practice to support the implementation of a resilience-focused approach. Typically these approaches

  1. A novel method of measuring the concentration of anaesthetic vapours using a dew-point hygrometer.

    Science.gov (United States)

    Wilkes, A R; Mapleson, W W; Mecklenburgh, J S

    1994-02-01

    The Antoine equation relates the saturated vapour pressure of a volatile substance, such as an anaesthetic agent, to the temperature. The measurement of the 'dew-point' of a dry gas mixture containing a volatile anaesthetic agent by a dew-point hygrometer permits the determination of the partial pressure of the anaesthetic agent. The accuracy of this technique is limited only by the accuracy of the Antoine coefficients and of the temperature measurement. Comparing measurements by the dew-point method with measurements by refractometry showed systematic discrepancies of up to 0.2% and random discrepancies with SDs of up to 0.07% concentration, over the 1% to 5% range, for three volatile anaesthetics. The systematic discrepancies may be due to errors in available data for the vapour pressures and/or the refractive indices of the anaesthetics.
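
    The computation behind the technique is compact: the Antoine equation gives log10(p) = A - B / (C + T), so a measured dew point yields the agent's partial pressure, and dividing by ambient pressure gives the concentration. A minimal sketch follows; the coefficients below are illustrative placeholders, not the agent-specific values on which the paper's accuracy depends.

      def antoine_vapour_pressure(t_celsius, A, B, C):
          # Antoine equation: log10(p) = A - B / (C + T)
          return 10.0 ** (A - B / (C + t_celsius))

      # Illustrative coefficients only (p in kPa, T in deg C).
      A, B, C = 6.1, 1200.0, 220.0
      p_partial = antoine_vapour_pressure(12.0, A, B, C)   # dew point = 12 C
      concentration = 100.0 * p_partial / 101.325          # vol% at 1 atm
      print(round(concentration, 2), "%")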

  2. METHOD OF GREEN FUNCTIONS IN MATHEMATICAL MODELLING FOR TWO-POINT BOUNDARY-VALUE PROBLEMS

    Directory of Open Access Journals (Sweden)

    E. V. Dikareva

    2015-01-01

    In many applied problems of control, optimization, system theory, theoretical and construction mechanics, as well as in problems involving string-and-node structures, oscillation theory, the theory of elasticity and plasticity, and mechanical problems connected with fracture dynamics and shock waves, the main instrument of study is the theory of high-order ordinary differential equations. This methodology is also applied to mathematical models in graph theory with different partitionings based on differential equations. Such equations are used not only for the theoretical foundation of mathematical models but also for constructing numerical methods and computer algorithms. These models are studied using the Green function method. The paper first presents the necessary theoretical background on the Green function method for multi-point boundary-value problems. The main equation is discussed, and the notions of multi-point boundary conditions, boundary functionals, degenerate and non-degenerate problems, and the fundamental matrix of solutions are introduced. In the main part, the problem under study is formulated in terms of shocks and deformations in the boundary conditions, and the main results are then stated. Theorem 1 proves conditions for the existence and uniqueness of solutions. Theorem 2 proves conditions for the strict positivity and comparability of a pair of solutions. Theorem 3 establishes the existence of, and estimates for, the least eigenvalue, together with spectral properties and the positivity of eigenfunctions. Theorem 4 proves the weighted positivity of the Green function. Possible applications to signal theory and transmutation operators are considered.
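
    For orientation, the simplest non-degenerate two-point case admits a closed-form Green function: for u'' = f on (0, 1) with u(0) = u(1) = 0, G(x, s) = s(x - 1) for s <= x and x(s - 1) for s >= x, and u(x) is the integral of G(x, s) f(s) over (0, 1). The sketch below verifies this numerically against the exact solution for f = 1 (namely u = (x^2 - x)/2); it illustrates only this textbook case, not the paper's multi-point theory.

      import numpy as np

      def G(x, s):
          # Green function of u'' = f on (0, 1) with u(0) = u(1) = 0.
          return np.where(s <= x, s * (x - 1.0), x * (s - 1.0))

      def solve_bvp(f, n=2001):
          # u(x) = integral_0^1 G(x, s) f(s) ds, via trapezoidal weights.
          s = np.linspace(0.0, 1.0, n)
          w = np.full(n, s[1] - s[0])
          w[0] = w[-1] = 0.5 * (s[1] - s[0])
          return s, np.array([np.sum(G(x, s) * f(s) * w) for x in s])

      x, u = solve_bvp(lambda s: np.ones_like(s))
      print(np.max(np.abs(u - (x**2 - x) / 2.0)))   # close to 0: matches exact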

  3. A method for computing the stationary points of a function subject to linear equality constraints

    International Nuclear Information System (INIS)

    Uko, U.L.

    1989-09-01

    We give a new method for the numerical calculation of stationary points of a function when it is subject to equality constraints. An application to the solution of linear equations is given, together with a numerical example. (author). 5 refs
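
    The abstract does not detail the author's algorithm. As background, a generic sketch of the standard approach for a quadratic objective is shown below: the stationary point of f(x) = 0.5 x^T H x + g^T x subject to A x = b solves the Lagrange (KKT) system H x + A^T lam = -g, A x = b. All matrices here are illustrative.

      import numpy as np

      H = np.array([[4.0, 1.0], [1.0, 3.0]])   # Hessian of the objective
      g = np.array([-1.0, -2.0])               # gradient term
      A = np.array([[1.0, 1.0]])               # linear equality constraints
      b = np.array([1.0])

      n, m = H.shape[0], A.shape[0]
      K = np.block([[H, A.T], [A, np.zeros((m, m))]])
      rhs = np.concatenate([-g, b])
      sol = np.linalg.solve(K, rhs)
      x, lam = sol[:n], sol[n:]
      print("stationary point:", x, "multiplier:", lam)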

  4. MOCC: A Fast and Robust Correlation-Based Method for Interest Point Matching under Large Scale Changes

    OpenAIRE

    Wang Hao; Gao Wen; Huang Qingming; Zhao Feng

    2010-01-01

    Similarity measures based on correlation have been used extensively for matching tasks. However, traditional correlation-based image matching methods are sensitive to rotation and scale changes. This paper presents a fast correlation-based method for matching two images with large rotation and significant scale changes. Multiscale oriented corner correlation (MOCC) is used to evaluate the degree of similarity between the feature points. The method is rotation invariant and capable of matchin...
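
    MOCC itself is multiscale, oriented, and rotation invariant; what follows is only the plain zero-mean normalized cross-correlation (NCC) score that such correlation-based matchers build on, as a minimal, self-contained sketch.

      import numpy as np

      def ncc(patch_a, patch_b):
          # Zero-mean normalized cross-correlation of two equal-size patches;
          # 1.0 means identical up to brightness/contrast (gain/offset) changes.
          a = patch_a - patch_a.mean()
          b = patch_b - patch_b.mean()
          denom = np.sqrt((a * a).sum() * (b * b).sum())
          return float((a * b).sum() / denom) if denom > 0 else 0.0

      rng = np.random.default_rng(0)
      p = rng.random((11, 11))
      print(ncc(p, 2.0 * p + 0.5))   # gain/offset invariance -> 1.0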

  5. Prospective comparison of liver stiffness measurements between two point shear wave elastography methods: Virtual touch quantification and elastography point quantification

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, Hyun Suk; Lee, Jeong Min; Yoon, Jeong Hee; Lee, Dong Ho; Chang, Won; Han, Joon Koo [Seoul National University Hospital, Seoul (Korea, Republic of)

    2016-09-15

    To prospectively compare technical success rate and reliable measurements of virtual touch quantification (VTQ) elastography and elastography point quantification (ElastPQ), and to correlate liver stiffness (LS) measurements obtained by the two elastography techniques. Our study included 85 patients, 80 of whom were previously diagnosed with chronic liver disease. The technical success rate and reliable measurements of the two kinds of point shear wave elastography (pSWE) techniques were compared by χ² analysis. LS values measured using the two techniques were compared and correlated via Wilcoxon signed-rank test, Spearman correlation coefficient, and 95% Bland-Altman limit of agreement. The intraobserver reproducibility of ElastPQ was determined by 95% Bland-Altman limit of agreement and intraclass correlation coefficient (ICC). The two pSWE techniques showed similar technical success rate (98.8% for VTQ vs. 95.3% for ElastPQ, p = 0.823) and reliable LS measurements (95.3% for VTQ vs. 90.6% for ElastPQ, p = 0.509). The mean LS measurements obtained by VTQ (1.71 ± 0.47 m/s) and ElastPQ (1.66 ± 0.41 m/s) were not significantly different (p = 0.209). The LS measurements obtained by the two techniques showed strong correlation (r = 0.820); in addition, the 95% limit of agreement of the two methods was 27.5% of the mean. Finally, the ICC of repeat ElastPQ measurements was 0.991. Virtual touch quantification and ElastPQ showed similar technical success rate and reliable measurements, with strongly correlated LS measurements. However, the two methods are not interchangeable due to the large limit of agreement.

  6. Neuromuscular control of the point to point and oscillatory movements of a sagittal arm with the actor-critic reinforcement learning method.

    Science.gov (United States)

    Golkhou, Vahid; Parnianpour, Mohamad; Lucas, Caro

    2005-04-01

    In this study, we used a single-link system with a pair of muscles, excited by alpha and gamma signals, to achieve both point-to-point and oscillatory movements with variable amplitude and frequency. The system is highly nonlinear in all its physical and physiological attributes. The major physiological characteristics of this system are the simultaneous activation of a pair of nonlinear muscle-like actuators for control purposes, the existence of nonlinear spindle-like sensors and a Golgi tendon organ-like sensor, and the actions of gravity and external loading. Transmission delays are included in the afferent and efferent neural paths to account for a more accurate representation of the reflex loops. A reinforcement learning method with an actor-critic (AC) architecture, in place of the middle and low levels of the central nervous system (CNS), is used to track a desired trajectory. The actor in this structure is a two-layer feedforward neural network, and the critic is a model of the cerebellum. The critic is trained by the state-action-reward-state-action (SARSA) method, and it trains the actor by supervised learning based on prior experiences. Simulation studies of oscillatory movements based on the proposed algorithm demonstrate excellent tracking capability; after 280 epochs the RMS errors for the position and velocity profiles were 0.02 rad and 0.04 rad/s, respectively.
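
    The SARSA rule used to train the critic is standard; a minimal tabular sketch on a toy chain task follows, standing in for the paper's continuous musculoskeletal setting and neural-network critic. The task, states, and constants are all illustrative.

      import random

      # Minimal tabular SARSA on a toy chain: move left/right, reward at goal.
      n_states, actions, alpha, gamma, eps = 6, (1, -1), 0.1, 0.95, 0.1
      Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
      rng = random.Random(0)

      def policy(s):
          # Epsilon-greedy action selection.
          if rng.random() < eps:
              return rng.choice(actions)
          return max(actions, key=lambda a: Q[(s, a)])

      for _ in range(2000):
          s, a = 0, policy(0)
          while s < n_states - 1:
              s2 = max(0, s + a)
              r = 1.0 if s2 == n_states - 1 else 0.0
              a2 = policy(s2)
              # SARSA update: Q(s,a) += alpha * (r + gamma * Q(s',a') - Q(s,a))
              Q[(s, a)] += alpha * (r + gamma * Q[(s2, a2)] - Q[(s, a)])
              s, a = s2, a2

      print(max(Q[(0, a)] for a in actions))   # ~gamma^4: learned value at start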

  7. Detecting change-points in extremes

    KAUST Repository

    Dupuis, D. J.

    2015-01-01

    Even though most work on change-point estimation focuses on changes in the mean, changes in the variance or in the tail distribution can lead to more extreme events. In this paper, we develop a new method of detecting and estimating the change-points in the tail of multiple time series data. In addition, we adapt existing tail change-point detection methods to our specific problem and conduct a thorough comparison of different methods in terms of performance on the estimation of change-points and computational time. We also examine three locations on the U.S. northeast coast and demonstrate that the methods are useful for identifying changes in seasonally extreme warm temperatures.
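
    To illustrate the flavor of such detectors, the sketch below scans for a single change in the rate of exceedances over a high threshold, using a Bernoulli likelihood-ratio split. This is a simplified stand-in: the paper's methods target changes in the tail distribution itself, not merely the exceedance rate.

      import math, random

      def exceedance_changepoint(x, u):
          # Scan all split points for a change in P(X > u).
          z = [1 if v > u else 0 for v in x]
          n, total = len(z), sum(1 for v in x if v > u)

          def ll(k, m):
              # Log-likelihood of m exceedances among k observations.
              if m == 0 or m == k:
                  return 0.0
              p = m / k
              return m * math.log(p) + (k - m) * math.log(1 - p)

          return max(range(2, n - 1),
                     key=lambda t: ll(t, sum(z[:t])) + ll(n - t, total - sum(z[:t])))

      rng = random.Random(3)
      x = [rng.gauss(0, 1) for _ in range(300)] + [rng.gauss(0.8, 1) for _ in range(300)]
      print("estimated change-point:", exceedance_changepoint(x, u=1.5))   # near 300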

  8. An Introduction to the Material Point Method using a Case Study from Gas Dynamics

    International Nuclear Information System (INIS)

    Tran, L. T.; Kim, J.; Berzins, M.

    2008-01-01

    The Material Point Method (MPM) developed by Sulsky and colleagues is currently being used, with considerable success, to solve many challenging problems involving large deformations and/or fragmentations as part of the Uintah code created by the CSAFE project. In order to understand the properties of this method, an analysis of the computational properties of MPM is undertaken in the context of model problems from gas dynamics. One aspect of the MPM method in the form used here is shown to have first-order accuracy. Computational experiments using particle redistribution are described and show that smooth results with first-order accuracy may be obtained.

  9. A method of undifferenced ambiguity resolution for GPS+GLONASS precise point positioning.

    Science.gov (United States)

    Yi, Wenting; Song, Weiwei; Lou, Yidong; Shi, Chuang; Yao, Yibin

    2016-05-25

    Integer ambiguity resolution is critical for achieving positions of high precision and for shortening the convergence time of precise point positioning (PPP). However, GLONASS adopts the signal processing technology of frequency division multiple access and results in inter-frequency code biases (IFCBs), which are currently difficult to correct. This bias makes the methods proposed for GPS ambiguity fixing unsuitable for GLONASS. To realize undifferenced GLONASS ambiguity fixing, we propose an undifferenced ambiguity resolution method for GPS+GLONASS PPP, which considers the IFCBs estimation. The experimental result demonstrates that the success rate of GLONASS ambiguity fixing can reach 75% through the proposed method. Compared with the ambiguity float solutions, the positioning accuracies of ambiguity-fixed solutions of GLONASS-only PPP are increased by 12.2%, 20.9%, and 10.3%, and that of the GPS+GLONASS PPP by 13.0%, 35.2%, and 14.1% in the North, East and Up directions, respectively.
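
    As a rough illustration of what "ambiguity fixing" means, the sketch below rounds float ambiguities to integers and accepts them only if a simple ratio test passes. This is a naive stand-in: production PPP fixing decorrelates the ambiguities first (e.g., with LAMBDA) and, as the paper stresses, must model the GLONASS IFCBs; all numbers below are illustrative.

      import numpy as np

      def fix_by_rounding(a_float, Q, ratio_threshold=3.0):
          # Naive integer fixing: round, then accept only if the nearest
          # integer vector beats the runner-up by a Q-weighted ratio test.
          a_int = np.round(a_float)
          Qi = np.linalg.inv(Q)

          def dist2(cand):
              d = a_float - cand
              return float(d @ Qi @ d)

          # Runner-up: flip the component rounded with the least margin.
          j = int(np.argmin(np.abs(np.abs(a_float - a_int) - 0.5)))
          second = a_int.copy()
          second[j] += np.sign(a_float[j] - a_int[j]) or 1.0
          ratio = dist2(second) / max(dist2(a_int), 1e-12)
          return (a_int, True) if ratio > ratio_threshold else (a_float, False)

      a = np.array([3.04, -1.97, 5.12])       # float ambiguities (cycles)
      Q = np.diag([0.01, 0.02, 0.05])         # their covariance
      print(fix_by_rounding(a, Q))            # fixed to [3, -2, 5]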

  10. Performance Analysis of a Maximum Power Point Tracking Technique using Silver Mean Method

    Directory of Open Access Journals (Sweden)

    Shobha Rani Depuru

    2018-01-01

    This paper presents a simple and particularly efficacious maximum power point tracking (MPPT) algorithm based on the Silver Mean Method (SMM). The method operates by choosing a search interval from the P-V characteristic of the given solar array and converges to the MPP of the solar photovoltaic (SPV) system by shrinking that interval. After the maximum power is reached, the algorithm stops shrinking and maintains a constant voltage until the next interval is decided. The tracking capability, efficiency, and performance of the proposed algorithm are validated by simulation and experimental results with a 100 W solar panel under variable temperature and irradiance conditions. The results confirm that, even without any perturbation and observation process, the proposed method outperforms the traditional perturb and observe (P&O) method, demonstrating far better steady-state output, higher accuracy, and higher efficiency.
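
    The interval-shrinking idea can be sketched as a section search for the maximum of a unimodal P-V curve, with split points derived from the silver ratio 1 + sqrt(2). This is only a generic sketch under that assumption; the paper's exact update rule and the toy panel model below are not taken from the source.

      # Split fraction from the silver ratio: 1 / (1 + (1 + sqrt(2))) ~ 0.293.
      RHO = 1.0 / (2.0 + 2.0 ** 0.5)

      def panel_power(v):
          # Toy P-V curve (illustrative, not a real module model).
          i = 5.0 * (1.0 - (v / 21.0) ** 7)   # current collapses near V_oc
          return v * max(i, 0.0)

      def smm_track(p, a=0.0, b=21.0, tol=0.05):
          # Shrink [a, b] around the maximum of a unimodal function p.
          while b - a > tol:
              v1, v2 = a + RHO * (b - a), b - RHO * (b - a)
              if p(v1) < p(v2):
                  a = v1                      # maximum lies in [v1, b]
              else:
                  b = v2                      # maximum lies in [a, v2]
          return (a + b) / 2.0

      v_mpp = smm_track(panel_power)
      print(v_mpp, panel_power(v_mpp))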

  11. LSHSIM: A Locality Sensitive Hashing based method for multiple-point geostatistics

    Science.gov (United States)

    Moura, Pedro; Laber, Eduardo; Lopes, Hélio; Mesejo, Daniel; Pavanelli, Lucas; Jardim, João; Thiesen, Francisco; Pujol, Gabriel

    2017-10-01

    Reservoir modeling is an important task that permits the representation of a geological region of interest so as to generate a considerable number of possible scenarios. Since its inception, many methodologies have been proposed and, in the last two decades, multiple-point geostatistics (MPS) has been the dominant one. This methodology is strongly based on the concept of the training image (TI) and the use of its characteristics, which are called patterns. In this paper, we propose a new MPS method that combines a technique called Locality Sensitive Hashing (LSH), which accelerates the search for patterns similar to a target one, with a Run-Length Encoding (RLE) compression technique that speeds up the calculation of the Hamming similarity. Experiments with both categorical and continuous images show that LSHSIM is computationally efficient and produces good-quality realizations. In particular, for categorical data, the results suggest that LSHSIM is faster than MS-CCSIM, one of the state-of-the-art methods.
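
    The core LSH-for-Hamming idea can be shown in a few lines with the classic bit-sampling family: each hash table keys a binary pattern by a fixed random subset of its bits, so similar patterns collide with high probability. This sketch covers only that indexing step, not LSHSIM's RLE compression or simulation loop; sizes and seeds are illustrative.

      import random

      def build_lsh_index(patterns, n_tables=8, n_bits=12, seed=7):
          # Bit-sampling LSH for Hamming distance.
          rng = random.Random(seed)
          dim = len(patterns[0])
          tables = []
          for _ in range(n_tables):
              idx = rng.sample(range(dim), n_bits)
              buckets = {}
              for pid, p in enumerate(patterns):
                  buckets.setdefault(tuple(p[i] for i in idx), []).append(pid)
              tables.append((idx, buckets))
          return tables

      def query(tables, patterns, target):
          # Candidates = union of matching buckets; rank by Hamming distance.
          cands = set()
          for idx, buckets in tables:
              cands.update(buckets.get(tuple(target[i] for i in idx), []))
          ham = lambda p: sum(a != b for a, b in zip(p, target))
          return min(cands, key=lambda pid: ham(patterns[pid]), default=None)

      rng = random.Random(0)
      pats = [[rng.randint(0, 1) for _ in range(64)] for _ in range(500)]
      t = build_lsh_index(pats)
      print(query(t, pats, pats[42]))   # -> 42: the pattern finds itself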

  12. A new integrated dual time-point amyloid PET/MRI data analysis method

    International Nuclear Information System (INIS)

    Cecchin, Diego; Zucchetta, Pietro; Turco, Paolo; Bui, Franco; Barthel, Henryk; Tiepolt, Solveig; Sabri, Osama; Poggiali, Davide; Cagnin, Annachiara; Gallo, Paolo; Frigo, Anna Chiara

    2017-01-01

    In the initial evaluation of patients with suspected dementia and Alzheimer's disease, there is no consensus on how to perform semiquantification of amyloid in such a way that it: (1) facilitates visual qualitative interpretation, (2) takes the kinetic behaviour of the tracer into consideration particularly with regard to at least partially correcting for blood flow dependence, (3) analyses the amyloid load based on accurate parcellation of cortical and subcortical areas, (4) includes partial volume effect correction (PVEC), (5) includes MRI-derived topographical indexes, (6) enables application to PET/MRI images and PET/CT images with separately acquired MR images, and (7) allows automation. A method with all of these characteristics was retrospectively tested in 86 subjects who underwent amyloid (18F-florbetaben) PET/MRI in a clinical setting (using images acquired 90-110 min after injection, 53 were classified visually as amyloid-negative and 33 as amyloid-positive). Early images after tracer administration were acquired between 0 and 10 min after injection, and later images were acquired between 90 and 110 min after injection. PVEC of the PET data was carried out using the geometric transfer matrix method. Parametric images and some regional output parameters, including two innovative "dual time-point" indexes, were obtained. Subjects classified visually as amyloid-positive showed a sparse tracer uptake in the primary sensory, motor and visual areas in accordance with the isocortical stage of the topographic distribution of the amyloid plaque (Braak stages V/VI). In patients classified visually as amyloid-negative, the method revealed detectable levels of tracer uptake in the basal portions of the frontal and temporal lobes, areas that are known to be sites of early deposition of amyloid plaques that probably represented early accumulation (Braak stage A) that is typical of normal ageing. There was a strong correlation between age

  13. A new integrated dual time-point amyloid PET/MRI data analysis method

    Energy Technology Data Exchange (ETDEWEB)

    Cecchin, Diego; Zucchetta, Pietro; Turco, Paolo; Bui, Franco [University Hospital of Padua, Nuclear Medicine Unit, Department of Medicine - DIMED, Padua (Italy); Barthel, Henryk; Tiepolt, Solveig; Sabri, Osama [Leipzig University, Department of Nuclear Medicine, Leipzig (Germany); Poggiali, Davide; Cagnin, Annachiara; Gallo, Paolo [University Hospital of Padua, Neurology, Department of Neurosciences (DNS), Padua (Italy); Frigo, Anna Chiara [University Hospital of Padua, Biostatistics, Epidemiology and Public Health Unit, Department of Cardiac, Thoracic and Vascular Sciences, Padua (Italy)

    2017-11-15

    In the initial evaluation of patients with suspected dementia and Alzheimer's disease, there is no consensus on how to perform semiquantification of amyloid in such a way that it: (1) facilitates visual qualitative interpretation, (2) takes the kinetic behaviour of the tracer into consideration particularly with regard to at least partially correcting for blood flow dependence, (3) analyses the amyloid load based on accurate parcellation of cortical and subcortical areas, (4) includes partial volume effect correction (PVEC), (5) includes MRI-derived topographical indexes, (6) enables application to PET/MRI images and PET/CT images with separately acquired MR images, and (7) allows automation. A method with all of these characteristics was retrospectively tested in 86 subjects who underwent amyloid (18F-florbetaben) PET/MRI in a clinical setting (using images acquired 90-110 min after injection, 53 were classified visually as amyloid-negative and 33 as amyloid-positive). Early images after tracer administration were acquired between 0 and 10 min after injection, and later images were acquired between 90 and 110 min after injection. PVEC of the PET data was carried out using the geometric transfer matrix method. Parametric images and some regional output parameters, including two innovative "dual time-point" indexes, were obtained. Subjects classified visually as amyloid-positive showed a sparse tracer uptake in the primary sensory, motor and visual areas in accordance with the isocortical stage of the topographic distribution of the amyloid plaque (Braak stages V/VI). In patients classified visually as amyloid-negative, the method revealed detectable levels of tracer uptake in the basal portions of the frontal and temporal lobes, areas that are known to be sites of early deposition of amyloid plaques that probably represented early accumulation (Braak stage A) that is typical of normal ageing. There was a strong correlation between

  14. Indoor Trajectory Tracking Scheme Based on Delaunay Triangulation and Heuristic Information in Wireless Sensor Networks.

    Science.gov (United States)

    Qin, Junping; Sun, Shiwen; Deng, Qingxu; Liu, Limin; Tian, Yonghong

    2017-06-02

    Object tracking and detection is one of the most significant research areas for wireless sensor networks. Existing indoor trajectory tracking schemes in wireless sensor networks are based on continuous localization and moving object data mining. Indoor trajectory tracking based on the received signal strength indicator (RSSI) has received increased attention because it has low cost and requires no special infrastructure. However, RSSI tracking introduces uncertainty because of the inaccuracies of measurement instruments and the irregularities (instability, multipath propagation, diffraction) of wireless signal transmission in indoor environments. Heuristic information captures some key factors for the trajectory tracking procedure. This paper proposes a novel trajectory tracking scheme based on Delaunay triangulation and heuristic information (TTDH). In this scheme, the entire field is divided into a series of triangular regions, and the common side of adjacent triangular regions is regarded as a regional boundary. Our scheme detects heuristic information related to a moving object's trajectory, including boundaries and triangular regions. The trajectory is then formed by a dynamic time-warping position-fingerprint-matching algorithm with heuristic information constraints. Field experiments show that the average error distance of our scheme is less than 1.5 m, and that the error does not accumulate across regions.
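
    The matching step rests on dynamic time warping (DTW). Below is a plain DTW distance between two RSSI sequences, as a minimal sketch; the paper adds the region/boundary heuristic constraints on top, which are not shown, and the sample values are illustrative.

      import math

      def dtw_distance(seq_a, seq_b):
          # Classic dynamic time warping between two scalar sequences.
          n, m = len(seq_a), len(seq_b)
          D = [[math.inf] * (m + 1) for _ in range(n + 1)]
          D[0][0] = 0.0
          for i in range(1, n + 1):
              for j in range(1, m + 1):
                  cost = abs(seq_a[i - 1] - seq_b[j - 1])
                  D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
          return D[n][m]

      rssi_track = [-60, -62, -65, -70, -68]   # measured RSSI trace (dBm)
      fingerprint = [-61, -64, -69, -69]       # stored reference sequence
      print(dtw_distance(rssi_track, fingerprint))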

  15. Simultaneous hierarchical segmentation and vectorization of satellite images through combined data sampling and anisotropic triangulation

    Energy Technology Data Exchange (ETDEWEB)

    Grazzini, Jacopo [Los Alamos National Laboratory; Prasad, Lakshman [Los Alamos National Laboratory; Dillard, Scott [PNNL

    2010-10-21

    The automatic detection, recognition, and segmentation of object classes in remotely sensed images is of crucial importance for scene interpretation and understanding. However, it is a difficult task because of the high variability of satellite data. Indeed, the observed scenes usually exhibit a high degree of complexity, where complexity refers to the large variety of pictorial representations of objects with the same semantic meaning and also to the extensive amount of available detail. Therefore, there is still a strong demand for robust techniques for automatic information extraction and interpretation of satellite images. In parallel, there is a growing interest in techniques that can extract vector features directly from such imagery. In this paper, we investigate the problem of automatic hierarchical segmentation and vectorization of multispectral satellite images. We propose a new algorithm composed of the following steps: (i) a non-uniform sampling scheme that extracts the most salient pixels in the image, (ii) an anisotropic triangulation constrained by the sampled pixels that takes into account both the strength and the directionality of local structures present in the image, and (iii) a polygonal grouping scheme that merges the resulting segments, using techniques based on perceptual information, into a smaller set of higher-level vector objects. Besides its computational efficiency, this approach provides a meaningful polygonal representation for subsequent image analysis and/or interpretation.
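
    The triangulation step of such a pipeline can be prototyped with a standard library. The sketch below triangulates a set of sampled "salient pixel" coordinates with scipy.spatial.Delaunay; note this produces the plain isotropic Delaunay triangulation, whereas the paper's triangulation is anisotropic and constrained by local image structure. The coordinates are random stand-ins.

      import numpy as np
      from scipy.spatial import Delaunay

      rng = np.random.default_rng(42)
      salient_xy = rng.uniform(0, 512, size=(200, 2))   # stand-in samples

      tri = Delaunay(salient_xy)
      print(tri.simplices.shape)   # (n_triangles, 3): vertex indices per triangle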

  16. SINGLE TREE DETECTION FROM AIRBORNE LASER SCANNING DATA USING A MARKED POINT PROCESS BASED METHOD

    Directory of Open Access Journals (Sweden)

    J. Zhang

    2013-05-01

    Tree detection and reconstruction is of great interest in large-scale city modelling. In this paper, we present a marked point process model to detect single trees from airborne laser scanning (ALS) data. We treat single trees in the canopy height model (CHM) recovered from ALS data as a realization of a point process of circles. Unlike a traditional marked point process, we sample the model in a constrained configuration space by making use of image processing techniques. A Gibbs energy is defined on the model, containing a data term, which judges the fitness of the model with respect to the data, and a prior term, which incorporates prior knowledge of object layouts. We search for the optimal configuration through a steepest gradient descent algorithm. The presented hybrid framework was tested on three forest plots, and the experiments show the effectiveness of the proposed method.
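
    To make the energy formulation concrete, here is a toy sketch: a Gibbs energy with a data term (mean CHM height inside each circle) and an overlap-penalty prior, minimized by a greedy accept-if-better loop. This only illustrates the structure of such models; the paper's energy terms and its steepest-descent optimization are richer, and the synthetic CHM below is invented.

      import random

      def gibbs_energy(circles, chm, overlap_penalty=5.0):
          # Data term rewards circles covering high canopy; the prior
          # penalizes overlapping circles (a simple hard-core-style prior).
          def mean_inside(c):
              cx, cy, r = c
              vals = [chm[y][x] for y in range(len(chm)) for x in range(len(chm[0]))
                      if (x - cx) ** 2 + (y - cy) ** 2 <= r * r]
              return sum(vals) / len(vals) if vals else 0.0
          data = -sum(mean_inside(c) for c in circles)
          prior = sum(overlap_penalty
                      for i, (x1, y1, r1) in enumerate(circles)
                      for (x2, y2, r2) in circles[i + 1:]
                      if (x1 - x2) ** 2 + (y1 - y2) ** 2 < (r1 + r2) ** 2)
          return data + prior

      def greedy_births(chm, n_proposals=200, seed=1):
          # Accept a proposed circle only if it lowers the energy.
          rng = random.Random(seed)
          h, w = len(chm), len(chm[0])
          config, e = [], gibbs_energy([], chm)
          for _ in range(n_proposals):
              cand = config + [(rng.randrange(w), rng.randrange(h), rng.randint(2, 5))]
              e_new = gibbs_energy(cand, chm)
              if e_new < e:
                  config, e = cand, e_new
          return config

      chm = [[0.0] * 30 for _ in range(30)]
      for y in range(8, 13):
          for x in range(8, 13):
              chm[y][x] = 10.0                  # one synthetic tree crown
      print(greedy_births(chm))                 # circles cluster on the crown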

  17. The application of entropy weight TOPSIS method to optimal points in monitoring the Xinjiang radiation environment

    International Nuclear Information System (INIS)

    Feng Guangwen; Hu Youhua; Liu Qian

    2009-01-01

    In this paper, the application of the entropy weight TOPSIS method to the optimal layout of points for monitoring the Xinjiang radiation environment is introduced. With the help of SAS software, the method was found to be feasible and well suited to the task, and it can serve as a reference for further radiation environment monitoring in the same regions. Because it brings great convenience and greatly reduces the inspection workload, the method is a simple, flexible, and effective tool for comprehensive evaluation. (authors)
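
    The abstract names the method without its steps, so here is a standard entropy weight TOPSIS sketch: column-normalize the decision matrix, derive weights from each criterion's entropy, then rank alternatives by closeness to the ideal solution. It assumes all criteria are benefit-type (larger is better); the monitoring-point scores are invented for illustration.

      import numpy as np

      def entropy_weight_topsis(X):
          # X: alternatives x criteria, all benefit-type and positive.
          P = X / X.sum(axis=0)                          # column-normalize
          n = X.shape[0]
          E = -(P * np.log(P, where=P > 0, out=np.zeros_like(P))).sum(axis=0) / np.log(n)
          w = (1 - E) / (1 - E).sum()                    # entropy weights
          V = w * X / np.sqrt((X ** 2).sum(axis=0))      # weighted vector norm
          ideal, anti = V.max(axis=0), V.min(axis=0)
          d_pos = np.sqrt(((V - ideal) ** 2).sum(axis=1))
          d_neg = np.sqrt(((V - anti) ** 2).sum(axis=1))
          return d_neg / (d_pos + d_neg)                 # closeness in [0, 1]

      # Toy scores of three candidate monitoring points on three criteria.
      X = np.array([[0.7, 120.0, 3.0],
                    [0.9, 100.0, 4.0],
                    [0.6, 150.0, 2.0]])
      scores = entropy_weight_topsis(X)
      print(np.argsort(scores)[::-1])   # candidate points, best first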

  18. A method for untriggered time-dependent searches for multiple flares from neutrino point sources

    International Nuclear Information System (INIS)

    Gora, D.; Bernardini, E.; Cruz Silva, A.H.

    2011-04-01

    A method for a time-dependent search for flaring astrophysical sources which can be potentially detected by large neutrino experiments is presented. The method uses a time-clustering algorithm combined with an unbinned likelihood procedure. By including in the likelihood function a signal term which describes the contribution of many small clusters of signal-like events, this method provides an effective way for looking for weak neutrino flares over different time-scales. The method is sensitive to an overall excess of events distributed over several flares which are not individually detectable. For standard cases (one flare) the discovery potential of the method is worse than a standard time-dependent point source analysis with unknown duration of the flare by a factor depending on the signal-to-background level. However, for flares sufficiently shorter than the total observation period, the method is more sensitive than a time-integrated analysis. (orig.)
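
    A bare-bones version of the time-clustering idea is sketched below: scan every window bounded by a pair of event times and rank it by the Poisson log-likelihood ratio of the observed count against the expected flat background. This skeleton omits the unbinned likelihood's energy and direction terms and the multi-flare signal term described above; the event times and background rate are invented.

      import math

      def scan_flares(event_times, bkg_rate):
          # Return the most significant window (test statistic, t_start, t_end).
          ts = sorted(event_times)
          best = (0.0, None, None)
          for i in range(len(ts)):
              for j in range(i + 1, len(ts)):
                  n = j - i + 1
                  mu = max(bkg_rate * (ts[j] - ts[i]), 1e-9)   # expected bkg
                  if n > mu:
                      # Poisson log-likelihood ratio for an excess of events.
                      test_stat = 2.0 * (n * math.log(n / mu) - (n - mu))
                      if test_stat > best[0]:
                          best = (test_stat, ts[i], ts[j])
          return best

      times = [0.3, 2.1, 5.0, 5.1, 5.15, 5.2, 8.9, 12.4]   # toy times (days)
      print(scan_flares(times, bkg_rate=0.5))               # flags the cluster at t~5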

  19. A method for untriggered time-dependent searches for multiple flares from neutrino point sources

    Energy Technology Data Exchange (ETDEWEB)

    Gora, D. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Institute of Nuclear Physics PAN, Cracow (Poland); Bernardini, E.; Cruz Silva, A.H. [Institute of Nuclear Physics PAN, Cracow (Poland)

    2011-04-15

    A method for a time-dependent search for flaring astrophysical sources which can be potentially detected by large neutrino experiments is presented. The method uses a time-clustering algorithm combined with an unbinned likelihood procedure. By including in the likelihood function a signal term which describes the contribution of many small clusters of signal-like events, this method provides an effective way for looking for weak neutrino flares over different time-scales. The method is sensitive to an overall excess of events distributed over several flares which are not individually detectable. For standard cases (one flare) the discovery potential of the method is worse than a standard time-dependent point source analysis with unknown duration of the flare by a factor depending on the signal-to-background level. However, for flares sufficiently shorter than the total observation period, the method is more sensitive than a time-integrated analysis. (orig.)

  20. Effect of DEM resolution on rainfall-triggered landslide modeling within a triangulated network-based model. A case study in the Luquillo Forest, Puerto Rico

    Science.gov (United States)

    Arnone, E.; Dialynas, Y. G.; Noto, L. V.; Bras, R. L.

    2013-12-01

    Catchment slope distribution is one of the topographic characteristics that significantly control rainfall-triggered landslide modeling, in both direct and indirect ways. Slope directly determines the soil volume associated with instability. Indirectly, slope also affects the subsurface lateral redistribution of soil moisture across the basin, which in turn determines the pore water pressure conditions that impact slope stability. In this study, we investigate the influence of DEM resolution on the slope stability analysis carried out with a distributed eco-hydrological and landslide model, tRIBS-VEGGIE (Triangulated Irregular Network (TIN)-based Real-time Integrated Basin Simulator - VEGetation Generator for Interactive Evolution). The model implements a triangulated irregular network to describe the topography, and it is capable of evaluating vegetation dynamics and predicting shallow landslides triggered by rainfall. The impact of DEM resolution on the landslide prediction was studied using five TINs derived from five grid DEMs at different resolutions, i.e., 10, 20, 30, 50 and 70 m. The analysis was carried out on the Mameyes Basin, located in the Luquillo Experimental Forest in Puerto Rico, where previous landslide analyses have been carried out. Results showed that the use of the irregular mesh reduced the loss of accuracy in the derived slope distribution when coarser resolutions were used. The impact of the different resolutions on soil moisture patterns was important only when the lateral redistribution was considerable, depending on hydrological properties and rainfall forcing. In some cases, the use of different DEM resolutions did not significantly affect the tRIBS-VEGGIE landslide output in terms of landslide locations or the values of slope and soil moisture at failure.