WorldWideScience

Sample records for geometric build-up algorithm

  1. Geometric approximation algorithms

    CERN Document Server

    Har-Peled, Sariel

    2011-01-01

    Exact algorithms for dealing with geometric objects are complicated, hard to implement in practice, and slow. Over the last 20 years a theory of geometric approximation algorithms has emerged. These algorithms tend to be simple, fast, and more robust than their exact counterparts. This book is the first to cover geometric approximation algorithms in detail. In addition, more traditional computational geometry techniques that are widely used in developing such algorithms, like sampling, linear programming, etc., are also surveyed. Other topics covered include approximate nearest-neighbor search, shape approximation, coresets, dimension reduction, and embeddings. The topics covered are relatively independent and are supplemented by exercises. Close to 200 color figures are included in the text to illustrate proofs and ideas.

  2. Fast decoding algorithms for geometric coded apertures

    International Nuclear Information System (INIS)

    Byard, Kevin

    2015-01-01

    Fast decoding algorithms are described for the class of coded aperture designs known as geometric coded apertures which were introduced by Gourlay and Stephen. When compared to the direct decoding method, the algorithms significantly reduce the number of calculations required when performing the decoding for these apertures and hence speed up the decoding process. Experimental tests confirm the efficacy of these fast algorithms, demonstrating a speed up of approximately two to three orders of magnitude over direct decoding.

  3. Algebraic dynamics algorithm: Numerical comparison with Runge-Kutta algorithm and symplectic geometric algorithm

    Institute of Scientific and Technical Information of China (English)

    WANG ShunJin; ZHANG Hua

    2007-01-01

    Based on the exact analytical solution of ordinary differential equations,a truncation of the Taylor series of the exact solution to the Nth order leads to the Nth order algebraic dynamics algorithm.A detailed numerical comparison is presented with Runge-Kutta algorithm and symplectic geometric algorithm for 12 test models.The results show that the algebraic dynamics algorithm can better preserve both geometrical and dynamical fidelity of a dynamical system at a controllable precision,and it can solve the problem of algorithm-induced dissipation for the Runge-Kutta algorithm and the problem of algorithm-induced phase shift for the symplectic geometric algorithm.

  4. Algebraic dynamics algorithm:Numerical comparison with Runge-Kutta algorithm and symplectic geometric algorithm

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Based on the exact analytical solution of ordinary differential equations, a truncation of the Taylor series of the exact solution to the Nth order leads to the Nth order algebraic dynamics algorithm. A detailed numerical comparison is presented with Runge-Kutta algorithm and symplectic geometric algorithm for 12 test models. The results show that the algebraic dynamics algorithm can better preserve both geometrical and dynamical fidelity of a dynamical system at a controllable precision, and it can solve the problem of algorithm-induced dissipation for the Runge-Kutta algorithm and the problem of algorithm-induced phase shift for the symplectic geometric algorithm.

  5. MM Algorithms for Geometric and Signomial Programming.

    Science.gov (United States)

    Lange, Kenneth; Zhou, Hua

    2014-02-01

    This paper derives new algorithms for signomial programming, a generalization of geometric programming. The algorithms are based on a generic principle for optimization called the MM algorithm. In this setting, one can apply the geometric-arithmetic mean inequality and a supporting hyperplane inequality to create a surrogate function with parameters separated. Thus, unconstrained signomial programming reduces to a sequence of one-dimensional minimization problems. Simple examples demonstrate that the MM algorithm derived can converge to a boundary point or to one point of a continuum of minimum points. Conditions under which the minimum point is unique or occurs in the interior of parameter space are proved for geometric programming. Convergence to an interior point occurs at a linear rate. Finally, the MM framework easily accommodates equality and inequality constraints of signomial type. For the most important special case, constrained quadratic programming, the MM algorithm involves very simple updates.

  6. Fast geometric algorithms

    International Nuclear Information System (INIS)

    Noga, M.T.

    1984-01-01

    This thesis addresses a number of important problems that fall within the framework of the new discipline of Computational Geometry. The list of topics covered includes sorting and selection, convex hull algorithms, the L 1 hull, determination of the minimum encasing rectangle of a set of points, the Euclidean and L 1 diameter of a set of points, the metric traveling salesman problem, and finding the superrange of star-shaped and monotype polygons. The main theme of all the work was to develop a set of very fast state-of-the-art algorithms that supersede any rivals in terms of speed and ease of implementation. In some cases existing algorithms were refined; for others new techniques were developed that add to the present database of fast adaptive geometric algorithms. What emerges is a collection of techniques that is successful at merging modern tools developed in analysis of algorithms with those of classical geometry

  7. Development of algorithms for building inventory compilation through remote sensing and statistical inferencing

    Science.gov (United States)

    Sarabandi, Pooya

    Building inventories are one of the core components of disaster vulnerability and loss estimations models, and as such, play a key role in providing decision support for risk assessment, disaster management and emergency response efforts. In may parts of the world inclusive building inventories, suitable for the use in catastrophe models cannot be found. Furthermore, there are serious shortcomings in the existing building inventories that include incomplete or out-dated information on critical attributes as well as missing or erroneous values for attributes. In this dissertation a set of methodologies for updating spatial and geometric information of buildings from single and multiple high-resolution optical satellite images are presented. Basic concepts, terminologies and fundamentals of 3-D terrain modeling from satellite images are first introduced. Different sensor projection models are then presented and sources of optical noise such as lens distortions are discussed. An algorithm for extracting height and creating 3-D building models from a single high-resolution satellite image is formulated. The proposed algorithm is a semi-automated supervised method capable of extracting attributes such as longitude, latitude, height, square footage, perimeter, irregularity index and etc. The associated errors due to the interactive nature of the algorithm are quantified and solutions for minimizing the human-induced errors are proposed. The height extraction algorithm is validated against independent survey data and results are presented. The validation results show that an average height modeling accuracy of 1.5% can be achieved using this algorithm. Furthermore, concept of cross-sensor data fusion for the purpose of 3-D scene reconstruction using quasi-stereo images is developed in this dissertation. The developed algorithm utilizes two or more single satellite images acquired from different sensors and provides the means to construct 3-D building models in a more

  8. Area collapse algorithm computing new curve of 2D geometric objects

    Science.gov (United States)

    Buczek, Michał Mateusz

    2017-06-01

    The processing of cartographic data demands human involvement. Up-to-date algorithms try to automate a part of this process. The goal is to obtain a digital model, or additional information about shape and topology of input geometric objects. A topological skeleton is one of the most important tools in the branch of science called shape analysis. It represents topological and geometrical characteristics of input data. Its plot depends on using algorithms such as medial axis, skeletonization, erosion, thinning, area collapse and many others. Area collapse, also known as dimension change, replaces input data with lower-dimensional geometric objects like, for example, a polygon with a polygonal chain, a line segment with a point. The goal of this paper is to introduce a new algorithm for the automatic calculation of polygonal chains representing a 2D polygon. The output is entirely contained within the area of the input polygon, and it has a linear plot without branches. The computational process is automatic and repeatable. The requirements of input data are discussed. The author analyzes results based on the method of computing ends of output polygonal chains. Additional methods to improve results are explored. The algorithm was tested on real-world cartographic data received from BDOT/GESUT databases, and on point clouds from laser scanning. An implementation for computing hatching of embankment is described.

  9. Energy absorption build-up factors in teeth

    International Nuclear Information System (INIS)

    Manjunatha, H.C.; Rudraswamy, B.

    2012-01-01

    Geometric progression fitting method has been used to compute energy absorption build-up factor of teeth [enamel outer surface, enamel middle, enamel dentin junction towards enamel, enamel dentin junction towards dentin, dentin middle and dentin inner surface] for wide energy range (0.015-15 MeV) up to the penetration depth of 40 mean free path. The dependence of energy absorption build-up factor on incident photon energy, penetration depth, electron density and effective atomic number has also been studied. The energy absorption build-up factors increases with the penetration depth and electron density of teeth. So that the degree of violation of Lambert-Beer (I = I 0 e -μt ) law is less for least penetration depth and electron density. The energy absorption build-up factors for different regions of teeth are not same hence the energy absorbed by the different regions of teeth is not uniform which depends on the composition of the medium. The relative dose of gamma in different regions of teeth is also estimated. Dosimetric implication of energy absorption build-up factor in teeth has also been discussed. The estimated absorption build up factors in different regions of teeth may be useful in the electron spin resonance dosimetry. (author)

  10. Structure-preserving geometric algorithms for plasma physics and beam physics

    Science.gov (United States)

    Qin, Hong

    2017-10-01

    Standard algorithms in the plasma physics and beam physics do not possess the long-term accuracy and fidelity required in the study of multi-scale dynamics, because they do not preserve the geometric structures of the physical systems, such as the local energy-momentum conservation, symplectic structure and gauge symmetry. As a result, numerical errors accumulate coherently with time and long-term simulation results are not reliable. To overcome this difficulty, since 2008 structure-preserving geometric algorithms have been developed. This new generation of algorithms utilizes advanced techniques, such as interpolating differential forms, canonical and non-canonical symplectic integrators, and finite element exterior calculus to guarantee gauge symmetry and charge conservation, and the conservation of energy-momentum and symplectic structure. It is our vision that future numerical capabilities in plasma physics and beam physics will be based on the structure-preserving geometric algorithms.

  11. Research on Geometric Positioning Algorithm of License Plate in Multidimensional Parameter Space

    Directory of Open Access Journals (Sweden)

    Yinhua Huan

    2014-05-01

    Full Text Available Considering features of vehicle license plate location method which commonly used, in order to search a consistent location for reference images with license plates feature in multidimensional parameter space, a new algorithm of geometric location is proposed. Geometric location algorithm main include model training and real time search. Which not only adapt the gray-scale linearity and the gray non-linear changes, but also support changes of scale and angle. Compared with the mainstream locating software, numerical results shows under the same test conditions that the position deviation of geometric positioning algorithm is less than 0.5 pixel. Without taking into account the multidimensional parameter space, Geometric positioning algorithm position deviation is less than 1.0 pixel and angle deviation is less than 1.0 degree taking into account the multidimensional parameter space. This algorithm is robust, simple, practical and is better than the traditional method.

  12. GeoBuilder: a geometric algorithm visualization and debugging system for 2D and 3D geometric computing.

    Science.gov (United States)

    Wei, Jyh-Da; Tsai, Ming-Hung; Lee, Gen-Cher; Huang, Jeng-Hung; Lee, Der-Tsai

    2009-01-01

    Algorithm visualization is a unique research topic that integrates engineering skills such as computer graphics, system programming, database management, computer networks, etc., to facilitate algorithmic researchers in testing their ideas, demonstrating new findings, and teaching algorithm design in the classroom. Within the broad applications of algorithm visualization, there still remain performance issues that deserve further research, e.g., system portability, collaboration capability, and animation effect in 3D environments. Using modern technologies of Java programming, we develop an algorithm visualization and debugging system, dubbed GeoBuilder, for geometric computing. The GeoBuilder system features Java's promising portability, engagement of collaboration in algorithm development, and automatic camera positioning for tracking 3D geometric objects. In this paper, we describe the design of the GeoBuilder system and demonstrate its applications.

  13. Voluminator 2.0 - Speeding up the Approximation of the Volume of Defective 3d Building Models

    Science.gov (United States)

    Sindram, M.; Machl, T.; Steuer, H.; Pültz, M.; Kolbe, T. H.

    2016-06-01

    Semantic 3D city models are increasingly used as a data source in planning and analyzing processes of cities. They represent a virtual copy of the reality and are a common information base and source of information for examining urban questions. A significant advantage of virtual city models is that important indicators such as the volume of buildings, topological relationships between objects and other geometric as well as thematic information can be derived. Knowledge about the exact building volume is an essential base for estimating the building energy demand. In order to determine the volume of buildings with conventional algorithms and tools, the buildings may not contain any topological and geometrical errors. The reality, however, shows that city models very often contain errors such as missing surfaces, duplicated faces and misclosures. To overcome these errors (Steuer et al., 2015) have presented a robust method for approximating the volume of building models. For this purpose, a bounding box of the building is divided into a regular grid of voxels and it is determined which voxels are inside the building. The regular arrangement of the voxels leads to a high number of topological tests and prevents the application of this method using very high resolutions. In this paper we present an extension of the algorithm using an octree approach limiting the subdivision of space to regions around surfaces of the building models and to regions where, in the case of defective models, the topological tests are inconclusive. We show that the computation time can be significantly reduced, while preserving the robustness against geometrical and topological errors.

  14. Geometric Algorithms for Private-Cache Chip Multiprocessors

    DEFF Research Database (Denmark)

    Ajwani, Deepak; Sitchinava, Nodari; Zeh, Norbert

    2010-01-01

    -D convex hulls. These results are obtained by analyzing adaptations of either the PEM merge sort algorithm or PRAM algorithms. For the second group of problems—orthogonal line segment intersection reporting, batched range reporting, and related problems—more effort is required. What distinguishes......We study techniques for obtaining efficient algorithms for geometric problems on private-cache chip multiprocessors. We show how to obtain optimal algorithms for interval stabbing counting, 1-D range counting, weighted 2-D dominance counting, and for computing 3-D maxima, 2-D lower envelopes, and 2...... these problems from the ones in the previous group is the variable output size, which requires I/O-efficient load balancing strategies based on the contribution of the individual input elements to the output size. To obtain nearly optimal algorithms for these problems, we introduce a parallel distribution...

  15. A Feed-forward Geometrical Compensation and Adaptive Feedback Control Algorithm for Hydraulic Robot Manipulators

    DEFF Research Database (Denmark)

    Conrad, Finn; Zhou, Jianjun; Gabacik, Andrzej

    1998-01-01

    Invited paper presents a new control algorithm based on feed-forward geometrical compensation strategy combined with adaptive feedback control.......Invited paper presents a new control algorithm based on feed-forward geometrical compensation strategy combined with adaptive feedback control....

  16. A generic algorithm for constructing hierarchical representations of geometric objects

    International Nuclear Information System (INIS)

    Xavier, P.G.

    1995-01-01

    For a number of years, robotics researchers have exploited hierarchical representations of geometrical objects and scenes in motion-planning, collision-avoidance, and simulation. However, few general techniques exist for automatically constructing them. We present a generic, bottom-up algorithm that uses a heuristic clustering technique to produced balanced, coherent hierarchies. Its worst-case running time is O(N 2 logN), but for non-pathological cases it is O(NlogN), where N is the number of input primitives. We have completed a preliminary C++ implementation for input collections of 3D convex polygons and 3D convex polyhedra and conducted simple experiments with scenes of up to 12,000 polygons, which take only a few minutes to process. We present examples using spheres and convex hulls as hierarchy primitives

  17. Comparison of build-up region doses in oblique tangential 6 MV photon beams calculated by AAA and CCC algorithms in breast Rando phantom

    Science.gov (United States)

    Masunun, P.; Tangboonduangjit, P.; Dumrongkijudom, N.

    2016-03-01

    The purpose of this study is to compare the build-up region doses on breast Rando phantom surface with the bolus covered, the doses in breast Rando phantom and also the doses in a lung that is the heterogeneous region by two algorithms. The AAA in Eclipse TPS and the collapsed cone convolution algorithm in Pinnacle treatment planning system were used to plan in tangential field technique with 6 MV photon beam at 200 cGy total doses in Breast Rando phantom with bolus covered (5 mm and 10 mm). TLDs were calibrated with Cobalt-60 and used to measure the doses in irradiation process. The results in treatment planning show that the doses in build-up region and the doses in breast phantom were closely matched in both algorithms which are less than 2% differences. However, overestimate of doses in a lung (L2) were found in AAA with 13.78% and 6.06% differences at 5 mm and 10 mm bolus thickness, respectively when compared with CCC algorithm. The TLD measurements show the underestimate in buildup region and in breast phantom but the doses in a lung (L2) were overestimated when compared with the doses in the two plannings at both thicknesses of the bolus.

  18. Parallel algorithms for geometric connected component labeling on a hypercube multiprocessor

    Science.gov (United States)

    Belkhale, K. P.; Banerjee, P.

    1992-01-01

    Different algorithms for the geometric connected component labeling (GCCL) problem are defined each of which involves d stages of message passing, for a d-dimensional hypercube. The major idea is that in each stage a hypercube multiprocessor increases its knowledge of domain. The algorithms under consideration include the QUAD algorithm for small number of processors and the Overlap Quad algorithm for large number of processors, subject to the locality of the connected sets. These algorithms differ in their run time, memory requirements, and message complexity. They were implemented on an Intel iPSC2/D4/MX hypercube.

  19. Parallel Algorithm of Geometrical Hashing Based on NumPy Package and Processes Pool

    Directory of Open Access Journals (Sweden)

    Klyachin Vladimir Aleksandrovich

    2015-10-01

    Full Text Available The article considers the problem of multi-dimensional geometric hashing. The paper describes a mathematical model of geometric hashing and considers an example of its use in localization problems for the point. A method of constructing the corresponding hash matrix by parallel algorithm is considered. In this paper an algorithm of parallel geometric hashing using a development pattern «pool processes» is proposed. The implementation of the algorithm is executed using the Python programming language and NumPy package for manipulating multidimensional data. To implement the process pool it is proposed to use a class Process Pool Executor imported from module concurrent.futures, which is included in the distribution of the interpreter Python since version 3.2. All the solutions are presented in the paper by corresponding UML class diagrams. Designed GeomNash package includes classes Data, Result, GeomHash, Job. The results of the developed program presents the corresponding graphs. Also, the article presents the theoretical justification for the application process pool for the implementation of parallel algorithms. It is obtained condition t2 > (p/(p-1*t1 of the appropriateness of process pool. Here t1 - the time of transmission unit of data between processes, and t2 - the time of processing unit data by one processor.

  20. Geometric Semantic Genetic Programming Algorithm and Slump Prediction

    OpenAIRE

    Xu, Juncai; Shen, Zhenzhong; Ren, Qingwen; Xie, Xin; Yang, Zhengyu

    2017-01-01

    Research on the performance of recycled concrete as building material in the current world is an important subject. Given the complex composition of recycled concrete, conventional methods for forecasting slump scarcely obtain satisfactory results. Based on theory of nonlinear prediction method, we propose a recycled concrete slump prediction model based on geometric semantic genetic programming (GSGP) and combined it with recycled concrete features. Tests show that the model can accurately p...

  1. Learning Building Layouts with Non-geometric Visual Information: The Effects of Visual Impairment and Age

    Science.gov (United States)

    Kalia, Amy A.; Legge, Gordon E.; Giudice, Nicholas A.

    2009-01-01

    Previous studies suggest that humans rely on geometric visual information (hallway structure) rather than non-geometric visual information (e.g., doors, signs and lighting) for acquiring cognitive maps of novel indoor layouts. This study asked whether visual impairment and age affect reliance on non-geometric visual information for layout learning. We tested three groups of participants—younger (sighted, older (50–70 years) normally sighted, and low vision (people with heterogeneous forms of visual impairment ranging in age from 18–67). Participants learned target locations in building layouts using four presentation modes: a desktop virtual environment (VE) displaying only geometric cues (Sparse VE), a VE displaying both geometric and non-geometric cues (Photorealistic VE), a Map, and a Real building. Layout knowledge was assessed by map drawing and by asking participants to walk to specified targets in the real space. Results indicate that low-vision and older normally-sighted participants relied on additional non-geometric information to accurately learn layouts. In conclusion, visual impairment and age may result in reduced perceptual and/or memory processing that makes it difficult to learn layouts without non-geometric visual information. PMID:19189732

  2. A Prefiltered Cuckoo Search Algorithm with Geometric Operators for Solving Sudoku Problems

    Directory of Open Access Journals (Sweden)

    Ricardo Soto

    2014-01-01

    Full Text Available The Sudoku is a famous logic-placement game, originally popularized in Japan and today widely employed as pastime and as testbed for search algorithms. The classic Sudoku consists in filling a 9×9 grid, divided into nine 3×3 regions, so that each column, row, and region contains different digits from 1 to 9. This game is known to be NP-complete, with existing various complete and incomplete search algorithms able to solve different instances of it. In this paper, we present a new cuckoo search algorithm for solving Sudoku puzzles combining prefiltering phases and geometric operations. The geometric operators allow one to correctly move toward promising regions of the combinatorial space, while the prefiltering phases are able to previously delete from domains the values that do not conduct to any feasible solution. This integration leads to a more efficient domain filtering and as a consequence to a faster solving process. We illustrate encouraging experimental results where our approach noticeably competes with the best approximate methods reported in the literature.

  3. High Resolution SAR Imaging Employing Geometric Features for Extracting Seismic Damage of Buildings

    Science.gov (United States)

    Cui, L. P.; Wang, X. P.; Dou, A. X.; Ding, X.

    2018-04-01

    Synthetic Aperture Radar (SAR) image is relatively easy to acquire but difficult for interpretation. This paper probes how to identify seismic damage of building using geometric features of SAR. The SAR imaging geometric features of buildings, such as the high intensity layover, bright line induced by double bounce backscattering and dark shadow is analysed, and show obvious differences texture features of homogeneity, similarity and entropy in combinatorial imaging geometric regions between the un-collapsed and collapsed buildings in airborne SAR images acquired in Yushu city damaged by 2010 Ms7.1 Yushu, Qinghai, China earthquake, which implicates a potential capability to discriminate collapsed and un-collapsed buildings from SAR image. Study also shows that the proportion of highlight (layover & bright line) area (HA) is related to the seismic damage degree, thus a SAR image damage index (SARDI), which related to the ratio of HA to the building occupation are of building in a street block (SA), is proposed. While HA is identified through feature extraction with high-pass and low-pass filtering of SAR image in frequency domain. A partial region with 58 natural street blocks in the Yushu City are selected as study area. Then according to the above method, HA is extracted, SARDI is then calculated and further classified into 3 classes. The results show effective through validation check with seismic damage classes interpreted artificially from post-earthquake airborne high resolution optical image, which shows total classification accuracy 89.3 %, Kappa coefficient 0.79 and identical to the practical seismic damage distribution. The results are also compared and discussed with the building damage identified from SAR image available by other authors.

  4. Development and Evaluation of Algorithms to Improve Small- and Medium-Size Commercial Building Operations

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Woohyun [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Katipamula, Srinivas [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Lutes, Robert G. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Underhill, Ronald M. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2016-10-31

    Small- and medium-sized (<100,000 sf) commercial buildings (SMBs) represent over 95% of the U.S. commercial building stock and consume over 60% of total site energy consumption. Many of these buildings use rudimentary controls that are mostly manual, with limited scheduling capability, no monitoring or failure management. Therefore, many of these buildings are operated inefficiently and consume excess energy. SMBs typically utilize packaged rooftop units (RTUs) that are controlled by an individual thermostat. There is increased urgency to improve the operating efficiency of existing commercial building stock in the U.S. for many reasons, chief among them is to mitigate the climate change impacts. Studies have shown that managing set points and schedules of the RTUs will result in up to 20% energy and cost savings. Another problem associated with RTUs is short-cycling, where an RTU goes through ON and OFF cycles too frequently. Excessive cycling can lead to excessive wear and lead to premature failure of the compressor or its components. The short cycling can result in a significantly decreased average efficiency (up to 10%), even if there are no physical failures in the equipment. Also, SMBs use a time-of-day scheduling is to start the RTUs before the building will be occupied and shut it off when unoccupied. Ensuring correct use of the zone set points and eliminating frequent cycling of RTUs thereby leading to persistent building operations can significantly increase the operational efficiency of the SMBs. A growing trend is to use low-cost control infrastructure that can enable scalable and cost-effective intelligent building operations. The work reported in this report describes three algorithms for detecting the zone set point temperature, RTU cycling rate and occupancy schedule detection that can be deployed on the low-cost infrastructure. These algorithms only require the zone temperature data for detection. 
The algorithms have been tested and validated using

  5. Bottom-up GGM algorithm for constructing multiple layered hierarchical gene regulatory networks

    Science.gov (United States)

    Multilayered hierarchical gene regulatory networks (ML-hGRNs) are very important for understanding genetics regulation of biological pathways. However, there are currently no computational algorithms available for directly building ML-hGRNs that regulate biological pathways. A bottom-up graphic Gaus...

  6. Geometric Distribution-Based Readers Scheduling Optimization Algorithm Using Artificial Immune System

    Directory of Open Access Journals (Sweden)

    Litian Duan

    2016-11-01

    Full Text Available In the multiple-reader environment (MRE of radio frequency identification (RFID system, multiple readers are often scheduled to interrogate the randomized tags via operating at different time slots or frequency channels to decrease the signal interferences. Based on this, a Geometric Distribution-based Multiple-reader Scheduling Optimization Algorithm using Artificial Immune System (GD-MRSOA-AIS is proposed to fairly and optimally schedule the readers operating from the viewpoint of resource allocations. GD-MRSOA-AIS is composed of two parts, where a geometric distribution function combined with the fairness consideration is first introduced to generate the feasible scheduling schemes for reader operation. After that, artificial immune system (including immune clone, immune mutation and immune suppression quickly optimize these feasible ones as the optimal scheduling scheme to ensure that readers are fairly operating with larger effective interrogation range and lower interferences. Compared with the state-of-the-art algorithm, the simulation results indicate that GD-MRSOA-AIS could efficiently schedules the multiple readers operating with a fairer resource allocation scheme, performing in larger effective interrogation range.

  7. Progressive geometric algorithms

    NARCIS (Netherlands)

    Alewijnse, S.P.A.; Bagautdinov, T.M.; de Berg, M.T.; Bouts, Q.W.; ten Brink, Alex P.; Buchin, K.A.; Westenberg, M.A.

    2015-01-01

    Progressive algorithms are algorithms that, on the way to computing a complete solution to the problem at hand, output intermediate solutions that approximate the complete solution increasingly well. We present a framework for analyzing such algorithms, and develop efficient progressive algorithms

  8. Progressive geometric algorithms

    NARCIS (Netherlands)

    Alewijnse, S.P.A.; Bagautdinov, T.M.; Berg, de M.T.; Bouts, Q.W.; Brink, ten A.P.; Buchin, K.; Westenberg, M.A.

    2014-01-01

    Progressive algorithms are algorithms that, on the way to computing a complete solution to the problem at hand, output intermediate solutions that approximate the complete solution increasingly well. We present a framework for analyzing such algorithms, and develop efficient progressive algorithms

  9. Constant-work-space algorithms for geometric problems

    Directory of Open Access Journals (Sweden)

    Tetsuo Asano

    2011-07-01

    Full Text Available Constant-work-space algorithms may use only constantly many cells of storage in addition to their input, which is provided as a read-only array. We show how to construct several geometric structures efficiently in the constant-work-space model. Traditional algorithms process the input into a suitable data structure (like a doubly-connected edge list) that allows efficient traversal of the structure at hand. In the constant-work-space setting, however, we cannot afford to do this. Instead, we provide operations that compute the desired features on the fly by accessing the input with no extra space. The whole geometric structure can be obtained by using these operations to enumerate all the features. Of course, we must pay for the space savings with slower running times. While the standard data structure allows us to implement traversal operations in constant time, our schemes typically take linear time to read the input data in each step. We begin with two simple problems: triangulating a planar point set and finding the trapezoidal decomposition of a simple polygon. In both cases adjacent features can be enumerated in linear time per step, resulting in total quadratic running time to output the whole structure. Actually, we show that the former result carries over to the Delaunay triangulation, and hence the Voronoi diagram. This also means that we can compute the largest empty circle of a planar point set in quadratic time and constant work-space. As another application, we demonstrate how to enumerate the features of a Euclidean minimum spanning tree (EMST) in quadratic time per step, so that the whole EMST can be found in cubic time using constant work-space. Finally, we describe how to compute a shortest geodesic path between two points in a simple polygon. Although the shortest path problem in general graphs is NL-complete (Jakoby and Tantau 2003), this constrained problem can be solved in quadratic time using only constant work-space.
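
The flavor of the model can be sketched with a minimal example (not from the paper): a feature is recomputed on demand from the read-only input, rather than read from a stored data structure.

```python
# Illustrative sketch of the constant-work-space style: the nearest neighbour
# of points[i] is recomputed by a linear scan over the read-only input array,
# using O(1) extra cells instead of a prebuilt structure such as a k-d tree.
def nearest_neighbor(points, i):
    best_j, best_d2 = -1, float("inf")
    for j, (x, y) in enumerate(points):
        if j == i:
            continue
        dx, dy = x - points[i][0], y - points[i][1]
        d2 = dx * dx + dy * dy
        if d2 < best_d2:
            best_j, best_d2 = j, d2
    return best_j

# Enumerating all nearest-neighbour edges this way costs O(n^2) time but only
# constant work-space: the time-for-space trade-off described in the abstract.
```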

  10. Influence of buildings geometrical and physical parameters on thermal cooling load

    International Nuclear Information System (INIS)

    Melo, C.

    1980-09-01

    A more accurate method to evaluate the thermal cooling load in buildings and to analyze the influence of geometrical and physical parameters on air conditioning calculations is presented. The sensitivity of the cooling load, considering the thermal capacity of the materials, was simulated on a computer for several different situations. (Author) [pt

  11. Geometric properties of nucleic acids with potential for autobuilding

    International Nuclear Information System (INIS)

    Gruene, Tim; Sheldrick, George M.

    2011-01-01

    Algorithms and geometrical properties are described for the automated building of nucleic acids in experimental electron density. Medium- to high-resolution X-ray structures of DNA and RNA molecules were investigated to find geometric properties useful for automated model building in crystallographic electron-density maps. We describe a simple method, starting from a list of electron-density ‘blobs’, for identifying backbone phosphates and nucleic acid bases based on properties of the local electron-density distribution. This knowledge should be useful for the automated building of nucleic acid models into electron-density maps. We show that the distances and angles involving C1′ and the P atoms, using the pseudo-torsion angles η' and θ' that describe the …P—C1′—P—C1′… chain, provide a promising basis for building the nucleic acid polymer. These quantities show reasonably narrow distributions with asymmetry that should allow the direction of the phosphate backbone to be established

  12. Performance indices and evaluation of algorithms in building energy efficient design optimization

    International Nuclear Information System (INIS)

    Si, Binghui; Tian, Zhichao; Jin, Xing; Zhou, Xin; Tang, Peng; Shi, Xing

    2016-01-01

    Building energy efficient design optimization is an emerging technique that is increasingly being used to design buildings with better overall performance and a particular emphasis on energy efficiency. To achieve building energy efficient design optimization, algorithms are vital to generate new designs and thus drive the design optimization process. Therefore, the performance of algorithms is crucial to achieving effective energy efficient design techniques. This study evaluates algorithms used for building energy efficient design optimization. A set of performance indices, namely, stability, robustness, validity, speed, coverage, and locality, is proposed to evaluate the overall performance of algorithms. A benchmark building and a design optimization problem are also developed. Hooke–Jeeves algorithm, Multi-Objective Genetic Algorithm II, and Multi-Objective Particle Swarm Optimization algorithm are evaluated by using the proposed performance indices and benchmark design problem. Results indicate that no algorithm performs best in all six areas. Therefore, when facing an energy efficient design problem, the algorithm must be carefully selected based on the nature of the problem and the performance indices that matter the most. - Highlights: • Six indices of algorithm performance in building energy optimization are developed. • For each index, its concept is defined and the calculation formulas are proposed. • A benchmark building and benchmark energy efficient design problem are proposed. • The performance of three selected algorithms are evaluated.

  13. A methodology for the geometric design of heat recovery steam generators applying genetic algorithms

    International Nuclear Information System (INIS)

    Durán, M. Dolores; Valdés, Manuel; Rovira, Antonio; Rincón, E.

    2013-01-01

    This paper shows how the geometric design of heat recovery steam generators (HRSG) can be achieved. The method calculates the product of the overall heat transfer coefficient (U) by the area of the heat exchange surface (A) as a function of certain thermodynamic design parameters of the HRSG. A genetic algorithm is then applied to determine the best set of geometric parameters which comply with the desired UA product and, at the same time, result in a small heat exchange area and low pressure losses in the HRSG. In order to test this method, the design was applied to the HRSG of an existing plant and the results obtained were compared with the real exchange area of the steam generator. The findings show that the methodology is sound and offers reliable results even for complex HRSG designs. -- Highlights: ► The paper shows a methodology for the geometric design of heat recovery steam generators. ► Calculates product of the overall heat transfer coefficient by heat exchange area as a function of certain HRSG thermodynamic design parameters. ► It is a complement for the thermoeconomic optimization method. ► Genetic algorithms are used for solving the optimization problem
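
As an illustration of the approach, the following toy sketch evolves two geometric parameters, tube count and tube length, toward a target UA product while penalizing heat-exchange area. The UA surrogate model, all constants, and all names are invented for illustration and are not from the paper.

```python
import random

# Toy GA sketch of the paper's idea: search tube-bank geometry for a target
# U*A product while keeping the heat-exchange area small.
def ua_model(n_tubes, length_m):            # hypothetical surrogate for U*A
    area = n_tubes * length_m * 0.12        # m^2 per tube-metre, assumed
    return 0.05 * area ** 0.9 * 1000, area  # (UA in W/K, area in m^2)

def fitness(genes, target_ua=5000.0, area_weight=1.0):
    ua, area = ua_model(*genes)
    return -abs(ua - target_ua) - area_weight * area

def evolve(pop_size=30, generations=200, seed=1):
    rng = random.Random(seed)
    pop = [(rng.randint(10, 500), rng.uniform(1.0, 10.0)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            n = a[0] if rng.random() < 0.5 else b[0]   # uniform crossover
            L = a[1] if rng.random() < 0.5 else b[1]
            if rng.random() < 0.2:                     # mutation
                n = max(10, n + rng.randint(-20, 20))
                L = min(10.0, max(1.0, L + rng.uniform(-0.5, 0.5)))
            children.append((n, L))
        pop = parents + children
    return max(pop, key=fitness)
```

The real method evaluates pressure losses and a detailed heat-transfer model instead of this surrogate, but the selection/crossover/mutation loop is the same in spirit.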

  14. Metaheuristic algorithms for building Covering Arrays: A review

    Directory of Open Access Journals (Sweden)

    Jimena Adriana Timaná-Peña

    2016-09-01

    Full Text Available Covering Arrays (CA) are mathematical objects used in the functional testing of software components. They enable the testing of all interactions of a given size among the input parameters of a procedure, function, or logical unit in general, using the minimum number of test cases. Building CAs is a complex task (an NP-complete problem) that involves lengthy execution times and high computational loads. The most effective methods for building CAs are algebraic, greedy, and metaheuristic-based; the latter have reported the best results to date. This paper presents a description of the major contributions made by a selection of different metaheuristics, including simulated annealing, tabu search, genetic algorithms, ant colony algorithms, particle swarm algorithms, and harmony search algorithms. It is worth noting that simulated annealing-based algorithms have evolved as the most competitive, and currently form the state of the art.

  15. Geometric Rationalization for Freeform Architecture

    KAUST Repository

    Jiang, Caigui

    2016-06-20

    The emergence of freeform architecture provides interesting geometric challenges with regards to the design and manufacturing of large-scale structures. To design these architectural structures, we have to consider two types of constraints. First, aesthetic constraints are important because the buildings have to be visually impressive. Second, functional constraints are important for the performance of a building and its efficient construction. This thesis contributes to the area of architectural geometry. Specifically, we are interested in the geometric rationalization of freeform architecture with the goal of combining aesthetic and functional constraints and construction requirements. Aesthetic requirements typically come from designers and architects. To obtain visually pleasing structures, they favor smoothness of the building shape, but also smoothness of the visible patterns on the surface. Functional requirements typically come from the engineers involved in the construction process. For example, covering freeform structures using planar panels is much cheaper than using non-planar ones. Further, constructed buildings have to be stable and should not collapse. In this thesis, we explore the geometric rationalization of freeform architecture using four specific example problems inspired by real life applications. We achieve our results by developing optimization algorithms and a theoretical study of the underlying geometrical structure of the problems. The four example problems are the following: (1) The design of shading and lighting systems which are torsion-free structures with planar beams based on quad meshes. They satisfy the functionality requirements of preventing light from going inside a building as shading systems or reflecting light into a building as lighting systems. (2) The design of freeform honeycomb structures that are constructed based on hex-dominant meshes with a planar beam mounted along each edge. The beams intersect without

  16. Numerical nonlinear complex geometrical optics algorithm for the 3D Calderón problem

    DEFF Research Database (Denmark)

    Delbary, Fabrice; Knudsen, Kim

    2014-01-01

    to the generalized Laplace equation. The 3D problem was solved in theory in late 1980s using complex geometrical optics solutions and a scattering transform. Several approximations to the reconstruction method have been suggested and implemented numerically in the literature, but here, for the first time, a complete...... computer implementation of the full nonlinear algorithm is given. First a boundary integral equation is solved by a Nystrom method for the traces of the complex geometrical optics solutions, second the scattering transform is computed and inverted using fast Fourier transform, and finally a boundary value...

  17. Efficient 3D geometric and Zernike moments computation from unstructured surface meshes.

    Science.gov (United States)

    Pozo, José María; Villa-Uriol, Maria-Cruz; Frangi, Alejandro F

    2011-03-01

    This paper introduces and evaluates a fast exact algorithm and a series of faster approximate algorithms for the computation of 3D geometric moments from an unstructured surface mesh of triangles. Being based on the object surface reduces the computational complexity of these algorithms with respect to volumetric grid-based algorithms. In contrast, it can only be applied for the computation of geometric moments of homogeneous objects. This advantage and restriction is shared with other proposed algorithms based on the object boundary. The proposed exact algorithm reduces the computational complexity for computing geometric moments up to order N with respect to previously proposed exact algorithms, from N^9 to N^6. The approximate series algorithm appears as a power series on the ratio between triangle size and object size, which can be truncated at any desired degree. The higher the number and quality of the triangles, the better the approximation. This approximate algorithm reduces the computational complexity to N^3. In addition, the paper introduces a fast algorithm for the computation of 3D Zernike moments from the computed geometric moments, with a computational complexity of N^4, while the previously proposed algorithm is of order N^6. The error introduced by the proposed approximate algorithms is evaluated for different shapes, and the cost-benefit ratio in terms of error and computational time is analyzed for different moment orders.
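
The surface-based computation rests on decomposing the solid into signed tetrahedra formed by each triangle and the origin. A minimal sketch for the lowest orders (volume and first moments only; the paper's algorithm generalizes this to arbitrary moment order) might look like:

```python
# Hedged sketch: exact volume and centroid of a closed, outward-oriented
# triangle mesh via signed tetrahedra against the origin.
def volume_and_centroid(triangles):
    vol = 0.0
    mom = [0.0, 0.0, 0.0]
    for v0, v1, v2 in triangles:
        # Signed volume of the tetrahedron (origin, v0, v1, v2): det/6.
        det = (v0[0] * (v1[1] * v2[2] - v1[2] * v2[1])
             - v0[1] * (v1[0] * v2[2] - v1[2] * v2[0])
             + v0[2] * (v1[0] * v2[1] - v1[1] * v2[0]))
        v = det / 6.0
        vol += v
        for k in range(3):
            # Centroid of a tetrahedron with one vertex at the origin.
            mom[k] += v * (v0[k] + v1[k] + v2[k]) / 4.0
    return vol, [m / vol for m in mom]
```

Signed contributions make the result exact for any closed mesh regardless of where the origin lies; higher-order moments replace the centroid term with polynomial integrals over each tetrahedron.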

  18. An improved geometric algorithm for calculating the topology of lattice gauge fields

    International Nuclear Information System (INIS)

    Pugh, D.J.R.; Teper, M.; Oxford Univ.

    1989-01-01

    We implement the algorithm of Phillips and Stone on a hypercubic, periodic lattice and show that at currently accessible couplings the SU(2) topological charge so calculated is dominated by short-distance fluctuations. We propose and test an improvement to rid the measure of such lattice artifacts. We find that the improved algorithm produces a topological susceptibility that is consistent with that obtained by the alternative cooling method, thus resolving the controversial discrepancy between geometric and cooling methods. We briefly discuss the reasons for this and point out that our improvement is likely to be particularly effective when applied to the case of SU(3). (orig.)

  19. A MATCHING METHOD TO REDUCE THE INFLUENCE OF SAR GEOMETRIC DEFORMATION

    Directory of Open Access Journals (Sweden)

    C. Gao

    2018-04-01

    Full Text Available There are large geometric deformations in SAR images, including foreshortening, layover and shadow, which lead to low matching accuracy. Especially in complex terrain areas, control points are difficult to obtain and matching is difficult to achieve. Considering the impact of geometric distortions in SAR image pairs, a matching algorithm combining speeded-up robust features (SURF) and summed normalized cross correlation (SNCC) is proposed, which avoids the influence of SAR geometric deformation. First, the SURF algorithm is used to predict the search area. Then, matching point pairs are selected based on the summed normalized cross correlation. Finally, false matches are eliminated by a bidirectional consistency check. The SURF algorithm constrains the range of matching points, matching points extracted from the deformed areas are eliminated, and matching points with a stable and even distribution are obtained. The experimental results demonstrate that the proposed algorithm has high precision and effectively avoids the effect of geometric distortion on SAR image matching, meeting the accuracy requirements of block adjustment with sparse control points.

  20. PyECLOUD and build-up simulations at CERN

    International Nuclear Information System (INIS)

    Iadarola, G; Rumolo, G

    2013-01-01

    PyECLOUD is a newly developed code for the simulation of the electron cloud (EC) build-up in particle accelerators. Almost entirely written in Python, it is mostly based on the physical models already used in the ECLOUD code but, thanks to the implementation of new optimized algorithms, it exhibits a significantly improved performance in accuracy, speed, reliability and flexibility. Such new features of PyECLOUD have been already broadly exploited to study EC observations in the Large Hadron Collider (LHC) and its injector chain as well as for the extrapolation to high luminosity upgrade scenarios. (author)

  1. Genetic algorithm for building envelope calibration

    International Nuclear Information System (INIS)

    Ramos Ruiz, Germán; Fernández Bandera, Carlos; Gómez-Acebo Temes, Tomás; Sánchez-Ostiz Gutierrez, Ana

    2016-01-01

    Highlights: • Calibration methodology using Multi-Objective Genetic Algorithm (NSGA-II). • Uncertainty analysis formulas implemented directly in EnergyPlus. • The methodology captures the heat dynamic of the building with a high level of accuracy. • Reduction in the number of parameters involved due to sensitivity analysis. • Cost-effective methodology using temperature sensors only. - Abstract: Buildings today represent 40% of world primary energy consumption and 24% of greenhouse gas emissions. In our society there is growing interest in knowing precisely when and how energy consumption occurs. This means that consumption measurement and verification plans are well-advanced. International agencies such as Efficiency Valuation Organization (EVO) and International Performance Measurement and Verification Protocol (IPMVP) have developed methodologies to quantify savings. This paper presents a methodology to accurately perform automated envelope calibration under option D (calibrated simulation) of IPMVP – vol. 1. This is frequently ignored because of its complexity, despite being more flexible and accurate in assessing the energy performance of a building. A detailed baseline energy model is used, and by means of a metaheuristic technique achieves a highly reliable and accurate Building Energy Simulation (BES) model suitable for detailed analysis of saving strategies. In order to find this BES model a Genetic Algorithm (NSGA-II) is used, together with a highly efficient engine to stimulate the objective, thus permitting rapid achievement of the goal. The result is a BES model that broadly captures the heat dynamic behaviour of the building. The model amply fulfils the parameters demanded by ASHRAE and EVO under option D.

  2. Canonical symplectic structure and structure-preserving geometric algorithms for Schrödinger-Maxwell systems

    Science.gov (United States)

    Chen, Qiang; Qin, Hong; Liu, Jian; Xiao, Jianyuan; Zhang, Ruili; He, Yang; Wang, Yulei

    2017-11-01

    An infinite dimensional canonical symplectic structure and structure-preserving geometric algorithms are developed for the photon-matter interactions described by the Schrödinger-Maxwell equations. The algorithms preserve the symplectic structure of the system and the unitary nature of the wavefunctions, and bound the energy error of the simulation for all time-steps. This new numerical capability enables us to carry out first-principle based simulation study of important photon-matter interactions, such as the high harmonic generation and stabilization of ionization, with long-term accuracy and fidelity.

  3. Elfin: An algorithm for the computational design of custom three-dimensional structures from modular repeat protein building blocks.

    Science.gov (United States)

    Yeh, Chun-Ting; Brunette, T J; Baker, David; McIntosh-Smith, Simon; Parmeggiani, Fabio

    2018-02-01

    Computational protein design methods have enabled the design of novel protein structures, but they are often still limited to small proteins and symmetric systems. To expand the size of designable proteins while controlling the overall structure, we developed Elfin, a genetic algorithm for the design of novel proteins with custom shapes using structural building blocks derived from experimentally verified repeat proteins. By combining building blocks with compatible interfaces, it is possible to rapidly build non-symmetric large structures (>1000 amino acids) that match three-dimensional geometric descriptions provided by the user. A run time of about 20 min on a laptop computer for a 3000 amino acid structure makes Elfin accessible to users with limited computational resources. Protein structures with controlled geometry will allow the systematic study of the effect of spatial arrangement of enzymes and signaling molecules, and provide new scaffolds for functional nanomaterials. Copyright © 2017 Elsevier Inc. All rights reserved.

  4. A novel rotational matrix and translation vector algorithm: geometric accuracy for augmented reality in oral and maxillofacial surgeries.

    Science.gov (United States)

    Murugesan, Yahini Prabha; Alsadoon, Abeer; Manoranjan, Paul; Prasad, P W C

    2018-06-01

    Augmented reality-based surgeries have not been successfully implemented in oral and maxillofacial areas due to limitations in geometric accuracy and image registration. This paper aims to improve the accuracy and depth perception of the augmented video. The proposed system consists of a rotational matrix and translation vector algorithm to reduce the geometric error and improve the depth perception by including 2 stereo cameras and a translucent mirror in the operating room. The results on the mandible/maxilla area show that the new algorithm improves the video accuracy by 0.30-0.40 mm (in terms of overlay error) and the processing rate to 10-13 frames/s compared to 7-10 frames/s in existing systems. The depth perception increased by 90-100 mm. The proposed system concentrates on reducing the geometric error. Thus, this study provides an acceptable range of accuracy with a shorter operating time, which provides surgeons with a smooth surgical flow. Copyright © 2018 John Wiley & Sons, Ltd.
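
As a hedged illustration of the registration step, estimating a rotation matrix and translation vector from point correspondences can be sketched in 2D with a closed-form least-squares fit. The paper's 3D stereo setup is more involved; this planar analogue is only indicative, and the function name is an assumption.

```python
import math

# Illustrative sketch: recover a rotation matrix R and translation vector t
# mapping src points onto dst points, in the least-squares sense (2D case).
def rigid_fit_2d(src, dst):
    n = len(src)
    csx = sum(p[0] for p in src) / n; csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n; cdy = sum(p[1] for p in dst) / n
    s_cos = s_sin = 0.0
    for (ax, ay), (bx, by) in zip(src, dst):
        ax -= csx; ay -= csy; bx -= cdx; by -= cdy   # center both point sets
        s_cos += ax * bx + ay * by                   # sum of dot products
        s_sin += ax * by - ay * bx                   # sum of 2D cross products
    th = math.atan2(s_sin, s_cos)                    # least-squares rotation angle
    R = [[math.cos(th), -math.sin(th)], [math.sin(th), math.cos(th)]]
    t = [cdx - (R[0][0] * csx + R[0][1] * csy),
         cdy - (R[1][0] * csx + R[1][1] * csy)]
    return R, t
```

In 3D the analogous closed form uses an SVD of the cross-covariance matrix (the Kabsch algorithm); the structure of the answer, a rotation plus a translation of the centroids, is the same.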

  5. Matching Real and Synthetic Panoramic Images Using a Variant of Geometric Hashing

    Science.gov (United States)

    Li-Chee-Ming, J.; Armenakis, C.

    2017-05-01

    This work demonstrates an approach to automatically initialize a visual model-based tracker, and recover from lost tracking, without prior camera pose information. These approaches are commonly referred to as tracking-by-detection. Previous tracking-by-detection techniques used either fiducials (i.e. landmarks or markers) or the object's texture. The main contribution of this work is the development of a tracking-by-detection algorithm that is based solely on natural geometric features. A variant of geometric hashing, a model-to-image registration algorithm, is proposed that searches for a matching panoramic image from a database of synthetic panoramic images captured in a 3D virtual environment. The approach identifies corresponding features between the matched panoramic images. The corresponding features are to be used in a photogrammetric space resection to estimate the camera pose. The experiments apply this algorithm to initialize a model-based tracker in an indoor environment using the 3D CAD model of the building.

  6. Geometrical features assessment of liver's tumor with application of artificial neural network evolved by imperialist competitive algorithm.

    Science.gov (United States)

    Keshavarz, M; Mojra, A

    2015-05-01

    Geometrical features of a cancerous tumor embedded in biological soft tissue, including tumor size and depth, are a necessity in the follow-up procedure and in making suitable therapeutic decisions. In this paper, a new socio-politically motivated global search strategy called the imperialist competitive algorithm (ICA) is implemented to train a feed-forward neural network (FFNN) to estimate the tumor's geometrical characteristics (FFNNICA). First, a viscoelastic model of liver tissue is constructed by using a series of in vitro uniaxial and relaxation test data. Then, 163 samples of the tissue, each including a tumor with a different depth and diameter, are generated by using Python programming to link ABAQUS and MATLAB. Next, the samples are divided into 123 samples as the training dataset and 40 samples as the testing dataset. Training inputs of the network are mechanical parameters extracted from palpation of the tissue through a developing noninvasive technology called artificial tactile sensing (ATS). Last, to evaluate the FFNNICA performance, outputs of the network, including tumor depth and diameter, are compared with desired values for both the training and testing datasets. Deviations of the outputs from desired values are calculated by a regression analysis. Statistical analysis is also performed by measuring the Root Mean Square Error (RMSE) and Efficiency (E). The RMSEs in diameter and depth estimation are 0.50 mm and 1.49, respectively, for the testing dataset. Results affirm that the proposed optimization algorithm for training the neural network can be useful for characterizing soft tissue tumors accurately by employing an artificial palpation approach. Copyright © 2015 John Wiley & Sons, Ltd.

  7. Critical review of the building downwash algorithms in AERMOD.

    Science.gov (United States)

    Petersen, Ron L; Guerra, Sergio A; Bova, Anthony S

    2017-08-01

    The only documentation on the building downwash algorithm in AERMOD (American Meteorological Society/U.S. Environmental Protection Agency Regulatory Model), referred to as PRIME (Plume Rise Model Enhancements), is found in the 2000 A&WMA journal article by Schulman, Strimaitis and Scire. Recent field and wind tunnel studies have shown that AERMOD can overpredict concentrations by factors of 2 to 8 for certain building configurations. While a wind tunnel equivalent building dimension study (EBD) can be conducted to approximately correct the overprediction bias, past field and wind tunnel studies indicate that there are notable flaws in the PRIME building downwash theory. A detailed review of the theory supported by CFD (Computational Fluid Dynamics) and wind tunnel simulations of flow over simple rectangular buildings revealed the following serious theoretical flaws: enhanced turbulence in the building wake starting at the wrong longitudinal location; constant enhanced turbulence extending up to the wake height; constant initial enhanced turbulence in the building wake (does not vary with roughness or stability); discontinuities in the streamline calculations; and no method to account for streamlined or porous structures. This paper documents theoretical and other problems in PRIME along with CFD simulations and wind tunnel observations that support these findings. Although AERMOD/PRIME may provide accurate and unbiased estimates (within a factor of 2) for some building configurations, a major review and update is needed so that accurate estimates can be obtained for other building configurations where significant overpredictions or underpredictions are common due to downwash effects. This will ensure that regulatory evaluations subject to dispersion modeling requirements can be based on an accurate model. Thus, it is imperative that the downwash theory in PRIME is corrected to improve model performance and ensure that the model better represents reality.

  8. Performance Evaluation of Various STL File Mesh Refining Algorithms Applied for FDM-RP Process

    Science.gov (United States)

    Ledalla, Siva Rama Krishna; Tirupathi, Balaji; Sriram, Venkatesh

    2018-06-01

    Layered manufacturing machines use the stereolithography (STL) file to build parts. When a curved surface is converted from a computer aided design (CAD) file to STL, the result is geometrical distortion and chordal error. Parts manufactured from this file might not satisfy geometric dimensioning and tolerancing requirements due to the approximated geometry. Current algorithms built into CAD packages have export options to globally reduce this distortion, which leads to an increase in file size and pre-processing time. In this work, different mesh subdivision algorithms are applied to the STL file of a part with complex geometric features using MeshLab software. The mesh subdivision algorithms considered in this work are the modified butterfly subdivision technique, the Loop subdivision technique and the general triangular midpoint subdivision technique. A comparative study is made with respect to volume and build time using the above techniques. It is found that the triangular midpoint subdivision algorithm is more suitable for the geometry under consideration. Only the wheel cap part is then manufactured on a Stratasys MOJO FDM machine. The surface roughness of the part is measured on a Talysurf surface roughness tester.
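
The triangular midpoint scheme mentioned above is simple to sketch: each triangle is replaced by four smaller ones through its edge midpoints. The following standalone version is an illustrative sketch, not MeshLab's implementation, and omits the welding of shared vertices.

```python
# Midpoint subdivision sketch: split every triangle into four via edge midpoints.
def midpoint(a, b):
    return tuple((a[i] + b[i]) / 2.0 for i in range(3))

def subdivide(triangles):
    out = []
    for a, b, c in triangles:
        ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
        # Three corner triangles plus the central one.
        out += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return out
```

Each pass quadruples the triangle count without moving any original vertex, which is why midpoint refinement reduces chordal error while leaving the nominal geometry untouched (unlike Loop or butterfly schemes, which reposition or add smoothed vertices).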

  9. On algorithm for building of optimal α-decision trees

    KAUST Repository

    Alkhalid, Abdulaziz

    2010-01-01

    The paper describes an algorithm that constructs approximate decision trees (α-decision trees) which are optimal relative to one of the following complexity measures: depth, total path length, or number of nodes. The algorithm uses dynamic programming and extends the methods described in [4] to the construction of approximate decision trees. An adjustable approximation rate allows the algorithm complexity to be controlled. The algorithm is applied to build optimal α-decision trees for two data sets from the UCI Machine Learning Repository [1]. © 2010 Springer-Verlag Berlin Heidelberg.
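
The dynamic-programming idea can be illustrated with a toy sketch for exact (not α-approximate) trees: memoize the optimal cost of each reachable subset of rows, here for the depth measure over boolean attributes. All names are illustrative assumptions.

```python
from functools import lru_cache

# Toy DP sketch: minimum depth of a decision tree classifying rows of a small
# boolean table. rows is a list of (attr_values, label) pairs, where
# attr_values is a tuple of 0/1 attribute values.
def min_tree_depth(rows):
    n_attrs = len(rows[0][0])

    @lru_cache(maxsize=None)
    def depth(subset):                     # subset: tuple of row indices
        labels = {rows[i][1] for i in subset}
        if len(labels) <= 1:
            return 0                       # pure subset: a leaf suffices
        best = float("inf")
        for a in range(n_attrs):
            left = tuple(i for i in subset if rows[i][0][a] == 0)
            right = tuple(i for i in subset if rows[i][0][a] == 1)
            if not left or not right:
                continue                   # split separates nothing; skip
            best = min(best, 1 + max(depth(left), depth(right)))
        return best

    return depth(tuple(range(len(rows))))
```

The paper's algorithm works the same way but over a more compact subproblem representation, supports total path length and node count as cost measures, and stops refining once a subset is α-pure, which is what makes the approximation rate adjustable.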

  10. Quantum algorithms for topological and geometric analysis of data

    Science.gov (United States)

    Lloyd, Seth; Garnerone, Silvano; Zanardi, Paolo

    2016-01-01

    Extracting useful information from large data sets can be a daunting task. Topological methods for analysing data sets provide a powerful technique for extracting such information. Persistent homology is a sophisticated tool for identifying topological features and for determining how such features persist as the data is viewed at different scales. Here we present quantum machine learning algorithms for calculating Betti numbers—the numbers of connected components, holes and voids—in persistent homology, and for finding eigenvectors and eigenvalues of the combinatorial Laplacian. The algorithms provide an exponential speed-up over the best currently known classical algorithms for topological data analysis. PMID:26806491

  11. Tenth workshop on the algorithmic foundations of robotics (WAFR)

    CERN Document Server

    Lozano-Perez, Tomas; Roy, Nicholas; Rus, Daniela; Algorithmic foundations of robotics X

    2013-01-01

    Algorithms are a fundamental component of robotic systems. Robot algorithms process inputs from sensors that provide noisy and partial data, build geometric and physical models of the world, plan high- and low-level actions at different time horizons, and execute these actions on actuators with limited precision. The design and analysis of robot algorithms raise a unique combination of questions from many fields, including control theory, computational geometry and topology, geometrical and physical modeling, reasoning under uncertainty, probabilistic algorithms, game theory, and theoretical computer science. The Workshop on Algorithmic Foundations of Robotics (WAFR) is a single-track meeting of leading researchers in the field of robot algorithms. Since its inception in 1994, WAFR has been held every other year, and has provided one of the premier venues for the publication of some of the field's most important and lasting contributions. This book contains the proceedings of the tenth WAFR, held on June 13–15 201...

  12. Building block method: a bottom-up modular synthesis methodology for distributed compliant mechanisms

    Directory of Open Access Journals (Sweden)

    G. Krishnan

    2012-03-01

Full Text Available Topology synthesis of compliant mechanisms is based on rigid-link kinematic designs or completely automated optimization techniques. These designs yield mechanisms that match the kinematic specifications as a whole, but seldom give the user insight into how each constituent member contributes towards the overall mechanism performance. This paper reviews recent developments in building block based design of compliant mechanisms. A key aspect of such a methodology is formulating a representation of compliance (i) at a single unique point of interest, in terms of geometric quantities such as ellipses and vectors, and (ii) as relative compliance between distinct input(s) and output(s), in terms of load flow. This geometric representation provides a direct mapping between the mechanism geometry and its behavior, and is used to characterize simple deformable members that form a library of building blocks. The design space spanned by the building block library guides the decomposition of a given problem specification into tractable sub-problems that can each be solved from an entry in the library. The effectiveness of this geometric representation aids user insight in design, and enables the discovery of trends and guidelines for obtaining practical conceptual designs.

  13. Using Genetic Algorithms for Building Metrics of Collaborative Systems

    Directory of Open Access Journals (Sweden)

    Cristian CIUREA

    2011-01-01

Full Text Available The paper objective is to reveal the importance of genetic algorithms in building robust metrics of collaborative systems. The main types of collaborative systems in the economy are presented and some characteristics of genetic algorithms are described. A genetic algorithm was implemented in order to determine the local maximum and minimum points of the relative complexity function associated with a collaborative banking system. The intelligent collaborative systems based on genetic algorithms, representing the new generation of collaborative systems, are analyzed, and the implementation of auto-adaptive interfaces in a banking application is described.
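The record's relative-complexity function is not given, so as a minimal sketch of locating an extremum with a genetic algorithm, a stand-in quadratic is minimized below; population size, operators, and rates are illustrative choices, not the paper's:

```python
import random

def genetic_minimize(f, lo, hi, pop_size=40, generations=60, seed=1):
    """Minimal real-coded GA: tournament selection, blend crossover, mutation."""
    rng = random.Random(seed)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        # Tournament selection: keep the better of two random individuals.
        parents = [min(rng.sample(pop, 2), key=f) for _ in range(pop_size)]
        children = []
        for a, b in zip(parents[::2], parents[1::2]):
            w = rng.random()
            child = w * a + (1 - w) * b              # blend crossover
            if rng.random() < 0.2:                   # occasional mutation
                child += rng.gauss(0.0, 0.1 * (hi - lo))
            children.append(min(max(child, lo), hi))  # clamp to the domain
        pop = parents[: pop_size - len(children)] + children
    return min(pop, key=f)

# Stand-in objective with its minimum at x = 1.3:
best = genetic_minimize(lambda x: (x - 1.3) ** 2 + 0.5, -5.0, 5.0)
```

Maximization works the same way with the fitness negated.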

  14. MATCHING AERIAL IMAGES TO 3D BUILDING MODELS BASED ON CONTEXT-BASED GEOMETRIC HASHING

    Directory of Open Access Journals (Sweden)

    J. Jung

    2016-06-01

Full Text Available In this paper, a new model-to-image framework to automatically align a single airborne image with existing 3D building models using geometric hashing is proposed. As a prerequisite for various applications such as data fusion, object tracking, change detection and texture mapping, the proposed registration method is used for determining accurate exterior orientation parameters (EOPs) of a single image. This model-to-image matching process consists of three steps: (1) feature extraction, (2) similarity measure and matching, and (3) adjustment of the EOPs of a single image. For feature extraction, we propose two types of matching cues: edged corner points representing the saliency of building corner points with associated edges, and contextual relations among the edged corner points within an individual roof. These matching features are extracted from both the 3D building models and a single airborne image. A set of matched corners is found with a given proximity measure through geometric hashing, and optimal matches are then finally determined by maximizing the matching cost encoding contextual similarity between matching candidates. The final matched corners are used for adjusting the EOPs of the single airborne image by the least squares method based on collinearity equations. The result shows that acceptable accuracy of the EOPs of a single image is achievable with the proposed registration approach as an alternative to the labour-intensive manual registration process.
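The geometric-hashing vote at the core of such matching can be sketched in a translation-only toy form; the record's edged-corner features and contextual cost are replaced here by bare 2D corner points, and the bin size Q is an assumption:

```python
from collections import defaultdict

Q = 0.5  # quantization bin size for hashed offsets (an assumption)

def offset_key(p, basis):
    """Quantized offset of point p relative to a basis point."""
    return (round((p[0] - basis[0]) / Q), round((p[1] - basis[1]) / Q))

def build_table(model_pts):
    """Index every model point by its offset from every model basis point."""
    table = defaultdict(list)
    for bi, basis in enumerate(model_pts):
        for pi, p in enumerate(model_pts):
            if pi != bi:
                table[offset_key(p, basis)].append(bi)
    return table

def match(table, scene_pts):
    """Vote for (scene basis, model basis) pairs that share quantized offsets."""
    votes = defaultdict(int)
    for bi, basis in enumerate(scene_pts):
        for pi, p in enumerate(scene_pts):
            if pi == bi:
                continue
            for model_basis in table.get(offset_key(p, basis), []):
                votes[(bi, model_basis)] += 1
    return max(votes, key=votes.get)

model = [(0, 0), (4, 0), (4, 3), (0, 3)]          # corner layout of one roof
scene = [(10, 20), (14, 20), (14, 23), (10, 23)]  # same layout, translated
scene_basis, model_basis = match(build_table(model), scene)
```

The full method additionally handles rotation/scale via the basis pair and re-ranks candidates by contextual similarity before the least-squares EOP adjustment.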

  15. DETERMINATION ALGORITHM OF OPTIMAL GEOMETRICAL PARAMETERS FOR COMPONENTS OF FREIGHT CARS ON THE BASIS OF GENERALIZED MATHEMATICAL MODELS

    Directory of Open Access Journals (Sweden)

    O. V. Fomin

    2013-10-01

Full Text Available Purpose. Presentation of the features, and an example of the use, of the proposed algorithm for determining optimal geometrical parameters of freight car components on the basis of generalized mathematical models, realized using a computer. Methodology. The developed approach to the search for optimal geometrical parameters can be described as the determination of the optimal decision from a selected set of possible variants. Findings. The presented application example of the proposed algorithm proved its operational capacity and efficiency of use. Originality. The determination procedure of optimal geometrical parameters for freight car components on the basis of generalized mathematical models was formalized in the paper. Practical value. Practical introduction of the research results for universal open cars allows one to reduce the tare of their design and accordingly to increase the carrying capacity by almost 100 kg with an improvement of strength characteristics. Taking into account the mass of the car park, this will provide a considerable economic effect in production and operation. The proposed approach is oriented to widely distributed software packages (for example, Microsoft Excel), which are used by the technical services of most enterprises, and does not require additional capital investments (acquisition of specialized programs and corresponding technical staff training). This proves the correctness of the research direction. The proposed algorithm can be used for the solution of other optimization tasks on the basis of generalized mathematical models.

  16. A Color Image Watermarking Scheme Resistant against Geometrical Attacks

    Directory of Open Access Journals (Sweden)

    Y. Xing

    2010-04-01

Full Text Available Geometrical attacks remain a problem for many digital watermarking algorithms at present. In this paper, we propose a watermarking algorithm for color images resistant to geometrical distortions (rotation and scaling). The singular value decomposition is used for watermark embedding and extraction. The log-polar mapping (LPM) and phase correlation method are used to register the position of the geometrical distortion suffered by the watermarked image. Experiments with different kinds of color images and watermarks demonstrate that the watermarking algorithm is robust to common image processing attacks, and especially to geometrical attacks.
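The singular-value embedding and extraction core of such schemes (without the LPM registration step) can be sketched as follows; the cover block, watermark bits, and strength alpha are illustrative, and the block is deliberately built with well-separated singular values, an assumption of this simple additive variant:

```python
import numpy as np

rng = np.random.default_rng(0)
# Build a cover block with well-separated singular values so that the
# perturbed values keep their descending order.
Q1, _ = np.linalg.qr(rng.standard_normal((8, 8)))
Q2, _ = np.linalg.qr(rng.standard_normal((8, 8)))
S = np.arange(8, 0, -1).astype(float)
cover = (Q1 * S) @ Q2.T

wm = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=float)  # watermark bits
alpha = 0.05                                          # embedding strength

U, S0, Vt = np.linalg.svd(cover)
watermarked = (U * (S0 + alpha * wm)) @ Vt            # perturb singular values

# Extraction uses the original singular values S0 as side information.
S_rec = np.linalg.svd(watermarked, compute_uv=False)
wm_rec = np.rint((S_rec - S0) / alpha)
```

The geometric robustness in the record comes from registering rotation/scale with LPM and phase correlation before this extraction step.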

  17. An information geometric approach to least squares minimization

    Science.gov (United States)

    Transtrum, Mark; Machta, Benjamin; Sethna, James

    2009-03-01

    Parameter estimation by nonlinear least squares minimization is a ubiquitous problem that has an elegant geometric interpretation: all possible parameter values induce a manifold embedded within the space of data. The minimization problem is then to find the point on the manifold closest to the origin. The standard algorithm for minimizing sums of squares, the Levenberg-Marquardt algorithm, also has geometric meaning. When the standard algorithm fails to efficiently find accurate fits to the data, geometric considerations suggest improvements. Problems involving large numbers of parameters, such as often arise in biological contexts, are notoriously difficult. We suggest an algorithm based on geodesic motion that may offer improvements over the standard algorithm for a certain class of problems.
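A bare-bones Levenberg-Marquardt iteration, the standard algorithm the abstract refers to, might look like this on a toy exponential model; the damping schedule and step-acceptance rule are one common variant among several, not the authors' geodesic method:

```python
import numpy as np

def levenberg_marquardt(residuals, p0, n_iter=60, lam=1e-3):
    """Minimal Levenberg-Marquardt: damped Gauss-Newton on a sum of squares."""
    p = np.asarray(p0, dtype=float)
    for _ in range(n_iter):
        r = residuals(p)
        # Forward-difference Jacobian of the residual vector.
        J = np.column_stack([
            (residuals(p + 1e-6 * np.eye(len(p))[j]) - r) / 1e-6
            for j in range(len(p))
        ])
        A = J.T @ J + lam * np.eye(len(p))   # damped normal equations
        step = np.linalg.solve(A, -J.T @ r)
        if np.sum(residuals(p + step) ** 2) < np.sum(r ** 2):
            p, lam = p + step, lam * 0.5     # accept: move toward Gauss-Newton
        else:
            lam *= 10.0                      # reject: behave like gradient descent
    return p

# Toy problem: fit y = a * exp(-b * x); true parameters (a, b) = (2.0, 0.7).
x = np.linspace(0.0, 4.0, 20)
y = 2.0 * np.exp(-0.7 * x)
p_fit = levenberg_marquardt(lambda p: p[0] * np.exp(-p[1] * x) - y, [1.0, 0.1])
```

Geometrically, the damping term interpolates between a Gauss-Newton step along the model manifold and a gradient step, which is exactly where the geodesic improvement proposed in the record intervenes.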

  18. Geometric reconstruction methods for electron tomography

    DEFF Research Database (Denmark)

    Alpers, Andreas; Gardner, Richard J.; König, Stefan

    2013-01-01

Electron tomography is becoming an increasingly important tool in materials science for studying the three-dimensional morphologies and chemical compositions of nanostructures. The image quality obtained by many current algorithms is seriously affected by the problems of missing wedge artefacts ... and discuss several algorithms from the mathematical fields of geometric and discrete tomography. The algorithms incorporate geometric prior knowledge (mainly convexity and homogeneity), which also in principle considerably reduces the number of tilt angles required. Results are discussed ...

  19. Scan-To Output Validation: Towards a Standardized Geometric Quality Assessment of Building Information Models Based on Point Clouds

    Science.gov (United States)

    Bonduel, M.; Bassier, M.; Vergauwen, M.; Pauwels, P.; Klein, R.

    2017-11-01

    The use of Building Information Modeling (BIM) for existing buildings based on point clouds is increasing. Standardized geometric quality assessment of the BIMs is needed to make them more reliable and thus reusable for future users. First, available literature on the subject is studied. Next, an initial proposal for a standardized geometric quality assessment is presented. Finally, this method is tested and evaluated with a case study. The number of specifications on BIM relating to existing buildings is limited. The Levels of Accuracy (LOA) specification of the USIBD provides definitions and suggestions regarding geometric model accuracy, but lacks a standardized assessment method. A deviation analysis is found to be dependent on (1) the used mathematical model, (2) the density of the point clouds and (3) the order of comparison. Results of the analysis can be graphical and numerical. An analysis on macro (building) and micro (BIM object) scale is necessary. On macro scale, the complete model is compared to the original point cloud and vice versa to get an overview of the general model quality. The graphical results show occluded zones and non-modeled objects respectively. Colored point clouds are derived from this analysis and integrated in the BIM. On micro scale, the relevant surface parts are extracted per BIM object and compared to the complete point cloud. Occluded zones are extracted based on a maximum deviation. What remains is classified according to the LOA specification. The numerical results are integrated in the BIM with the use of object parameters.

  20. ALGORITHM OF SELECTION EFFECTIVE SOLUTIONS FOR REPROFILING OF INDUSTRIAL BUILDINGS

    Directory of Open Access Journals (Sweden)

    MENEJLJUK A. I.

    2016-08-01

Full Text Available Raising of the problem. Industrial enterprises built during the Soviet period no longer comply with today's requirements; significant technical progress, economic reform and the transition to market principles of performance evaluation make it necessary to change their purpose and functionality. The technical condition of many industrial buildings in Ukraine allows them to be exploited for decades. Redesigning manufacturing enterprises allows one not only to reduce the cost of construction, but also to obtain new facilities in the city. Despite the large number of industrial buildings that have lost their effectiveness and relevance, and the significant investor interest in these objects, the scope of redevelopment in construction remains unexplored. Analysis of research on the topic. The problem of reconstruction of industrial buildings is considered by Topchy D. [3], Travin V. [9], and other scientists. However, regulatory documents contain no rules, and there are no systematic studies, on improving the organization of building reconstruction during realigning. The purpose of this work is the development of an algorithm for selecting effective organizational decisions at the planning stage of a reprofiling project for industrial buildings. The proposed algorithm allows one to select an effective organizational and technological solution for the re-profiling of industrial buildings, taking into account the features of the building, its location, the state of its structures and the existing restrictions. The most effective organizational solution allows the reprofiling project of an industrial building to be realized in the shortest possible time and with the lowest possible use of material resources, taking into account the available features and restrictions. Conclusion. Each object has a number of unique features that must be considered when choosing an effective reprofiling variant. The developed algorithm for selecting...

  1. Development of CAD implementing the algorithm of boundary elements’ numerical analytical method

    Directory of Open Access Journals (Sweden)

    Yulia V. Korniyenko

    2015-03-01

Full Text Available Until recently, the algorithms of the numerical-analytical boundary elements method had been implemented as programs written in the MATLAB environment language. Each program had a local character, i.e. it was used to solve a particular problem: calculation of a beam, frame, arch, etc. Constructing matrices in these programs was carried out "manually" and was therefore time-consuming. The research was aimed at a reasoned choice of programming language for developing a new CAD system that implements the algorithm of the numerical-analytical boundary elements method and provides visualization tools for the initial objects and calculation results. The research conducted shows that, among a wide variety of programming languages, the most efficient one for developing a CAD system employing the numerical-analytical boundary elements method is Java. This language provides tools not only for developing the calculating part of the CAD system, but also for building the graphic interface for constructing geometrical models and interpreting the calculated results.

  2. Resizing Technique-Based Hybrid Genetic Algorithm for Optimal Drift Design of Multistory Steel Frame Buildings

    Directory of Open Access Journals (Sweden)

    Hyo Seon Park

    2014-01-01

Full Text Available Since genetic algorithm-based optimization methods are computationally expensive for practical use in the field of structural optimization, a resizing technique-based hybrid genetic algorithm for the drift design of multistory steel frame buildings is proposed to increase the convergence speed of genetic algorithms. To reduce the number of structural analyses required for convergence, a genetic algorithm is combined with a resizing technique, an efficient optimization technique that controls the drift of buildings without repetitive structural analysis. The resizing technique-based hybrid genetic algorithm proposed in this paper is applied to the minimum weight design of three steel frame buildings. To evaluate the performance of the algorithm, optimum weights, computational times, and generation numbers from the proposed algorithm are compared with those from a genetic algorithm. Based on the comparisons, it is concluded that the hybrid genetic algorithm shows clear improvements in convergence properties.
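The division of labor described above, a GA for global search plus a cheap deterministic resizing repair in place of repeated structural analysis, can be caricatured with a stand-in drift model; the 1/area drift law, the drift limit, and all GA settings below are invented for illustration:

```python
import random

def drift(areas):
    """Stand-in drift model: story drift falls as member areas grow."""
    return sum(1.0 / a for a in areas)

def weight(areas):
    return sum(areas)

LIMIT = 3.0  # hypothetical allowable drift

def resize(areas):
    """Deterministic resizing repair: scale members until the drift limit holds."""
    s = drift(areas) / LIMIT
    return [a * s for a in areas] if s > 1.0 else list(areas)

def hybrid_ga(n_members=4, pop_size=30, generations=40, seed=3):
    rng = random.Random(seed)
    pop = [[rng.uniform(0.5, 5.0) for _ in range(n_members)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop = [resize(ind) for ind in pop]       # repair instead of re-analysis
        pop.sort(key=weight)                     # lighter frames rank higher
        elite = pop[: pop_size // 2]
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            child = [x if rng.random() < 0.5 else y for x, y in zip(a, b)]
            k = rng.randrange(n_members)
            child[k] *= rng.uniform(0.8, 1.25)   # mutate one member size
            children.append(child)
        pop = elite + children
    pop = [resize(ind) for ind in pop]
    return min(pop, key=weight)

best = hybrid_ga()
```

The point of the hybrid is that `resize` makes every candidate drift-feasible without an extra objective evaluation, so the GA spends its budget only on ranking feasible designs.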

  3. Determination of contamination-free build-up for 60Co

    International Nuclear Information System (INIS)

    Higgins, P.D.; Sibata, CH.; Paliwal, B.R.

    1985-01-01

Experimental verification of the difference between absorbed dose in tissue and the collision fraction of kerma requires precise knowledge of the absorbed dose curve, particularly in the build-up and build-down regions. A simple method for direct measurement of contamination-free build-up for 60Co, which should also be applicable to most of the photon energies commonly employed for treatment, is presented. It is shown that the contribution from air-scattered electrons to the surface dose may be removed by extrapolating measurements of build-up to zero field size. The remaining contribution to contamination from the collimators and other source-related hardware may be minimised by measuring these build-up curves sufficiently far from the source. These results were tested by measuring the build-up using a magnet to sweep scattered electrons from the primary photon beam and by measuring the surface dose in the limit of an evacuated beam path. The relative dose at zero depth in polystyrene was found to be approximately 8.9 ± 0.3% of the dose at the depth of maximum build-up. (author)

  4. Matching Aerial Images to 3D Building Models Using Context-Based Geometric Hashing

    Directory of Open Access Journals (Sweden)

    Jaewook Jung

    2016-06-01

Full Text Available A city is a dynamic entity whose environment is continuously changing over time. Accordingly, its virtual city models also need to be regularly updated to support accurate model-based decisions for various applications, including urban planning, emergency response and autonomous navigation. A concept of continuous city modeling is to progressively reconstruct city models by accommodating their changes recognized in the spatio-temporal domain, while preserving unchanged structures. A first critical step for continuous city modeling is to coherently register remotely sensed data taken at different epochs with existing building models. This paper presents a new model-to-image registration method using a context-based geometric hashing (CGH) method to align a single image with existing 3D building models. This model-to-image registration process consists of three steps: (1) feature extraction; (2) similarity measure and matching; and (3) estimating the exterior orientation parameters (EOPs) of a single image. For feature extraction, we propose two types of matching cues: edged corner features representing the saliency of building corner points with associated edges, and contextual relations among the edged corner features within an individual roof. A set of matched corners is found with a given proximity measure through geometric hashing, and optimal matches are then finally determined by maximizing the matching cost encoding contextual similarity between matching candidates. The final matched corners are used for adjusting the EOPs of the single airborne image by the least squares method based on collinearity equations. The result shows that acceptable accuracy of the EOPs of a single image can be achieved using the proposed registration approach as an alternative to a labor-intensive manual registration process.

  5. Designing algorithms using CAD technologies

    Directory of Open Access Journals (Sweden)

    Alin IORDACHE

    2008-01-01

Full Text Available A representative example of an eLearning-platform modular application, 'Logical diagrams', is intended to be a useful learning and testing tool for the beginner programmer, but also for the more experienced one. The problem this application is trying to solve concerns young programmers who forget about the fundamentals of this domain, algorithmics. Logical diagrams are a graphic representation of an algorithm, which uses different geometrical figures (parallelograms, rectangles, rhombuses, circles) with particular meanings, called blocks, connected to one another to reveal the flow of the algorithm. The role of this application is to help the user build the diagram for the algorithm and then automatically generate the C code and test it.

  6. Mixing geometric and radiometric features for change classification

    Science.gov (United States)

    Fournier, Alexandre; Descombes, Xavier; Zerubia, Josiane

    2008-02-01

Most basic change detection algorithms use a pixel-based approach. Whereas such an approach is quite well suited to monitoring large-area changes (such as urban growth monitoring) in low resolution images, an object-based approach seems more relevant when the change detection is specifically aimed at targets such as small buildings and vehicles. In this paper, we present an approach that mixes radiometric and geometric features to qualify the changed zones. The goal is to establish links (appearance, disappearance, substitution, ...) between the detected changes and the underlying objects. We proceed by first clustering the change map (containing the bitemporal radiometry of each pixel) into different classes using the entropy-kmeans algorithm. Assuming that most man-made objects have a polygonal shape, a polygonal approximation algorithm is then used to characterize the resulting zone shapes, allowing us to refine the primary rough classification by integrating the polygon orientations into the state space. Tests are currently being conducted on Quickbird data.
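The clustering step can be sketched with plain two-cluster k-means on bitemporal pixel values; note the record uses an entropy-based variant (entropy-kmeans), and seeding at the brightness extremes plus the synthetic pixel values below are simplifications:

```python
def kmeans2(points, iters=10):
    """Plain two-cluster k-means, seeded at the brightness extremes.
    (The record uses an entropy-based variant, entropy-kmeans.)"""
    centers = [min(points, key=lambda p: p[1]), max(points, key=lambda p: p[1])]
    clusters = [[], []]
    for _ in range(iters):
        clusters = [[], []]
        for p in points:
            d0 = (p[0] - centers[0][0]) ** 2 + (p[1] - centers[0][1]) ** 2
            d1 = (p[0] - centers[1][0]) ** 2 + (p[1] - centers[1][1]) ** 2
            clusters[0 if d0 <= d1 else 1].append(p)
        centers = [
            (sum(p[0] for p in cl) / len(cl), sum(p[1] for p in cl) / len(cl))
            if cl else centers[j]
            for j, cl in enumerate(clusters)
        ]
    return centers, clusters

# Bitemporal pixels (value at t0, value at t1): unchanged pixels hug the
# diagonal, changed pixels (e.g. a new building) jump in brightness.
unchanged = [(v, v + 1) for v in range(0, 20, 2)]
changed = [(v, v + 60) for v in range(0, 20, 2)]
centers, clusters = kmeans2(unchanged + changed)
```

The polygonal-approximation stage then operates on the spatial footprint of each resulting class.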

  7. Fluence map optimization (FMO) with dose–volume constraints in IMRT using the geometric distance sorting method

    International Nuclear Information System (INIS)

    Lan Yihua; Li Cunhua; Ren Haozheng; Zhang Yong; Min Zhifang

    2012-01-01

    the dose sorting method. By integrating a smart constraint adding/deleting scheme within the iteration framework, the new technique builds up an improved algorithm for solving the fluence map optimization with dose–volume constraints. (paper)

  8. Fluence map optimization (FMO) with dose-volume constraints in IMRT using the geometric distance sorting method.

    Science.gov (United States)

    Lan, Yihua; Li, Cunhua; Ren, Haozheng; Zhang, Yong; Min, Zhifang

    2012-10-21

    sorting method. By integrating a smart constraint adding/deleting scheme within the iteration framework, the new technique builds up an improved algorithm for solving the fluence map optimization with dose-volume constraints.

  9. Energy absorption and exposure build-up factors in teeth

    International Nuclear Information System (INIS)

    Manjunatha, H.C.; Rudraswamy, B.

    2010-01-01

    Full text: Gamma and X-radiation are widely used in medical imaging and radiation therapy. The user of radioisotopes must have knowledge about how radiation interacts with matter, especially with the human body, because when photons enter the medium/body, they degrade their energy and build up in the medium, giving rise to secondary radiation which can be estimated by a factor which is called the 'build-up factor'. It is essential to study the exposure build up factor in radiation dosimetry. G.P. fitting method has been used to compute energy absorption and exposure build-up factor of teeth (enamel outer surface (EOS), enamel middle (EM), enamel dentin junction towards enamel (EDJE), enamel dentin junction towards dentin (EDJD), dentin middle (DM) and dentin inner surface (DIS)) for wide energy range (0.015 MeV-15 MeV) up to the penetration depth of 40 mean free path. The dependence of energy absorption and exposure build up factor on incident photon energy, Penetration depth and effective atomic number has also been assessed. The relative dose distribution at a distance r from the point source is also estimated. The computed exposure and absorption build-up factors are useful to estimate the gamma and Bremsstrahlung radiation dose distribution teeth which is useful in clinical dosimetry

  10. Characterizing polycyclic aromatic hydrocarbon build-up processes on urban road surfaces

    International Nuclear Information System (INIS)

    Liu, Liang; Liu, An; Li, Dunzhu; Zhang, Lixun; Guan, Yuntao

    2016-01-01

Reliable prediction models are essential for modeling pollutant build-up processes on urban road surfaces. Based on successive samplings of road-deposited sediments (RDS), this study presents empirical models for mathematical replication of the polycyclic aromatic hydrocarbon (PAH) build-up processes on urban road surfaces. The contaminant build-up behavior was modeled using saturation functions, which are commonly applied in the US EPA's Stormwater Management Model (SWMM). Accurate fitting results were achieved in three typical urban land use types, and the applicability of the models was confirmed based on their acceptable relative prediction errors. The fitting results showed high variability in PAH saturation value and build-up rate among different land use types. Results of multivariate data and temporal-based analyses suggested that the quantity and properties of RDS significantly influenced PAH build-up. Furthermore, pollution sources, traffic parameters, road surface conditions, and sweeping frequency could jointly impact the RDS build-up and RDS property change processes. Thus, changes in these parameters could be the main reason for variations in PAH build-up in different urban land use types. - Highlights: • Sufficiently robust prediction models were established for analysis of PAH build-up on urban road surfaces. • PAH build-up processes showed high variability among different land use types. • Pollution sources as well as the quantity and properties of RDS mainly influenced PAH build-up.
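The saturation build-up function used in SWMM has the form B(t) = B_max · t / (k + t). A sketch of fitting it to synthetic dry-day observations by linearizing to 1/B against 1/t follows; the parameter values and sampling days are invented:

```python
def saturation(t, b_max, k):
    """SWMM-style saturation build-up: B(t) = b_max * t / (k + t)."""
    return b_max * t / (k + t)

# Synthetic 'observations' of PAH load versus antecedent dry days
# (b_max = 12.0 and k = 3.0 are invented for illustration).
days = [1, 2, 4, 7, 10, 14]
obs = [saturation(t, b_max=12.0, k=3.0) for t in days]

# Linearization: 1/B = (k / b_max) * (1 / t) + 1 / b_max,
# then ordinary least squares on the transformed data.
xs = [1.0 / t for t in days]
ys = [1.0 / b for b in obs]
n = len(xs)
x_bar, y_bar = sum(xs) / n, sum(ys) / n
slope = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) / \
        sum((x - x_bar) ** 2 for x in xs)
intercept = y_bar - slope * x_bar
b_max_fit = 1.0 / intercept      # recovered saturation value
k_fit = slope * b_max_fit        # recovered half-saturation constant
```

On real data the transformation distorts the error structure, so a nonlinear least-squares fit of the saturation form directly is usually preferable.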

  11. BUILD UP Skills Danmark

    DEFF Research Database (Denmark)

    Forsingdal, Charlotte Vartou; Lauridsen, Vagn Holk; Hougaard, Karsten Frøhlich

... the fulfilment of the 2020 targets, the right competences in energy efficiency and the use of renewable energy must be present among the practitioners in the building and construction sector. It is against this background that the European Commission has launched the Build Up Skills project across Europe. The purpose of this...

  12. Reconstruction of an InAs nanowire using geometric tomography

    DEFF Research Database (Denmark)

    Pennington, Robert S.; König, Stefan; Alpers, Andreas

Geometric tomography and conventional algebraic tomography algorithms are used to reconstruct cross-sections of an InAs nanowire from a tilt series of experimental annular dark-field images. Both algorithms are also applied to a test object to assess what factors affect the reconstruction quality. When using the present algorithms, geometric tomography is faster, but artifacts in the reconstruction may be difficult to recognize.

  13. Fast and Easy 3D Reconstruction with the Help of Geometric Constraints and Genetic Algorithms

    Science.gov (United States)

    Annich, Afafe; El Abderrahmani, Abdellatif; Satori, Khalid

    2017-09-01

The purpose of the work presented in this paper is to describe a new method of 3D reconstruction from one or more uncalibrated images. This method is based on two important concepts: geometric constraints and genetic algorithms (GAs). First, we discuss the combination of bundle adjustment and GAs that we have proposed in order to improve the efficiency and success of 3D reconstruction. We use GAs to improve the fitness quality of the initial values used in the optimization problem, which surely increases the convergence rate. The extracted geometric constraints are used first to obtain an estimated value of the focal length that helps us in the initialization step. Matching of homologous points, together with the constraints, is used to estimate the 3D model. In fact, our new method gives us many advantages: reducing the number of estimated parameters in the optimization step, decreasing the number of images used, saving time, and stabilizing the good quality of the 3D results. In the end, without any prior information about our 3D scene, we obtain an accurate calibration of the cameras and a realistic 3D model that strictly respects the geometric constraints defined beforehand, in an easy way. Various data and examples will be used to highlight the efficiency and competitiveness of our present approach.

  14. A new recursive incremental algorithm for building minimal acyclic deterministic finite automata

    NARCIS (Netherlands)

    Watson, B.W.; Martin-Vide, C.; Mitrana, V.

    2003-01-01

    This chapter presents a new algorithm for incrementally building minimal acyclic deterministic finite automata. Such minimal automata are a compact representation of a finite set of words (e.g. in a spell checker). The incremental aspect of such algorithms (where the intermediate automaton is
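A well-known incremental construction for lexicographically sorted input (not necessarily the new algorithm of the record) keeps a register of unique states and minimizes the suffix of the previous word before adding the next; a compact sketch:

```python
class State:
    def __init__(self):
        self.edges = {}     # char -> State
        self.final = False

    def key(self):
        # Equivalence signature: finality plus (char, child identity) pairs.
        return (self.final,
                tuple(sorted((c, id(t)) for c, t in self.edges.items())))

def replace_or_register(state, register):
    """Bottom-up minimization of the most recently added child chain."""
    if state.edges:
        c, child = max(state.edges.items())  # last edge added (input is sorted)
        replace_or_register(child, register)
        k = child.key()
        if k in register:
            state.edges[c] = register[k]     # reuse an equivalent state
        else:
            register[k] = child

def build_minimal_dfa(sorted_words):
    register, root, prev = {}, State(), ""
    for w in sorted_words:
        i = 0                                # length of the common prefix
        while i < min(len(w), len(prev)) and w[i] == prev[i]:
            i += 1
        node = root
        for c in w[:i]:
            node = node.edges[c]
        replace_or_register(node, register)  # freeze the previous suffix
        for c in w[i:]:
            node.edges[c] = State()
            node = node.edges[c]
        node.final = True
        prev = w
    replace_or_register(root, register)
    return root

def accepts(root, w):
    node = root
    for c in w:
        if c not in node.edges:
            return False
        node = node.edges[c]
    return node.final

def count_states(root):
    seen, stack = set(), [root]
    while stack:
        s = stack.pop()
        if id(s) not in seen:
            seen.add(id(s))
            stack.extend(s.edges.values())
    return len(seen)

words = ["tap", "taps", "top", "tops"]
dfa = build_minimal_dfa(words)
```

For this word set the minimal automaton has five states, since the suffixes after "ta" and "to" are identical and get merged through the register.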

  15. Geometric information provider platform

    Directory of Open Access Journals (Sweden)

    Meisam Yousefzadeh

    2015-07-01

Full Text Available Renovation of existing buildings is known to be an essential step in reducing energy loss. A considerable part of the renovation process depends on geometric reconstruction of the building based on semantic parameters. Following many research projects focused on parameterizing energy usage, various energy modelling methods were developed during the last decade. On the other hand, with the development of accurate measuring tools such as laser scanners, interest in accurate 3D building models is rapidly growing. But automating the generation of 3D building models from laser point clouds, or the detection of specific objects in them, is still a challenge. The goal is to design a platform through which the required geometric information can be efficiently produced to support energy simulation software. Developing a reliable procedure which extracts the required information from measured data and delivers it to a standard energy modelling system is the main purpose of the project.

  16. Using neural networks to speed up optimization algorithms

    CERN Document Server

    Bazan, M

    2000-01-01

The paper presents the application of radial-basis-function (RBF) neural networks to speed up deterministic search algorithms used for the design and optimization of superconducting LHC magnets. The optimization of the iron yoke of the main dipoles requires a number of numerical field computations per trial solution, as the field quality depends on the excitation of the magnets. This results in computation times of about 30 minutes for each objective function evaluation (on a DEC-Alpha 600/333), so only the most robust (deterministic) optimization algorithms can be applied. Using an RBF function approximator, the achieved speed-up of the search algorithm is on the order of 25% for problems with two parameters and about 18% for problems with three and five design variables. (13 refs).
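The surrogate idea can be illustrated in one dimension: fit an RBF interpolant to a few true (expensive) evaluations, then let the search probe the cheap model between samples. The Gaussian kernel, shape parameter, and the stand-in objective below are all assumptions; the paper's magnet field computation is of course far more costly:

```python
import math

def gauss_solve(A, b):
    """Solve A w = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    w = [0.0] * n
    for i in range(n - 1, -1, -1):
        w[i] = (M[i][n] - sum(M[i][j] * w[j] for j in range(i + 1, n))) / M[i][i]
    return w

def rbf_surrogate(xs, ys, eps=1.5):
    """Gaussian-kernel RBF interpolant of an expensive objective."""
    phi = lambda r: math.exp(-(eps * r) ** 2)
    A = [[phi(abs(xi - xj)) for xj in xs] for xi in xs]
    w = gauss_solve(A, ys)
    return lambda x: sum(wi * phi(abs(x - xi)) for wi, xi in zip(w, xs))

# Stand-in for a 30-minute field computation:
expensive = lambda x: math.sin(x) + 0.1 * x
xs = [0.3 * i for i in range(12)]               # a few true evaluations
model = rbf_surrogate(xs, [expensive(x) for x in xs])
# The search algorithm can now probe the cheap surrogate between samples.
err = abs(model(1.0) - expensive(1.0))
```

In practice the surrogate is periodically re-fitted as the search supplies new true evaluations near the current optimum.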

  17. Mechanical Model of Geometric Cell and Topological Algorithm for Cell Dynamics from Single-Cell to Formation of Monolayered Tissues with Pattern

    KAUST Repository

    Kachalo, Së ma; Naveed, Hammad; Cao, Youfang; Zhao, Jieling; Liang, Jie

    2015-01-01

    development, and other emerging behavior. Here we describe a cell model and an efficient geometric algorithm for studying the dynamic process of tissue formation in 2D (e.g. epithelial tissues). Our approach improves upon previous methods by incorporating

  18. SU-E-T-59: Calculations of Collimator Scatter Factors (Sc) with and Without Custom-Made Build-Up Caps for CyberKnife

    Energy Technology Data Exchange (ETDEWEB)

    Wokoma, S; Yoon, J; Jung, J [East Carolina University, Greenville, NC (United States); Lee, S [Rhode Island Hospital / Warren Alpert Medical, Providence, RI (United States)

    2014-06-01

Purpose: To investigate the impact of custom-made build-up caps for a diode detector on collimator scatter factor (Sc) calculation in robotic radiosurgery radiation fields with a variable collimator (IRIS). Methods: An acrylic cap was custom-made to fit our SFD (IBA Dosimetry, Germany) diode detector. The cap has a thickness of 5 cm, corresponding to a depth beyond electron contamination. IAEA phase space data were used for beam modeling and the DOSRZnrc code was used to model the detector. The detector was positioned at 80 cm source-to-detector distance. Calculations were performed with the SFD, with and without the build-up cap, for clinical IRIS settings ranging from 7.5 to 60 mm. Results: The collimator scatter factors were calculated with and without the 5 cm build-up cap. They agreed to within 3%, except for the 15 mm cone: the Sc factor for the 15 mm cone without build-up was 13.2% lower than that with build-up. Conclusion: Sc data are a critical component of advanced treatment planning algorithms for calculating the dose accurately. After incorporating the build-up cap, we discovered differences of up to 13.2% in Sc factors with the SFD detector when compared against in-air measurements without build-up caps.

  19. A New Quaternion-Based Kalman Filter for Real-Time Attitude Estimation Using the Two-Step Geometrically-Intuitive Correction Algorithm.

    Science.gov (United States)

    Feng, Kaiqiang; Li, Jie; Zhang, Xiaoming; Shen, Chong; Bi, Yu; Zheng, Tao; Liu, Jun

    2017-09-19

    In order to reduce computational complexity and improve the pitch/roll estimation accuracy of a low-cost attitude and heading reference system (AHRS) under magnetic distortion, a novel linear Kalman filter, suitable for nonlinear attitude estimation, is proposed in this paper. The new algorithm combines the two-step geometrically-intuitive correction (TGIC) with the Kalman filter. In the proposed algorithm, the sequential two-step geometrically-intuitive correction scheme makes the current estimation of pitch/roll immune to magnetic distortion. Meanwhile, the TGIC produces a computed quaternion input for the Kalman filter, which avoids the linearization error of the measurement equations and reduces the computational complexity. Several experiments have been carried out to validate the performance of the filter design. The results demonstrate that the mean time consumption and the root mean square error (RMSE) of pitch/roll estimation under magnetic disturbances are reduced by 45.9% and 33.8%, respectively, when compared with a standard filter. In addition, the proposed filter is applicable to attitude estimation under various dynamic conditions.
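    As a rough illustration of why a sequential correction makes pitch/roll immune to magnetic distortion, the sketch below computes pitch/roll from the accelerometer alone and only then uses the tilt-compensated magnetometer for heading. This is a generic Euler-angle version of the idea, not the authors' quaternion-based TGIC filter, and the sign conventions assume one common NED-style frame.

```python
import math

def pitch_roll_from_accel(ax, ay, az):
    # Step 1: pitch/roll from the gravity vector alone, so magnetometer
    # errors cannot leak into these two angles.
    pitch = math.atan2(-ax, math.sqrt(ay * ay + az * az))
    roll = math.atan2(ay, az)
    return pitch, roll

def heading_from_mag(mx, my, mz, pitch, roll):
    # Step 2: tilt-compensate the magnetometer with the already-fixed
    # pitch/roll, then take the heading from the horizontal components.
    mx2 = mx * math.cos(pitch) + mz * math.sin(pitch)
    my2 = (mx * math.sin(roll) * math.sin(pitch)
           + my * math.cos(roll)
           - mz * math.sin(roll) * math.cos(pitch))
    return math.atan2(-my2, mx2)
```

With a level sensor and a magnetometer pointing along the reference axis, all three angles come out as zero; distorting the magnetometer changes only the heading.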

  20. An atomistic geometrical model of the B-DNA configuration for DNA-radiation interaction simulations

    Science.gov (United States)

    Bernal, M. A.; Sikansi, D.; Cavalcante, F.; Incerti, S.; Champion, C.; Ivanchenko, V.; Francis, Z.

    2013-12-01

    In this paper, an atomistic geometrical model for the B-DNA configuration is explained. This model accounts for five organization levels of the DNA, up to the 30 nm chromatin fiber; fragments of this fiber can be used to construct the whole genome. The algorithm developed in this work determines the closest atom to an arbitrary point in space. It can be used in any application in which a DNA geometrical model is needed, for instance, in investigations related to the effects of ionizing radiation on the human genetic material. Successful consistency checks were carried out to test the proposed model. Catalogue identifier: AEPZ_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEPZ_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 1245 No. of bytes in distributed program, including test data, etc.: 6574 Distribution format: tar.gz Programming language: FORTRAN. Computer: Any. Operating system: Multi-platform. RAM: 2 GB Classification: 3. Nature of problem: The Monte Carlo method is used to simulate the interaction of ionizing radiation with the human genetic material in order to determine DNA damage yields per unit absorbed dose. To accomplish this task, an algorithm is needed to determine whether a given energy deposition lies within a given target. This target can be an atom or any other structure of the genetic material. Solution method: This is a stand-alone subroutine describing an atomic-resolution geometrical model of the B-DNA configuration. It is able to determine the closest atom to an arbitrary point in space. This model accounts for five organization levels of the human genetic material, from the nucleotide pair up to the 30 nm chromatin fiber. This subroutine carries out a series of coordinate transformations
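    The published code is a FORTRAN subroutine; the closest-atom query at its core can be sketched in a few lines. The coordinates below are hypothetical, and the search is brute force; for a whole-genome model a spatial index such as a k-d tree would be used instead.

```python
import numpy as np

# Hypothetical atom coordinates (nm), standing in for the DNA model's atoms.
atoms = np.array([[0.00, 0.0, 0.0],
                  [0.34, 0.0, 0.1],
                  [0.10, 0.5, -0.2]])

def closest_atom(point, coords):
    """Return the index and distance of the atom nearest to `point`."""
    d = np.linalg.norm(coords - np.asarray(point, dtype=float), axis=1)
    i = int(np.argmin(d))
    return i, float(d[i])
```

An energy deposition at a given point is then assigned to a target by checking whether the returned distance is below that atom's interaction radius.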

  1. Sudan-decoding generalized geometric Goppa codes

    DEFF Research Database (Denmark)

    Heydtmann, Agnes Eileen

    2003-01-01

    Generalized geometric Goppa codes are vector spaces of n-tuples with entries from different extension fields of a ground field. They are derived from evaluating functions similar to conventional geometric Goppa codes, but allowing evaluation in places of arbitrary degree. A decoding scheme for these codes based on Sudan's improved algorithm is presented and its error-correcting capacity is analyzed. For the implementation of the algorithm it is necessary that the so-called increasing zero bases of certain spaces of functions are available. A method to obtain such bases is developed.

  2. SEMI-AUTOMATIC CO-REGISTRATION OF PHOTOGRAMMETRIC AND LIDAR DATA USING BUILDINGS

    Directory of Open Access Journals (Sweden)

    C. Armenakis

    2012-07-01

    Full Text Available In this work, the co-registration steps between LiDAR and photogrammetric DSM 3D data are analyzed and a solution based on automated plane matching is proposed and implemented. For a robust 3D geometric transformation, both planes and points are used. Initially, planes are chosen as the co-registration primitives. To confine the search space for the plane matching, a sequential automatic building matching is performed first. For matching buildings from the LiDAR and the photogrammetric data, a similarity objective function is formed based on the roof height difference (RHD), the 3D histogram of the building attributes, and the building boundary area. A region growing algorithm based on a Triangulated Irregular Network (TIN) is implemented to extract planes from both datasets. Next, an automatic successive process for identifying and matching corresponding planes from the two datasets has been developed and implemented. It is based on the building boundary region and determines plane pairs through a robust matching process, thus eliminating outlier pairs. The selected correct plane pairs are the input data for the geometric transformation process. The 3D conformal transformation method, in conjunction with the attitude quaternion, is applied to obtain the transformation parameters using the normal vectors of the corresponding plane pairs. Following the mapping of one dataset onto the coordinate system of the other, the Iterative Closest Point (ICP) algorithm is then applied, using the corresponding building point clouds to further refine the transformation solution. The results indicate that the combination of planes and points improves the co-registration outcomes.
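    The point-based refinement step described here rests on rigid least-squares alignment. A minimal sketch of one such alignment (the Kabsch/SVD solution that an ICP iteration applies to matched point pairs) might look like the following; the paper's full pipeline additionally performs plane matching and a quaternion-based conformal transformation.

```python
import numpy as np

def kabsch(P, Q):
    """Best-fit rotation R and translation t mapping point set P onto Q
    (rows are matched 3D points) in the least-squares sense."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)            # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])           # guard against reflections
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t
```

An ICP loop alternates this solve with re-matching each point in one cloud to its current nearest neighbour in the other.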

  3. Comparing Whole Building Energy Implications of Sidelighting Systems with Alternate Manual Blind Control Algorithms

    Directory of Open Access Journals (Sweden)

    Christopher Dyke

    2015-05-01

    Currently, there is no manual blind control guideline used consistently throughout the energy modeling community. This paper identifies and compares five manual blind control algorithms with unique control patterns and reports blind occlusion, rate-of-change data, and annual building energy consumption. The blind control schemes detailed here represent five reasonable candidates for use in lighting and energy simulation, based on different driving factors. This study was performed on a medium-sized office building using EnergyPlus with its internal daylight harvesting engine. Results show that applying manual blind control algorithms affects the total annual consumption of the building by as much as 12.5% and 11.5% for interior and exterior blinds, respectively, compared to the Always Retracted blinds algorithm. Peak demand was also compared, showing that blind algorithms affected zone load sizing by as much as 9.8%. The alternate algorithms were tested for their impact on American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) Guideline 14 calibration metrics, and all models were found to differ from the original calibrated baseline by more than the recommended ±15% for the coefficient of variation of the root mean square error (CVRMSE) and ±5% for the normalized mean bias error (NMBE). The paper recommends that energy modelers use one or more manual blind control algorithms during design stages when making decisions about energy efficiency and other design alternatives.
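    The two ASHRAE Guideline 14 calibration metrics referenced in this record are easy to state directly. The sketch below uses the plain n-denominator forms; Guideline 14 additionally applies degrees-of-freedom corrections that are omitted here for brevity.

```python
import math

def cvrmse(measured, simulated):
    """Coefficient of variation of the RMSE, in percent (n denominator)."""
    n = len(measured)
    mean = sum(measured) / n
    rmse = math.sqrt(sum((m - s) ** 2 for m, s in zip(measured, simulated)) / n)
    return 100.0 * rmse / mean

def nmbe(measured, simulated):
    """Normalized mean bias error, in percent (n denominator)."""
    n = len(measured)
    mean = sum(measured) / n
    return 100.0 * sum(m - s for m, s in zip(measured, simulated)) / (n * mean)
```

Note that NMBE can be zero while CVRMSE is large, since positive and negative errors cancel in the bias but not in the squared error.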

  4. A sketch to the geometrical N=2-d=5 Yang-Mills theory over a supersymmetric group-manifold - I

    International Nuclear Information System (INIS)

    Borges, M.; Turin Univ.; Pio, G.

    1983-03-01

    This work concerns the search for, and construction of, a geometrical structure for a supersymmetric N=2-d=5 Yang-Mills theory on the group manifold. From criteria established throughout this paper, we build up an ansatz for the curvatures of our theory and then solve the Bianchi identities, whose solution is fundamental for the construction of the geometrical action. (author)

  5. Metal transfer and build-up in friction and cutting

    CERN Document Server

    Kuznetsov, V D

    1956-01-01

    Metal Transfer and Build-up in Friction and Cutting aims to systematize our knowledge of the metal build-up, to describe some of the investigations past and present carried out in SFTI (Tomsk), and to make an effort to explain a number of the phenomena in cutting, scratching, and sliding from the point of view of metal transfer theory. The book opens with a chapter on the temperature of the rubbing interface of two solids. This temperature is needed in order to elucidate the nature of the formation of a build-up in scratching, cutting, and sliding. Separate chapters follow on the seizure phen

  6. Enhancements to AERMOD's building downwash algorithms based on wind-tunnel and Embedded-LES modeling

    Science.gov (United States)

    Monbureau, E. M.; Heist, D. K.; Perry, S. G.; Brouwer, L. H.; Foroutan, H.; Tang, W.

    2018-04-01

    Knowing the fate of effluent from an industrial stack is important for assessing its impact on human health. AERMOD is one of several Gaussian plume models containing algorithms to evaluate the effect of buildings on the movement of the effluent from a stack. The goal of this study is to improve AERMOD's ability to accurately model important and complex building downwash scenarios by incorporating knowledge gained from a recently completed series of wind tunnel studies and complementary large eddy simulations of flow and dispersion around simple structures for a variety of building dimensions, stack locations, stack heights, and wind angles. This study presents three modifications to the building downwash algorithm in AERMOD that improve the physical basis and internal consistency of the model, and one modification to AERMOD's building pre-processor to better represent elongated buildings in oblique winds. These modifications are demonstrated to improve the ability of AERMOD to model observed ground-level concentrations in the vicinity of a building for the variety of conditions examined in the wind tunnel and numerical studies.

  7. ALGORITHM FOR THE AUTOMATIC ESTIMATION OF AGRICULTURAL TREE GEOMETRIC PARAMETERS USING AIRBORNE LASER SCANNING DATA

    Directory of Open Access Journals (Sweden)

    E. Hadaś

    2016-06-01

    The estimation of dendrometric parameters has become an important issue for agricultural planning and management. Since classical field measurements are time-consuming and inefficient, Airborne Laser Scanning (ALS) data can be used for this purpose. Point clouds acquired over orchard areas make it possible to determine orchard structures and the geometric parameters of individual trees. In this research we propose an automatic method for determining the geometric parameters of individual olive trees from ALS data. The method is based on the α-shape algorithm applied to normalized point clouds. The algorithm returns polygons representing crown shapes. For the points located inside each polygon, we select the maximum and minimum heights and then estimate the tree height and the crown base height. We use the first two components of the Principal Component Analysis (PCA) as estimators for the crown diameters. The α-shape algorithm requires a radius parameter R to be defined. In this study we investigated how sensitive the results are to the radius size, by comparing the results obtained with various settings of R against reference values of the estimated parameters from field measurements. Our study area was an olive orchard located in the Castellon Province, Spain. We used a set of ALS data with an average density of 4 points/m². We noticed that there was a narrow range of the R parameter, from 0.48 m to 0.80 m, for which all trees were detected and for which we obtained a high correlation coefficient (> 0.9) between estimated and measured values. We compared our estimates with field measurements. The RMSE of the differences was 0.8 m for the tree height, 0.5 m for the crown base height, and 0.6 m and 0.4 m for the longer and shorter crown diameters, respectively. The accuracy obtained with the method is thus sufficient for agricultural applications.
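    The PCA-based crown-diameter step can be sketched as follows: project the crown points onto their principal axes and take the spans along each axis. The α-shape crown extraction itself is omitted; the input is assumed to be the 2D coordinates of points inside one crown polygon.

```python
import numpy as np

def crown_diameters(xy):
    """Estimate the longer and shorter crown diameters from 2D crown
    points as the spans along the two principal axes (PCA)."""
    X = xy - xy.mean(axis=0)
    # Principal axes are the eigenvectors of the 2x2 covariance matrix.
    _, vecs = np.linalg.eigh(np.cov(X.T))
    proj = X @ vecs                       # coordinates along principal axes
    spans = proj.max(axis=0) - proj.min(axis=0)
    return float(spans.max()), float(spans.min())
```

For an axis-aligned rectangular crown of 4 m by 1 m, this returns diameters of 4 and 1; for a rotated crown the principal axes rotate with it, which is the point of using PCA rather than the bounding box.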

  8. Build-up and management of transuranium

    International Nuclear Information System (INIS)

    Uematsu, Kunihiko

    1984-01-01

    About 17,000,000 kW is currently generated by nuclear power stations, corresponding to 20% of the total power generation in Japan, and this share is expected to increase year after year. Following the increase in power generation, the build-up of transuranium elements from nuclear power stations will increase as a matter of course. By 2000 AD, the build-up of Pu and TPu is expected to reach 200 T (TPu = 24 T). Effective management of the TPu build-up is now an urgent problem. Recycling of Pu and TPu, including LWR-Pu recycling, ATR-Pu recycling and FBR-Pu recycling, was investigated. In LWR-Pu recycling, the recycled quantities of Pu and TPu, and the power generated, increase with repeated recycling. In ATR-Pu recycling, the increase of TPu with recycling is more remarkable than in LWR-Pu recycling. In FBR-Pu recycling, by contrast, TPu decreases with repeated recycling; this decrease is thought to be caused by the extinction effect in the FBR. All of these recycling schemes are suitable for the utilization of Pu, but FBR-Pu recycling is the most effective for the utilization of Pu and the reduction of TPu. Accordingly, when LWR or ATR recycling is applied, Pu should be transferred to the FBR after 1 - 2 recyclings. For long-term management of TPu, recycling alone is not sufficient, and some active method, such as extinction by a strong neutron source like a proton linear accelerator, is necessary. Fundamental research on the nuclear fuel cycle, nuclide separation methods and the extinction process of TPu must be carried out. (Ishimitsu, A.)

  9. Geometric Algorithms for Part Orienting and Probing

    NARCIS (Netherlands)

    Panahi, F.

    2015-01-01

    In this thesis, detailed solutions are presented to several problems dealing with geometric shape and orientation of an object in the field of robotics and automation. We first have considered a general model for shape variations that allows variation along the entire boundary of an object, both in

  10. Reconstruction of an InAs nanowire using geometric and algebraic tomography

    International Nuclear Information System (INIS)

    Pennington, R S; Boothroyd, C B; König, S; Alpers, A; Dunin-Borkowski, R E

    2011-01-01

    Geometric tomography and conventional algebraic tomography algorithms are used to reconstruct cross-sections of an InAs nanowire from a tilt series of experimental annular dark-field images. Both algorithms are also applied to a test object to assess what factors affect the reconstruction quality. When using the present algorithms, geometric tomography is faster, but artifacts in the reconstruction may be difficult to recognize.

  11. Reconstruction of an InAs nanowire using geometric and algebraic tomography

    DEFF Research Database (Denmark)

    Pennington, Robert S.; König, S.; Alpers, A.

    2011-01-01

    Geometric tomography and conventional algebraic tomography algorithms are used to reconstruct cross-sections of an InAs nanowire from a tilt series of experimental annular dark-field images. Both algorithms are also applied to a test object to assess what factors affect the reconstruction quality. When using the present algorithms, geometric tomography is faster, but artifacts in the reconstruction may be difficult to recognize.

  12. Geometric differential evolution for combinatorial and programs spaces.

    Science.gov (United States)

    Moraglio, A; Togelius, J; Silva, S

    2013-01-01

    Geometric differential evolution (GDE) is a recently introduced formal generalization of traditional differential evolution (DE) that can be used to derive specific differential evolution algorithms for both continuous and combinatorial spaces while retaining the same geometric interpretation of the dynamics of the DE search across representations. In this article, we first review the theory behind the GDE algorithm; then we use this framework to formally derive specific GDE for search spaces associated with binary strings, permutations, vectors of permutations and genetic programs. The resulting algorithms are representation-specific differential evolution algorithms that search the target spaces by acting directly on their underlying representations. We present experimental results for each of the new algorithms on a number of well-known problems comprising NK-landscapes, TSP, and Sudoku, for binary strings, permutations, and vectors of permutations. We also present results for the regression, artificial ant, parity, and multiplexer problems within the genetic programming domain. Experiments show that overall the new DE algorithms are competitive with well-tuned standard search algorithms.
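    For reference, the classical DE/rand/1/bin generation that GDE generalizes can be sketched as below (continuous vectors with greedy selection; GDE re-derives the same construction for binary strings, permutations, and programs).

```python
import random

def de_step(pop, f, F=0.8, CR=0.9):
    """One generation of DE/rand/1/bin minimizing f over real vectors."""
    out = []
    for i, x in enumerate(pop):
        # Mutation: combine three distinct individuals other than x.
        a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
        mutant = [ai + F * (bi - ci) for ai, bi, ci in zip(a, b, c)]
        # Binomial crossover: take each coordinate from mutant with prob CR,
        # forcing at least one mutant coordinate through.
        j_rand = random.randrange(len(x))
        trial = [m if (random.random() < CR or j == j_rand) else xj
                 for j, (xj, m) in enumerate(zip(x, mutant))]
        # Greedy selection: an individual is never replaced by a worse trial.
        out.append(trial if f(trial) <= f(x) else x)
    return out
```

Because selection is greedy per individual, the best objective value in the population is non-increasing from one generation to the next.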

  13. Surface and contour-preserving origamic architecture paper pop-ups.

    Science.gov (United States)

    Le, Sang N; Leow, Su-Jun; Le-Nguyen, Tuong-Vu; Ruiz, Conrado; Low, Kok-Lim

    2014-02-01

    Origamic architecture (OA) is a form of papercraft that involves cutting and folding a single sheet of paper to produce a 3D pop-up, and is commonly used to depict architectural structures. Because of the strict geometric and physical constraints, OA design requires considerable skill and effort. In this paper, we present a method to automatically generate an OA design that closely depicts an input 3D model. Our algorithm is guided by a novel set of geometric conditions to guarantee the foldability and stability of the generated pop-ups. The generality of the conditions allows our algorithm to generate valid pop-up structures that are previously not accounted for by other algorithms. Our method takes a novel image-domain approach to convert the input model to an OA design. It performs surface segmentation of the input model in the image domain, and carefully represents each surface with a set of parallel patches. Patches are then modified to make the entire structure foldable and stable. Visual and quantitative comparisons of results have shown our algorithm to be significantly better than the existing methods in the preservation of contours, surfaces, and volume. The designs have also been shown to more closely resemble those created by real artists.

  14. Surface- and Contour-Preserving Origamic Architecture Paper Pop-Ups.

    Science.gov (United States)

    Le, Sang N; Leow, Su-Jun; Le-Nguyen, Tuong-Vu; Ruiz, Conrado; Low, Kok-Lim

    2013-08-02

    Origamic architecture (OA) is a form of papercraft that involves cutting and folding a single sheet of paper to produce a 3D pop-up, and is commonly used to depict architectural structures. Because of the strict geometric and physical constraints, OA design requires considerable skill and effort. In this paper, we present a method to automatically generate an OA design that closely depicts an input 3D model. Our algorithm is guided by a novel set of geometric conditions to guarantee the foldability and stability of the generated pop-ups. The generality of the conditions allows our algorithm to generate valid pop-up structures that are previously not accounted for by other algorithms. Our method takes a novel image-domain approach to convert the input model to an OA design. It performs surface segmentation of the input model in the image domain, and carefully represents each surface with a set of parallel patches. Patches are then modified to make the entire structure foldable and stable. Visual and quantitative comparisons of results have shown our algorithm to be significantly better than the existing methods in the preservation of contours, surfaces and volume. The designs have also been shown to more closely resemble those created by real artists.

  15. Global optimization framework for solar building design

    Science.gov (United States)

    Silva, N.; Alves, N.; Pascoal-Faria, P.

    2017-07-01

    The generative modeling paradigm is a shift from static models to flexible models. It describes a modeling process using functions, methods and operators; the result is an algorithmic description of the construction process. Each evaluation of such an algorithm creates a model instance, which depends on its input parameters (width, height, volume, roof angle, orientation, location). These values are normally chosen according to aesthetic aspects and style. In this study, the model's parameters are instead generated automatically according to an objective function: a generative model is optimized over its parameters, and in this way the best solution to a constrained problem is determined. Besides the establishment of an overall framework design, this work consists of the identification of different building shapes and their main parameters, the creation of an algorithmic description for these main shapes, and the formulation of the objective function, which reflects a building's energy behaviour (solar energy, heating and insulation). Additionally, the conception of an optimization pipeline, combining an energy calculation tool with a geometric scripting engine, is presented. The methods developed lead to an automated and optimized 3D shape generation for the projected building (based on the desired conditions and according to specific constraints). The proposed approach will help in the construction of real buildings that consume less energy and contribute to a more sustainable world.
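    An optimization pipeline of this kind can be sketched with a toy objective. The `energy_proxy` below is a hypothetical stand-in (surface-to-volume compactness plus a roof-angle penalty), not the paper's energy model; a real pipeline would call an energy-simulation tool at that point.

```python
import random

def energy_proxy(width, depth, height, roof_angle):
    """Hypothetical stand-in for the energy objective: a compact envelope
    loses less heat, and 30 degrees is treated as the preferred roof angle."""
    volume = width * depth * height
    envelope = 2 * height * (width + depth) + width * depth
    return envelope / volume + 0.01 * abs(roof_angle - 30.0)

def random_search(n=2000, seed=1):
    """Sample parameter tuples within simple bounds and keep the best."""
    rng = random.Random(seed)
    best = None
    for _ in range(n):
        params = (rng.uniform(5, 30),   # width  (m)
                  rng.uniform(5, 30),   # depth  (m)
                  rng.uniform(3, 12),   # height (m)
                  rng.uniform(0, 60))   # roof angle (degrees)
        score = energy_proxy(*params)
        if best is None or score < best[0]:
            best = (score, params)
    return best
```

Each evaluated tuple is one "model instance" of the generative description; the search simply prefers instances with a lower objective value.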

  16. The build-up and characterization of nuclear burn-up wave in a fast ...

    Indian Academy of Sciences (India)

    K V Anoop

    2018-02-07

    Feb 7, 2018 ... evaluating the quality of the wave by the researchers working in the field of nuclear burn-up wave build-up and propagation. Keywords. ... However, there are concerns relating to the nuclear safety, ... Simulation studies have.

  17. A New Segment Building Algorithm for the Cathode Strip Chambers in the CMS Experiment

    Directory of Open Access Journals (Sweden)

    Golutvin I.

    2016-01-01

    A new segment building algorithm for the Cathode Strip Chambers in the CMS experiment is presented. A detailed description of the new algorithm is given, along with a comparison with the algorithm used in the CMS software. The new segment builder was tested on different Monte Carlo data samples. The new algorithm is designed to be robust and efficient for hard muons and for the higher luminosity expected in the future at the LHC.

  18. Quantum walks and search algorithms

    CERN Document Server

    Portugal, Renato

    2013-01-01

    This book addresses an interesting area of quantum computation called quantum walks, which play an important role in building quantum algorithms, in particular search algorithms. Quantum walks are the quantum analogue of classical random walks. It is known that quantum computers have great power for searching unsorted databases. This power extends to many kinds of searches, particularly to the problem of finding a specific location in a spatial layout, which can be modeled by a graph. The goal is to find a specific node knowing that the particle uses the edges to jump from one node to the next. This book is self-contained, with main topics that include: Grover's algorithm, describing its geometrical interpretation and evolution by means of the spectral decomposition of the evolution operator; analytical solutions of quantum walks on important graphs such as the line, cycles, two-dimensional lattices, and hypercubes using Fourier transforms; quantum walks on generic graphs, describing methods to calculate the limiting d...
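    Grover's algorithm, with its geometrical interpretation as a rotation toward the marked state, is small enough to simulate directly. The sketch below is a plain state-vector simulation and is not tied to the book's notation.

```python
import numpy as np

def grover(n_qubits, marked, iterations):
    """State-vector simulation of Grover's search: each iteration applies
    the oracle (phase flip on the marked item) and the diffusion operator
    (inversion about the mean), rotating the state toward `marked`."""
    N = 2 ** n_qubits
    state = np.full(N, 1.0 / np.sqrt(N))   # uniform superposition
    for _ in range(iterations):
        state[marked] *= -1.0               # oracle
        state = 2 * state.mean() - state    # diffusion
    return state
```

For N = 16 the optimal number of iterations is about (pi/4)·sqrt(N) ≈ 3, after which measuring yields the marked item with probability above 0.9.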

  19. Efforts to control radiation build-up in Ringhals

    Energy Technology Data Exchange (ETDEWEB)

    Egner, K.; Aronsson, P.O.; Erixon, O. [Vattenfall AB, Vaeroebacka (Sweden)

    1995-03-01

    It is well known that good control of the primary chemistry in a PWR is essential in order to minimize material problems and fuel damage. It has also been well established that the water chemistry has a great influence on the accumulation of corrosion products on the fuel and the radiation build-up on primary system surfaces. Ringhals was one of the pioneers in increasing operating pH in order to reduce radiation build-up and has now been operating for ten years with pH at 7.4 or (in later years) 7.2. Our experience is favourable and includes low radiation levels in the new (1989) steam generators of Ringhals 2. Ringhals 4 has operated almost its whole life at pH 7.2 or higher and it remains one of the cleanest PWRs of its vintage. In addition to strict adherence to a stable operating chemistry, Ringhals is now working on a program with the aim of finding the optimum shut-down and start-up chemistry to reduce activity levels in the primary systems. A particular goal is to use the shut-down and start-up chemistry at the 1994 outage in Ringhals 3 in order to reduce dose rates in preparation for the planned steam generator replacement in 1995. The paper summarizes the experience to date of the established operating chemistry, on-going tests with modified shut-down and start-up chemistry, and other measures to limit or reduce the activity build-up.

  20. Efforts to control radiation build-up in Ringhals

    International Nuclear Information System (INIS)

    Egner, K.; Aronsson, P.O.; Erixon, O.

    1995-01-01

    It is well known that good control of the primary chemistry in a PWR is essential in order to minimize material problems and fuel damage. It has also been well established that the water chemistry has a great influence on the accumulation of corrosion products on the fuel and the radiation build-up on primary system surfaces. Ringhals was one of the pioneers in increasing operating pH in order to reduce radiation build-up and has now been operating for ten years with pH at 7.4 or (in later years) 7.2. Our experience is favourable and includes low radiation levels in the new (1989) steam generators of Ringhals 2. Ringhals 4 has operated almost its whole life at pH 7.2 or higher and it remains one of the cleanest PWRs of its vintage. In addition to strict adherence to a stable operating chemistry, Ringhals is now working on a program with the aim of finding the optimum shut-down and start-up chemistry to reduce activity levels in the primary systems. A particular goal is to use the shut-down and start-up chemistry at the 1994 outage in Ringhals 3 in order to reduce dose rates in preparation for the planned steam generator replacement in 1995. The paper summarizes the experience to date of the established operating chemistry, on-going tests with modified shut-down and start-up chemistry, and other measures to limit or reduce the activity build-up.

  1. Build Up Your Bones! | NIH MedlinePlus the Magazine

    Science.gov (United States)

    Feature: Osteoporosis. Build Up Your Bones! Past Issues / Winter 2011. Covers the best bone-building exercise, including for people who have been diagnosed with osteoporosis.

  2. Genetic algorithm optimization of atomic clusters

    International Nuclear Information System (INIS)

    Morris, J.R.; Deaven, D.M.; Ho, K.M.; Wang, C.Z.; Pan, B.C.; Wacker, J.G.; Turner, D.E.; Iowa State Univ., Ames, IA

    1996-01-01

    The authors have been using genetic algorithms to study the structures of atomic clusters and related problems. This is a problem where local minima are easy to locate, but the barriers between the many minima are large, and the number of minima prohibits a systematic search. They use a novel mating algorithm that preserves some of the geometrical relationships between atoms, in order to ensure that the resultant structures are likely to inherit the best features of the parent clusters. Using this approach, they have been able to find lower-energy structures than had previously been obtained. Most recently, they have turned the building-block idea around, using optimized structures from the GA to learn about systematic structural trends. They believe that an effective GA can help provide such heuristic information, and (conversely) that such information can be introduced back into the algorithm to assist in the search process.
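    The mating operation described here (often called cut-and-splice, after Deaven and Ho) can be sketched as cutting both parent clusters by a common plane through their centres of mass and joining complementary halves, so each half keeps its parent's local geometry. The sketch below is schematic; a real implementation would also repair overlapping atoms and locally relax the child.

```python
import numpy as np

def cut_and_splice(parent_a, parent_b, rng=None):
    """Deaven-Ho style mating sketch for clusters given as (n, 3) arrays:
    cut both parents by a random plane and splice complementary halves."""
    if rng is None:
        rng = np.random.default_rng(0)
    normal = rng.standard_normal(3)
    normal /= np.linalg.norm(normal)
    # Work relative to each parent's centre of mass.
    a = parent_a - parent_a.mean(axis=0)
    b = parent_b - parent_b.mean(axis=0)
    n_keep = len(a) // 2
    # Atoms of A farthest above the plane, atoms of B farthest below it,
    # so the child has the same atom count as the parents.
    top_a = a[np.argsort(a @ normal)[::-1][:n_keep]]
    bot_b = b[np.argsort(b @ normal)[:len(a) - n_keep]]
    return np.vstack([top_a, bot_b])
```

The preserved halves are what carries the "best features of the parent clusters" from one generation to the next.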

  3. Geometric reconstruction methods for electron tomography

    Energy Technology Data Exchange (ETDEWEB)

    Alpers, Andreas, E-mail: alpers@ma.tum.de [Zentrum Mathematik, Technische Universität München, D-85747 Garching bei München (Germany); Gardner, Richard J., E-mail: Richard.Gardner@wwu.edu [Department of Mathematics, Western Washington University, Bellingham, WA 98225-9063 (United States); König, Stefan, E-mail: koenig@ma.tum.de [Zentrum Mathematik, Technische Universität München, D-85747 Garching bei München (Germany); Pennington, Robert S., E-mail: robert.pennington@uni-ulm.de [Center for Electron Nanoscopy, Technical University of Denmark, DK-2800 Kongens Lyngby (Denmark); Boothroyd, Chris B., E-mail: ChrisBoothroyd@cantab.net [Ernst Ruska-Centre for Microscopy and Spectroscopy with Electrons and Peter Grünberg Institute, Forschungszentrum Jülich, D-52425 Jülich (Germany); Houben, Lothar, E-mail: l.houben@fz-juelich.de [Ernst Ruska-Centre for Microscopy and Spectroscopy with Electrons and Peter Grünberg Institute, Forschungszentrum Jülich, D-52425 Jülich (Germany); Dunin-Borkowski, Rafal E., E-mail: rdb@fz-juelich.de [Ernst Ruska-Centre for Microscopy and Spectroscopy with Electrons and Peter Grünberg Institute, Forschungszentrum Jülich, D-52425 Jülich (Germany); Joost Batenburg, Kees, E-mail: Joost.Batenburg@cwi.nl [Centrum Wiskunde and Informatica, NL-1098XG, Amsterdam, The Netherlands and Vision Lab, Department of Physics, University of Antwerp, B-2610 Wilrijk (Belgium)

    2013-05-15

    Electron tomography is becoming an increasingly important tool in materials science for studying the three-dimensional morphologies and chemical compositions of nanostructures. The image quality obtained by many current algorithms is seriously affected by the problems of missing wedge artefacts and non-linear projection intensities due to diffraction effects. The former refers to the fact that data cannot be acquired over the full 180° tilt range; the latter implies that for some orientations, crystalline structures can show strong contrast changes. To overcome these problems we introduce and discuss several algorithms from the mathematical fields of geometric and discrete tomography. The algorithms incorporate geometric prior knowledge (mainly convexity and homogeneity), which also in principle considerably reduces the number of tilt angles required. Results are discussed for the reconstruction of an InAs nanowire. - Highlights: ► Four algorithms for electron tomography are introduced that utilize prior knowledge. ► Objects are assumed to be homogeneous; convexity and regularity is also discussed. ► We are able to reconstruct slices of a nanowire from as few as four projections. ► Algorithms should be selected based on the specific reconstruction task at hand.
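    As a point of contrast with the geometric algorithms discussed in this record, a conventional algebraic reconstruction step (Kaczmarz/ART) fits a vector of pixel values to measured ray sums. The toy system below uses the row and column sums of a 2x2 image and is illustrative only.

```python
import numpy as np

def art(A, b, n_sweeps=50, relax=0.5):
    """Kaczmarz-style algebraic reconstruction: each row of A encodes one
    ray sum, b holds the measured projections; x is updated by projecting
    onto the hyperplane of one measurement at a time."""
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            ai = A[i]
            nrm = ai @ ai
            if nrm > 0:
                x += relax * (b[i] - ai @ x) / nrm * ai
    return x
```

For a consistent system the iterates converge to a solution; geometric prior knowledge (convexity, homogeneity), as discussed in the record, is what allows reconstruction from far fewer projections than such purely algebraic schemes need.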

  4. Geometric reconstruction methods for electron tomography

    International Nuclear Information System (INIS)

    Alpers, Andreas; Gardner, Richard J.; König, Stefan; Pennington, Robert S.; Boothroyd, Chris B.; Houben, Lothar; Dunin-Borkowski, Rafal E.; Joost Batenburg, Kees

    2013-01-01

    Electron tomography is becoming an increasingly important tool in materials science for studying the three-dimensional morphologies and chemical compositions of nanostructures. The image quality obtained by many current algorithms is seriously affected by the problems of missing wedge artefacts and non-linear projection intensities due to diffraction effects. The former refers to the fact that data cannot be acquired over the full 180° tilt range; the latter implies that for some orientations, crystalline structures can show strong contrast changes. To overcome these problems we introduce and discuss several algorithms from the mathematical fields of geometric and discrete tomography. The algorithms incorporate geometric prior knowledge (mainly convexity and homogeneity), which also in principle considerably reduces the number of tilt angles required. Results are discussed for the reconstruction of an InAs nanowire. - Highlights: ► Four algorithms for electron tomography are introduced that utilize prior knowledge. ► Objects are assumed to be homogeneous; convexity and regularity is also discussed. ► We are able to reconstruct slices of a nanowire from as few as four projections. ► Algorithms should be selected based on the specific reconstruction task at hand

  5. Tool coupling for the design and operation of building energy and control systems based on the Functional Mock-up Interface standard

    Energy Technology Data Exchange (ETDEWEB)

    Nouidui, Thierry Stephane; Wetter, Michael

    2014-03-01

    This paper describes software tools developed at the Lawrence Berkeley National Laboratory (LBNL) that can be coupled through the Functional Mock-up Interface standard in support of the design and operation of building energy and control systems. These tools have been developed to address the gaps and limitations encountered in legacy simulation tools, which were originally designed for the analysis of individual building domains and have been difficult to integrate with other tools for runtime data exchange. The coupling has been realized by use of the Functional Mock-up Interface for co-simulation, which standardizes an application programming interface for simulator interoperability and has been adopted in a variety of industrial domains. As a variety of coupling scenarios are possible, this paper provides users with guidance on which coupling may be best suited for their application. Furthermore, the paper illustrates how tools can be integrated into a building management system to support the operation of buildings. These tools may be a design model that is used for real-time performance monitoring, a fault detection and diagnostics algorithm, or a control sequence, each of which may be exported as a Functional Mock-up Unit and made available in a building management system as an input/output block. We anticipate that this capability can contribute to bridging the observed performance gap between design and operational energy use of buildings.

  6. RECONSTRUCTION OF 3D VECTOR MODELS OF BUILDINGS BY COMBINATION OF ALS, TLS AND VLS DATA

    Directory of Open Access Journals (Sweden)

    H. Boulaassal

    2012-09-01

    Full Text Available Airborne Laser Scanning (ALS), Terrestrial Laser Scanning (TLS) and Vehicle based Laser Scanning (VLS) are widely used as data acquisition methods for 3D building modelling. ALS data is often used to generate, among others, roof models. TLS data has proven its effectiveness in the geometric reconstruction of building façades. Although the operating algorithms used in the processing chains of these two kinds of data are quite similar, their combination deserves further investigation. This study explores the possibility of combining ALS and TLS data for simultaneously producing 3D building models from both a bird's-eye and a pedestrian point of view. The geometric accuracy of roof and façade models differs due to the acquisition techniques. In order to take these differences into account, the surfaces composing roofs and façades are extracted with the same segmentation algorithm. Nevertheless, the segmentation algorithm must be adapted to the properties of the different point clouds. It is based on the RANSAC algorithm, but has been applied in a sequential way in order to extract all potential planar clusters from the airborne and terrestrial datasets. Surfaces are fitted to planar clusters, allowing edge detection and reconstruction of vector polygons. Models resulting from TLS data are obviously more accurate than those generated from ALS data. Therefore, the geometry of the roofs is corrected and adapted according to the geometry of the corresponding façades. Finally, the effects of the differences between raw ALS and TLS data on the results of the modelling process are analyzed. It is shown that such a combination can be used to produce reliable 3D building models.
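The sequential RANSAC strategy mentioned above, extracting one best-supported plane at a time and removing its inliers before searching again, can be sketched as follows (thresholds, iteration counts and minimum support are illustrative assumptions, not the paper's tuned values):

```python
# Hedged sketch of sequential RANSAC plane extraction, as used to peel
# planar clusters (roof faces, façades) off a point cloud one at a time.
import numpy as np

def fit_plane(p3):
    """Plane (n, d) with n·x + d = 0 through three points, or None if degenerate."""
    n = np.cross(p3[1] - p3[0], p3[2] - p3[0])
    norm = np.linalg.norm(n)
    if norm < 1e-12:
        return None
    n = n / norm
    return n, -float(np.dot(n, p3[0]))

def ransac_plane(pts, n_iter=200, tol=0.05, rng=None):
    """Single RANSAC pass: best-supported plane and its inlier mask."""
    rng = rng or np.random.default_rng(0)
    best = None
    for _ in range(n_iter):
        sample = pts[rng.choice(len(pts), 3, replace=False)]
        model = fit_plane(sample)
        if model is None:
            continue
        n, d = model
        inliers = np.abs(pts @ n + d) < tol
        if best is None or inliers.sum() > best[0].sum():
            best = (inliers, n, d)
    return best

def sequential_ransac(pts, min_inliers=50):
    """Peel off planes one by one until none has enough support."""
    planes, remaining = [], pts
    while len(remaining) >= min_inliers:
        inliers, n, d = ransac_plane(remaining)
        if inliers.sum() < min_inliers:
            break
        planes.append((n, d, remaining[inliers]))
        remaining = remaining[~inliers]
    return planes
```

Adapting the algorithm to ALS versus TLS clouds would mainly mean adjusting `tol` and `min_inliers` to each sensor's noise level and point density.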

  7. Extraction and Simplification of Building Façade Pieces from Mobile Laser Scanner Point Clouds for 3D Street View Services

    Directory of Open Access Journals (Sweden)

    Yan Li

    2016-12-01

    Full Text Available Extraction and analysis of building façades are key processes in three-dimensional (3D) building reconstruction and realistic geometrical modeling of the urban environment, which has many applications, such as smart city management, autonomous navigation through the urban environment, fly-through rendering, 3D street view, virtual tourism, urban mission planning, etc. This paper proposes a building façade piece extraction and simplification algorithm based on morphological filtering with point clouds obtained by a mobile laser scanner (MLS). First, this study presents a point cloud projection algorithm with high-accuracy orientation parameters from the position and orientation system (POS) of the MLS that can convert large volumes of point cloud data to a raster image. Second, this study proposes a feature extraction approach based on morphological filtering with point cloud projection that can obtain building façade features in image space. Third, this study designs an inverse transformation of the point cloud projection to convert building façade features from image space to 3D space. A building façade feature with restricted façade plane detection algorithm is implemented to reconstruct façade pieces for street view services. The results of building façade extraction experiments with large volumes of point cloud data from the MLS show that the proposed approach is suitable for various types of building façade extraction. The geometric accuracy of the building façades is 0.66 m in the x direction, 0.64 m in the y direction and 0.55 m in the vertical direction, which is the same level as the spatial resolution (0.5 m) of the point cloud.
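The first step above, converting a point cloud to a raster image so that image-space operators such as morphological filters can be applied, and keeping an inverse mapping back to 3D, can be sketched as follows (a simple top-down height raster with an illustrative resolution; the paper's projection additionally uses the MLS trajectory/POS parameters):

```python
# Hedged sketch of the rasterization step: project a point cloud onto a
# 2D grid, storing the maximum height per cell and, per cell, the indices
# of the contributing points (the inverse transformation back to 3D).
import numpy as np

def rasterize(points, res=0.5):
    """Return a max-height raster, a cell -> point-index map, and the origin."""
    xy = points[:, :2]
    origin = xy.min(axis=0)
    ij = np.floor((xy - origin) / res).astype(int)
    shape = ij.max(axis=0) + 1
    img = np.full(shape, np.nan)
    cells = {}
    for k, (i, j) in enumerate(map(tuple, ij)):
        cells.setdefault((i, j), []).append(k)  # inverse mapping back to 3D
        z = points[k, 2]
        if np.isnan(img[i, j]) or z > img[i, j]:
            img[i, j] = z
    return img, cells, origin
```

Features detected in `img` (e.g. after morphological filtering) can then be carried back to the 3D cloud through `cells`.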

  8. Distributed DC-UPS for energy smart buildings

    Energy Technology Data Exchange (ETDEWEB)

    Moreno-Munoz, A.; Pallares-Lopez, V.; Real-Calvo, R.J.; Gil-de-Castro, A. [Universidad de Cordoba, Area de Electronica, Dpto. Arquitectura de Computadores, Electronica y Tecnologia Electronica, Escuela Politecnica Superior, Campus de Rabanales, E-14071 Cordoba (Spain); De la Rosa, Juan Jose Gonzalez [Universidad de Cadiz, Area de Electronica, Dpto. ISA, TE y Electronica, Escuela Politecnica Superior Avda, Ramon Puyol, S/N, E-11202 Algeciras-Cadiz (Spain)

    2011-01-15

    Energy efficiency (EE) improvement is one of the most important targets to be achieved, in every society as a whole and in buildings in particular. The Energy Smart Building concept aims to accelerate the uptake of energy-efficient, healthy buildings that, by integrating smart technology and solutions, consume radically fewer resources while enhancing the quality of life. This paper addresses how uninterruptible power supplies (UPS), particularly when configured in distributed DC mode, can become an energy-efficient solution in high-tech buildings, especially when integrated with complementary Power Quality (PQ) measures. The paper is based upon PQ audits conducted at different IT-intensive modern buildings. Some of the major objectives of the PQ studies were: detecting the main disturbances involved by PQ monitoring, identifying the root causes of power disturbances, characterizing the electromagnetic compatibility level of the equipment and installations, and providing guidelines for implementing energy-efficiency solutions. It was found that the main problems for the installed equipment were harmonics and voltage sags (dips). Finally, this paper demonstrates the impact of widespread electronic devices on the PQ of buildings and the implications for energy use. (author)

  9. Evaluation of Building Energy Saving Through the Development of Venetian Blinds' Optimal Control Algorithm According to the Orientation and Window-to-Wall Ratio

    Science.gov (United States)

    Kwon, Hyuk Ju; Yeon, Sang Hun; Lee, Keum Ho; Lee, Kwang Ho

    2018-02-01

    As various studies focusing on building energy saving have been continuously conducted, studies utilizing renewable energy sources instead of fossil fuels are needed. In particular, studies regarding solar energy are being carried out in the field of building science; in order to utilize solar energy effectively, the solar radiation entering the indoor space should be admitted or blocked appropriately. Blinds are a typical solar radiation control device capable of controlling indoor thermal and light environments. However, slat-type blinds are usually controlled manually, which has a negative effect on building energy saving. In this regard, studies regarding the automatic control of slat-type blinds have been carried out for the last couple of decades. Therefore, this study aims to provide preliminary data for optimal control research by controlling the slat angle of slat-type blinds while comprehensively considering various input variables. The window-to-wall ratio and orientation were selected as input variables. It was found that the optimal control algorithm differed for each window-to-wall ratio and window orientation. In addition, by comparing and analyzing the building energy saving performance for each condition through simulations of the developed algorithms, energy savings of up to 20.7% were shown in the cooling period and up to 12.3% in the heating period. Moreover, the building energy saving effect was greater as the window-to-wall ratio increased for a given orientation, and the effects of window-to-wall ratio in the cooling period were higher than those in the heating period.
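A common geometric ingredient of automated slat control (used here as a hedged illustration, not the paper's algorithm) is the solar profile angle and the "cut-off" slat tilt that just blocks the direct beam. With slat width w, vertical spacing s, and profile angle φ, the beam is blocked when w·sin(φ+β) ≥ s·cos(φ); all formulas and parameter values below are standard textbook relations, assumed rather than taken from the paper:

```python
# Hedged sketch: solar profile angle and cut-off slat angle for horizontal
# venetian-blind slats. Illustrative geometric relations only.
import numpy as np

def profile_angle(altitude_deg, azimuth_sun_deg, azimuth_window_deg):
    """Solar profile angle (deg): the sun's altitude projected onto the
    vertical plane normal to the window; tan(phi) = tan(alt)/cos(rel_az)."""
    alt = np.radians(altitude_deg)
    rel = np.radians(azimuth_sun_deg - azimuth_window_deg)
    return np.degrees(np.arctan2(np.tan(alt), np.cos(rel)))

def cutoff_slat_angle(profile_deg, width, spacing):
    """Smallest slat tilt beta (deg) that just blocks the direct beam:
    blocked when width*sin(phi + beta) >= spacing*cos(phi)."""
    phi = np.radians(profile_deg)
    arg = np.clip(spacing * np.cos(phi) / width, -1.0, 1.0)
    return np.degrees(np.arcsin(arg) - phi)
```

An optimal controller would pick the slat angle near this cut-off to block direct sun while admitting as much daylight as possible.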

  10. Electron-Cloud Build-Up: Summary

    International Nuclear Information System (INIS)

    Furman, M.A.

    2007-01-01

    I present a summary of topics relevant to the electron-cloud build-up and dissipation that were presented at the International Workshop on Electron-Cloud Effects 'ECLOUD 07' (Daegu, S. Korea, April 9-12, 2007). This summary is not meant to be a comprehensive review of the talks. Rather, I focus on those developments that I found, in my personal opinion, especially interesting. The contributions, all excellent, are posted in http://chep.knu.ac.kr/ecloud07/

  11. The ASTRA Toolbox: A platform for advanced algorithm development in electron tomography

    Energy Technology Data Exchange (ETDEWEB)

    Aarle, Wim van, E-mail: wim.vanaarle@uantwerpen.be [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Palenstijn, Willem Jan, E-mail: willemjan.palenstijn@uantwerpen.be [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Centrum Wiskunde & Informatica, Science Park 123, NL-1098 XG Amsterdam (Netherlands); De Beenhouwer, Jan, E-mail: jan.debeenhouwer@uantwerpen.be [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Altantzis, Thomas, E-mail: thomas.altantzis@uantwerpen.be [Electron Microscopy for Materials Science, University of Antwerp, Groenenborgerlaan 171, B-2020 Wilrijk (Belgium); Bals, Sara, E-mail: sara.bals@uantwerpen.be [Electron Microscopy for Materials Science, University of Antwerp, Groenenborgerlaan 171, B-2020 Wilrijk (Belgium); Batenburg, K. Joost, E-mail: joost.batenburg@cwi.nl [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Centrum Wiskunde & Informatica, Science Park 123, NL-1098 XG Amsterdam (Netherlands); Mathematical Institute, Leiden University, P.O. Box 9512, NL-2300 RA Leiden (Netherlands); Sijbers, Jan, E-mail: jan.sijbers@uantwerpen.be [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium)

    2015-10-15

    We present the ASTRA Toolbox as an open platform for 3D image reconstruction in tomography. Most of the software tools that are currently used in electron tomography offer limited flexibility with respect to the geometrical parameters of the acquisition model and the algorithms used for reconstruction. The ASTRA Toolbox provides an extensive set of fast and flexible building blocks that can be used to develop advanced reconstruction algorithms, effectively removing these limitations. We demonstrate this flexibility, the resulting reconstruction quality, and the computational efficiency of this toolbox by a series of experiments, based on experimental dual-axis tilt series. - Highlights: • The ASTRA Toolbox is an open platform for 3D image reconstruction in tomography. • Advanced reconstruction algorithms can be prototyped using the fast and flexible building blocks. • This flexibility is demonstrated on a common use case: dual-axis tilt series reconstruction with prior knowledge. • The computational efficiency is validated on an experimentally measured tilt series.
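The "building blocks" philosophy above composes projectors and update rules into reconstruction algorithms. As a hedged, toolbox-independent illustration of one such algebraic block, here is a SIRT-type iteration written against a dense toy system matrix (ASTRA itself supplies fast GPU projectors instead; the matrix and scalings below are generic, not ASTRA's API):

```python
# Hedged sketch of a SIRT-type algebraic reconstruction iteration:
# x_{k+1} = x_k + C A^T R (b - A x_k), with R and C the inverse row/column
# sums of the (nonnegative) system matrix A.
import numpy as np

def sirt(A, b, n_iter=200):
    R = 1.0 / np.maximum(A.sum(axis=1), 1e-12)  # inverse row sums
    C = 1.0 / np.maximum(A.sum(axis=0), 1e-12)  # inverse column sums
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + C * (A.T @ (R * (b - A @ x)))
    return x
```

In a toolbox setting, the matrix-vector products `A @ x` and `A.T @ r` are exactly the forward- and back-projection building blocks that get swapped for fast implementations.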

  12. The ASTRA Toolbox: A platform for advanced algorithm development in electron tomography

    International Nuclear Information System (INIS)

    Aarle, Wim van; Palenstijn, Willem Jan; De Beenhouwer, Jan; Altantzis, Thomas; Bals, Sara; Batenburg, K. Joost; Sijbers, Jan

    2015-01-01

    We present the ASTRA Toolbox as an open platform for 3D image reconstruction in tomography. Most of the software tools that are currently used in electron tomography offer limited flexibility with respect to the geometrical parameters of the acquisition model and the algorithms used for reconstruction. The ASTRA Toolbox provides an extensive set of fast and flexible building blocks that can be used to develop advanced reconstruction algorithms, effectively removing these limitations. We demonstrate this flexibility, the resulting reconstruction quality, and the computational efficiency of this toolbox by a series of experiments, based on experimental dual-axis tilt series. - Highlights: • The ASTRA Toolbox is an open platform for 3D image reconstruction in tomography. • Advanced reconstruction algorithms can be prototyped using the fast and flexible building blocks. • This flexibility is demonstrated on a common use case: dual-axis tilt series reconstruction with prior knowledge. • The computational efficiency is validated on an experimentally measured tilt series

  13. Automatic Generation of Structural Building Descriptions from 3D Point Cloud Scans

    DEFF Research Database (Denmark)

    Ochmann, Sebastian; Vock, Richard; Wessel, Raoul

    2013-01-01

    We present a new method for automatic semantic structuring of 3D point clouds representing buildings. In contrast to existing approaches which either target the outside appearance like the facade structure or rather low-level geometric structures, we focus on the building’s interior using indoor...... scans to derive high-level architectural entities like rooms and doors. Starting with a registered 3D point cloud, we probabilistically model the affiliation of each measured point to a certain room in the building. We solve the resulting clustering problem using an iterative algorithm that relies...
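The record above describes iteratively assigning each measured point to a room hypothesis and re-estimating the rooms. As a hedged stand-in for that probabilistic clustering (the paper's model is more elaborate), a plain k-means loop over 2D point positions shows the assign/re-estimate structure:

```python
# Hedged sketch of the iterative assign-and-re-estimate clustering idea;
# plain k-means stands in for the paper's probabilistic room model.
import numpy as np

def kmeans(pts, k, n_iter=50, rng=None):
    rng = rng or np.random.default_rng(0)
    centers = pts[rng.choice(len(pts), k, replace=False)]
    for _ in range(n_iter):
        # assignment step: each point joins its nearest center ("room")
        d = np.linalg.norm(pts[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        # update step: re-estimate each center from its members
        for j in range(k):
            if (labels == j).any():
                centers[j] = pts[labels == j].mean(axis=0)
    return labels, centers
```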

  14. Technologies and Algorithms for Building the Augmented Reality

    Directory of Open Access Journals (Sweden)

    I. A. Blagoveshchenskiy

    2013-01-01

    Full Text Available This article is about Augmented Reality technology and its current implementations. In the first part of the article, the authors give a short historical reference to the origins of the name "augmented reality", by whom it was introduced and what it means. Later in the article, two major approaches to building AR are described: the first is based on the use of a marker, and the second is marker-free. The first approach is examined in detail. In order to analyze the video stream and recognize known objects in it, Computer Vision algorithms are used. The authors give a short description and the main characteristics of only two of them: genetic algorithms and feature detection and description. For a programmatic implementation of those algorithms one can use special libraries like OpenCV and AForge.NET, also mentioned in the article. Both provide extensive functionality for image processing and object recognition. The article ends with an example of creating AR using the OpenCV library. Main attention is paid to the problem of projecting a 3D model onto the marker's plane. This example can be used as the foundation for a custom AR framework.
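The projection problem highlighted above, mapping content onto the detected marker's plane, reduces to estimating a homography from point correspondences. A hedged sketch of the standard Direct Linear Transform (plain NumPy, not the article's OpenCV code; in practice `cv2.findHomography` does this robustly):

```python
# Hedged sketch of the geometric core of marker-based AR: estimate the
# homography mapping marker-plane coordinates (x, y) to image pixels (u, v)
# from >= 4 correspondences, via the Direct Linear Transform (DLT).
import numpy as np

def homography_dlt(src, dst):
    """Solve A h = 0 for the 9 homography entries using the SVD null space."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.asarray(rows, float)
    _, _, vt = np.linalg.svd(A)
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_h(H, pt):
    """Apply a homography to a 2D point (homogeneous divide)."""
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]
```

With the marker's four corners as `src` and their detected pixel positions as `dst`, `apply_h` places any point of the virtual overlay onto the marker plane.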

  15. The Data Transfer Kit: A geometric rendezvous-based tool for multiphysics data transfer

    International Nuclear Information System (INIS)

    Slattery, S. R.; Wilson, P. P. H.; Pawlowski, R. P.

    2013-01-01

    The Data Transfer Kit (DTK) is a software library designed to provide parallel data transfer services for arbitrary physics components based on the concept of geometric rendezvous. The rendezvous algorithm provides a means to geometrically correlate two geometric domains that may be arbitrarily decomposed in a parallel simulation. By repartitioning both domains such that they have the same geometric domain on each parallel process, efficient and load balanced search operations and data transfer can be performed at a desirable algorithmic time complexity with low communication overhead relative to other types of mapping algorithms. With the increased development efforts in multiphysics simulation and other multiple mesh and geometry problems, generating parallel topology maps for transferring fields and other data between geometric domains is a common operation. The algorithms used to generate parallel topology maps based on the concept of geometric rendezvous as implemented in DTK are described with an example using a conjugate heat transfer calculation and thermal coupling with a neutronics code. In addition, we provide the results of initial scaling studies performed on the Jaguar Cray XK6 system at Oak Ridge National Laboratory for a worst-case-scenario problem in terms of algorithmic complexity that shows good scaling on O(1 × 10^4) cores for topology map generation and excellent scaling on O(1 × 10^5) cores for the data transfer operation with meshes of O(1 × 10^9) elements. (authors)
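The rendezvous idea above, routing both arbitrarily decomposed domains to a common geometry-based partition so that spatially overlapping pieces meet on the same rank, can be sketched in serial form (the spatial partition here is a simple slicing of the shared bounding box along x, an illustrative assumption; DTK uses proper recursive partitioning and MPI communication):

```python
# Hedged sketch of geometric rendezvous: both domains map their points to a
# shared, geometry-based partition, so correlated data lands on the same
# "rendezvous rank" and can be matched with purely local searches.
import numpy as np

def rendezvous_rank(points, bounds, n_ranks):
    """Map points to ranks by slicing the shared bounding box along x."""
    lo, hi = bounds
    t = (points[:, 0] - lo) / (hi - lo)  # normalized x in [0, 1]
    return np.minimum((t * n_ranks).astype(int), n_ranks - 1)

def exchange(src_points, tgt_points, bounds, n_ranks):
    """Group source and target points by rendezvous rank."""
    buckets = {r: ([], []) for r in range(n_ranks)}
    for p, r in zip(src_points, rendezvous_rank(src_points, bounds, n_ranks)):
        buckets[r][0].append(p)
    for p, r in zip(tgt_points, rendezvous_rank(tgt_points, bounds, n_ranks)):
        buckets[r][1].append(p)
    return buckets
```

Because both domains use the same spatial rule, the search for which source element contains which target point never needs to leave a bucket, which is what keeps the communication overhead low.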

  16. The Data Transfer Kit: A geometric rendezvous-based tool for multiphysics data transfer

    Energy Technology Data Exchange (ETDEWEB)

    Slattery, S. R.; Wilson, P. P. H. [Department of Engineering Physics, University of Wisconsin - Madison, 1500 Engineering Dr., Madison, WI 53706 (United States); Pawlowski, R. P. [Sandia National Laboratories, P.O. Box 5800, Albuquerque, NM 87185 (United States)

    2013-07-01

    The Data Transfer Kit (DTK) is a software library designed to provide parallel data transfer services for arbitrary physics components based on the concept of geometric rendezvous. The rendezvous algorithm provides a means to geometrically correlate two geometric domains that may be arbitrarily decomposed in a parallel simulation. By repartitioning both domains such that they have the same geometric domain on each parallel process, efficient and load balanced search operations and data transfer can be performed at a desirable algorithmic time complexity with low communication overhead relative to other types of mapping algorithms. With the increased development efforts in multiphysics simulation and other multiple mesh and geometry problems, generating parallel topology maps for transferring fields and other data between geometric domains is a common operation. The algorithms used to generate parallel topology maps based on the concept of geometric rendezvous as implemented in DTK are described with an example using a conjugate heat transfer calculation and thermal coupling with a neutronics code. In addition, we provide the results of initial scaling studies performed on the Jaguar Cray XK6 system at Oak Ridge National Laboratory for a worst-case-scenario problem in terms of algorithmic complexity that shows good scaling on O(1 × 10^4) cores for topology map generation and excellent scaling on O(1 × 10^5) cores for the data transfer operation with meshes of O(1 × 10^9) elements. (authors)

  17. Geometrical methods in learning theory

    International Nuclear Information System (INIS)

    Burdet, G.; Combe, Ph.; Nencka, H.

    2001-01-01

    The methods of information theory provide natural approaches to learning algorithms in the case of stochastic formal neural networks. Most of the classical techniques are based on some extremization principle. A geometrical interpretation of the associated algorithms provides a powerful tool for understanding the learning process and its stability and offers a framework for discussing possible new learning rules. An illustration is given using sequential and parallel learning in the Boltzmann machine

  18. Localized Segment Based Processing for Automatic Building Extraction from LiDAR Data

    Science.gov (United States)

    Parida, G.; Rajan, K. S.

    2017-05-01

    The current methods for object segmentation, extraction and classification of aerial LiDAR data are manual and tedious. This work proposes a technique for object segmentation from LiDAR data. A bottom-up geometric rule-based approach was used initially to devise a way to segment buildings out of the LiDAR datasets. For curved wall surfaces, comparison of localized surface normals was used to segment buildings. The algorithm has been applied to both synthetic datasets as well as the real-world dataset of Vaihingen, Germany. Preliminary results show successful segmentation of the building objects from a given scene in the case of the synthetic datasets and promising results in the case of real-world data. The advantage of the proposed work is its non-dependence on any form of data other than LiDAR. It is an unsupervised method of building segmentation, and thus requires no model training as seen in supervised techniques. It focuses on extracting the walls of the buildings to construct the footprint, rather than focusing on the roof. This focus on extracting the walls to reconstruct the buildings from a LiDAR scene is the crux of the proposed method. The current segmentation approach can be used to obtain 2D footprints of the buildings, with further scope to generate 3D models. Thus, the proposed method can be used as a tool to obtain footprints of buildings in urban landscapes, helping in urban planning and the smart cities endeavour.
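The localized surface-normal comparison mentioned above for curved walls can be sketched as follows: estimate each point's normal by PCA over its k nearest neighbours, then keep points whose normals are close to horizontal (candidate wall points), which works even on curved façades where a single global plane would fail. Parameters (k, the |n_z| threshold) are illustrative assumptions:

```python
# Hedged sketch of wall-point detection via localized surface normals:
# PCA normals over k nearest neighbours, then a near-horizontal-normal test.
import numpy as np

def local_normals(pts, k=8):
    normals = np.empty_like(pts)
    for i, p in enumerate(pts):
        d = np.linalg.norm(pts - p, axis=1)
        nb = pts[np.argsort(d)[:k]]              # k nearest (incl. the point)
        cov = np.cov((nb - nb.mean(axis=0)).T)
        w, v = np.linalg.eigh(cov)
        normals[i] = v[:, 0]                     # smallest-eigenvalue direction
    return normals

def wall_points(pts, max_nz=0.2, k=8):
    """Mask of points whose local normal is nearly horizontal (|n_z| small)."""
    n = local_normals(pts, k)
    return np.abs(n[:, 2]) < max_nz
```

A production version would use a spatial index (k-d tree) instead of the brute-force neighbour search shown here.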

  19. LOCALIZED SEGMENT BASED PROCESSING FOR AUTOMATIC BUILDING EXTRACTION FROM LiDAR DATA

    Directory of Open Access Journals (Sweden)

    G. Parida

    2017-05-01

    Full Text Available The current methods for object segmentation, extraction and classification of aerial LiDAR data are manual and tedious. This work proposes a technique for object segmentation from LiDAR data. A bottom-up geometric rule-based approach was used initially to devise a way to segment buildings out of the LiDAR datasets. For curved wall surfaces, comparison of localized surface normals was used to segment buildings. The algorithm has been applied to both synthetic datasets as well as the real-world dataset of Vaihingen, Germany. Preliminary results show successful segmentation of the building objects from a given scene in the case of the synthetic datasets and promising results in the case of real-world data. The advantage of the proposed work is its non-dependence on any form of data other than LiDAR. It is an unsupervised method of building segmentation, and thus requires no model training as seen in supervised techniques. It focuses on extracting the walls of the buildings to construct the footprint, rather than focusing on the roof. This focus on extracting the walls to reconstruct the buildings from a LiDAR scene is the crux of the proposed method. The current segmentation approach can be used to obtain 2D footprints of the buildings, with further scope to generate 3D models. Thus, the proposed method can be used as a tool to obtain footprints of buildings in urban landscapes, helping in urban planning and the smart cities endeavour.

  20. SU-F-J-74: High Z Geometric Integrity and Beam Hardening Artifact Assessment Using a Retrospective Metal Artifact Reduction (MAR) Reconstruction Algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Woods, K; DiCostanzo, D; Gupta, N [Ohio State University Columbus, OH (United States)

    2016-06-15

    Purpose: To test the efficacy of a retrospective metal artifact reduction (MAR) reconstruction algorithm for a commercial computed tomography (CT) scanner for radiation therapy purposes. Methods: High Z geometric integrity and artifact reduction analysis was performed with three phantoms using General Electric’s (GE) Discovery CT. The three phantoms included: a Computerized Imaging Reference Systems (CIRS) electron density phantom (Model 062) with a 6.5 mm diameter titanium rod insert, a custom spine phantom using Synthes Spine hardware submerged in water, and a dental phantom with various high Z fillings submerged in water. Each phantom was reconstructed using MAR and compared against the original scan. Furthermore, each scenario was tested using standard and extended Hounsfield Unit (HU) ranges. High Z geometric integrity was performed using the CIRS phantom, while the artifact reduction was performed using all three phantoms. Results: Geometric integrity of the 6.5 mm diameter rod was slightly overestimated for non-MAR scans for both standard and extended HU. With MAR reconstruction, the rod was underestimated for both standard and extended HU. For artifact reduction, the mean and standard deviation was compared in a volume of interest (VOI) in the surrounding material (water and water equivalent material, ∼0HU). Overall, the mean value of the VOI was closer to 0 HU for the MAR reconstruction compared to the non-MAR scan for most phantoms. Additionally, the standard deviations for all phantoms were greatly reduced using MAR reconstruction. Conclusion: GE’s MAR reconstruction algorithm improves image quality with the presence of high Z material with minimal degradation of its geometric integrity. High Z delineation can be carried out with proper contouring techniques. The effects of beam hardening artifacts are greatly reduced with MAR reconstruction. Tissue corrections due to these artifacts can be eliminated for simple high Z geometries and greatly
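The evaluation metric used above, mean and standard deviation of HU values inside a volume of interest (VOI) in the water background, where an artifact-free reconstruction should sit near 0 HU with low spread, is straightforward to compute. A hedged sketch with synthetic arrays standing in for CT volumes:

```python
# Hedged sketch of the VOI metric: mean and standard deviation of voxel
# values inside a boolean mask. Arrays are synthetic stand-ins for CT data.
import numpy as np

def voi_stats(volume, mask):
    """Mean and std of the voxels selected by the boolean VOI mask."""
    vals = volume[mask]
    return float(vals.mean()), float(vals.std())
```

Comparing `voi_stats` for a MAR and a non-MAR reconstruction over the same water VOI quantifies the artifact reduction reported in the record.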

  1. SU-F-J-74: High Z Geometric Integrity and Beam Hardening Artifact Assessment Using a Retrospective Metal Artifact Reduction (MAR) Reconstruction Algorithm

    International Nuclear Information System (INIS)

    Woods, K; DiCostanzo, D; Gupta, N

    2016-01-01

    Purpose: To test the efficacy of a retrospective metal artifact reduction (MAR) reconstruction algorithm for a commercial computed tomography (CT) scanner for radiation therapy purposes. Methods: High Z geometric integrity and artifact reduction analysis was performed with three phantoms using General Electric’s (GE) Discovery CT. The three phantoms included: a Computerized Imaging Reference Systems (CIRS) electron density phantom (Model 062) with a 6.5 mm diameter titanium rod insert, a custom spine phantom using Synthes Spine hardware submerged in water, and a dental phantom with various high Z fillings submerged in water. Each phantom was reconstructed using MAR and compared against the original scan. Furthermore, each scenario was tested using standard and extended Hounsfield Unit (HU) ranges. High Z geometric integrity was performed using the CIRS phantom, while the artifact reduction was performed using all three phantoms. Results: Geometric integrity of the 6.5 mm diameter rod was slightly overestimated for non-MAR scans for both standard and extended HU. With MAR reconstruction, the rod was underestimated for both standard and extended HU. For artifact reduction, the mean and standard deviation was compared in a volume of interest (VOI) in the surrounding material (water and water equivalent material, ∼0HU). Overall, the mean value of the VOI was closer to 0 HU for the MAR reconstruction compared to the non-MAR scan for most phantoms. Additionally, the standard deviations for all phantoms were greatly reduced using MAR reconstruction. Conclusion: GE’s MAR reconstruction algorithm improves image quality with the presence of high Z material with minimal degradation of its geometric integrity. High Z delineation can be carried out with proper contouring techniques. The effects of beam hardening artifacts are greatly reduced with MAR reconstruction. Tissue corrections due to these artifacts can be eliminated for simple high Z geometries and greatly

  2. Geometrical determinations of IMRT photon pencil-beam path in radiotherapy wedges and limit divergence angle with the Anisotropic Analytic Algorithm (AAA)

    Directory of Open Access Journals (Sweden)

    Francisco Casesnoves

    2014-08-01

    Full Text Available Purpose: Static wedge filters (WF) are commonly used in radiation therapy, in forward and/or inverse planning. We calculated the exact 2D/3D geometrical pathway of the photon beam through the usual alloy WF, in order to obtain a better dose related to the beam intensity attenuation factor(s) after the beam has passed through the WF. The objective was to provide a general formulation in the Anisotropic Analytical Algorithm (AAA) model coordinate system (depending on collimator/wedge angles) that can also be applied to other models. Additionally, a second purpose of this study was to develop an integral formulation for the 3D wedge exponential factor with statistical approximations, with an introduction of the limit angle/conformal wedge. Methods: The radiotherapy model used to develop this mathematical task is the classical superposition-convolution algorithm, AAA (developed by Ulmer and Harder). We worked with optimal geometrical approximations to make the computational IMRT calculations quicker and reduce the planning-system time. Analytic geometry and computational techniques to carry out simulations (for standard wedges) are detailed and developed sharply. Integral developments and integral-statistical approximations are explained. The beam-divergence limit angle for optimal wedge filtration formulas is calculated and sketched, with geometrical approximations. Fundamental trigonometry is used for this purpose. Results: Extensive simulation tables for WF of 15º, 30º, 45º, and 60º are shown with errors. As a result, it is possible to determine the best individual treatment dose distribution for each patient. We present these basic simulations and numerical examples for standard manufactured WF with a straight sloping surface, to check the accuracy and errors of the calculations. Simulation results give low RMS and relative error values (formulated for WF of 15º, 30º, 45º, and 60º). Conclusion: We obtained a series of analytic geometry formulas for WF that can be applied for any particular dose
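The exponential wedge factor discussed above can be summarized in a hedged, generic form (this is a textbook-style sketch under simplifying assumptions, not the paper's exact derivation or notation):

```latex
% For a pencil beam crossing a wedge of slope angle \theta_w at lateral
% position x, with divergence angle \varphi from the central axis, the
% traversed metal thickness is approximately
t(x,\varphi) \;\approx\; \frac{(x_{\max}-x)\,\tan\theta_w}{\cos\varphi},
% and the wedge transmission entering the fluence model is the factor
F(x,\varphi) \;=\; \exp\!\bigl(-\mu\, t(x,\varphi)\bigr),
% where \mu is the effective linear attenuation coefficient of the wedge
% alloy and x_{\max} the thick-edge coordinate. The limit divergence angle
% is then the largest \varphi for which the ray still exits through the
% sloped face of the wedge.
```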

  3. Pile-up correction by Genetic Algorithm and Artificial Neural Network

    Science.gov (United States)

    Kafaee, M.; Saramad, S.

    2009-08-01

    Pile-up distortion is a common problem in high counting rate radiation spectroscopy in many fields such as industrial, nuclear and medical applications. It is possible to reduce pulse pile-up using hardware-based pile-up rejection. However, this phenomenon may not be eliminated completely by this approach, and the spectrum distortion caused by pile-up rejection can increase as well. In addition, inaccurate correction or rejection of pile-up artifacts in applications such as energy dispersive X-ray (EDX) spectrometers can lead to loss of counts, poor quantitative results and even false element identification. Therefore, it is highly desirable to use software-based models to predict and correct any recognized pile-up signals in data acquisition systems. The present paper describes two new intelligent approaches for pile-up correction: the Genetic Algorithm (GA) and Artificial Neural Networks (ANNs). The validation and testing results of these new methods have been compared and show excellent agreement with data measured with a 60Co source and a NaI detector. Monte Carlo simulation of these new intelligent algorithms also shows their advantages over hardware-based pulse pile-up rejection methods.
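
    The distortion itself is easy to picture: when two pulses arrive closer together than the shaper can resolve, their tails add and the recorded peak misstates both energies. A minimal sketch (the exponential pulse shape, decay time and amplitudes are invented for illustration, not taken from the paper):

```python
import numpy as np

def pulse(t, t0, amp, tau=5.0):
    """Idealized detector pulse: instant rise at t0, exponential decay."""
    return np.where(t >= t0, amp * np.exp(-(t - t0) / tau), 0.0)

t = np.arange(0.0, 100.0, 1.0)
clean = pulse(t, 20, 1.0)              # single event of amplitude 1.0
piled = clean + pulse(t, 23, 0.8)      # second event 3 samples later

# The summed (piled-up) trace peaks higher than either true amplitude,
# so a naive peak-height energy estimate is biased upward.
```

    A software correction of the kind the paper describes would learn to recover the two true amplitudes from the distorted `piled` trace.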

  4. A performance comparison of multi-objective optimization algorithms for solving nearly-zero-energy-building design problems

    NARCIS (Netherlands)

    Hamdy, M.; Nguyen, A.T. (Anh Tuan); Hensen, J.L.M.

    2016-01-01

    Integrated building design is inherently a multi-objective optimization problem where two or more conflicting objectives must be minimized and/or maximized concurrently. Many multi-objective optimization algorithms have been developed; however, few of them have been tested in solving building design

  5. Height and Tilt Geometric Texture

    DEFF Research Database (Denmark)

    Andersen, Vedrana; Desbrun, Mathieu; Bærentzen, Jakob Andreas

    2009-01-01

    compromise between functionality and simplicity: it can efficiently handle and process geometric texture too complex to be represented as a height field, without having recourse to full blown mesh editing algorithms. The height-and-tilt representation proposed here is fully intrinsic to the mesh, making...

  6. Validation of Point Clouds Segmentation Algorithms Through Their Application to Several Case Studies for Indoor Building Modelling

    Science.gov (United States)

    Macher, H.; Landes, T.; Grussenmeyer, P.

    2016-06-01

    Laser scanners are widely used for the modelling of existing buildings and particularly in the creation process of as-built BIM (Building Information Modelling). However, the generation of as-built BIM from point clouds involves mainly manual steps and is consequently time consuming and error-prone. Along the path to automation, a three-step segmentation approach has been developed. This approach is composed of two phases: a segmentation into sub-spaces, namely floors and rooms, and a plane segmentation combined with the identification of building elements. In order to assess and validate the developed approach, different case studies are considered. Indeed, it is essential to apply algorithms to several datasets rather than develop them on a single dataset whose particularities could influence the development. Indoor point clouds of different types of buildings are used as input for the developed algorithms, ranging from an individual house of almost one hundred square meters to larger buildings of several thousand square meters. The datasets provide various space configurations and present numerous occluding objects such as desks, computer equipment, home furnishings and even wine barrels. For each dataset, the results are illustrated. The analysis of the results provides an insight into the transferability of the developed approach for the indoor modelling of several types of buildings.
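
    The plane-segmentation phase mentioned above is often realized with a RANSAC-style fit; a minimal sketch (function name, tolerance and iteration count are illustrative, not the authors' implementation):

```python
import numpy as np

def ransac_plane(points, n_iters=200, tol=0.02, rng=None):
    """Fit a dominant plane to an (N, 3) point cloud with RANSAC.

    Returns (normal, d, inlier_mask) for the plane n.x + d = 0
    with the largest inlier set. Illustrative sketch only; the
    paper's segmentation pipeline is more involved.
    """
    rng = np.random.default_rng(rng)
    best_mask, best_n, best_d = None, None, None
    for _ in range(n_iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-12:          # degenerate (collinear) sample
            continue
        n /= norm
        d = -n @ p0
        mask = np.abs(points @ n + d) < tol
        if best_mask is None or mask.sum() > best_mask.sum():
            best_mask, best_n, best_d = mask, n, d
    return best_n, best_d, best_mask
```

    Run repeatedly (removing inliers each time), this yields the floor, ceiling and wall planes from which building elements are then identified.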

  7. Image understanding using geometric context

    Science.gov (United States)

    Zhang, Xiaochun; Liu, Chuancai

    2017-07-01

    A Gibbs sampler based topic model for image annotation, which takes into account the interaction between visual geometric context and the related topic, is presented. Most existing topic models for scene annotation use a segmentation-based algorithm. However, topic models using a segmentation algorithm alone can sometimes produce erroneous results when used to annotate real-life scene pictures. Therefore, our algorithm makes use of peaks of the image surface instead of segmentation regions. Existing approaches use the SIFT algorithm and treat the peaks as round blob features. In this paper, the peaks are treated as anisotropic blob features, which model low-level visual elements more precisely. In order to better utilize visual features, our model not only takes visual codewords into consideration, but also considers the influence of visual properties on topic formation, such as orientation, width, length and color. The basic idea is based on the assumption that different topics produce distinct visual appearances, and that different visual appearances help distinguish topics. During the learning stage, each topic is associated with a set of distributions of visual properties, which depicts the appearance of the topic. This paper considers more geometric properties, which reduce topic uncertainty and allow the images to be learned better. Tested on the Corel5K, SAIAPR-TC12 and Espgame100k datasets, our method performs moderately better than some state-of-the-art methods.

  8. Triangular Geometrized Sampling Heuristics for Fast Optimal Motion Planning

    Directory of Open Access Journals (Sweden)

    Ahmed Hussain Qureshi

    2015-02-01

    Full Text Available Rapidly-exploring Random Tree (RRT)-based algorithms have become increasingly popular due to their lower computational complexity as compared with other path planning algorithms. The recently presented RRT* motion planning algorithm improves upon the original RRT algorithm by providing optimal path solutions. While RRT determines an initial collision-free path fairly quickly, RRT* guarantees almost-sure convergence to an optimal, obstacle-free path from the start to the goal points for any given geometrical environment. However, the main limitations of RRT* include its slow processing rate and high memory consumption, due to the large number of iterations required for calculating the optimal path. In order to overcome these limitations, we present another improvement, i.e., the Triangular Geometrized-RRT* (TG-RRT*) algorithm, which utilizes triangular geometrical methods to improve the performance of the RRT* algorithm in terms of processing time and a decreased number of iterations required for an optimal path solution. Simulations comparing the performance of the improved TG-RRT* with RRT* are presented to demonstrate the overall improvement in performance and optimal path detection.
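
    For readers unfamiliar with the baseline the paper improves upon, the core RRT grow-and-steer loop can be sketched as follows (plain RRT in a 10×10 obstacle-free square, not the TG-RRT* itself; all parameters are illustrative):

```python
import math, random

def rrt(start, goal, is_free, n_iters=2000, step=0.5, goal_tol=0.5, seed=0):
    """Plain 2D RRT sketch (baseline only, not the paper's TG-RRT*).

    Repeatedly steers the nearest tree node toward a random sample
    (with a 10% bias toward the goal) and returns the first path
    found, or None. `is_free(p)` reports collision-free points.
    """
    random.seed(seed)
    nodes, parent = [start], {0: None}
    for _ in range(n_iters):
        sample = goal if random.random() < 0.1 else (
            random.uniform(0, 10), random.uniform(0, 10))
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))
        near = nodes[i]
        d = math.dist(near, sample)
        if d == 0.0:
            continue
        new = (near[0] + step * (sample[0] - near[0]) / d,
               near[1] + step * (sample[1] - near[1]) / d)
        if not is_free(new):
            continue
        parent[len(nodes)] = i
        nodes.append(new)
        if math.dist(new, goal) < goal_tol:
            path, k = [], len(nodes) - 1   # walk parents back to start
            while k is not None:
                path.append(nodes[k])
                k = parent[k]
            return path[::-1]
    return None
```

    RRT* adds a rewiring step on top of this loop; the cost of that step is exactly what TG-RRT*'s triangular heuristics aim to reduce.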

  9. A model for the build-up of disordered material in ion bombarded Si

    International Nuclear Information System (INIS)

    Nelson, R.S.

    1977-01-01

    A new model based on experimental observation is developed for the build-up of disordered material in ion bombarded silicon. The model assumes that disordered zones are created in a background of migrating point defects, these zones then act as neutral sinks for such defects which interact with the zones and cause recrystallization. A simple steady state rate theory is developed to describe the build-up of disordered material with ion dose as a function of temperature. In general the theory predicts two distinct behaviour patterns depending on the temperature and the ion mass, namely a linear build-up with dose to complete disorder for heavy bombarding ions and a build-up to saturation at a relatively low level for light ions such as protons. However, in some special circumstances a transition region is predicted where the build-up of disorder approximately follows a (dose)^(1/2) relationship before reverting to a linear behaviour at high dose. (author)
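
    The two limiting regimes can be reproduced with a toy steady-state rate equation in which disorder is produced at rate g per unit dose and annealed by migrating defects at rate k (the equation and symbols are invented for illustration and are not the paper's actual rate theory):

```python
import math

def disorder_fraction(dose, g, k):
    """Toy rate model: dD/dphi = g*(1 - D) - k*D.

    g - disordered-zone production per unit dose
    k - recrystallization rate due to migrating point defects
    Closed-form solution saturating at g/(g + k). For small k
    (heavy ions) growth is nearly linear in dose up to complete
    disorder; for large k (light ions such as protons) the
    disorder saturates at a relatively low level.
    """
    s = g + k
    return (g / s) * (1.0 - math.exp(-s * dose))
```

    For example, g = 1, k = 0.001 builds up almost linearly toward full disorder, while g = 0.1, k = 1 saturates below 10% disorder.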

  10. The Effect of Sloshing on a Tank Pressure Build-up Unit

    OpenAIRE

    Banne, Håvard Bolstad

    2017-01-01

    This thesis work has aimed to identify how sloshing will affect a liquefied natural gas (LNG) fuel tank. The physical nature of LNG means it needs to be kept cooled and pressurized in order to remain in a liquid state. By implementing a pressure build-up unit (PBU) it is possible to pressurize the tank by vaporizing part of the tank’s contents, with the vapour then returned to the tank in a loop, building pressure in the process. A tank pressure build-up unit has been built in the laboratory ...

  11. Energy efficient model based algorithm for control of building HVAC systems.

    Science.gov (United States)

    Kirubakaran, V; Sahu, Chinmay; Radhakrishnan, T K; Sivakumaran, N

    2015-11-01

    Energy efficient designs are receiving increasing attention in various fields of engineering. Heating, ventilation and air conditioning (HVAC) control system designs involve improved energy usage with an acceptable relaxation in thermal comfort. In this paper, real time data from a building HVAC system provided by BuildingLAB is considered. A resistor-capacitor (RC) framework representing the thermal dynamics of the building is estimated using a particle swarm optimization (PSO) algorithm. With thermal comfort (deviation of room temperature from the required temperature) and an energy measure (Ecm) as objective costs, an explicit MPC design for this building model is executed based on a state space representation of the supply water temperature (input)/room temperature (output) dynamics. The controllers are subjected to servo tracking, and an external disturbance (ambient temperature) is provided from the real time data during closed loop control. The control strategies are ported on a PIC32mx series microcontroller platform. The building model is implemented in MATLAB and hardware in loop (HIL) testing of the strategies is executed over a USB port. Results indicate that compared to traditional proportional integral (PI) controllers, the explicit MPCs improve both energy efficiency and thermal comfort significantly. Copyright © 2015 Elsevier Inc. All rights reserved.
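
    The resistor-capacitor framework mentioned above can be sketched as a first-order model driven by the supply water temperature (the model structure, parameter names and values here are illustrative assumptions, not BuildingLAB's identified model):

```python
def simulate_room(T0, T_amb, T_supply, R, C, K, dt, steps):
    """First-order RC sketch of room thermal dynamics:

        C dT/dt = (T_amb - T)/R + K*(T_supply - T)

    R [K/W] envelope thermal resistance, C [J/K] room thermal
    capacitance, K [W/K] heat-exchange coefficient of the supply
    loop. Forward-Euler integration with time step dt [s].
    """
    T = T0
    trace = [T]
    for _ in range(steps):
        dT = ((T_amb - T) / R + K * (T_supply - T)) / C
        T += dt * dT
        trace.append(T)
    return trace

# Hypothetical numbers: cold ambient, warm supply water.
trace = simulate_room(T0=15.0, T_amb=5.0, T_supply=40.0,
                      R=0.005, C=1e6, K=100.0, dt=60.0, steps=5000)
```

    The room settles at the weighted balance (T_amb/R + K·T_supply)/(1/R + K); an MPC would instead modulate T_supply to track a comfort setpoint at minimum energy cost.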

  12. Algorithms and file structures for computational geometry

    International Nuclear Information System (INIS)

    Hinrichs, K.; Nievergelt, J.

    1983-01-01

    Algorithms for solving geometric problems and file structures for storing large amounts of geometric data are of increasing importance in computer graphics and computer-aided design. As examples of recent progress in computational geometry, we explain plane-sweep algorithms, which solve various topological and geometric problems efficiently; and we present the grid file, an adaptable, symmetric multi-key file structure that provides efficient access to multi-dimensional data along any space dimension. (orig.)

  13. Automated real-time search and analysis algorithms for a non-contact 3D profiling system

    Science.gov (United States)

    Haynes, Mark; Wu, Chih-Hang John; Beck, B. Terry; Peterman, Robert J.

    2013-04-01

    The purpose of this research is to develop a new means of identifying and extracting geometrical feature statistics from a non-contact precision-measurement 3D profilometer. Autonomous algorithms have been developed to search through large-scale Cartesian point clouds to identify and extract geometrical features. These algorithms are developed with the intent of providing real-time production quality control of cold-rolled steel wires. The steel wires in question are prestressing steel reinforcement wires for concrete members. The geometry of the wire is critical to the performance of the overall concrete structure. For this research a custom 3D non-contact profilometry system has been developed that utilizes laser displacement sensors for submicron-resolution surface profiling. Optimizations in the control and sensory system allow data points to be collected at up to approximately 400,000 points per second. In order to achieve geometrical feature extraction and tolerancing with this large volume of data, the algorithms employed are optimized for parsing large data quantities. The methods used provide a unique means of maintaining high resolution data of the surface profiles while keeping algorithm running times within practical bounds for industrial application. By a combination of regional sampling, iterative search, spatial filtering, frequency filtering, spatial clustering, and template matching, a robust feature identification method has been developed. These algorithms provide an autonomous means of verifying tolerances in geometrical features. The key method of identifying the features is a combination of downhill simplex and geometrical feature templates. By performing downhill simplex through several procedural programming layers of different search and filtering techniques, very specific geometrical features can be identified within the point cloud and analyzed for proper tolerancing. Being able to perform this quality control in real time
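
    The downhill-simplex-plus-template idea can be sketched on a hypothetical 1D profile: a parametric groove template is fitted to the measured profile by minimizing the squared residual with the Nelder-Mead (downhill simplex) method. The template, parameters and data below are invented for illustration, not the paper's feature definitions:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical 1D surface profile with a single Gaussian groove.
x = np.linspace(0.0, 10.0, 500)
true_center, true_depth, true_width = 4.2, 0.8, 1.5
profile = -true_depth * np.exp(-((x - true_center) / true_width) ** 2)

def template(params):
    """Geometrical feature template: groove with (center, depth, width)."""
    c, d, w = params
    return -d * np.exp(-((x - c) / w) ** 2)

def misfit(params):
    """Sum-of-squares residual between template and measured profile."""
    return np.sum((template(params) - profile) ** 2)

# Downhill simplex search from a rough initial guess.
res = minimize(misfit, x0=[5.0, 0.5, 1.0], method="Nelder-Mead")
center, depth, width = res.x
```

    The fitted parameters (center, depth, width) are exactly the kind of feature statistics that can then be checked against geometric tolerances.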

  14. Multiobjective optimization of building design using genetic algorithm and artificial neural network

    Energy Technology Data Exchange (ETDEWEB)

    Magnier, L.; Zhou, L.; Haghighat, F. [Concordia Univ., Centre for Building Studies, Montreal, PQ (Canada). Dept. of Building, Civil and Environmental Engineering

    2008-07-01

    This paper addressed the challenge of designing modern buildings that are energy efficient, affordable, environmentally sound and comfortable for occupants. Building optimization is a time consuming process when so many objectives must be met. In particular, the use of genetic algorithm (GA) for building design has limitations due to the high number of simulations required. This paper presented an efficient approach to overcome the limitations of GA for building design. The approach expanded the GA methodology to multiobjective optimization. The GA integrating neural network (GAINN) approach first uses a simulation-based artificial neural network (ANN) to characterize building behaviour, and then combines it with a GA for optimization. The process was shown to provide fast and reliable optimization. GAINN was further improved by integrating multiobjective evolutionary algorithms (MOEAs). Two new MOEAs named NSGAINN and PLAGUE were designed for the proposed methodology. The purpose of creating a new MOEA was to take advantage of GAINN fast evaluations. This paper presented bench test results and compared them with NSGA-II. A previous case study using the GAINN methodology was re-optimized with the newly developed MOEA. The design to be optimized was a ventilation system of a standard office room in the summer, with 2 occupants and 4 underfloor air distribution diffusers. The objectives included thermal comfort, indoor air quality, and energy conservation for cooling. The control variables were temperature of the air supply, speed of air supply, distance from the diffuser to the occupant, and the distance from the return grill to the contaminant source. The results showed that the newly presented GAINN methodology was better in both convergence and range of choices compared to a weighted sum GA. 13 refs., 2 tabs., 9 figs.

  15. Comparison of two (geometric) algorithms for auto OMA

    DEFF Research Database (Denmark)

    Juul, Martin; Olsen, Peter; Balling, Ole

    2018-01-01

    parameters. The two algorithms are compared and illustrated on simulated data. Different choices of distance measures are discussed and evaluated. It is illustrated how a simple distance measure outperforms traditional distance measures from other Auto OMA algorithms. Traditional measures are unable...

  16. Using geometrical, textural, and contextual information of land parcels for classification of detailed urban land use

    Science.gov (United States)

    Wu, S.-S.; Qiu, X.; Usery, E.L.; Wang, L.

    2009-01-01

    Detailed urban land use data are important to government officials, researchers, and businesspeople for a variety of purposes. This article presents an approach to classifying detailed urban land use based on geometrical, textural, and contextual information of land parcels. An area of 6 by 14 km in Austin, Texas, with land parcel boundaries delineated by the Travis Central Appraisal District of Travis County, Texas, is tested for the approach. We derive fifty parcel attributes from relevant geographic information system (GIS) and remote sensing data and use them to discriminate among nine urban land uses: single family, multifamily, commercial, office, industrial, civic, open space, transportation, and undeveloped. Half of the 33,025 parcels in the study area are used as training data for land use classification and the other half are used as testing data for accuracy assessment. The best result with a decision tree classification algorithm has an overall accuracy of 96 percent and a kappa coefficient of 0.78, and two naive, baseline models based on the majority rule and the spatial autocorrelation rule have overall accuracy of 89 percent and 79 percent, respectively. The algorithm is relatively good at classifying single-family, multifamily, commercial, open space, and undeveloped land uses and relatively poor at classifying office, industrial, civic, and transportation land uses. The most important attributes for land use classification are the geometrical attributes, particularly those related to building areas. Next are the contextual attributes, particularly those relevant to the spatial relationship between buildings, then the textural attributes, particularly the semivariance texture statistic from 0.61-m resolution images.
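
    The reported overall accuracy and kappa coefficient are both derived from the classification confusion matrix; a sketch with an invented two-class matrix (the real study uses nine land use classes):

```python
import numpy as np

def accuracy_and_kappa(cm):
    """Overall accuracy and Cohen's kappa from a confusion matrix.

    cm[i, j] = number of parcels of true class i assigned class j.
    Kappa discounts the agreement expected by chance, which is why
    it can be markedly lower than the raw accuracy (0.78 vs. 96%
    in the study above).
    """
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                      # observed agreement
    pe = (cm.sum(0) @ cm.sum(1)) / n ** 2      # chance agreement
    return po, (po - pe) / (1 - pe)

# Toy two-class example (invented numbers):
acc, kappa = accuracy_and_kappa([[90, 10], [5, 95]])
```

    For the toy matrix this gives accuracy 0.925 and kappa 0.85.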

  17. Forward error correction based on algebraic-geometric theory

    CERN Document Server

    A Alzubi, Jafar; M Chen, Thomas

    2014-01-01

    This book covers the design, construction, and implementation of algebraic-geometric codes from Hermitian curves. Matlab simulations of algebraic-geometric codes and Reed-Solomon codes compare their bit error rate using different modulation schemes over additive white Gaussian noise channel model. Simulation results of Algebraic-geometric codes bit error rate performance using quadrature amplitude modulation (16QAM and 64QAM) are presented for the first time and shown to outperform Reed-Solomon codes at various code rates and channel models. The book proposes algebraic-geometric block turbo codes. It also presents simulation results that show an improved bit error rate performance at the cost of high system complexity due to using algebraic-geometric codes and Chase-Pyndiah’s algorithm simultaneously. The book proposes algebraic-geometric irregular block turbo codes (AG-IBTC) to reduce system complexity. Simulation results for AG-IBTCs are presented for the first time.

  18. AAA and PBC calculation accuracy in the surface build-up region in tangential beam treatments. Phantom and breast case study with the Monte Carlo code PENELOPE

    International Nuclear Information System (INIS)

    Panettieri, Vanessa; Barsoum, Pierre; Westermark, Mathias; Brualla, Lorenzo; Lax, Ingmar

    2009-01-01

    Background and purpose: In tangential beam treatments accurate calculation of the absorbed dose in the build-up region is of major importance, in particular when the target extends superficially close to the skin. In most analytical treatment planning systems (TPSs) calculations depend on the experimental measurements introduced by the user, whose accuracy might be limited by the type of detector employed to perform them. To quantify the discrepancy between analytically calculated and delivered dose in the build-up region, near the skin of a patient, independent Monte Carlo (MC) simulations using the PENELOPE code were performed. Dose distributions obtained with MC simulations were compared with those given by the Pencil Beam Convolution (PBC) algorithm and the Analytical Anisotropic Algorithm (AAA) implemented in the commercial TPS Eclipse. Material and methods: A cylindrical phantom was used to approximate the breast contour of a patient for MC simulations and the TPS. Calculations of the absorbed doses were performed for 6 and 18 MV beams for four different angles of incidence (15°, 30°, 45° and 75°) and different field sizes (3 × 3 cm², 10 × 10 cm² and 40 × 40 cm²). Absorbed doses along the phantom central axis were obtained with both the PBC algorithm and the AAA and compared to those estimated by the MC simulations. Additionally, a breast patient case was calculated with two opposed 6 MV photon beams using all the aforementioned analytical and stochastic algorithms. Results: For the 6 MV photon beam in the phantom case, both the PBC algorithm and the AAA tend to underestimate the absorbed dose in the build-up region in comparison to MC results. These differences are clinically irrelevant and are included in a 1 mm range. This tendency is also confirmed in the breast patient case. For the 18 MV beam the PBC algorithm underestimates the absorbed dose with respect to the AAA. In comparison to MC simulations the PBC algorithm tends

  19. A differential-geometric approach to generalized linear models with grouped predictors

    NARCIS (Netherlands)

    Augugliaro, Luigi; Mineo, Angelo M.; Wit, Ernst C.

    We propose an extension of the differential-geometric least angle regression method to perform sparse group inference in a generalized linear model. An efficient algorithm is proposed to compute the solution curve. The proposed group differential-geometric least angle regression method has important

  20. Gems of combinatorial optimization and graph algorithms

    CERN Document Server

    Skutella, Martin; Stiller, Sebastian; Wagner, Dorothea

    2015-01-01

    Are you looking for new lectures for your course on algorithms, combinatorial optimization, or algorithmic game theory?  Maybe you need a convenient source of relevant, current topics for a graduate student or advanced undergraduate student seminar?  Or perhaps you just want an enjoyable look at some beautiful mathematical and algorithmic results, ideas, proofs, concepts, and techniques in discrete mathematics and theoretical computer science?   Gems of Combinatorial Optimization and Graph Algorithms is a handpicked collection of up-to-date articles, carefully prepared by a select group of international experts, who have contributed some of their most mathematically or algorithmically elegant ideas.  Topics include longest tours and Steiner trees in geometric spaces, cartograms, resource buying games, congestion games, selfish routing, revenue equivalence and shortest paths, scheduling, linear structures in graphs, contraction hierarchies, budgeted matching problems, and motifs in networks.   This ...

  1. Building optimal regression tree by ant colony system-genetic algorithm: Application to modeling of melting points

    Energy Technology Data Exchange (ETDEWEB)

    Hemmateenejad, Bahram, E-mail: hemmatb@sums.ac.ir [Department of Chemistry, Shiraz University, Shiraz (Iran, Islamic Republic of); Medicinal and Natural Products Chemistry Research Center, Shiraz University of Medical Sciences, Shiraz (Iran, Islamic Republic of); Shamsipur, Mojtaba [Department of Chemistry, Razi University, Kermanshah (Iran, Islamic Republic of); Zare-Shahabadi, Vali [Young Researchers Club, Mahshahr Branch, Islamic Azad University, Mahshahr (Iran, Islamic Republic of); Akhond, Morteza [Department of Chemistry, Shiraz University, Shiraz (Iran, Islamic Republic of)

    2011-10-17

    Highlights: → Ant colony systems help to build optimum classification and regression trees. → Using genetic algorithm operators in ant colony systems resulted in more appropriate models. → Variable selection in each terminal node of the tree gives promising results. → CART-ACS-GA could model the melting points of organic materials with prediction errors lower than previous models. - Abstract: The classification and regression trees (CART) possess the advantage of being able to handle large data sets and yield readily interpretable models. A conventional method of building a regression tree is recursive partitioning, which results in a good but not optimal tree. Ant colony system (ACS), which is a meta-heuristic algorithm derived from the observation of real ants, can be used to overcome this problem. The purpose of this study was to explore the use of CART and its combination with ACS for modeling of melting points of a large variety of chemical compounds. Genetic algorithm (GA) operators (e.g., crossover and mutation operators) were combined with the ACS algorithm to select the best solution model. In addition, at each terminal node of the resulting tree, variable selection was done by the ACS-GA algorithm to build an appropriate partial least squares (PLS) model. To test the ability of the resulting tree, a set of approximately 4173 structures and their melting points were used (3000 compounds as training set and 1173 as validation set). Further, an external test set containing 277 drugs was used to validate the prediction ability of the tree. Comparison of the results obtained from both trees showed that the tree constructed by the ACS-GA algorithm performs better than that produced by the recursive partitioning procedure.

  2. Stock price prediction using geometric Brownian motion

    Science.gov (United States)

    Farida Agustini, W.; Restu Affianti, Ika; Putri, Endah RM

    2018-03-01

    Geometric Brownian motion is a mathematical model for predicting the future price of a stock. Before the prediction itself, the expected stock price is formulated and a 95% confidence level is set. For stock price prediction using the geometric Brownian motion model, the algorithm starts by calculating the returns, followed by estimating the volatility and drift, obtaining the stock price forecast, calculating the forecast MAPE, calculating the expected stock price and calculating the 95% confidence interval. Based on the research, the output analysis shows that the geometric Brownian motion model is a prediction technique with a high rate of accuracy, as evidenced by a forecast MAPE value ≤ 20%.
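
    The forecasting steps listed above (returns, then volatility and drift, then the expected price and MAPE) can be sketched as follows. This is a minimal reading of the abstract, not the authors' code; the drift convention assumes the standard GBM SDE dS = μS dt + σS dW:

```python
import math, statistics

def gbm_forecast(prices, horizon):
    """GBM point forecast from a historical price series.

    Estimates volatility sigma and drift mu from daily log-returns,
    then returns the expected price E[S_t] = S_0 * exp(mu * t).
    Log-returns of a GBM have mean mu - sigma^2/2, hence the
    correction term below.
    """
    rets = [math.log(b / a) for a, b in zip(prices, prices[1:])]
    sigma = statistics.stdev(rets)
    mu = statistics.mean(rets) + 0.5 * sigma ** 2
    return prices[-1] * math.exp(mu * horizon)

def mape(actual, forecast):
    """Mean absolute percentage error; <= 20% indicates high accuracy."""
    return 100 * statistics.mean(
        abs((a - f) / a) for a, f in zip(actual, forecast))
```

    In practice the forecast would be evaluated by computing `mape` over a held-out window and checked against the 20% threshold the abstract cites.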

  3. Approximate joint diagonalization and geometric mean of symmetric positive definite matrices.

    Science.gov (United States)

    Congedo, Marco; Afsari, Bijan; Barachant, Alexandre; Moakher, Maher

    2014-01-01

    We explore the connection between two problems that have arisen independently in the signal processing and related fields: the estimation of the geometric mean of a set of symmetric positive definite (SPD) matrices and their approximate joint diagonalization (AJD). Today there is a considerable interest in estimating the geometric mean of a SPD matrix set in the manifold of SPD matrices endowed with the Fisher information metric. The resulting mean has several important invariance properties and has proven very useful in diverse engineering applications such as biomedical and image data processing. While for two SPD matrices the mean has an algebraic closed form solution, for a set of more than two SPD matrices it can only be estimated by iterative algorithms. However, none of the existing iterative algorithms feature at the same time fast convergence, low computational complexity per iteration and guarantee of convergence. For this reason, recently other definitions of geometric mean based on symmetric divergence measures, such as the Bhattacharyya divergence, have been considered. The resulting means, although possibly useful in practice, do not satisfy all desirable invariance properties. In this paper we consider geometric means of covariance matrices estimated on high-dimensional time-series, assuming that the data is generated according to an instantaneous mixing model, which is very common in signal processing. We show that in these circumstances we can approximate the Fisher information geometric mean by employing an efficient AJD algorithm. Our approximation is in general much closer to the Fisher information geometric mean as compared to its competitors and verifies many invariance properties. Furthermore, convergence is guaranteed, the computational complexity is low and the convergence rate is quadratic. The accuracy of this new geometric mean approximation is demonstrated by means of simulations.
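
    For exactly two SPD matrices the closed-form mean mentioned in the abstract can be written down directly; a sketch using an eigendecomposition-based matrix square root (sets of more than two matrices need the iterative or AJD-based schemes the paper discusses):

```python
import numpy as np

def spd_power(M, p):
    """M**p for a symmetric positive definite matrix via eigh."""
    w, V = np.linalg.eigh(M)
    return (V * w ** p) @ V.T

def geometric_mean(A, B):
    """Closed-form Fisher-metric geometric mean of two SPD matrices:

        A # B = A^{1/2} (A^{-1/2} B A^{-1/2})^{1/2} A^{1/2}
    """
    As = spd_power(A, 0.5)
    Ais = spd_power(A, -0.5)
    return As @ spd_power(Ais @ B @ Ais, 0.5) @ As
```

    For commuting matrices this reduces to the elementwise geometric mean of the spectra, e.g. the mean of diag(1, 4) and diag(9, 16) is diag(3, 8).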

  4. Approximate joint diagonalization and geometric mean of symmetric positive definite matrices.

    Directory of Open Access Journals (Sweden)

    Marco Congedo

    Full Text Available We explore the connection between two problems that have arisen independently in the signal processing and related fields: the estimation of the geometric mean of a set of symmetric positive definite (SPD) matrices and their approximate joint diagonalization (AJD). Today there is a considerable interest in estimating the geometric mean of a SPD matrix set in the manifold of SPD matrices endowed with the Fisher information metric. The resulting mean has several important invariance properties and has proven very useful in diverse engineering applications such as biomedical and image data processing. While for two SPD matrices the mean has an algebraic closed form solution, for a set of more than two SPD matrices it can only be estimated by iterative algorithms. However, none of the existing iterative algorithms feature at the same time fast convergence, low computational complexity per iteration and guarantee of convergence. For this reason, recently other definitions of geometric mean based on symmetric divergence measures, such as the Bhattacharyya divergence, have been considered. The resulting means, although possibly useful in practice, do not satisfy all desirable invariance properties. In this paper we consider geometric means of covariance matrices estimated on high-dimensional time-series, assuming that the data is generated according to an instantaneous mixing model, which is very common in signal processing. We show that in these circumstances we can approximate the Fisher information geometric mean by employing an efficient AJD algorithm. Our approximation is in general much closer to the Fisher information geometric mean as compared to its competitors and verifies many invariance properties. Furthermore, convergence is guaranteed, the computational complexity is low and the convergence rate is quadratic. The accuracy of this new geometric mean approximation is demonstrated by means of simulations.

  5. Hybrid clustering based fuzzy structure for vibration control - Part 1: A novel algorithm for building neuro-fuzzy system

    Science.gov (United States)

    Nguyen, Sy Dzung; Nguyen, Quoc Hung; Choi, Seung-Bok

    2015-01-01

    This paper presents a new algorithm, called B-ANFIS, for building an adaptive neuro-fuzzy inference system (ANFIS) from a training data set. In order to increase the accuracy of the model, the following issues are addressed. Firstly, a data merging rule is proposed to build and perform a data-clustering strategy. Subsequently, a combination of clustering processes in the input data space and in the joint input-output data space is presented. The crucial reason for this task is to overcome problems related to initialization and contradictory fuzzy rules, which usually arise when building an ANFIS. The clustering process in the input data space is accomplished based on a proposed merging-possibilistic clustering (MPC) algorithm. The effectiveness of this process is evaluated before resuming the clustering process in the joint input-output data space. The optimal parameters obtained after completion of the clustering process are used to build the ANFIS. Simulations based on numerical data, 'Daily Data of Stock A', and measured data sets of a smart damper are performed to analyze and estimate accuracy. In addition, the convergence and robustness of the proposed algorithm are investigated based on both theoretical and testing approaches.

  6. An Algorithm of Building Extraction in Urban Area Based on Improved Top-hat Transformations and LBP Elevation Texture

    Directory of Open Access Journals (Sweden)

    HE Manyun

    2017-09-01

    Full Text Available Buildings and vegetation are difficult to classify from LiDAR data alone, while vegetation in shadow cannot be eliminated from aerial images alone. Improved top-hat transformations and local binary pattern (LBP) elevation texture analysis are therefore proposed for building extraction, based on the fusion of aerial images and LiDAR data. Firstly, the LiDAR data are reorganized into grid cells, and the algorithm removes ground points through a top-hat transform. Then, vegetation points are extracted by the normalized difference vegetation index (NDVI). Thirdly, according to the elevation information of the LiDAR points, the LBP elevation texture is calculated, achieving precise elimination of vegetation in shadow or surrounding the buildings. Finally, morphological operations are used to fill holes in the building roofs, and region growing is applied to recover complete building edges. The simulation is based on the complex urban area of the Vaihingen benchmark provided by ISPRS; the results show that the algorithm affords higher classification accuracy.
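The white top-hat transform used in pipelines like this one is the signal minus its morphological opening, which extracts narrow raised features from a slowly varying background. A minimal 1-D sketch (the toy elevation profile and window radius are invented for illustration and are not from the paper):

```python
def erode(sig, r):
    # Grayscale erosion: running minimum over a window of radius r.
    return [min(sig[max(0, i - r):i + r + 1]) for i in range(len(sig))]

def dilate(sig, r):
    # Grayscale dilation: running maximum over the same window.
    return [max(sig[max(0, i - r):i + r + 1]) for i in range(len(sig))]

def white_top_hat(sig, r):
    # Top-hat = signal minus its morphological opening (erosion, then dilation).
    opening = dilate(erode(sig, r), r)
    return [s - o for s, o in zip(sig, opening)]

# Toy elevation profile: flat ground at 0 with a narrow raised feature of height 5.
elev = [0, 0, 0, 5, 5, 0, 0, 0]
th = white_top_hat(elev, 2)
```

Features narrower than the structuring window survive the top-hat intact, while flat or slowly varying ground is suppressed to zero, which is why the transform separates above-ground structure from terrain.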

  7. Efficient Geometric Sound Propagation Using Visibility Culling

    Science.gov (United States)

    Chandak, Anish

    2011-07-01

    Simulating propagation of sound can improve the sense of realism in interactive applications such as video games and can lead to better designs in engineering applications such as architectural acoustics. In this thesis, we present geometric sound propagation techniques which are faster than prior methods and map well to upcoming parallel multi-core CPUs. We model specular reflections by using the image-source method and model finite-edge diffraction by using the well-known Biot-Tolstoy-Medwin (BTM) model. We accelerate the computation of specular reflections by applying novel visibility algorithms, FastV and AD-Frustum, which compute visibility from a point. We accelerate finite-edge diffraction modeling by applying a novel visibility algorithm which computes visibility from a region. Our visibility algorithms are based on frustum tracing and exploit recent advances in fast ray-hierarchy intersections, data-parallel computations, and scalable, multi-core algorithms. The AD-Frustum algorithm adapts its computation to the scene complexity and allows small errors in computing specular reflection paths for higher computational efficiency. FastV and our visibility algorithm from a region are general, object-space, conservative visibility algorithms that together significantly reduce the number of image sources compared to other techniques while preserving the same accuracy. Our geometric propagation algorithms are an order of magnitude faster than prior approaches for modeling specular reflections and two to ten times faster for modeling finite-edge diffraction. Our algorithms are interactive, scale almost linearly on multi-core CPUs, and can handle large, complex, and dynamic scenes. We also compare the accuracy of our sound propagation algorithms with other methods. Once sound propagation is performed, it is desirable to listen to the propagated sound in interactive and engineering applications. 
We can generate smooth, artifact-free output audio signals by applying

  8. Control of disturbing loads in residential and commercial buildings via geometric algebra.

    Science.gov (United States)

    Castilla, Manuel-V

    2013-01-01

    Many definitions have been formulated to represent nonactive power for distorted voltages and currents in electronic and electrical systems. Unfortunately, no single universally suitable representation has been accepted as a prototype for this power component. This paper defines a nonactive power multivector from the most advanced multivectorial power theory based on geometric algebra (GA). The new concept can be of particular importance for harmonic load compensation, identification, and metering, among other applications. Likewise, this paper is concerned with a pioneering method for the compensation of disturbing loads. In this way, we propose a multivectorial relative quality index δ̃ associated with the power multivector. It can be adopted as a new index for power quality evaluation, harmonic source detection, and power factor improvement in residential and commercial buildings. The proposed method consists of a single-point strategy based on a comparison among the relative quality index multivectors measured for the different loads at the same metering point. The comparison can provide information, with magnitude, direction, and sense, on the presence of disturbing loads. A numerical example is used to illustrate the capabilities of the suggested approach.

  9. Implementation and efficiency of two geometric stiffening approaches

    International Nuclear Information System (INIS)

    Lugris, Urbano; Naya, Miguel A.; Perez, Jose A.; Cuadrado, Javier

    2008-01-01

    When the modeling of flexible bodies is required in multibody systems, the floating frame of reference formulations are probably the most efficient methods available. In the case of beams undergoing high-speed rotations, the geometric stiffening effect can appear due to geometric nonlinearities, and it is often not captured by the aforementioned methods, since it is common to linearize the elastic forces assuming small deformations. The present work discusses the implementation of different existing methods developed to consider such geometric nonlinearities within a floating frame of reference formulation in natural coordinates, placing emphasis on the relation between the efficiency and accuracy of the resulting algorithms, and seeking to provide practical criteria for their use.

  10. zipHMMlib: a highly optimised HMM library exploiting repetitions in the input to speed up the forward algorithm.

    Science.gov (United States)

    Sand, Andreas; Kristiansen, Martin; Pedersen, Christian N S; Mailund, Thomas

    2013-11-22

    Hidden Markov models are widely used for genome analysis as they combine ease of modelling with efficient analysis algorithms. Calculating the likelihood of a model using the forward algorithm has worst-case time complexity linear in the length of the sequence and quadratic in the number of states in the model. For genome analysis, however, the length runs to millions or billions of observations, and when maximising the likelihood hundreds of evaluations are often needed. A time-efficient forward algorithm is therefore a key ingredient in an efficient hidden Markov model library. We have built a software library for efficiently computing the likelihood of a hidden Markov model. The library exploits commonly occurring substrings in the input to reuse computations in the forward algorithm. In a pre-processing step our library identifies common substrings and builds a structure over the computations in the forward algorithm which can be reused. This analysis can be saved between uses of the library and is independent of concrete hidden Markov models, so one preprocessing can be used to run a number of different models. Using this library, we achieve up to 78 times shorter wall-clock time for realistic whole-genome analyses with a real and reasonably complex hidden Markov model. In one particular case the analysis was performed in less than 8 minutes, compared to 9.6 hours for the previously fastest library. We have implemented the preprocessing procedure and forward algorithm as a C++ library, zipHMM, with Python bindings for use in scripts. The library is available at http://birc.au.dk/software/ziphmm/.
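The plain forward recursion that zipHMM accelerates (by caching and reusing computations for repeated input substrings) runs in O(T K^2) for T observations and K states, and can be sketched as follows; the model parameters and observation sequence below are toy values, not from the paper:

```python
def forward_likelihood(pi, trans, emit, obs):
    """Likelihood of an observation sequence under an HMM (forward algorithm).

    pi[i]       initial probability of state i
    trans[i][j] transition probability from state i to state j
    emit[i][o]  probability that state i emits symbol o
    """
    n = len(pi)
    # Initialization: alpha[i] = P(first observation, state i)
    alpha = [pi[i] * emit[i][obs[0]] for i in range(n)]
    # Recursion: push the probability mass one observation forward.
    for o in obs[1:]:
        alpha = [sum(alpha[i] * trans[i][j] for i in range(n)) * emit[j][o]
                 for j in range(n)]
    # Termination: marginalize over the final state.
    return sum(alpha)

pi = [0.6, 0.4]
trans = [[0.7, 0.3], [0.4, 0.6]]
emit = [[0.5, 0.5], [0.1, 0.9]]
lik = forward_likelihood(pi, trans, emit, [0, 1])
```

A quick correctness check is that the likelihoods of all possible observation sequences of a fixed length sum to one.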

  11. Cubic scaling algorithms for RPA correlation using interpolative separable density fitting

    Science.gov (United States)

    Lu, Jianfeng; Thicke, Kyle

    2017-12-01

    We present a new cubic scaling algorithm for the calculation of the RPA correlation energy. Our scheme splits up the dependence between the occupied and virtual orbitals in χ0 by use of Cauchy's integral formula. This introduces an additional integral to be carried out, for which we provide a geometrically convergent quadrature rule. Our scheme also uses the newly developed Interpolative Separable Density Fitting algorithm to further reduce the computational cost in a way analogous to that of the Resolution of Identity method.
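The geometric convergence claimed for the quadrature rule can be illustrated generically with the trapezoidal rule applied to Cauchy's integral formula on a circle: for an analytic integrand the error decays geometrically in the number of nodes. This toy example (evaluating exp at 0) is a standalone illustration of that phenomenon, not the paper's actual contour or integrand:

```python
import cmath

def cauchy_eval(f, z0, r, n):
    # n-point trapezoidal rule on a circle of radius r around z0, applied to
    # Cauchy's integral formula f(z0) = (1/(2*pi*i)) * oint f(z)/(z - z0) dz.
    # Substituting z = z0 + r*exp(i*theta) reduces the integral to the mean
    # of f over the circle; for analytic f the error decays geometrically in n.
    return sum(f(z0 + r * cmath.exp(2j * cmath.pi * k / n)) for k in range(n)) / n

err8 = abs(cauchy_eval(cmath.exp, 0.0, 1.0, 8) - 1.0)
err16 = abs(cauchy_eval(cmath.exp, 0.0, 1.0, 16) - 1.0)
```

Doubling the node count here drops the error from roughly 1/8! to roughly 1/16!, i.e. by about nine orders of magnitude, which is the geometric-convergence behaviour the abstract refers to.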

  12. APPLICATION OF FINITE ELEMENT METHOD TAKING INTO ACCOUNT PHYSICAL AND GEOMETRIC NONLINEARITY FOR THE CALCULATION OF PRESTRESSED REINFORCED CONCRETE BEAMS

    Directory of Open Access Journals (Sweden)

    Vladimir P. Agapov

    2017-01-01

    Full Text Available Abstract. Objectives Modern building codes prescribe the calculation of building structures taking into account the nonlinearity of deformation. To achieve this goal, the task is to develop a methodology for calculating prestressed reinforced concrete beams that takes physical and geometric nonlinearity into account. Methods The methodology is based on nonlinear calculation algorithms implemented and tested in the computational package PRINS (a program for calculating engineering structures) for other types of structures. As a tool for solving this problem, the finite element method is used. Nonlinear calculation of structures is carried out by the PRINS package using a stepwise iterative method, in which an equation is constructed and solved at each loading step using modified Lagrangian coordinates. Results The basic formulas necessary for both the formation and the solution of a system of nonlinear algebraic equations by the stepwise iteration method are given, taking into account loading, unloading, and possible additional loading. A method for simulating prestressing is described, in which the prestress is imposed by applying a temperature action to the reinforcement and stressing steel rod. Different approaches to accounting for the physical and geometric nonlinearity of reinforced concrete beam rods are considered. A calculation example of a flat beam is given, in which the behaviour of the beam is analysed at various stages of its loading up to failure. Conclusion A program is developed for the calculation of flat and spatial reinforced concrete beams taking into account the nonlinearity of deformation. The program is adapted to the computational package PRINS and, as part of this package, is available to a wide range of engineering, scientific and technical specialists.
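The stepwise iterative method described in this abstract (incremental loading with Newton-Raphson equilibrium iterations on the tangent stiffness) can be illustrated on a single degree of freedom; the cubic spring law, load levels, and function names below are invented for illustration and stand in for the full finite element system:

```python
def solve_step(u, p_target, k1, k3, tol=1e-10, max_iter=50):
    # Newton-Raphson iteration at one load step for a 1-DOF nonlinear
    # spring with internal force F(u) = k1*u + k3*u**3.
    for _ in range(max_iter):
        residual = p_target - (k1 * u + k3 * u ** 3)   # out-of-balance force
        if abs(residual) < tol:
            break
        tangent = k1 + 3.0 * k3 * u ** 2               # tangent stiffness dF/du
        u += residual / tangent
    return u

k1, k3 = 100.0, 40.0     # toy linear and cubic stiffness coefficients
u, load = 0.0, 0.0
for _ in range(10):      # ten equal load increments up to the full load
    load += 15.0
    u = solve_step(u, load, k1, k3)
```

Each increment starts the iteration from the previously converged state, which is what keeps Newton's method inside its convergence basin as the load grows; the same structure carries over to matrix-valued residuals and tangent stiffness matrices.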

  13. An experimentally validated model for geometrically nonlinear plucking-based frequency up-conversion in energy harvesting

    Science.gov (United States)

    Kathpalia, B.; Tan, D.; Stern, I.; Erturk, A.

    2018-01-01

    It is well known that plucking-based frequency up-conversion can enhance the power output in piezoelectric energy harvesting by enabling cyclic free vibration at the fundamental bending mode of the harvester even for very low excitation frequencies. In this work, we present a geometrically nonlinear plucking-based framework for frequency up-conversion in piezoelectric energy harvesting under quasistatic excitations associated with low-frequency stimuli such as walking and similar rigid body motions. Axial shortening of the plectrum is essential to enable plucking excitation, which requires a nonlinear framework relating the plectrum parameters (e.g. overlap length between the plectrum and harvester) to the overall electrical power output. Von Kármán-type geometrically nonlinear deformation of the flexible plectrum cantilever is employed to relate the overlap length between the flexible (nonlinear) plectrum and the stiff (linear) harvester to the transverse quasistatic tip displacement of the plectrum, and thereby the tip load on the linear harvester in each plucking cycle. By combining the nonlinear plectrum mechanics and linear harvester dynamics with two-way electromechanical coupling, the electrical power output is obtained directly in terms of the overlap length. Experimental case studies and validations are presented for various overlap lengths and a set of electrical load resistance values. Further analysis results are reported regarding the combined effects of plectrum thickness and overlap length on the plucking force and harvested power output. The experimentally validated nonlinear plectrum-linear harvester framework proposed herein can be employed to design and optimize frequency up-conversion by properly choosing the plectrum parameters (geometry, material, overlap length, etc) as well as the harvester parameters.

  14. Building Up the Milky Way's Skeleton

    Science.gov (United States)

    Kohler, Susanna

    2016-09-01

    A team of scientists has now uncovered half of the entire skeleton of the Milky Way, using an automated method to identify large filaments of gas and dust hiding between stars in the galactic plane. Galactic distribution of 54 newly discovered filaments, plotted along with colored lines indicating six relevant spiral arms in our galaxy: the upper two plots show the consistency of the filaments' motion with the spiral arms, while the lower shows their location within the galactic plane. [Wang et al. 2016] The Search for Nessie and Friends: The Milky Way's interstellar medium is structured hierarchically into filaments. These structures are difficult to observe since they largely lie in the galactic plane, but if we can discover the distribution and properties of these filaments, we can better understand how our galaxy formed, and how the filaments affect star formation in our galaxy today. Some of the largest of the Milky Way's filaments are hundreds of light-years long, like the infrared dark cloud nicknamed Nessie, declared in 2013 to be one of the "bones" of the Milky Way because of its position along the center of the Scutum-Centaurus spiral arm. Follow-up studies since the discovery of Nessie have found a number of additional large-scale filaments, but these studies all use different search methods and selection criteria, and the searches all start with visual inspection by humans to identify candidates. What if we could instead automate the detection process and build a homogeneous sample of the large filaments making up the skeleton of the Milky Way? Automated Detection: This is exactly what a team of astronomers led by Ke Wang (European Southern Observatory) has done. The group used a customization of an algorithm called a minimum spanning tree (the technique used to optimize the cost of internet networks, road networks, and electrical grids in our communities) to perform an automated search of data from the Bolocam Galactic Plane Survey. 
The search was
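The minimum spanning tree technique the article mentions connects a set of nodes with the cheapest possible total edge weight. A generic Kruskal's-algorithm sketch on a toy graph (the graph below is illustrative and unrelated to the actual survey data or to Wang et al.'s customization):

```python
def kruskal(n, edges):
    # Minimum spanning tree via Kruskal's algorithm with union-find.
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    tree, total = [], 0
    for w, a, b in sorted(edges):           # consider edges cheapest-first
        ra, rb = find(a), find(b)
        if ra != rb:                        # keep the edge only if it joins
            parent[ra] = rb                 # two separate components
            tree.append((a, b))
            total += w
    return tree, total

# Toy graph: 4 nodes with pairwise connection costs as (weight, node, node).
edges = [(1, 0, 1), (2, 1, 2), (3, 0, 2), (4, 2, 3), (5, 1, 3)]
tree, total = kruskal(4, edges)
```

In a filament-finding setting the nodes would be dense clumps and the edge weights their separations, so the spanning tree traces out elongated connected structure.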

  15. Elastic-plastic dynamic analysis of a reactor building

    International Nuclear Information System (INIS)

    Umemura, Hajime; Tanaka, Hiroshi.

    1976-01-01

    The basic characteristics of the dynamic response of a reactor building to severe earthquake ground motion are very important for the evaluation of the safety of nuclear plant systems. A computer program for the elastic-plastic dynamic analysis of reactor buildings using lumped-mass models is developed. The box and cylindrical walls of boiling water reactor buildings are treated as vertical beams. The nonlinear moment-rotation and shear force-shear deformation relationships of the walls are based in part upon experiments on prototype structures. The geometrical nonlinearity of the soil rocking spring due to foundation separation is also considered. The nonlinear equation of motion is expressed in incremental form using tangent stiffness matrices, following the algorithm developed by E.L. Wilson et al. The damping matrix in the equation is formulated as a combination of the energy evaluation method and the Penzien-Wilson approach to accommodate the different damping characteristics of soil and building. Analysis examples and a comparison of elastic and elastic-plastic analysis results are presented. (auth.)

  16. Geometric integrator for simulations in the canonical ensemble

    Energy Technology Data Exchange (ETDEWEB)

    Tapias, Diego, E-mail: diego.tapias@nucleares.unam.mx [Departamento de Física, Facultad de Ciencias, Universidad Nacional Autónoma de México, Ciudad Universitaria, Ciudad de México 04510 (Mexico); Sanders, David P., E-mail: dpsanders@ciencias.unam.mx [Departamento de Física, Facultad de Ciencias, Universidad Nacional Autónoma de México, Ciudad Universitaria, Ciudad de México 04510 (Mexico); Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, Massachusetts 02139 (United States); Bravetti, Alessandro, E-mail: alessandro.bravetti@iimas.unam.mx [Instituto de Investigaciones en Matemáticas Aplicadas y en Sistemas, Universidad Nacional Autónoma de México, Ciudad Universitaria, Ciudad de México 04510 (Mexico)

    2016-08-28

    We introduce a geometric integrator for molecular dynamics simulations of physical systems in the canonical ensemble that preserves the invariant distribution in equations arising from the density dynamics algorithm, with any possible type of thermostat. Our integrator thus constitutes a unified framework that allows the study and comparison of different thermostats and of their influence on the equilibrium and non-equilibrium (thermo-)dynamic properties of a system. To show the validity and the generality of the integrator, we implement it with a second-order, time-reversible method and apply it to the simulation of a Lennard-Jones system with three different thermostats, obtaining good conservation of the geometrical properties and recovering the expected thermodynamic results. Moreover, to show the advantage of our geometric integrator over a non-geometric one, we compare the results with those obtained by using the non-geometric Gear integrator, which is frequently used to perform simulations in the canonical ensemble. The non-geometric integrator induces a drift in the invariant quantity, while our integrator has no such drift, thus ensuring that the system is effectively sampling the correct ensemble.

  17. Geometric integrator for simulations in the canonical ensemble

    International Nuclear Information System (INIS)

    Tapias, Diego; Sanders, David P.; Bravetti, Alessandro

    2016-01-01

    We introduce a geometric integrator for molecular dynamics simulations of physical systems in the canonical ensemble that preserves the invariant distribution in equations arising from the density dynamics algorithm, with any possible type of thermostat. Our integrator thus constitutes a unified framework that allows the study and comparison of different thermostats and of their influence on the equilibrium and non-equilibrium (thermo-)dynamic properties of a system. To show the validity and the generality of the integrator, we implement it with a second-order, time-reversible method and apply it to the simulation of a Lennard-Jones system with three different thermostats, obtaining good conservation of the geometrical properties and recovering the expected thermodynamic results. Moreover, to show the advantage of our geometric integrator over a non-geometric one, we compare the results with those obtained by using the non-geometric Gear integrator, which is frequently used to perform simulations in the canonical ensemble. The non-geometric integrator induces a drift in the invariant quantity, while our integrator has no such drift, thus ensuring that the system is effectively sampling the correct ensemble.
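The drift contrast described in these two records can be reproduced in miniature by comparing a symplectic (geometric) integrator with a non-geometric one on a harmonic oscillator. This sketch uses semi-implicit Euler versus explicit Euler, not the paper's second-order time-reversible method or its thermostatted dynamics:

```python
def energy(q, p):
    # Hamiltonian of a unit harmonic oscillator, H = (p^2 + q^2) / 2.
    return 0.5 * (p * p + q * q)

def integrate(q, p, dt, steps, symplectic):
    for _ in range(steps):
        if symplectic:
            p -= dt * q                      # symplectic (semi-implicit) Euler:
            q += dt * p                      # update p first, then q with new p
        else:
            q, p = q + dt * p, p - dt * q    # explicit (non-geometric) Euler
    return q, p

e0 = energy(1.0, 0.0)
qs, ps = integrate(1.0, 0.0, 0.01, 10000, symplectic=True)
qe, pe = integrate(1.0, 0.0, 0.01, 10000, symplectic=False)
drift_symplectic = abs(energy(qs, ps) - e0)
drift_euler = abs(energy(qe, pe) - e0)
```

Explicit Euler multiplies the energy by (1 + dt^2) every step, so the drift grows without bound, while the symplectic update conserves a nearby "shadow" invariant and keeps the energy error bounded at O(dt), mirroring the drift-free behaviour the paper reports for its geometric integrator.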

  18. Geometric Algebra Computing

    CERN Document Server

    Corrochano, Eduardo Bayro

    2010-01-01

    This book presents contributions from a global selection of experts in the field. This useful text offers new insights and solutions for the development of theorems, algorithms and advanced methods for real-time applications across a range of disciplines. Written in an accessible style, the discussion of all applications is enhanced by the inclusion of numerous examples, figures and experimental analysis. Features: provides a thorough discussion of several tasks for image processing, pattern recognition, computer vision, robotics and computer graphics using the geometric algebra framework; int

  19. Universal geometrical module for MARS program

    International Nuclear Information System (INIS)

    Talanov, V.V.

    1992-01-01

    A geometrical program module for modeling hadron and electromagnetic cascades, which compares physical coordinates with the current state of the particle in one of the auxiliary cells, is described. The whole medium in which the particles are tracked is divided into a certain number of auxiliary cells. The algorithm for identifying the cell through which the particle trajectory passes is considered in detail. The described cell identification algorithm was developed for the MARS program and realized as a set of subprograms written in the FORTRAN language. 4 refs., 1 tab

  20. Geometrically Consistent Mesh Modification

    KAUST Repository

    Bonito, A.

    2010-01-01

    A new paradigm of adaptivity is to execute refinement, coarsening, and smoothing of meshes on manifolds with incomplete information about their geometry and yet preserve position and curvature accuracy. We refer to this collectively as geometrically consistent (GC) mesh modification. We discuss the concept of discrete GC, show the failure of naive approaches, and propose and analyze a simple algorithm that is GC and accuracy preserving. © 2010 Society for Industrial and Applied Mathematics.

  1. Optimal topology of urban buildings for maximization of annual solar irradiation availability using a genetic algorithm

    International Nuclear Information System (INIS)

    Conceição António, Carlos A.; Monteiro, João Brasileiro; Afonso, Clito Félix

    2014-01-01

    An approach based on the optimal placement of buildings that favors the use of solar energy is proposed. By maximizing the area of exposure to incident solar irradiation on the roofs and facades of buildings, improvements in the energy performance of the urban matrix are achieved, contributing decisively to reducing dependence on other, less environmentally friendly energy options. A mathematical model is proposed to optimize the annual solar irradiation availability, in which the placement of buildings in the urban environment favors the use of the solar energy resource. Improvements in the solar energy potential of the urban grid are achieved by maximizing the exposure of roofs and facades to incident solar irradiation. The proposed model considers direct solar radiation to be predominant, omitting the diffuse and reflected components of solar irradiation. The dynamic interaction of buildings on exposure to sunlight is simulated in order to evaluate the shadowing zones. The incident solar irradiation simulation and the dynamic shading model were integrated in a numerically implemented optimization approach. The search for optimal topological solutions for the urban grid is based on a Genetic Algorithm. The objective is to generate optimal scenarios for the placement of buildings in the urban grid in the pre-design phase, which enhances the use of solar irradiation. - Highlights: • A mathematical model is proposed to optimize annual solar irradiation availability. • Maximization of incident solar irradiation on roofs and facades of buildings. • Dynamic interaction of buildings is simulated aiming to evaluate shadowing zones. • Search for optimal topological solutions for urban grid based on genetic algorithm. • Solutions are compared with the conventional configurations for urban grid
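A genetic-algorithm search of the kind used in this record can be sketched generically: encode a candidate layout as a bitstring, score it with a fitness function, and evolve the population by selection, crossover, and mutation. The toy fitness below (counting 1s, a stand-in for an "exposed area" score) and all parameters are invented for illustration:

```python
import random

def run_ga(bits=20, pop_size=30, generations=60, seed=0):
    # Toy genetic algorithm maximizing the number of 1s in a bitstring
    # (a stand-in for a real exposure-based placement fitness function).
    rng = random.Random(seed)
    fitness = sum
    pop = [[rng.randint(0, 1) for _ in range(bits)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        next_pop = pop[:2]                      # elitism: keep the two best
        while len(next_pop) < pop_size:
            a, b = rng.sample(pop[:10], 2)      # select parents among the top 10
            cut = rng.randrange(1, bits)        # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(bits):               # per-bit mutation
                if rng.random() < 0.02:
                    child[i] ^= 1
            next_pop.append(child)
        pop = next_pop
    return max(fitness(ind) for ind in pop)

best = run_ga()
```

In the paper's setting the fitness evaluation is the expensive part (irradiation simulation with dynamic shading), while the evolutionary loop itself has the simple structure shown here.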

  2. Multiscale geometric modeling of macromolecules I: Cartesian representation

    Science.gov (United States)

    Xia, Kelin; Feng, Xin; Chen, Zhan; Tong, Yiying; Wei, Guo-Wei

    2014-01-01

    This paper focuses on the geometric modeling and computational algorithm development of biomolecular structures from two data sources: Protein Data Bank (PDB) and Electron Microscopy Data Bank (EMDB) in the Eulerian (or Cartesian) representation. Molecular surface (MS) contains non-smooth geometric singularities, such as cusps, tips and self-intersecting facets, which often lead to computational instabilities in molecular simulations, and violate the physical principle of surface free energy minimization. Variational multiscale surface definitions are proposed based on geometric flows and solvation analysis of biomolecular systems. Our approach leads to geometric and potential driven Laplace-Beltrami flows for biomolecular surface evolution and formation. The resulting surfaces are free of geometric singularities and minimize the total free energy of the biomolecular system. High order partial differential equation (PDE)-based nonlinear filters are employed for EMDB data processing. We show the efficacy of this approach in feature-preserving noise reduction. After the construction of protein multiresolution surfaces, we explore the analysis and characterization of surface morphology by using a variety of curvature definitions. Apart from the classical Gaussian curvature and mean curvature, maximum curvature, minimum curvature, shape index, and curvedness are also applied to macromolecular surface analysis for the first time. Our curvature analysis is uniquely coupled to the analysis of electrostatic surface potential, which is a by-product of our variational multiscale solvation models. As an expository investigation, we particularly emphasize the numerical algorithms and computational protocols for practical applications of the above multiscale geometric models. Such information may otherwise be scattered over the vast literature on this topic. Based on the curvature and electrostatic analysis from our multiresolution surfaces, we introduce a new concept, the

  3. Multiscale geometric modeling of macromolecules I: Cartesian representation

    Energy Technology Data Exchange (ETDEWEB)

    Xia, Kelin [Department of Mathematics, Michigan State University, MI 48824 (United States); Feng, Xin [Department of Computer Science and Engineering, Michigan State University, MI 48824 (United States); Chen, Zhan [Department of Mathematics, Michigan State University, MI 48824 (United States); Tong, Yiying [Department of Computer Science and Engineering, Michigan State University, MI 48824 (United States); Wei, Guo-Wei, E-mail: wei@math.msu.edu [Department of Mathematics, Michigan State University, MI 48824 (United States); Department of Biochemistry and Molecular Biology, Michigan State University, MI 48824 (United States)

    2014-01-15

    This paper focuses on the geometric modeling and computational algorithm development of biomolecular structures from two data sources: Protein Data Bank (PDB) and Electron Microscopy Data Bank (EMDB) in the Eulerian (or Cartesian) representation. Molecular surface (MS) contains non-smooth geometric singularities, such as cusps, tips and self-intersecting facets, which often lead to computational instabilities in molecular simulations, and violate the physical principle of surface free energy minimization. Variational multiscale surface definitions are proposed based on geometric flows and solvation analysis of biomolecular systems. Our approach leads to geometric and potential driven Laplace–Beltrami flows for biomolecular surface evolution and formation. The resulting surfaces are free of geometric singularities and minimize the total free energy of the biomolecular system. High order partial differential equation (PDE)-based nonlinear filters are employed for EMDB data processing. We show the efficacy of this approach in feature-preserving noise reduction. After the construction of protein multiresolution surfaces, we explore the analysis and characterization of surface morphology by using a variety of curvature definitions. Apart from the classical Gaussian curvature and mean curvature, maximum curvature, minimum curvature, shape index, and curvedness are also applied to macromolecular surface analysis for the first time. Our curvature analysis is uniquely coupled to the analysis of electrostatic surface potential, which is a by-product of our variational multiscale solvation models. As an expository investigation, we particularly emphasize the numerical algorithms and computational protocols for practical applications of the above multiscale geometric models. Such information may otherwise be scattered over the vast literature on this topic. Based on the curvature and electrostatic analysis from our multiresolution surfaces, we introduce a new concept, the

  4. A hybrid Genetic Algorithm and Monte Carlo simulation approach to predict hourly energy consumption and generation by a cluster of Net Zero Energy Buildings

    International Nuclear Information System (INIS)

    Garshasbi, Samira; Kurnitski, Jarek; Mohammadi, Yousef

    2016-01-01

    Graphical abstract: The energy consumption and renewable generation in a cluster of NZEBs are modeled by a novel hybrid Genetic Algorithm and Monte Carlo simulation approach and used for the prediction of instantaneous and cumulative net energy balances and the hourly amount of energy taken from and supplied to the central energy grid. - Highlights: • Hourly energy consumption and generation by a cluster of NZEBs was simulated. • A hybrid Genetic Algorithm and Monte Carlo simulation approach was employed. • The dampening effect of energy used by a cluster of buildings was demonstrated. • The hourly amount of energy taken from and supplied to the grid was simulated. • Results showed that the NZEB cluster was 63.5% grid dependent on an annual basis. - Abstract: Employing a hybrid Genetic Algorithm (GA) and Monte Carlo (MC) simulation approach, energy consumption and renewable energy generation in a cluster of Net Zero Energy Buildings (NZEBs) were thoroughly investigated with hourly simulation. Moreover, the cumulative energy consumption and generation of the whole cluster and of each individual building within the simulation space were accurately monitored and reported. The results indicate that the developed simulation algorithm is able to predict the total instantaneous and cumulative amount of energy taken from and supplied to the central energy grid over any time period. During the course of the simulation, about 60-100% of the total daily generated renewable energy was consumed by the NZEBs, and up to 40% of it was fed back into the central energy grid as surplus energy. The minimum grid dependency of the cluster was observed in June and July, when 11.2% and 9.9% of the required electricity was supplied from the central energy grid, respectively. On the other hand, the NZEB cluster was strongly grid dependent in January and December, importing 70.7% and 76.1% of its required energy demand via the central energy grid, respectively. Simulation results revealed that the cluster was 63.5% grid dependent on an annual basis.
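The hourly Monte Carlo side of such a hybrid approach can be sketched as follows: draw random hourly demand and generation, settle the net balance against the grid each hour, and accumulate annual import/export totals. The demand and solar-generation distributions, magnitudes, and names below are invented for illustration and are not the paper's model:

```python
import random

def simulate_year(seed=1):
    # Toy Monte Carlo of the hourly net energy balance for a building cluster.
    rng = random.Random(seed)
    imported = exported = demand_total = 0.0
    for hour in range(8760):                 # one year of hourly steps
        h = hour % 24
        # Random hourly demand (kWh) with a daytime occupancy bump.
        demand = rng.uniform(40.0, 60.0) + (20.0 if 7 <= h <= 21 else 0.0)
        # Random solar generation (kWh), nonzero only in daylight hours.
        solar = rng.uniform(0.0, 120.0) if 8 <= h <= 17 else 0.0
        net = demand - solar
        if net > 0:
            imported += net                  # shortfall drawn from the grid
        else:
            exported += -net                 # surplus fed back into the grid
        demand_total += demand
    return imported / demand_total, exported

grid_share, exported = simulate_year()
```

The annual grid-dependency figure reported in the abstract corresponds to `grid_share` here: the fraction of total demand that had to be imported because generation and demand do not coincide hour by hour.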

  5. Improved Algorithms Speed It Up for Codes

    International Nuclear Information System (INIS)

    Hazi, A

    2005-01-01

    Huge computers, huge codes, complex problems to solve. The longer it takes to run a code, the more it costs. One way to speed things up and save time and money is through hardware improvements--faster processors, different system designs, bigger computers. But another side of supercomputing can reap savings in time and speed: software improvements to make codes--particularly the mathematical algorithms that form them--run faster and more efficiently. Speed up math? Is that really possible? According to Livermore physicist Eugene Brooks, the answer is a resounding yes. "Sure, you get great speed-ups by improving hardware," says Brooks, the deputy leader for Computational Physics in N Division, which is part of Livermore's Physics and Advanced Technologies (PAT) Directorate. "But the real bonus comes on the software side, where improvements in software can lead to orders of magnitude improvement in run times." Brooks knows whereof he speaks. Working with Laboratory physicist Abraham Szoeke and others, he has been instrumental in devising ways to shrink the running time of what has, historically, been a tough computational nut to crack: radiation transport codes based on the statistical or Monte Carlo method of calculation. And Brooks is not the only one. Others around the Laboratory, including physicists Andrew Williamson, Randolph Hood, and Jeff Grossman, have come up with innovative ways to speed up Monte Carlo calculations using pure mathematics

  6. Characterizing heavy metal build-up on urban road surfaces: Implication for stormwater reuse

    International Nuclear Information System (INIS)

    Liu, An; Liu, Liang; Li, Dunzhu; Guan, Yuntao

    2015-01-01

    Stormwater reuse is increasingly popular worldwide. Urban road stormwater commonly contains toxic pollutants such as heavy metals, which can undermine reuse safety. This study investigated heavy metal build-up characteristics on urban roads in a typical megacity in South China. The outcomes show high variability in heavy metal build-up loads among different urban road sites. The degree of traffic congestion and road surface roughness was found to exert a more significant influence on heavy metal build-up than traffic volume. Due to their relatively higher heavy metal loads, stormwater from roads with more congested traffic conditions or rougher surfaces might be suitable for activities with low water-quality requirements, while stormwater from by-pass road sections, being relatively less polluted, could be appropriate for purposes with relatively high water-quality requirements. Based on the research outcomes, a decision-making process for heavy-metal-based urban road stormwater reuse is proposed. The findings highlight the importance of undertaking a “fit-for-purpose” road stormwater reuse strategy. Additionally, the research results can also contribute to enhancing stormwater reuse safety. - Highlights: • Heavy metal (HM) build-up varies with traffic and road surface conditions. • Traffic congestion and surface roughness exert a higher impact on HM build-up. • A “fit-for-purpose” strategy could suit urban road stormwater reuse

  7. Characterizing heavy metal build-up on urban road surfaces: Implication for stormwater reuse

    Energy Technology Data Exchange (ETDEWEB)

    Liu, An [Research Centre of Environmental Engineering and Management, Graduate School at Shenzhen, Tsinghua University, 518055 Shenzhen (China); Cooperative Research and Education Centre for Environmental Technology, Kyoto University–Tsinghua University, 518055 Shenzhen (China); Liu, Liang; Li, Dunzhu [Research Centre of Environmental Engineering and Management, Graduate School at Shenzhen, Tsinghua University, 518055 Shenzhen (China); Guan, Yuntao, E-mail: guanyt@tsinghua.edu.cn [Research Centre of Environmental Engineering and Management, Graduate School at Shenzhen, Tsinghua University, 518055 Shenzhen (China); School of Environment, Tsinghua University, Beijing 100084 (China)

    2015-05-15

    Stormwater reuse is increasingly popular worldwide. Urban road stormwater commonly contains toxic pollutants such as heavy metals, which can undermine reuse safety. This study investigated heavy metal build-up characteristics on urban roads in a typical megacity in South China. The outcomes show high variability in heavy metal build-up loads among different urban road sites. The degree of traffic congestion and road surface roughness was found to exert a more significant influence on heavy metal build-up than traffic volume. Due to their relatively higher heavy metal loads, stormwater from roads with more congested traffic conditions or rougher surfaces might be suitable for activities with low water-quality requirements, while stormwater from by-pass road sections, being relatively less polluted, could be appropriate for purposes with relatively high water-quality requirements. Based on the research outcomes, a decision-making process for heavy-metal-based urban road stormwater reuse is proposed. The findings highlight the importance of undertaking a “fit-for-purpose” road stormwater reuse strategy. Additionally, the research results can also contribute to enhancing stormwater reuse safety. - Highlights: • Heavy metal (HM) build-up varies with traffic and road surface conditions. • Traffic congestion and surface roughness exert a higher impact on HM build-up. • A “fit-for-purpose” strategy could suit urban road stormwater reuse.

  8. Computational geometry algorithms and applications

    CERN Document Server

    de Berg, Mark; Overmars, Mark; Schwarzkopf, Otfried

    1997-01-01

    Computational geometry emerged from the field of algorithms design and analysis in the late 1970s. It has grown into a recognized discipline with its own journals, conferences, and a large community of active researchers. The success of the field as a research discipline can on the one hand be explained from the beauty of the problems studied and the solutions obtained, and, on the other hand, by the many application domains--computer graphics, geographic information systems (GIS), robotics, and others--in which geometric algorithms play a fundamental role. For many geometric problems the early algorithmic solutions were either slow or difficult to understand and implement. In recent years a number of new algorithmic techniques have been developed that improved and simplified many of the previous approaches. In this textbook we have tried to make these modern algorithmic solutions accessible to a large audience. The book has been written as a textbook for a course in computational geometry, but it can ...

  9. AUGMENTING 3D CITY MODEL COMPONENTS BY GEODATA JOINS TO FACILITATE AD-HOC GEOMETRIC-TOPOLOGICALLY SOUND INTEGRATION

    Directory of Open Access Journals (Sweden)

    R. Kaden

    2012-07-01

    Full Text Available Virtual 3D city models are integrated complex compositions of spatial data of different themes, origin, quality, scale, and dimensions. Within this paper, we address the problem of spatial compatibility of geodata, aiming to provide support for the ad-hoc integration of virtual 3D city models including geodata of different sources and themes like buildings, terrain, and city furniture. In contrast to related work, which deals with the integration of redundant geodata structured according to different data models and ontologies, we focus on the integration of complex 3D models of the same representation (here: CityGML), but with regard to the geometrically-topologically consistent matching of non-homologous objects, e.g. a building connected to a road, and their geometric homogenisation. Therefore, we present an approach including a data model for a Geodata Join and the general concept of an integration procedure using the join information. The Geodata Join aims to bridge the lack of information between fragmented geodata by describing the relationship between adjacent objects from different datasets. The join information includes the geometrical representation of those parts of an object which have a specific/known topological or geometrical relationship to another object. This part is referred to as a Connector and is either described by points, lines, or surfaces of the existing object geometry or by additional join geometry. In addition, the join information includes the specification of the connected object in the other dataset and the description of the topological and geometrical relationship between both objects, which is used to aid the matching process. Furthermore, the Geodata Join contains object-related information like accuracy values and restrictions of movement and deformation which are used to optimize the integration process. Based on these parameters, a functional model including a matching algorithm, transformation methods, and

  10. Symplectic Geometric Algorithms for Hamiltonian Systems

    CERN Document Server

    Feng, Kang

    2010-01-01

    "Symplectic Geometry Algorithms for Hamiltonian Systems" will be useful not only for numerical analysts, but also for those in theoretical physics, computational chemistry, celestial mechanics, etc. The book generalizes and develops the generating function and Hamilton-Jacobi equation theory from the perspective of the symplectic geometry and symplectic algebra. It will be a useful resource for engineers and scientists in the fields of quantum theory, astrophysics, atomic and molecular dynamics, climate prediction, oil exploration, etc. Therefore a systematic research and development

  11. Regularities of radiation defects build up on oxide materials surface

    International Nuclear Information System (INIS)

    Bitenbaev, M.I.; Polyakov, A.I.; Tuseev, T.

    2005-01-01

    Analysis of experimental data from studies of radiation defects on different oxides (of silicon, beryllium, aluminium, and rare earth elements) irradiated by photons, gamma rays, neutrons, alpha particles, protons, and helium ions shows that the gas adsorption process on surface centres and the build-up of radiation defects in the metal oxide are correlated. These processes were described by equivalent kinetic equations in order to analyse radiation defect build-up in the different metal oxides. The analysis revealed that the number of radiation defects increases monotonically up to a limiting value as the treatment temperature grows, and that the rate constant of radical decay under ionizing radiation increases as well. The number of surface defects, which defines the absorbing activity of these materials, ranks across the different oxides as: silicon oxide → beryllium oxide → aluminium oxide. It was therefore found that the optimal material for preparing absorbing systems is silicon oxide in terms of power intensity, and beryllium oxide in terms of adsorption efficiency

  12. Geometrical superresolved imaging using nonperiodic spatial masking.

    Science.gov (United States)

    Borkowski, Amikam; Zalevsky, Zeev; Javidi, Bahram

    2009-03-01

    The resolution of every imaging system is limited either by the F-number of its optics or by the geometry of its detection array. The geometrical limitation is caused by lack of spatial sampling points as well as by the shape of every sampling pixel that generates spectral low-pass filtering. We present a novel approach to overcome the low-pass filtering that is due to the shape of the sampling pixels. The approach combines special algorithms together with spatial masking placed in the intermediate image plane and eventually allows geometrical superresolved imaging without relation to the actual shape of the pixels.

  13. Simulating geometrically complex blast scenarios

    Directory of Open Access Journals (Sweden)

    Ian G. Cullis

    2016-04-01

    Full Text Available The effects of blast waves generated by energetic and non-energetic sources are of continuing interest to the ballistics research community. Modern conflicts are increasingly characterised by asymmetric urban warfare, with improvised explosive devices (IEDs often playing a dominant role on the one hand and an armed forces requirement for minimal collateral effects from their weapons on the other. These problems are characterised by disparate length- and time-scales and may also be governed by complex physics. There is thus an increasing need to be able to rapidly assess and accurately predict the effects of energetic blast in topologically complex scenarios. To this end, this paper presents a new QinetiQ-developed advanced computational package called EAGLE-Blast, which is capable of accurately resolving the generation, propagation and interaction of blast waves around geometrically complex shapes such as vehicles and buildings. After a brief description of the numerical methodology, various blast scenario simulations are described and the results compared with experimental data to demonstrate the validation of the scheme and its ability to describe these complex scenarios accurately and efficiently. The paper concludes with a brief discussion on the use of the code in supporting the development of algorithms for fast running engineering models.

  14. The design of control algorithm for automatic start-up model of HWRR

    International Nuclear Information System (INIS)

    Guo Wenqi

    1990-01-01

    The design of the control algorithm for the automatic start-up model of the HWRR (Heavy Water Research Reactor), the calculation of the μ value, and the application of the digital compensator are described. Finally, the flow diagram of the automatic start-up and digital compensator program for the HWRR is given

  15. Symmetry and Algorithmic Complexity of Polyominoes and Polyhedral Graphs

    KAUST Repository

    Zenil, Hector

    2018-02-24

    We introduce a definition of algorithmic symmetry able to capture essential aspects of geometric symmetry. We review, study and apply a method for approximating the algorithmic complexity (also known as Kolmogorov-Chaitin complexity) of graphs and networks based on the concept of Algorithmic Probability (AP). AP is a concept (and method) capable of recursively enumerating all properties of a computable (causal) nature beyond statistical regularities. We explore the connections of algorithmic complexity---both theoretical and numerical---with geometric properties, mainly symmetry and topology, from an (algorithmic) information-theoretic perspective. We show that approximations to algorithmic complexity by lossless compression and an Algorithmic Probability-based method can characterize properties of polyominoes, polytopes, regular and quasi-regular polyhedra as well as polyhedral networks, thereby demonstrating its profiling capabilities.

  16. Symmetry and Algorithmic Complexity of Polyominoes and Polyhedral Graphs

    KAUST Repository

    Zenil, Hector; Kiani, Narsis A.; Tegner, Jesper

    2018-01-01

    We introduce a definition of algorithmic symmetry able to capture essential aspects of geometric symmetry. We review, study and apply a method for approximating the algorithmic complexity (also known as Kolmogorov-Chaitin complexity) of graphs and networks based on the concept of Algorithmic Probability (AP). AP is a concept (and method) capable of recursively enumerating all properties of a computable (causal) nature beyond statistical regularities. We explore the connections of algorithmic complexity---both theoretical and numerical---with geometric properties, mainly symmetry and topology, from an (algorithmic) information-theoretic perspective. We show that approximations to algorithmic complexity by lossless compression and an Algorithmic Probability-based method can characterize properties of polyominoes, polytopes, regular and quasi-regular polyhedra as well as polyhedral networks, thereby demonstrating its profiling capabilities.

  17. Adaptive algorithm of magnetic heading detection

    Science.gov (United States)

    Liu, Gong-Xu; Shi, Ling-Feng

    2017-11-01

    Magnetic data obtained from a magnetic sensor usually fluctuate in a certain range, which makes it difficult to estimate the magnetic heading accurately. In fact, magnetic heading information is usually submerged in noise because of all kinds of electromagnetic interference and the diversity of the pedestrian’s motion states. In order to solve this problem, a new adaptive algorithm based on the (typically) right-angled corridors of a building or residential buildings is put forward to process heading information. First, a 3D indoor localization platform is set up based on MPU9250. Then, several groups of data are measured by changing the experimental environment and pedestrian’s motion pace. The raw data from the attached inertial measurement unit are calibrated and arranged into a time-stamped array and written to a data file. Later, the data file is imported into MATLAB for processing and analysis using the proposed adaptive algorithm. Finally, the algorithm is verified by comparison with the existing algorithm. The experimental results show that the algorithm has strong robustness and good fault tolerance, which can detect the heading information accurately and in real-time.
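A toy illustration of the corridor heuristic (not the authors' implementation): in a building whose corridors meet at right angles, a noisy magnetic heading can be snapped to the nearest cardinal walking direction; the tolerance value is an assumed placeholder.

```python
def snap_heading(heading_deg, tolerance=25.0):
    """Snap a noisy heading (degrees) to the nearest multiple of 90 degrees
    if it lies within `tolerance`; otherwise return the heading unchanged."""
    nearest = round(heading_deg / 90.0) * 90.0 % 360.0
    # smallest angular difference, accounting for wrap-around at 360 degrees
    diff = abs((heading_deg - nearest + 180.0) % 360.0 - 180.0)
    return nearest if diff <= tolerance else heading_deg % 360.0

# A reading of 87 degrees in a right-angled corridor snaps to 90 degrees.
```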

  18. Discrete geometric analysis of message passing algorithm on graphs

    Science.gov (United States)

    Watanabe, Yusuke

    2010-04-01

    We often encounter probability distributions given as unnormalized products of non-negative functions. The factorization structure is represented by a hypergraph called a factor graph. Such distributions appear in various fields, including statistics, artificial intelligence, statistical physics, error-correcting codes, etc. Given such a distribution, computation of the marginal distributions and the normalization constant is often required; however, exact computation is generally intractable because its cost grows exponentially with the problem size. One successful approximation method is the Loopy Belief Propagation (LBP) algorithm. The focus of this thesis is an analysis of the LBP algorithm. If the factor graph is a tree, i.e. has no cycle, the algorithm gives the exact quantities. If the factor graph has cycles, however, the LBP algorithm does not give exact results and can exhibit oscillatory and non-convergent behavior. The thematic question of this thesis is: "How is the behavior of the LBP algorithm affected by the discrete geometry of the factor graph?" The primary contribution of this thesis is the discovery of a formula that establishes a relation between the LBP, the Bethe free energy and the graph zeta function. This formula provides new techniques for the analysis of the LBP algorithm, connecting properties of the graph with those of the LBP and the Bethe free energy. We demonstrate applications of these techniques to several problems, including the (non-)convexity of the Bethe free energy and the uniqueness and stability of the LBP fixed point. We also discuss the loop series initiated by Chertkov and Chernyak. The loop series is a subgraph expansion of the normalization constant, or partition function, and reflects the graph geometry. We investigate the theoretical nature of this series. Moreover, we show a partial connection between the loop series and the graph zeta function.
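The sum-product message passing that LBP generalizes can be sketched on a chain, where the factor graph is a tree and the algorithm is exact; the toy potentials and function name below are illustrative, not from the thesis.

```python
def bp_chain_marginal(unary, pairwise):
    """Exact marginal of the last binary variable in a chain x1 - x2 - ... - xn.

    unary:    list of node potentials [phi_i(0), phi_i(1)]
    pairwise: list of 2x2 edge potentials psi_i(x_i, x_{i+1})
    """
    msg = [1.0, 1.0]  # message flowing into x1 (uniform)
    for i, psi in enumerate(pairwise):
        # m_{i+1}(b) = sum_a phi_i(a) * m_i(a) * psi_i(a, b)
        msg = [sum(unary[i][a] * msg[a] * psi[a][b] for a in (0, 1))
               for b in (0, 1)]
    belief = [unary[-1][b] * msg[b] for b in (0, 1)]
    z = sum(belief)  # normalization constant of the belief
    return [b / z for b in belief]

# Two-variable chain: biased node potential on x2, attractive edge potential.
marginal = bp_chain_marginal([[1.0, 1.0], [1.0, 3.0]], [[[2.0, 1.0], [1.0, 2.0]]])
# -> [0.25, 0.75]
```

On graphs with cycles the same local update is iterated until (hopefully) convergence, which is exactly the loopy case the thesis analyzes.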

  19. The Generation of Automatic Mapping for Buildings, Using High Spatial Resolution Digital Vertical Aerial Photography and LiDAR Point Clouds

    Directory of Open Access Journals (Sweden)

    William Barragán Zaque

    2015-06-01

    Full Text Available The aim of this paper is to generate photogrammetric products and to automatically map buildings in the area of interest in vector format. The research was conducted in Bogotá using high-resolution digital vertical aerial photographs and point clouds obtained using LiDAR technology. Image segmentation was also used, alongside radiometric and geometric digital processes. The process took into account aspects including building height, segmentation algorithms, and spectral band combination. The results had an effectiveness of 97.2%, validated through ground-truthing.

  20. Build-up dynamics of heavy metals deposited on impermeable urban surfaces.

    Science.gov (United States)

    Wicke, D; Cochrane, T A; O'Sullivan, A

    2012-12-30

    A method using thin boards (3 cm thick, 0.56 m²) comprising different paving materials typically used in urban environments (2 asphalt types and concrete) was employed to specifically investigate air-borne deposition dynamics of TSS, zinc, copper and lead. Boards were exposed at an urban car park near vehicular traffic to determine the rate of contaminant build-up over a 13-day dry period. Concentration profiles from simulated rainfall wash-off were used to determine contaminant yields at different antecedent dry days. Maximum contaminant yields after 13 days of exposure were 2.7 kg ha⁻¹ for TSS, 35 g ha⁻¹ zinc, 2.3 g ha⁻¹ copper and 0.4 g ha⁻¹ lead. Accumulation of all contaminants increased over the first week and levelled off thereafter, supporting theoretical assumptions that contaminant accumulation on impervious surfaces asymptotically approaches a maximum. Comparison of different surface types showed approximately four times higher zinc concentrations in runoff from asphalt surfaces and two times higher TSS concentrations in runoff from concrete, which is attributed to the different physical and chemical compositions of the pavement types. Contaminant build-up and wash-off behaviours were modelled using the exponential and saturation functions commonly applied in the US EPA's Stormwater Management Model (SWMM), showing good correlation between measured and modelled concentrations. Maximum build-up, half-saturation time, build-up rate constants and wash-off coefficients, necessary for stormwater contaminant modelling, were determined for the four contaminants studied. These parameters are required to model contaminant concentrations in urban runoff, assisting in stormwater management decisions. Copyright © 2012 Elsevier Ltd. All rights reserved.
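The SWMM-style build-up and wash-off functions referred to above can be sketched as follows; the parameters passed in the usage notes are illustrative placeholders, not the fitted coefficients from the study.

```python
import math

def buildup_exp(t_days, b_max, k):
    """Exponential build-up: B(t) = b_max * (1 - exp(-k t))."""
    return b_max * (1.0 - math.exp(-k * t_days))

def buildup_sat(t_days, b_max, t_half):
    """Saturation build-up: B(t) = b_max * t / (t_half + t), where t_half is
    the half-saturation time (dry days until half of b_max has accumulated)."""
    return b_max * t_days / (t_half + t_days)

def washoff_exp(b0, coeff, runoff, t_hours):
    """Exponential wash-off of the accumulated mass b0 during a rain event."""
    return b0 * math.exp(-coeff * runoff * t_hours)
```

Both build-up forms approach `b_max` asymptotically, matching the observed levelling-off of accumulation after the first week of dry days.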

  1. Geometric Representations of Condition Queries on Three-Dimensional Vector Fields

    Science.gov (United States)

    Henze, Chris

    1999-01-01

    Condition queries on distributed data ask where particular conditions are satisfied. It is possible to represent condition queries as geometric objects by plotting field data in various spaces derived from the data, and by selecting loci within these derived spaces which signify the desired conditions. Rather simple geometric partitions of derived spaces can represent complex condition queries because much complexity can be encapsulated in the derived space mapping itself. A geometric view of condition queries provides a useful conceptual unification, allowing one to intuitively understand many existing vector field feature detection algorithms -- and to design new ones -- as variations on a common theme. A geometric representation of condition queries also provides a simple and coherent basis for computer implementation, reducing a wide variety of existing and potential vector field feature detection techniques to a few simple geometric operations.

  2. The impact of urban open space and 'lift-up' building design on building intake fraction and daily pollutant exposure in idealized urban models.

    Science.gov (United States)

    Sha, Chenyuan; Wang, Xuemei; Lin, Yuanyuan; Fan, Yifan; Chen, Xi; Hang, Jian

    2018-08-15

    Sustainable urban design is an effective way to improve urban ventilation and reduce vehicular pollutant exposure for urban residents. This paper investigated the impacts of urban open space and 'lift-up' building design on vehicular CO (carbon monoxide) exposure in typical three-dimensional (3D) urban canopy layer (UCL) models under neutral atmospheric conditions. The building intake fraction (IF) represents the fraction of total vehicular pollutant emissions inhaled by residents while they stay at home. The building daily CO exposure (E_t) means the extent of human contact with CO over one day indoors at home. Computational fluid dynamics (CFD) simulations integrating these two concepts were performed to solve the turbulent flow and assess vehicular CO exposure for urban residents. The CFD technique with the standard k-ε model was successfully validated against wind tunnel data. The initial numerical UCL model consists of 5-row and 5-column (5×5) cubic buildings (building height H = street width W = 30 m) with four approaching wind directions (θ=0°, 15°, 30°, 45°). In Group I, one of the 25 building models is removed to attain urban open space settings. In Group II, the first floor (Lift-up1), second floor (Lift-up2), or third floor (Lift-up3) of all buildings is elevated, respectively, to create wind pathways through buildings. Compared to the initial case, urban open space can slightly or significantly reduce pollutant exposure for urban residents. At θ=30° and 45°, open space settings are more effective at reducing pollutant exposure than at θ=0° and 15°. The pollutant dilution near or surrounding the open space and in its adjacent downstream regions is usually enhanced. Lift-up1 and Lift-up2 experience much greater pollutant exposure reduction in all wind directions than Lift-up3 and open space. Although further investigations are still required to provide practical guidelines, this study is one of the first attempts at reducing urban pollutant exposure by

  3. Basic principle of constant q_a current build-up in tokamaks

    International Nuclear Information System (INIS)

    Kikuchi, M.

    1985-05-01

    An analytic expression is derived such that the current profile shape is kept constant during the current build-up phase in tokamaks. The required conductivity profile is parametrized by two externally controllable parameters, I_p and a_p, in the case of the Gaussian current profile. It is shown that a Gaussian current profile can be maintained for a realistically broad conductivity profile by using the constant q_a current build-up method, even under the condition of a high I_p

  4. Electricity usage scheduling in smart building environments using smart devices.

    Science.gov (United States)

    Lee, Eunji; Bahn, Hyokyung

    2013-01-01

    With the recent advances in smart grid technologies and the increasing dissemination of smart meters, the electricity usage of every moment can be detected in modern smart building environments. Thus, the utility company adopts a different electricity price at each time slot, considering the peak time. This paper presents a new electricity usage scheduling algorithm for smart buildings that adopts real-time pricing of electricity. The proposed algorithm detects changes in electricity prices by making use of a smart device and changes the power mode of each electric device dynamically. Specifically, we formulate the electricity usage scheduling problem as a real-time task scheduling problem and show that it is a complex search problem with exponential time complexity. An efficient heuristic based on genetic algorithms is performed on a smart device to cut down the huge search space and find a reasonable schedule within a feasible time budget. Experimental results with various building conditions show that the proposed algorithm reduces the electricity charge of a smart building by 25.6% on average and by up to 33.4%.
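A toy genetic-algorithm sketch of this scheduling idea, assigning each deferrable appliance run to a time slot so that the total cost under time-varying prices is low; the price vector, appliance loads, and GA parameters are illustrative assumptions, not values from the paper.

```python
import random

PRICES = [0.30, 0.25, 0.10, 0.12, 0.28]   # $/kWh per time slot (assumed)
LOADS  = [1.5, 0.8, 2.0]                  # kWh per appliance run (assumed)

def cost(schedule):
    """Total cost of a schedule: schedule[i] is the slot for appliance i."""
    return sum(LOADS[i] * PRICES[s] for i, s in enumerate(schedule))

def evolve(pop_size=30, generations=50, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randrange(len(PRICES)) for _ in LOADS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)                     # elitism: keep the cheapest half
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, len(LOADS))
            child = a[:cut] + b[cut:]          # one-point crossover
            if rng.random() < 0.1:             # mutation
                child[rng.randrange(len(LOADS))] = rng.randrange(len(PRICES))
            children.append(child)
        pop = survivors + children
    return min(pop, key=cost)

best = evolve()
```

With these toy prices the cheapest schedule places every run in slot 2; the GA converges toward a low-cost schedule without enumerating all slot combinations.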

  5. Refurbishment and school buildings management in a smart building environment

    Science.gov (United States)

    Di Giuda, Giuseppe Martino; Villa, Valentina; Tagliabue, Lavinia Chiara; Giana, Paolo Ettore; Rinaldi, Stefano; Ciribini, Angelo Luigi Camillo

    2018-05-01

    Building Information Modelling is a methodology which is able to take into account many data, both geometrical and non-geometrical, in order to evaluate the actual condition of the asset. The project has the scope of evaluating the conditions of different school buildings, in order to develop a way to choose the management solution best tailored to the owner. A further step is the management and planning of design solutions during the life cycle, customized on the monitored buildings' conditions. The research work focuses on providing decision support concerning the gap between present building regulations and the current state of the existing buildings. The process will be developed in an expanded BIM environment, using sensors which report the actual condition of the buildings, enabling them to adapt in the best way to their specific constraints and boundaries. The results of the study are (i) a complete workflow for making decisions, with the possibility of shaping the decision process around an objective through a scientific approach, (ii) an evaluation of the current state of the asset, and (iii) maintenance management over the lifespan. Further development will take into consideration all the aspects related to the management of the big-data environment generated by a smart building system.

  6. Study on the Development of an Optimal Heat Supply Control Algorithm for Group Energy Apartment Buildings According to the Variation of Outdoor Air Temperature

    Directory of Open Access Journals (Sweden)

    Dong-Kurl Kwak

    2012-05-01

    Full Text Available In the present study, we have developed an optimal heat supply algorithm which minimizes the heat loss through the distribution pipeline in a group energy apartment. The heating load variation of a group energy apartment building according to the outdoor air temperature was predicted by a correlation obtained from calorimetry measurements of all households in the apartment building. The supply water temperature and mass flow rate were simultaneously controlled to minimize the heat loss rate through the distribution pipeline. A group heating apartment building located in Hwaseong City, Korea, which has 1473 households, was selected as the test building for the present heat supply algorithm. Compared to the original heat supply system, the system adopting the proposed control algorithm reduced the heat loss rate by 10.4%.
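The supply-temperature/flow-rate trade-off at the heart of such an algorithm can be sketched with the heat balance Q = ṁ·cp·ΔT; the load-correlation coefficients and temperatures below are illustrative assumptions, not the fitted values from the study.

```python
CP_WATER = 4.186  # specific heat of water, kJ/(kg*K)

def predicted_load_kw(t_outdoor_c):
    """Toy linear heating-load correlation with outdoor temperature
    (placeholder coefficients, not the measured correlation)."""
    return max(0.0, 1200.0 - 60.0 * t_outdoor_c)  # kW

def required_flow(load_kw, t_supply_c, t_return_c):
    """Mass flow rate (kg/s) delivering `load_kw` at the given supply and
    return water temperatures: m_dot = Q / (cp * dT)."""
    return load_kw / (CP_WATER * (t_supply_c - t_return_c))
```

Raising the supply temperature lowers the required flow rate for the same load; the control problem is to pick the pair that minimizes distribution losses.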

  7. Enhancements to AERMOD’s Building Downwash Algorithms based on Wind-Tunnel and Embedded-LES Modeling -BDW-2

    Data.gov (United States)

    U.S. Environmental Protection Agency — This data set is associated with the results found in the journal article: Monbureau et al, 2018. Enhancements to AERMOD's building downwash algorithms based on...

  8. Analysis of the build-up of semi and non volatile organic compounds on urban roads.

    Science.gov (United States)

    Mahbub, Parvez; Ayoko, Godwin A; Goonetilleke, Ashantha; Egodawatta, Prasanna

    2011-04-01

    Vehicular traffic in urban areas may adversely affect urban water quality through the build-up of traffic-generated semi and non volatile organic compounds (SVOCs and NVOCs) on road surfaces. The characterisation of the build-up processes is the key to developing mitigation measures for the removal of such pollutants from urban stormwater. An in-depth analysis of the build-up of SVOCs and NVOCs was undertaken in the Gold Coast region in Australia. Principal Component Analysis (PCA) and multicriteria decision tools such as PROMETHEE and GAIA were employed to understand the SVOC and NVOC build-up under combined traffic scenarios of low, moderate, and high traffic in different land uses. It was found that congestion in the commercial areas and the use of lubricants and motor oils in the industrial areas were the main sources of SVOCs and NVOCs on urban roads, respectively. The contribution from residential areas to the build-up of such pollutants was hardly noticeable. The investigation also revealed that the target SVOCs and NVOCs were mainly attached to particulate fractions of 75-300 μm, whilst the redistribution of coarse fractions due to vehicle activity mainly occurred in the >300 μm size range. Lastly, under the combined traffic scenario, moderate traffic with average daily traffic ranging from 2300 to 5900 and an average congestion of 0.47 was found to dominate SVOC and NVOC build-up on roads. Copyright © 2011 Elsevier Ltd. All rights reserved.

  9. Methods for Geometric Data Validation of 3d City Models

    Science.gov (United States)

    Wagner, D.; Alam, N.; Wewetzer, M.; Pries, M.; Coors, V.

    2015-12-01

    Geometric quality of 3D city models is crucial for data analysis and simulation tasks, which are part of modern applications of the data (e.g. potential heating energy consumption of city quarters, solar potential, etc.). Geometric quality in these contexts is, however, a different concept than it is for 2D maps. In the latter case, aspects such as positional or temporal accuracy and correctness represent typical quality metrics of the data. They are defined in ISO 19157 and should be mentioned as part of the metadata. 3D data has a far wider range of aspects which influence its quality, and the idea of quality itself is application dependent. Thus, concepts for the definition of quality are needed, including methods to validate these definitions. Quality in this sense means internal validation and detection of inconsistent or wrong geometry according to a predefined set of rules. A useful starting point would be to have correct geometry in accordance with ISO 19107. A valid solid should consist of planar faces which touch their neighbours exclusively in defined corner points and edges. No gaps between them are allowed, and the whole feature must be 2-manifold. In this paper, we present methods to validate common geometric requirements for building geometry. Different checks based on several algorithms have been implemented to validate a set of rules derived from the solid definition mentioned above (e.g. water tightness of the solid or planarity of its polygons), as they were developed for the software tool CityDoctor. The method of each check is specified, with a special focus on the discussion of tolerance values where they are necessary. The checks include polygon-level checks to validate the correctness of each polygon, i.e. closedness of the bounding linear ring and planarity. 
On the solid level, which is only validated if the polygons have passed validation, correct polygon orientation is checked, after self-intersections outside of defined corner points and edges
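A tolerance-based planarity check of the kind described might be sketched as follows; the tolerance value is an assumption, not CityDoctor's actual threshold:

```python
import numpy as np

TOL = 1e-6  # planarity tolerance in model units (an assumed value)

def is_planar(vertices, tol=TOL):
    """Check that all polygon vertices lie within `tol` of a best-fit plane."""
    pts = np.asarray(vertices, dtype=float)
    centroid = pts.mean(axis=0)
    # Best-fit plane normal = right singular vector of the smallest singular value.
    _, _, vt = np.linalg.svd(pts - centroid)
    normal = vt[-1]
    distances = np.abs((pts - centroid) @ normal)
    return bool(distances.max() <= tol)
```

A flat quadrilateral passes; a warped one, whose fourth vertex is lifted off the plane, fails.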

  10. Genetic algorithm approach (approche algorithme génétique)

    African Journals Online (AJOL)

    Structure / acentric factor relationship of alcohols and phenols: genetic ... descriptors of geometrical type selected by genetic algorithm, among more than 1600 ..... Practical handbook of genetic algorithms: Applications Volume I; CRC Press.

  11. Algorithmic foundation of multi-scale spatial representation

    CERN Document Server

    Li, Zhilin

    2006-01-01

    With the widespread use of GIS, multi-scale representation has become an important issue in the realm of spatial data handling. However, no book to date has systematically tackled the different aspects of this discipline. Emphasizing map generalization, Algorithmic Foundation of Multi-Scale Spatial Representation addresses the mathematical basis of multi-scale representation, specifically, the algorithmic foundation.Using easy-to-understand language, the author focuses on geometric transformations, with each chapter surveying a particular spatial feature. After an introduction to the essential operations required for geometric transformations as well as some mathematical and theoretical background, the book describes algorithms for a class of point features/clusters. It then examines algorithms for individual line features, such as the reduction of data points, smoothing (filtering), and scale-driven generalization, followed by a discussion of algorithms for a class of line features including contours, hydrog...
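One of the line-generalization operations the book covers, the reduction of data points, is classically done with Douglas-Peucker-style simplification. A minimal sketch (my illustration, not the book's own pseudocode):

```python
def perp_dist(p, a, b):
    """Perpendicular distance from 2D point p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == dy == 0:
        return ((px - ax) ** 2 + (py - ay) ** 2) ** 0.5
    return abs(dy * px - dx * py + bx * ay - by * ax) / (dx * dx + dy * dy) ** 0.5

def douglas_peucker(points, tol):
    """Reduce a polyline, keeping only points deviating more than `tol`."""
    if len(points) < 3:
        return list(points)
    dists = [perp_dist(p, points[0], points[-1]) for p in points[1:-1]]
    i = max(range(len(dists)), key=dists.__getitem__) + 1
    if dists[i - 1] <= tol:
        return [points[0], points[-1]]       # all interior points are redundant
    left = douglas_peucker(points[:i + 1], tol)
    right = douglas_peucker(points[i:], tol)
    return left[:-1] + right                 # merge, dropping the shared point
```

Collinear interior points are dropped, while points that deviate beyond the tolerance survive.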

  12. A Parallel Butterfly Algorithm

    KAUST Repository

    Poulson, Jack; Demanet, Laurent; Maxwell, Nicholas; Ying, Lexing

    2014-01-01

    The butterfly algorithm is a fast algorithm which approximately evaluates a discrete analogue of the integral transform (Equation Presented.) at large numbers of target points when the kernel, K(x, y), is approximately low-rank when restricted to subdomains satisfying a certain simple geometric condition. In d dimensions with O(N^d) quasi-uniformly distributed source and target points, when each appropriate submatrix of K is approximately rank-r, the running time of the algorithm is at most O(r^2 N^d log N). A parallelization of the butterfly algorithm is introduced which, assuming a message latency of α and per-process inverse bandwidth of β, executes in at most (Equation Presented.) time using p processes. This parallel algorithm was then instantiated in the form of the open-source DistButterfly library for the special case where K(x, y) = exp(iΦ(x, y)), where Φ(x, y) is a black-box, sufficiently smooth, real-valued phase function. Experiments on Blue Gene/Q demonstrate impressive strong-scaling results for important classes of phase functions. Using quasi-uniform sources, hyperbolic Radon transforms and an analogue of a three-dimensional generalized Radon transform were observed to strong-scale from 1 node/16 cores up to 1024 nodes/16,384 cores with greater than 90% and 82% efficiency, respectively. © 2014 Society for Industrial and Applied Mathematics.
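The low-rank property the algorithm exploits can be observed numerically: an oscillatory kernel restricted to a suitable source/target subdomain pair has rapidly decaying singular values. The subdomain sizes and tolerance below are illustrative choices:

```python
import numpy as np

# Sample an oscillatory kernel K(x, y) = exp(i * x * y) on a small
# source/target subdomain pair (sizes chosen only for illustration).
x = np.linspace(0.0, 0.5, 64)   # target subdomain
y = np.linspace(0.0, 0.5, 64)   # source subdomain
K = np.exp(1j * np.outer(x, y))

s = np.linalg.svd(K, compute_uv=False)
# Numerical rank at a relative tolerance of 1e-8: far below the matrix size.
rank = int(np.sum(s > 1e-8 * s[0]))
```

The 64×64 submatrix compresses to a numerical rank of only a handful of terms, which is what makes the O(r² N^d log N) complexity attainable.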

  14. Optimization of Energy Efficiency and Conservation in Green Building Design Using Duelist, Killer-Whale and Rain-Water Algorithms

    Science.gov (United States)

    Biyanto, T. R.; Matradji; Syamsi, M. N.; Fibrianto, H. Y.; Afdanny, N.; Rahman, A. H.; Gunawan, K. S.; Pratama, J. A. D.; Malwindasari, A.; Abdillah, A. I.; Bethiana, T. N.; Putra, Y. A.

    2017-11-01

    The development of green building has been growing in both design and quality, but has been limited by the issue of expensive investment. In fact, green building can reduce the energy usage inside the building, especially in the utilization of the cooling system. External load plays a major role in reducing cooling system usage, and is affected by the types of wall sheathing, glass and roof. Proper selection of the wall, glass and roof materials is therefore very important to reduce external load; hence, optimization of energy efficiency and conservation in green building design is required. Since this optimization consists of integer and non-linear equations, the problem falls into Mixed-Integer Non-Linear Programming (MINLP), which requires global optimization techniques such as stochastic optimization algorithms. In this paper the optimized variables, i.e. the type of glass and roof, were chosen using the Duelist, Killer-Whale and Rain-Water Algorithms to obtain the optimum energy while considering minimal investment. The optimization results showed that single Planibel-G glass of 3.2 mm thickness with glass wool insulation provided a maximum ROI of 36.8486%, an EUI reduction of 54 kWh/m²·year, a CO2 emission reduction of 486.8971 tons/year and an investment reduction of 4,078,905,465 IDR.
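Because the decision variables are discrete (glass and roof types), the search space can be sketched as an enumeration over option combinations. The option names, thermal factors, costs and the toy ROI function below are invented for illustration and are not the paper's data or its stochastic algorithms:

```python
import itertools

# Hypothetical option tables: (external-load factor, cost). Illustrative only.
glasses = {"single_3.2mm": (0.8, 100.0), "double_6mm": (0.6, 180.0)}
roofs   = {"glass_wool":   (0.7,  50.0), "plain_deck": (1.0,  20.0)}

def roi(load_factor, cost):
    """Toy return-on-investment: assumed annual energy saving per unit cost."""
    saved = (1.0 - load_factor) * 1000.0   # assumed saving relative to baseline
    return saved / cost

best = max(
    itertools.product(glasses, roofs),
    key=lambda gr: roi(glasses[gr[0]][0] * roofs[gr[1]][0],
                       glasses[gr[0]][1] + roofs[gr[1]][1]),
)
```

Real MINLP instances are too large to enumerate, which is why the paper turns to stochastic optimizers, but the objective structure is the same.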

  15. Build-up of actinides in irradiated fuel rods of the ET-RR-1 reactor

    Energy Technology Data Exchange (ETDEWEB)

    Adib, M.; Naguib, K.; Morcos, H.N

    2001-09-01

    The concentrations of actinides are calculated as a function of the operating reactor regime and cooling time at different percentages of fuel burn-up. The build-up and transmutation equations for the actinide content of irradiated fuel are solved numerically. A computer code, BAC, was written to run on a PC and provide the required calculations. The fuel element of 10% {sup 235}U enrichment of the ET-RR-1 reactor was taken as an example for calculations using the BAC code. The results are compared with other calculations for the ET-RR-1 fuel rod. An estimate of the fissile build-up content of a proposed new fuel of 20% {sup 235}U enrichment for the ET-RR-1 reactor is given. The sensitivity coefficients of the plutonium build-up concentrations as a function of cross-section data uncertainties are also calculated.
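Build-up/transmutation equations of this type are coupled linear rate equations. As a sketch (illustrative rate constants, not BAC's data or method), a single production-removal step can be integrated with explicit Euler and checked against the analytic Bateman solution:

```python
import math

# Two-nuclide chain A -> B: A is removed at rate lam_a, B is produced from A
# and removed at rate lam_b (rates in arbitrary inverse-time units).
lam_a, lam_b = 0.3, 0.1
n_a, n_b = 1.0, 0.0          # initial inventories
dt, t_end = 1e-4, 5.0
for _ in range(int(t_end / dt)):
    dn_a = -lam_a * n_a
    dn_b = lam_a * n_a - lam_b * n_b
    n_a += dn_a * dt
    n_b += dn_b * dt

# Analytic Bateman solution for the daughter inventory at t_end:
n_b_exact = lam_a / (lam_b - lam_a) * (
    math.exp(-lam_a * t_end) - math.exp(-lam_b * t_end))
```

Production codes use longer chains and stiffer solvers, but the numeric-versus-analytic comparison above is the standard sanity check.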

  16. Methods for teaching geometric modelling and computer graphics

    Energy Technology Data Exchange (ETDEWEB)

    Rotkov, S.I.; Faitel'son, Yu. Ts.

    1992-05-01

    This paper considers approaches to teaching the methods and algorithms of geometric modelling and computer graphics to programmers, designers and users of CAD and computer-aided research systems. A bibliography is included that can be used to prepare lectures and practical classes. 37 refs., 1 tab.

  17. How reliable are geometry-based building indices as thermal performance indicators?

    International Nuclear Information System (INIS)

    Rodrigues, Eugénio; Amaral, Ana Rita; Gaspar, Adélio Rodrigues; Gomes, Álvaro

    2015-01-01

    Highlights: • Geometry-based building indices are tested in different European climate regions. • Building design programs are used to randomly generate sets of simulation models. • Some indices correlate in specific climates and design programs. • Shape-based Relative Compactness presented the best correlation of all indices. • Window-to-Surface Ratio was the window-based index with the best correlation. - Abstract: Architects and urban planners have relied on geometry-based indices to design more energy efficient buildings for years. The advantage of such indices is their ease of use and their capability to capture the relation of a few geometric variables to the building’s performance. However, this relation is usually established using only a few simple building models and only a few climate regions. This paper presents an analysis of six geometry-based building indices to determine their adequacy in eight different climate regions in Europe. For each location, three residential building design programs were used as building specifications. Two algorithms were employed to randomly generate and assess the thermal performance of three sets of 500 alternative building models. The results show that geometry-based indices correlate with the buildings’ thermal performance only for specific climate regions and building design programs.
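Two of the named indices can be computed directly from geometry. The definition used below for relative compactness (envelope area of an equal-volume cube divided by the building's envelope area, so a cube scores 1.0) is a common convention and an assumption here, since the paper's exact formulation is not quoted:

```python
def relative_compactness(volume, envelope_area):
    """Envelope area of an equal-volume cube divided by the building's
    envelope area: 1.0 for a cube, lower for less compact shapes."""
    cube_area = 6.0 * volume ** (2.0 / 3.0)
    return cube_area / envelope_area

def window_to_surface_ratio(window_area, envelope_area):
    """Glazed area as a fraction of the total envelope area."""
    return window_area / envelope_area
```

A 2×2×2 cube (V = 8, A = 24) scores exactly 1.0, while a 4×4×0.5 slab of the same volume (A = 40) scores 0.6.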

  18. Development and validation of a novel large field of view phantom and a software module for the quality assurance of geometric distortion in magnetic resonance imaging.

    Science.gov (United States)

    Torfeh, Tarraf; Hammoud, Rabih; McGarry, Maeve; Al-Hammadi, Noora; Perkins, Gregory

    2015-09-01

    To develop and validate a large field of view phantom and a quality assurance software tool for the assessment and characterization of geometric distortion in MRI scanners commissioned for radiation therapy planning. A purpose-built phantom was developed consisting of 357 rods (6 mm in diameter) of polymethyl methacrylate separated by 20 mm intervals, providing a three-dimensional array of control points at known spatial locations covering a large field of view up to a diameter of 420 mm. An in-house software module was developed to allow automatic geometric distortion assessment. This software module was validated against a virtual dataset of the phantom that reproduced the exact geometry of the physical phantom, but with known translational and rotational displacements and warping. For the validation experiments, clinical MRI sequences were acquired with and without the application of a commercial 3D distortion correction algorithm (Gradwarp™). The software module was used to characterize and assess system-related geometric distortion in the sequences relative to a benchmark CT dataset, and the efficacy of the vendor geometric distortion correction (GDC) algorithms was also assessed. The validation of the software against virtual images demonstrates the algorithm's ability to accurately calculate geometric distortion with sub-pixel precision by extracting the rods and quantifying their displacements. Geometric distortion was assessed for the typical sequences used in radiotherapy applications over a clinically relevant 420 mm field of view (FOV). As expected, distortion increased towards the edges of the FOV. For all assessed sequences, the vendor GDC was able to reduce the mean distortion to below 1 mm over fields of view of 5, 10, 15 and 20 cm radius. The results obtained with the developed phantom and algorithms demonstrate a high level of precision.
The results indicate that this

  19. Intel Legend and CERN would build up high speed Internet

    CERN Multimedia

    2002-01-01

    Intel, Legend and the China Education and Research Network jointly announced on 25 April that they will cooperate over the next three years to build up the new-generation high-speed Internet (1/2 page).

  20. An Algorithm for Building an Electronic Database.

    Science.gov (United States)

    Cohen, Wess A; Gayle, Lloyd B; Patel, Nima P

    2016-01-01

    We propose an algorithm for creating a prospectively maintained database, which can then be used to analyze prospective data in a retrospective fashion. Our algorithm provides future researchers a road map on how to set up, maintain, and use an electronic database to improve evidence-based care and future clinical outcomes. The database was created using Microsoft Access and included demographic information, socioeconomic information, and intraoperative and postoperative details via standardized drop-down menus. A printed form from the Microsoft Access template was given to each surgeon to complete after each case, and a member of the health care team then entered the case information into the database. By utilizing straightforward, HIPAA-compliant data input fields, we made data collection and transcription easy and efficient. Collecting a wide variety of data allowed us the freedom to evolve our clinical interests, while the platform also permitted new categories to be added at will. We have proposed a reproducible method for institutions to create a database, which will then allow senior and junior surgeons to analyze their outcomes and compare them with others in an effort to improve patient care and outcomes. This is a cost-efficient way to create and maintain a database without additional software.
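The same structure (standardized fields with defaults, simple inserts, later retrospective queries) can be sketched with SQLite in place of Microsoft Access. The table and field names are illustrative, not the authors' schema:

```python
import sqlite3

# Minimal prospectively-maintained case table (illustrative field names).
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE cases (
        case_id      INTEGER PRIMARY KEY,
        procedure    TEXT NOT NULL,      -- standardized drop-down value
        asa_class    TEXT,               -- demographic/risk field
        complication TEXT DEFAULT 'none' -- postoperative detail
    )
""")
conn.execute(
    "INSERT INTO cases (procedure, asa_class) VALUES (?, ?)",
    ("hypothetical_procedure", "II"),
)
rows = conn.execute("SELECT procedure, complication FROM cases").fetchall()
```

Constrained fields and defaults play the role of the drop-down menus: they keep entries comparable so later outcome queries stay meaningful.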

  1. Thermal comfort in residential buildings: Comfort values and scales for building energy simulation

    Energy Technology Data Exchange (ETDEWEB)

    Peeters, Leen; D'haeseleer, William [Division of Applied Mechanics and Energy Conversion, University of Leuven (K.U.Leuven), Celestijnenlaan 300 A, B-3001 Leuven (Belgium); Dear, Richard de [Division of Environmental and Life Sciences, Macquarie University, Sydney (Australia); Hensen, Jan [Faculty of Architecture, Building and Planning, Technische Universiteit Eindhoven, Vertigo 6.18, P.O. Box 513, 5600 MB Eindhoven (Netherlands)

    2009-05-15

    Building Energy Simulation (BES) programmes often use conventional thermal comfort theories to make decisions, whilst recent research in the field of thermal comfort clearly shows that important effects are not incorporated. The conventional theories of thermal comfort were set up based on steady-state laboratory experiments. This, however, does not represent the real situation in buildings, especially not when focusing on residential buildings. Therefore, in the present analysis, recent reviews and adaptations are considered to extract acceptable temperature ranges and comfort scales. These are defined in an algorithm that is easily implementable in any BES code. The focus is on comfortable temperature levels in the room, more than on the detailed temperature distribution within that room. (author)

  2. Algorithms in Algebraic Geometry

    CERN Document Server

    Dickenstein, Alicia; Sommese, Andrew J

    2008-01-01

    In the last decade, there has been a burgeoning of activity in the design and implementation of algorithms for algebraic geometric computation. Some of these algorithms were originally designed for abstract algebraic geometry but are now of interest for applications, while others were originally designed for applications and are now of interest for abstract algebraic geometry. The workshop on Algorithms in Algebraic Geometry was held in the framework of the IMA Annual Program Year in Applications of Algebraic Geometry by the Institute for Mathematics and Its

  3. PHOTOGRAMMETRIC 3D BUILDING RECONSTRUCTION FROM THERMAL IMAGES

    Directory of Open Access Journals (Sweden)

    E. Maset

    2017-08-01

    This paper addresses the problem of 3D building reconstruction from thermal infrared (TIR) images. We show that a commercial Computer Vision software can be used to automatically orient sequences of TIR images taken from an Unmanned Aerial Vehicle (UAV) and to generate 3D point clouds, without requiring any GNSS/INS data about the position and attitude of the images nor camera calibration parameters. Moreover, we propose a procedure based on the Iterative Closest Point (ICP) algorithm to create a model that combines the high resolution and geometric accuracy of RGB images with the thermal information deriving from TIR images. The process can be carried out entirely by the aforesaid software in a simple and efficient way.
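The ICP registration at the heart of such a pipeline alternates nearest-neighbour matching with a closed-form rigid alignment. The alignment step for already-paired 3D points (the Kabsch/SVD solution) can be sketched as:

```python
import numpy as np

def align_rigid(src, dst):
    """One ICP-style rigid alignment for paired 3D points: returns rotation R
    and translation t minimising ||R @ src_i + t - dst_i|| (Kabsch/SVD)."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)            # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t

# Demo: recover a known rotation about z plus a translation.
theta = 0.5
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
src = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]], float)
dst = src @ Rz.T + np.array([1.0, 2.0, 3.0])
R, t = align_rigid(src, dst)
```

Full ICP repeats this step after re-matching each TIR point to its nearest RGB-cloud neighbour until the alignment converges.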

  4. Modelling heavy metals build-up on urban road surfaces for effective stormwater reuse strategy implementation

    International Nuclear Information System (INIS)

    Hong, Nian; Zhu, Panfeng; Liu, An

    2017-01-01

    Urban road stormwater is an alternative water resource that can mitigate water shortages worldwide. Heavy metals deposited (built up) on urban road surfaces can enter road stormwater runoff, undermining stormwater reuse safety. As heavy metal build-up loads show high spatial variability and are strongly influenced by surrounding land uses, it is essential to develop an approach to identify hot-spots where stormwater runoff could contain high heavy metal concentrations and hence cannot be reused unless properly treated. This study developed a robust modelling approach to estimating heavy metal build-up loads on urban roads from land use fractions (the percentages of land uses within a given area) using an artificial neural network (ANN) model. Based on the modelling results, a series of heavy metal load spatial distribution maps and a comprehensive ecological risk map were generated. These maps provide a visualization platform to identify priority areas where the stormwater can be safely reused. Additionally, these maps can be utilized as an urban land use planning tool in the context of effective stormwater reuse strategy implementation. - Highlights: • A model was developed to simulate heavy metal build-up loads on urban roads. • The model is based on artificial neural networks. • Land use fractions were used to model build-up loads for different particle sizes. • Maps of heavy metal spatial distribution and ecological risk were generated. • The model can be used for effective stormwater reuse strategy implementation. - Development of a robust modelling approach to mapping heavy metals build-up and their ecological risks for stormwater reuse safety.
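An ANN mapping land-use fractions to a build-up load can be sketched as a small feed-forward regression network. Everything below (data, target function, architecture, learning rate) is synthetic and illustrative, not the paper's trained model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic inputs: each row holds fractions of commercial, industrial and
# residential land use; the target load is an assumed nonlinear mix
# dominated by the commercial and industrial shares.
X = rng.dirichlet(np.ones(3), size=200)
y = 2.0 * X[:, 0] + 3.0 * X[:, 1] ** 2 + 0.1 * X[:, 2]

# One hidden layer of 8 tanh units, trained by full-batch gradient descent.
W1 = rng.normal(0, 0.5, (3, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.1

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, (h @ W2 + b2).ravel()

_, pred0 = forward(X)
loss0 = np.mean((pred0 - y) ** 2)
for _ in range(2000):
    h, pred = forward(X)
    g = 2 * (pred - y)[:, None] / len(y)     # dLoss/dPred
    gh = (g @ W2.T) * (1 - h ** 2)           # backprop through tanh
    W2 -= lr * h.T @ g;  b2 -= lr * g.sum(0)
    W1 -= lr * X.T @ gh; b1 -= lr * gh.sum(0)
_, pred1 = forward(X)
loss1 = np.mean((pred1 - y) ** 2)
```

With loads predicted per grid cell, mapping the outputs over the city yields the spatial-distribution layers the paper describes.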

  5. Development of a thermal control algorithm using artificial neural network models for improved thermal comfort and energy efficiency in accommodation buildings

    International Nuclear Information System (INIS)

    Moon, Jin Woo; Jung, Sung Kwon

    2016-01-01

    Highlights: • An ANN model for predicting the optimal start moment of the cooling system was developed. • An ANN model for predicting the amount of cooling energy consumption was developed. • An optimal control algorithm was developed employing the two ANN models. • The algorithm showed improved thermal comfort and energy efficiency. - Abstract: The aim of this study was to develop a control algorithm demonstrating improved thermal comfort and building energy efficiency for accommodation buildings in the cooling season. For this, two artificial neural network (ANN)-based predictive and adaptive models were developed and employed in the algorithm. One model predicted the cooling energy consumption during the unoccupied period for different setback temperatures, and the other predicted the time required for restoring the current indoor temperature to the normal set-point temperature. Using numerical simulation methods, the prediction accuracy of the two ANN models and the performance of the algorithm were tested. The test results showed that the two ANN models achieved acceptable prediction error rates when applied in the control algorithm. In addition, the algorithm based on the two ANN models provided a more comfortable and energy efficient indoor thermal environment than two conventional control methods, which respectively employed a fixed set-point temperature for the entire day and a setback temperature during the unoccupied period. In the proposed algorithm, the operating range was 23–26 °C during the occupied period and 25–28 °C during the unoccupied period. Based on the analysis, it can be concluded that the optimal algorithm with the two predictive and adaptive ANN models can be used to provide a more comfortable and energy efficient indoor thermal environment for accommodation buildings in a comprehensive manner.
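The scheduling logic the two predictors feed into can be sketched as follows, with the trained ANN replaced by a stand-in restore-time model (the 10 minutes-per-degree pull-down rate is an invented placeholder):

```python
def predict_restore_minutes(indoor_temp, setpoint, minutes_per_degree=10.0):
    """Stand-in for the restore-time ANN: minutes needed to pull
    indoor_temp down to the set-point (assumed constant pull-down rate)."""
    return max(0.0, (indoor_temp - setpoint) * minutes_per_degree)

def cooling_start_offset(occupancy_time_min, indoor_temp, setpoint):
    """Start cooling early enough that the normal set-point is restored
    exactly when occupancy begins (minutes from now until start-up)."""
    restore = predict_restore_minutes(indoor_temp, setpoint)
    return max(0.0, occupancy_time_min - restore)
```

For example, if occupancy starts in 60 minutes and the room sits 2 °C above the set-point, cooling would be scheduled to start in 40 minutes; the real algorithm additionally weighs the setback level against the predicted cooling energy.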

  6. Floating-point geometry: toward guaranteed geometric computations with approximate arithmetics

    Science.gov (United States)

    Bajard, Jean-Claude; Langlois, Philippe; Michelucci, Dominique; Morin, Géraldine; Revol, Nathalie

    2008-08-01

    Geometric computations can fail because of inconsistencies due to floating-point inaccuracy. For instance, the computed intersection point between two curves does not lie on the curves: this is unavoidable when the intersection point coordinates are non-rational, and thus not representable in floating-point arithmetic. A popular heuristic approach tests equalities and nullities up to a tolerance ɛ. But transitivity of equality is lost: we can have A ≈ B and B ≈ C, but A ≉ C (where A ≈ B means ||A - B|| < ɛ for two floating-point values A, B). Interval arithmetic is another, self-validated, alternative; the difficulty is to limit the growth of the width of intervals during computation. Unfortunately, interval arithmetic cannot decide equality or nullity, even in cases where they are decidable by other means. A new approach, developed in this paper, consists in modifying the geometric problems and algorithms to account for the undecidability of the equality test and for unavoidable inaccuracy. In particular, all curves come with a non-zero thickness, so two curves (generically) cut in a region with non-zero area, an inner and outer representation of which is computable. This last approach no longer assumes that an equality or nullity test is available. The question which arises is: which geometric problems can still be solved with this last approach, and which cannot? This paper begins with the description of some cases where every known arithmetic fails in practice. Then, for each arithmetic, some properties of the problems it can solve are given. We end this work by proposing the bases of a new approach which aims to fulfil the requirements of geometric computations.
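The loss of transitivity is easy to reproduce: with a tolerance of ɛ = 10⁻⁶, chained near-equal values pass the pairwise tests while the endpoints fail.

```python
EPS = 1e-6  # the tolerance used by the heuristic equality test

def approx_eq(a, b, eps=EPS):
    """Tolerance-based equality: |a - b| < eps."""
    return abs(a - b) < eps

# a ~ b and b ~ c both hold, yet a ~ c fails: epsilon-equality
# is not transitive, which is exactly the inconsistency described above.
a, b, c = 0.0, 0.9e-6, 1.8e-6
```

Any algorithm that implicitly chains such comparisons (e.g. merging "equal" vertices) can therefore produce contradictory classifications.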

  7. Accurate technique for complete geometric calibration of cone-beam computed tomography systems

    International Nuclear Information System (INIS)

    Cho Youngbin; Moseley, Douglas J.; Siewerdsen, Jeffrey H.; Jaffray, David A.

    2005-01-01

    Cone-beam computed tomography systems have been developed to provide in situ imaging for the purpose of guiding radiation therapy. Clinical systems based on this approach have been constructed on a clinical linear accelerator (Elekta Synergy RP) and on an iso-centric C-arm. Geometric calibration involves the estimation of a set of parameters that describes the geometry of such systems, and is essential for accurate image reconstruction. We have developed a general analytic algorithm and a corresponding calibration phantom for estimating these geometric parameters in cone-beam computed tomography (CT) systems. The performance of the calibration algorithm is evaluated and its application discussed. The algorithm makes use of a calibration phantom to estimate the geometric parameters of the system. The phantom consists of 24 steel ball bearings (BBs) in a known geometry: twelve BBs are spaced evenly at 30° intervals in each of two plane-parallel circles separated by a given distance along the tube axis. The detector (e.g., a flat-panel detector) is assumed to have no spatial distortion. The method estimates geometric parameters including the position of the x-ray source, the position and rotation of the detector, and the gantry angle, and can describe complex source-detector trajectories. The accuracy and sensitivity of the calibration algorithm were analyzed. The calibration algorithm estimates the geometric parameters with a high level of accuracy, such that the quality of the CT reconstruction is not degraded by estimation error. Sensitivity analysis shows an uncertainty of 0.01° (around the beam direction) to 0.3° (normal to the beam direction) in rotation, and 0.2 mm (orthogonal to the beam direction) to 4.9 mm (beam direction) in position for the medical linear accelerator geometry.
Experimental measurements using a laboratory bench Cone-beam CT system of known geometry demonstrate the sensitivity of the method in detecting small changes in the imaging geometry with an uncertainty of 0.1 mm in
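The nominal BB layout described (two plane-parallel circles of twelve BBs at 30° spacing) can be generated directly. The radius and separation values below are placeholders, as the phantom's actual dimensions are not given in the excerpt:

```python
import math

def phantom_bb_coordinates(radius=50.0, separation=100.0):
    """Nominal 3D coordinates for 24 BBs: two plane-parallel circles of
    12 BBs at 30-degree spacing, separated along the rotation axis.
    The radius/separation defaults are illustrative placeholders."""
    bbs = []
    for z in (0.0, separation):
        for k in range(12):
            ang = math.radians(30.0 * k)
            bbs.append((radius * math.cos(ang), radius * math.sin(ang), z))
    return bbs
```

Comparing these known positions with their projections in each acquired image is what lets the analytic algorithm solve for the source, detector and gantry parameters.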

  8. Analysis of surface and build up region dose for motorized wedge and omni wedge

    International Nuclear Information System (INIS)

    Panta, Raj Kumar; Sundarum, T.

    2008-01-01

    Megavoltage x-ray beams exhibit the well-known phenomenon of dose build-up within the first few millimetres below the incident phantom or skin surface. The skin-sparing effect of high-energy gamma or x-ray photons may be reduced or even lost if the beam is contaminated with electrons or low-energy photons. Since skin dose in the treatment of deep-seated tumours may be a limiting factor in the delivery of a tumoricidal dose, owing to possible complications such as erythema, desquamation, fibrosis, necrosis and epilation, the dose distribution in the build-up region should be known. The objective of this study was to measure and investigate the surface and build-up region dose for 6 MV and 15 MV photon beams for the Motorized wedge and Omni wedge on a Precise Digital Linear Accelerator (Elekta)

  9. Toward a Progress Indicator for Machine Learning Model Building and Data Mining Algorithm Execution: A Position Paper

    Science.gov (United States)

    Luo, Gang

    2017-01-01

    For user-friendliness, many software systems offer progress indicators for long-duration tasks. A typical progress indicator continuously estimates the remaining task execution time as well as the portion of the task that has been finished. Building a machine learning model often takes a long time, but no existing machine learning software supplies a non-trivial progress indicator. Similarly, running a data mining algorithm often takes a long time, but no existing data mining software provides a non-trivial progress indicator. In this article, we consider the problem of offering progress indicators for machine learning model building and data mining algorithm execution. We discuss the goals and challenges intrinsic to this problem. We then describe an initial framework for implementing such progress indicators and two advanced potential uses of them, with the goal of inspiring future research on this topic. PMID:29177022
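The trivial baseline such work improves on is linear extrapolation from the completed fraction of work units; a sketch (my illustration, not the paper's framework):

```python
import time

class ProgressIndicator:
    """Naive remaining-time estimator: extrapolates linearly from the
    fraction of work units completed. Non-trivial indicators for model
    building need cost models, since per-unit cost is rarely uniform."""

    def __init__(self, total_units):
        self.total = total_units
        self.done = 0
        self.start = time.monotonic()

    def update(self, units=1):
        self.done += units

    def remaining_seconds(self):
        if self.done == 0:
            return float("inf")   # no information yet
        elapsed = time.monotonic() - self.start
        return elapsed * (self.total - self.done) / self.done
```

For iterative training this breaks down when later epochs cost more than earlier ones, which is precisely the gap the article's framework targets.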

  10. Geometrical primitives reconstruction from image sequence in an interactive context

    International Nuclear Information System (INIS)

    Monchal, L.; Aubry, P.

    1995-01-01

    We propose a method to recover 3D geometrical shape from an image sequence, in a context of man-machine cooperation. The human operator points out the edges of an object in the first image and chooses a corresponding geometrical model. The algorithm then tracks, across the images, each relevant 2D segment describing surface discontinuities or limbs. Knowing the motion of the camera between images, the position and size of the virtual object are deduced by minimising a function that describes how well the virtual object fits the extracted segments of the sequence, its geometrical model, and the information given by the operator. (author). 13 refs., 7 figs., 8 tabs

  11. Correction of build-up factor one x-ray hvl measurement

    International Nuclear Information System (INIS)

    Yuliati, Helfi; Akhadi, Mukhlis

    2000-01-01

    Research was carried out to obtain the build-up factor (b) in half value layer (HVL) measurements of diagnostic X-rays, using a pocket dosimeter behind aluminium (Al) filters with thicknesses varying from 1 to 4 mm. From the measurements, HVL values of 1.997, 2.596 and 2.718 mmAl were obtained for 80 kVp X-rays with filter thicknesses of 1, 2, 3 and 4 mm. The HVL increases significantly with Al filter thickness. From the calculation, increases in the b value relative to the 1 mm Al filter of 18, 26 and 46% were obtained for filter thicknesses of 2, 3 and 4 mm respectively. The experimental results show the need to include the b value in the HVL calculation of X-rays if the filter is relatively thick; HVL calculations without this correction are adequate only for thin filters. Key words : x-rays, half value layer, build up factor
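The correction at issue follows from the broad-beam attenuation relation T = b·exp(−μx): with b > 1, transmission at the narrow-beam HVL exceeds one half, so an HVL deduced from measured transmissions is biased unless b is accounted for. A sketch (the attenuation coefficient value is illustrative):

```python
import math

def hvl_from_mu(mu):
    """Half value layer of a narrow (build-up-free) beam: HVL = ln 2 / mu."""
    return math.log(2.0) / mu

def transmission(mu, x, b=1.0):
    """Broad-beam transmission with build-up factor b: T = b * exp(-mu * x).
    With b > 1 the apparent HVL is overestimated unless b is corrected for."""
    return b * math.exp(-mu * x)
```

For example, with μ = 0.347 mm⁻¹ the narrow-beam HVL is ≈ 2.0 mm and the transmission there is exactly 0.5, whereas b = 1.2 pushes the transmission at that depth above 0.5.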

  12. On the dosimetric behaviour of photon dose calculation algorithms in the presence of simple geometric heterogeneities: comparison with Monte Carlo calculations

    Science.gov (United States)

    Fogliata, Antonella; Vanetti, Eugenio; Albers, Dirk; Brink, Carsten; Clivio, Alessandro; Knöös, Tommy; Nicolini, Giorgia; Cozzi, Luca

    2007-03-01

    A comparative study was performed to reveal differences and relative figures of merit of seven different calculation algorithms for photon beams when applied to inhomogeneous media. The following algorithms were investigated: Varian Eclipse: the anisotropic analytical algorithm, and the pencil beam with modified Batho correction; Nucletron Helax-TMS: the collapsed cone and the pencil beam with equivalent path length correction; CMS XiO: the multigrid superposition and the fast Fourier transform convolution; Philips Pinnacle: the collapsed cone. Monte Carlo simulations (MC) performed with the EGSnrc codes BEAMnrc and DOSxyznrc from NRCC in Ottawa were used as a benchmark. The study was carried out in simple geometrical water phantoms (ρ = 1.00 g cm⁻³) with inserts of different densities simulating light lung tissue (ρ = 0.035 g cm⁻³), normal lung (ρ = 0.20 g cm⁻³) and cortical bone tissue (ρ = 1.80 g cm⁻³). Experiments were performed for low- and high-energy photon beams (6 and 15 MV) and for square (13 × 13 cm²) and elongated rectangular (2.8 × 13 cm²) fields. Analysis was carried out on the basis of depth dose curves and transverse profiles at several depths. Assuming the MC data as reference, γ index analysis was carried out distinguishing between regions inside the non-water inserts or inside the uniform water. For this study, a distance to agreement was set to 3 mm while the dose difference varied from 2% to 10%. In general all algorithms based on pencil-beam convolutions showed a systematic deficiency in managing the presence of heterogeneous media. In contrast, complicated patterns were observed for the advanced algorithms with significant discrepancies observed between algorithms in the lighter materials (ρ = 0.035 g cm⁻³), enhanced for the most energetic beam. For denser, and more clinical, densities a better agreement among the sophisticated algorithms with respect to MC was observed.
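The γ-index analysis used for the comparison combines a dose-difference criterion with a distance-to-agreement (DTA) criterion. A simplified 1D global version (this sketch, not the paper's implementation) is:

```python
import numpy as np

def gamma_1d(ref_pos, ref_dose, eval_pos, eval_dose, dta=3.0, dd=0.03):
    """Simplified 1D global gamma index: for each reference point, the
    minimum over evaluated points of sqrt((dr/DTA)^2 + (dD/(dd*Dmax))^2).
    Positions in mm; dd is the dose-difference fraction of the global max."""
    ref_pos, ref_dose = np.asarray(ref_pos, float), np.asarray(ref_dose, float)
    eval_pos, eval_dose = np.asarray(eval_pos, float), np.asarray(eval_dose, float)
    dmax = ref_dose.max()
    gam = np.empty(len(ref_pos))
    for i, (r, d) in enumerate(zip(ref_pos, ref_dose)):
        dr = (eval_pos - r) / dta
        dD = (eval_dose - d) / (dd * dmax)
        gam[i] = np.sqrt(dr ** 2 + dD ** 2).min()
    return gam
```

Identical curves yield γ = 0 everywhere, while a gross dose error drives γ above the usual pass threshold of 1.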

  13. VOLUMETRIC ERROR COMPENSATION IN FIVE-AXIS CNC MACHINING CENTER THROUGH KINEMATICS MODELING OF GEOMETRIC ERROR

    Directory of Open Access Journals (Sweden)

    Pooyan Vahidi Pashsaki

    2016-06-01

    The accuracy of a five-axis CNC machine tool is affected by a vast number of error sources. This paper investigates volumetric error modeling and its compensation as the basis for creating new tool paths that improve workpiece accuracy. The volumetric error model of a five-axis machine tool with the configuration RTTTR (tilting head B-axis and rotary table on the workpiece side, A΄) was set up using rigid body kinematics and homogeneous transformation matrices, in which 43 error components are included; each of these components can separately reduce the geometrical and dimensional accuracy of the workpiece. Machining accuracy is governed by the position of the cutting tool centre point (TCP) relative to the workpiece: when the cutting tool deviates from its ideal position, a machining error results. The compensation process comprises detecting the present tool path, analysing the geometric error of the RTTTR five-axis CNC machine tool, translating current component positions to compensated positions using the kinematic error model, converting the newly created components to new tool paths using the compensation algorithms, and finally editing the old G-codes using a G-code generator algorithm.
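The homogeneous-transformation error modelling can be sketched as follows; the small-angle matrix form is standard in geometric error modelling, while the error magnitudes here are placeholders, not the paper's measured components:

```python
import numpy as np

def htm(tx=0.0, ty=0.0, tz=0.0, rx=0.0, ry=0.0, rz=0.0):
    """4x4 homogeneous transform from translations and small rotations (rad),
    using the first-order small-angle linearisation of the rotation."""
    T = np.eye(4)
    T[:3, :3] = [[1.0, -rz,  ry],
                 [rz,  1.0, -rx],
                 [-ry, rx,  1.0]]
    T[:3, 3] = [tx, ty, tz]
    return T

# Chain a nominal axis transform with a per-axis error transform; the error
# values are illustrative placeholders.
nominal = htm(tz=200.0)               # ideal motion: 200 mm along Z
error = htm(tx=0.01, rz=1e-4)         # small positional and angular errors
actual = nominal @ error

origin = np.array([0.0, 0.0, 0.0, 1.0])  # tool centre point in the tool frame
tcp_ideal = nominal @ origin
tcp_actual = actual @ origin
volumetric_error = tcp_actual[:3] - tcp_ideal[:3]
```

The full model chains one such nominal-times-error product per axis of the RTTTR kinematic chain; compensation then shifts the commanded positions so that the composed TCP error vanishes.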

  14. From the ground up: building a minimally invasive aortic valve surgery program

    OpenAIRE

    Nguyen, Tom C.; Lamelas, Joseph

    2015-01-01

    Minimally invasive aortic valve replacement (MIAVR) is associated with numerous advantages including improved patient satisfaction, cosmesis, decreased transfusion requirements, and cost-effectiveness. Despite these advantages, little information exists on how to build a MIAVR program from the ground up. The steps to build a MIAVR program include compiling a multi-disciplinary team composed of surgeons, cardiologists, anesthesiologists, perfusionists, operating room (OR) technicians, and nurs...

  15. Graph Algorithm Animation with Grrr

    OpenAIRE

    Rodgers, Peter; Vidal, Natalia

    2000-01-01

    We discuss the geometric positioning, highlighting of visited nodes and user-defined highlighting that form the algorithm animation facilities in the Grrr graph rewriting programming language. The animation facilities were initially intended for the debugging and profiling of Grrr code, but they have recently been extended for teaching algorithms to undergraduate students. The animation is restricted to graph-based algorithms such as graph drawing, list manipulation or more traditional gra...

  16. Competitive Swarm Optimizer Based Gateway Deployment Algorithm in Cyber-Physical Systems.

    Science.gov (United States)

    Huang, Shuqiang; Tao, Ming

    2017-01-22

    Wireless sensor network topology optimization is a highly important issue, and topology control through node selection can improve the efficiency of data forwarding, while saving energy and prolonging lifetime of the network. To address the problem of connecting a wireless sensor network to the Internet in cyber-physical systems, here we propose a geometric gateway deployment based on a competitive swarm optimizer algorithm. The particle swarm optimization (PSO) algorithm has a continuous search feature in the solution space, which makes it suitable for finding the geometric center of gateway deployment; however, its search mechanism is limited to the individual optimum (pbest) and the population optimum (gbest); thus, it easily falls into local optima. In order to improve the particle search mechanism and enhance the search efficiency of the algorithm, we introduce a new competitive swarm optimizer (CSO) algorithm. The CSO search algorithm is based on an inter-particle competition mechanism and can effectively avoid trapping of the population in a local optimum. With an adaptive opposition-based search and the ability to dynamically adjust parameters, this algorithm can maintain the diversity of the entire swarm to solve geometric K-center gateway deployment problems. The simulation results show that this CSO algorithm has good global explorative ability as well as convergence speed and can improve the network quality of service (QoS) level of cyber-physical systems by obtaining a minimum network coverage radius. We also find that the CSO algorithm is more stable, robust and effective in solving the problem of geometric gateway deployment as compared to the PSO or K-medoids algorithms.
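The competitive swarm optimizer described above can be sketched for the geometric 1-center (minimum coverage radius) case. This is a minimal illustration of the general CSO update rule (particles compete in random pairs; the loser learns from the winner and the swarm mean, the winner survives unchanged); the node layout, swarm size and social factor phi are assumptions for illustration, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sensor node layout; the gateway should minimize the
# maximum distance to any node (geometric 1-center problem).
nodes = rng.uniform(0, 100, size=(40, 2))

def coverage_radius(p):
    """Coverage radius of a candidate gateway position p."""
    return np.max(np.linalg.norm(nodes - p, axis=1))

def cso(n_particles=40, iters=200, phi=0.1):
    """Competitive swarm optimizer sketch: pairwise competitions,
    losers move toward winners and toward the swarm mean."""
    X = rng.uniform(0, 100, size=(n_particles, 2))
    V = np.zeros_like(X)
    for _ in range(iters):
        mean = X.mean(axis=0)
        idx = rng.permutation(n_particles)
        for a, b in idx.reshape(-1, 2):
            fa, fb = coverage_radius(X[a]), coverage_radius(X[b])
            w, l = (a, b) if fa < fb else (b, a)  # smaller radius wins
            r1, r2, r3 = rng.random((3, 2))
            V[l] = r1 * V[l] + r2 * (X[w] - X[l]) + phi * r3 * (mean - X[l])
            X[l] = X[l] + V[l]
    best = min(X, key=coverage_radius)
    return best, coverage_radius(best)

center, radius = cso()
print(center, radius)
```

Note the contrast with PSO: there is no pbest/gbest memory at all, which is exactly the mechanism the abstract credits for avoiding premature convergence.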

  19. Optimal Route Searching with Multiple Dynamical Constraints—A Geometric Algebra Approach

    Directory of Open Access Journals (Sweden)

    Dongshuang Li

    2018-05-01

    Full Text Available The process of searching for a dynamic constrained optimal path has received increasing attention in traffic planning, evacuation, and personalized or collaborative traffic service. As most existing multiple constrained optimal path (MCOP) methods cannot search for a path given various types of constraints that dynamically change during the search, few approaches for the dynamic multiple constrained optimal path (DMCOP) problem with type II dynamics are available for practical use. In this study, we develop a method to solve the DMCOP problem with type II dynamics based on the unification of various types of constraints under a geometric algebra (GA) framework. In our method, the network topology and three different types of constraints are represented by using algebraic base coding. With a parameterized optimization of the MCOP algorithm based on a greedy search strategy under the generation-refinement paradigm, the algorithm is found to accurately support the discovery of optimal paths as constraints on numerical values, nodes, and route structure types are dynamically added to the network. The algorithm was tested with simulated cases of optimal tourism route searches in China’s road networks with various combinations of constraints. The case study indicates that our algorithm can not only solve the DMCOP problem with different types of constraints but also use the constraints to speed up route filtering.
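The idea of re-running a route search as constraints are dynamically added can be illustrated without the geometric algebra machinery. Below is a hedged sketch using plain Dijkstra over a toy network, with a forbidden-node constraint and a numeric cost bound standing in for two of the constraint types mentioned above; the graph data is invented.

```python
import heapq

# Toy road network as an adjacency list with edge costs (hypothetical data,
# not the paper's GA encoding).
graph = {
    'A': [('B', 2), ('C', 5)],
    'B': [('C', 1), ('D', 4)],
    'C': [('D', 1)],
    'D': [],
}

def shortest_path(src, dst, forbidden=frozenset(), max_cost=float('inf')):
    """Dijkstra restricted to nodes outside `forbidden` (node constraint)
    and to paths whose total cost stays within `max_cost` (numeric
    constraint). Returns (cost, path) or None if no feasible path."""
    pq = [(0, src, [src])]
    seen = set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if cost > max_cost or node in forbidden:
            continue
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph[node]:
            if nxt not in seen:
                heapq.heappush(pq, (cost + w, nxt, path + [nxt]))
    return None

print(shortest_path('A', 'D'))                   # unconstrained optimum
print(shortest_path('A', 'D', forbidden={'C'}))  # node constraint added mid-session
```

When a constraint arrives during a session, the search is simply re-invoked with the updated constraint set; the paper's contribution is encoding all such constraint types uniformly so the filtering happens inside one framework.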

  20. Geometric analysis of alloreactive HLA α-helices.

    Science.gov (United States)

    Ribarics, Reiner; Karch, Rudolf; Ilieva, Nevena; Schreiner, Wolfgang

    2014-01-01

    Molecular dynamics (MD) is a valuable tool for the investigation of functional elements in biomolecules, providing information on dynamic properties and processes. Previous work by our group has characterized the static geometric properties of the two MHC α-helices comprising the peptide binding region recognized by T cells. We built upon this work and used several spline models to approximate the overall shape of MHC α-helices. We applied this technique to a series of MD simulations of alloreactive MHC molecules, which allowed us to capture the dynamics of the MHC α-helices’ steric configurations. Here, we discuss the variability of the spline models underlying the geometric analysis with varying polynomial degrees of the splines.

  1. Fifth SIAM conference on geometric design 97: Final program and abstracts. Final technical report

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-12-31

    The meeting was divided into the following sessions: (1) CAD/CAM; (2) Curve/Surface Design; (3) Geometric Algorithms; (4) Multiresolution Methods; (5) Robotics; (6) Solid Modeling; and (7) Visualization. This report contains the abstracts of papers presented at the meeting. Preceding the conference there was a short course entitled "Wavelets for Geometric Modeling and Computer Graphics".

  2. Morphing of geometric composites via residual swelling.

    Science.gov (United States)

    Pezzulla, Matteo; Shillig, Steven A; Nardinocchi, Paola; Holmes, Douglas P

    2015-08-07

    Understanding and controlling the shape of thin, soft objects has been the focus of significant research efforts among physicists, biologists, and engineers in the last decade. These studies aim to utilize advanced materials in novel, adaptive ways such as fabricating smart actuators or mimicking living tissues. Here, we present the controlled growth-like morphing of 2D sheets into 3D shapes by preparing geometric composite structures that deform by residual swelling. The morphing of these geometric composites is dictated by both swelling and geometry, with diffusion controlling the swelling-induced actuation, and geometric confinement dictating the structure's deformed shape. Building on a simple mechanical analog, we present an analytical model that quantitatively describes how the Gaussian and mean curvatures of a thin disk are affected by the interplay among geometry, mechanics, and swelling. This model is in excellent agreement with our experiments and numerics. We show that the dynamics of residual swelling is dictated by a competition between two characteristic diffusive length scales governed by geometry. Our results provide the first 2D analog of Timoshenko's classical formula for the thermal bending of bimetallic beams - our generalization explains how the Gaussian curvature of a 2D geometric composite is affected by geometry and elasticity. The understanding conferred by these results suggests that the controlled shaping of geometric composites may provide a simple complement to traditional manufacturing techniques.

  3. Total algorithms

    NARCIS (Netherlands)

    Tel, G.

    We define the notion of total algorithms for networks of processes. A total algorithm enforces that a "decision" is taken by a subset of the processes, and that participation of all processes is required to reach this decision. Total algorithms are an important building block in the design of

  4. A walk through the planned CS building. M.S. Thesis

    Science.gov (United States)

    Khorramabadi, Delnaz

    1991-01-01

    Using the architectural plan views of our future computer science building as test objects, we have completed the first stage of a Building walkthrough system. The inputs to our system are AutoCAD files. An AutoCAD converter translates the geometrical information in these files into a format suitable for 3D rendering. Major model errors, such as incorrect polygon intersections and random face orientations, are detected and fixed automatically. Interactive viewing and editing tools are provided to view the results, to modify and clean the model and to change surface attributes. Our display system provides a simple-to-use user interface for interactive exploration of buildings. Using only the mouse buttons, the user can move inside and outside the building and change floors. Several viewing and rendering options are provided, such as restricting the viewing frustum, avoiding wall collisions, and selecting different rendering algorithms. A plan view of the current floor, with the position of the eye point and viewing direction on it, is displayed at all times. The scene illumination can be manipulated, by interactively controlling intensity values for 5 light sources.

  5. Comparative performance analysis of the artificial-intelligence-based thermal control algorithms for the double-skin building

    International Nuclear Information System (INIS)

    Moon, Jin Woo

    2015-01-01

    This study aimed to develop artificial intelligence (AI) theory-based optimal control algorithms for improving the indoor temperature conditions and heating energy efficiency of double-skin buildings. For this, one conventional rule-based and four AI-based algorithms were developed, including artificial neural network (ANN), fuzzy logic (FL), and adaptive neuro-fuzzy inference system (ANFIS) algorithms, for operating the surface openings of the double skin and the heating system. A numerical computer simulation method incorporating the matrix laboratory (MATLAB) and the transient systems simulation (TRNSYS) software was used for the comparative performance tests. The analysis results revealed that advanced thermal-environment comfort and stability can be provided by the AI-based algorithms. In particular, the FL and ANFIS algorithms were superior to the ANN algorithm in terms of providing better thermal conditions. The ANN-based algorithm, however, proved its potential to be the most energy-efficient and stable strategy among the four AI-based algorithms. It can be concluded that the optimal algorithm is determined by the major focus of the control strategy. If a comfortable thermal environment is the principal interest, then the FL or ANFIS algorithm could be the proper solution, and if energy saving for space heating and system operation stability are the main concerns, then the ANN-based algorithm may be applicable. - Highlights: • Integrated control algorithms were developed for the heating system and surface openings. • AI theories were applied to the control algorithms. • ANN, FL, and ANFIS were the applied AI theories. • Comparative performance tests were conducted using computer simulation. • AI algorithms provided a superior temperature environment.

  6. Modelica Library for Building Heating, Ventilation and Air-Conditioning Systems

    Energy Technology Data Exchange (ETDEWEB)

    Wetter, Michael

    2009-06-17

    This paper presents a freely available Modelica library for building heating, ventilation and air conditioning systems. The library is based on the Modelica.Fluid library. It has been developed to support research and development of integrated building energy and control systems. The primary applications are controls design, energy analysis and model-based operation. The library contains dynamic and steady-state component models that are applicable for analyzing fast transients when designing control algorithms and for conducting annual simulations when assessing energy performance. For most models, dimensional analysis is used to compute the performance for operating points that differ from nominal conditions. This allows parameterizing models in the absence of detailed geometrical information which is often impractical to obtain during the conceptual design phase of building systems. In the first part of this paper, the library architecture and the main classes are described. In the second part, an example is presented in which we implemented a model of a hydronic heating system with thermostatic radiator valves and thermal energy storage.
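The dimensional-analysis idea mentioned above (computing performance at off-nominal operating points from a single nominal data point) can be illustrated with a quadratic pressure-drop law. This is a hedged sketch of the general pattern only; the nominal values below are invented and are not taken from the library.

```python
# Scale a component's pressure drop from one nominal operating point,
# assuming dp ~ m_flow^2 (fully turbulent flow). This parameterization
# needs no detailed geometry, only one measured/catalog point.
def pressure_drop(m_flow, m_flow_nominal=0.5, dp_nominal=10000.0):
    """Pressure drop [Pa] at mass flow rate m_flow [kg/s], scaled
    quadratically from the nominal point (hypothetical values)."""
    return dp_nominal * (m_flow / m_flow_nominal) ** 2

print(pressure_drop(0.5))   # at the nominal point
print(pressure_drop(0.25))  # half the flow, a quarter of the pressure drop
```

The design choice is the one the abstract describes: one nominal (flow, pressure-drop) pair replaces the detailed geometric data that is impractical to obtain during conceptual design.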

  7. Ebola expert says building up health systems is best defence | IDRC ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    2018-05-22

    May 22, 2018 ... Ebola expert says building up health systems is best defence ... community of public health experts to control viral epidemics in several countries. ... says the problem of infectious diseases has grown in the past 30 years, but ...

  8. Development of morphing algorithms for Histfactory using information geometry

    Energy Technology Data Exchange (ETDEWEB)

    Bandyopadhyay, Anjishnu; Brock, Ian [University of Bonn (Germany); Cranmer, Kyle [New York University (United States)

    2016-07-01

    Many statistical analyses are based on likelihood fits. In any likelihood fit we try to incorporate all uncertainties, both systematic and statistical. We generally have distributions for the nominal and ±1 σ variations of a given uncertainty. Using that information, Histfactory morphs the distributions for any arbitrary value of the given uncertainties. In this talk, a new morphing algorithm will be presented, which is based on information geometry. The algorithm uses the information about the difference between various probability distributions. Subsequently, we map this information onto geometrical structures and develop the algorithm on the basis of different geometrical properties. Apart from varying all nuisance parameters together, this algorithm can also probe both small (< 1 σ) and large (> 2 σ) variations. It will also be shown how this algorithm can be used for interpolating other forms of probability distributions.

  9. The Areopagus Oval Building Reconsidered

    NARCIS (Netherlands)

    van den Eijnde, F.; Laughy, Michael H.

    2017-01-01

    The Areopagus Oval Building (AOB) provides a rare insight into Athenian architecture during the Geometric period, a time in which the archaeological record is mostly confined to graves. Dorothy Burr’s original 1933 publication of the building remains an exemplary and exhaustive presentation of the

  10. A Combined Approach to Cartographic Displacement for Buildings Based on Skeleton and Improved Elastic Beam Algorithm

    Science.gov (United States)

    Liu, Yuangang; Guo, Qingsheng; Sun, Yageng; Ma, Xiaoya

    2014-01-01

    Scale reduction from source to target maps inevitably leads to conflicts of map symbols in cartography and geographic information systems (GIS). Displacement is one of the most important map generalization operators and it can be used to resolve the problems that arise from conflict among two or more map objects. In this paper, we propose a combined approach based on a constrained Delaunay triangulation (CDT) skeleton and an improved elastic beam algorithm for automated building displacement. In this approach, map data sets are first partitioned. Then the displacement operation is conducted in each partition as a cyclic and iterative process of conflict detection and resolution. In the iteration, the skeleton of the gap spaces is extracted using CDT. It then serves as an enhanced data model to detect conflicts and construct the proximity graph. Then, the proximity graph is adjusted using local grouping information. Under the action of forces derived from the detected conflicts, the proximity graph is deformed using the improved elastic beam algorithm. In this way, buildings are displaced to find an optimal compromise between related cartographic constraints. To validate this approach, two topographic map data sets (i.e., urban and suburban areas) were tested. The results were reasonable with respect to each constraint when the density of the map was not extremely high. In summary, the improvements include (1) an automated parameter-setting method for elastic beams, (2) explicit enforcement of the positional accuracy constraint by introducing drag forces, (3) preservation of local building groups through displacement over an adjusted proximity graph, and (4) an iterative strategy that is more likely to resolve the proximity conflicts than the one used in the existing elastic beam algorithm. PMID:25470727

  11. Scintiscanning of arthritis and analysis of build-up curves

    International Nuclear Information System (INIS)

    Yamagishi, Tsuneo; Omori, Shigeo; Miyawaki, Haruo; Maniwa, Masato; Yoshizaki, Kenichi

    1975-01-01

    In the present study 40 knee joints with rheumatoid arthritis, 23 knee joints with osteoarthrosis deformans, 3 knee joints with non-specific synovitis, one knee joint with pyogenic arthritis and 4 normal knee joints were scanned. By analysis of build-up curves obtained immediately after the intravenous injection of 99mTc-pertechnetate, the rate of accumulation of radioactivity (t1/2) in the affected joints was simultaneously estimated in order to compare it with clinical findings. 1. Scintiscanning of arthritis, rheumatoid arthritis, osteoarthrosis deformans of the knee joint, non-specific synovitis, and pyogenic arthritis of the knee joint, yielded a positive scan for all of the joint diseases. 2. In the scintigram of healthy knee joints, there are no areas of RI accumulation or right-to-left difference. 3. In some instances abnormal uptake of RI was seen on scintigrams of arthritis even after normal clinical and laboratory findings had been achieved with therapy. 4. 99mTc-pertechnetate, a radionuclide with a short half-life, allows repeated scans and provides a useful radiologic means of evaluating therapeutic course and effectiveness. 5. Analysis of build-up curves revealed that the rate of accumulation of RI was faster in rheumatoid arthritis than in osteoarthrosis deformans. (auth.)
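Estimating a half-time from a build-up curve can be sketched as follows, assuming the simple saturating model A(t) = A_max (1 - exp(-lam t)), for which t1/2 = ln 2 / lam. The model and the synthetic data are assumptions for illustration, not necessarily the authors' exact analysis method.

```python
import numpy as np

# Synthetic build-up curve A(t) = A_max * (1 - exp(-lam * t)).
lam_true = 0.23      # invented accumulation rate constant [1/min]
A_max = 1000.0       # invented plateau count rate
t = np.linspace(0.5, 20, 40)
counts = A_max * (1 - np.exp(-lam_true * t))

# log(1 - A/A_max) = -lam * t is linear in t, so the rate constant is
# recovered as the negated least-squares slope.
y = np.log(1 - counts / A_max)
lam_est = -np.polyfit(t, y, 1)[0]
t_half = np.log(2) / lam_est   # accumulation half-time

print(round(t_half, 3))
```

With noisy clinical data one would typically fit the exponential model directly (e.g. nonlinear least squares) rather than log-transform, but the log-linear form keeps the sketch short.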

  12. An Empirical Model for Build-Up of Sodium and Calcium Ions in Small Scale Reverse Osmosis

    Directory of Open Access Journals (Sweden)

    Subriyer Nasir

    2011-05-01

    Full Text Available Simple models for predicting the build-up of solutes on the membrane surface are formulated in this paper. The experiments were conducted with secondary effluent, groundwater and simulated feed water in a small-scale RO unit with a capacity of 2000 L/d. Feed water used in the experiments contained varying concentrations of sodium, calcium, and combined sodium and calcium. In order to study the effect of sodium and calcium ions on membrane performance, experiments with groundwater and secondary effluent wastewater were also performed. Build-up of salts on the membrane surface was calculated by measuring the concentrations of sodium and calcium ions in the feed water, permeate and reject streams using an Atomic Absorption Spectrophotometer (AAS). Multiple linear regression after natural-log transformation was used to develop the model based on the four main parameters that affect solute build-up in a small-scale RO unit, namely applied pressure, permeate flux, membrane resistance, and feed concentration. Experimental data obtained in the small-scale RO unit were used to develop the empirical model. The predicted values of the theoretical build-up of sodium and calcium on the membrane surface were found to be in agreement with the experimental data, with deviations of 1.4 to 10.47% for sodium and 1.12 to 4.46% for calcium.
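The model form described above (a power law fitted by multiple linear regression after a natural-log transform) can be sketched with synthetic data. The coefficients and operating ranges below are invented for illustration and are not the paper's fitted values.

```python
import numpy as np

rng = np.random.default_rng(1)

# Model form: ln(buildup) = b0 + b1*ln(P) + b2*ln(J) + b3*ln(Rm) + b4*ln(Cf)
# with the four predictors named in the abstract. Synthetic data only.
n = 50
P  = rng.uniform(2, 10, n)      # applied pressure
J  = rng.uniform(5, 40, n)      # permeate flux
Rm = rng.uniform(1, 5, n)       # membrane resistance
Cf = rng.uniform(50, 500, n)    # feed concentration
true_b = np.array([0.4, 0.7, -0.3, 0.2, 0.9])   # invented coefficients

X = np.column_stack([np.ones(n), np.log(P), np.log(J), np.log(Rm), np.log(Cf)])
ln_buildup = X @ true_b                          # noise-free for clarity

# Ordinary least squares recovers the coefficients of the log-linear model.
b_hat, *_ = np.linalg.lstsq(X, ln_buildup, rcond=None)
print(np.round(b_hat, 3))

predicted_buildup = np.exp(X @ b_hat)            # back-transform to build-up
```

In practice the fit would include measurement noise, and the percentage deviations quoted in the abstract would come from comparing `predicted_buildup` with measured build-up.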

  13. Exact parallel maximum clique algorithm for general and protein graphs.

    Science.gov (United States)

    Depolli, Matjaž; Konc, Janez; Rozman, Kati; Trobec, Roman; Janežič, Dušanka

    2013-09-23

    A new exact parallel maximum clique algorithm MaxCliquePara, which finds the maximum clique (the fully connected subgraph) in undirected general and protein graphs, is presented. First, a new branch-and-bound algorithm for finding a maximum clique on a single computer core, which builds on ideas presented in two published state-of-the-art sequential algorithms, is implemented. The new sequential MaxCliqueSeq algorithm is faster than the reference algorithms on both DIMACS benchmark graphs and protein-derived product graphs used for protein structural comparisons. Next, the MaxCliqueSeq algorithm is parallelized by splitting the branch-and-bound search tree across multiple cores, resulting in the MaxCliquePara algorithm. The ability to exploit all cores efficiently makes the new parallel MaxCliquePara algorithm markedly superior to other tested algorithms. On a 12-core computer, the parallelization provides up to 2 orders of magnitude faster execution on the large DIMACS benchmark graphs and up to an order of magnitude faster execution on protein product graphs. The algorithms are freely accessible at http://commsys.ijs.si/~matjaz/maxclique.
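The branch-and-bound core of a maximum clique search can be sketched in a few lines. This toy version uses only a size-based bound; MaxCliqueSeq/MaxCliquePara as described above use stronger bounds (and, in the parallel case, split this search tree across cores), so this is an assumption-laden miniature, not their algorithm.

```python
def max_clique(adj):
    """Branch-and-bound maximum clique on an undirected graph given as
    {node: set_of_neighbours}. Pruning rule: abandon a branch when even
    taking every remaining candidate cannot beat the best clique so far."""
    best = []

    def expand(clique, candidates):
        nonlocal best
        if len(clique) > len(best):
            best = clique[:]
        for i, v in enumerate(candidates):
            if len(clique) + len(candidates) - i <= len(best):
                return  # bound: not enough candidates left to improve
            # Branch: add v, keep only later candidates adjacent to v.
            expand(clique + [v], [u for u in candidates[i + 1:] if u in adj[v]])

    expand([], list(adj))
    return best

# Toy graph: {0, 1, 2, 3} form a 4-clique, node 4 hangs off node 3.
adj = {0: {1, 2, 3}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {0, 1, 2, 4}, 4: {3}}
print(sorted(max_clique(adj)))  # → [0, 1, 2, 3]
```

Replacing the size bound with a greedy-colouring bound, and ordering candidates by degree, are the usual next steps toward the published algorithms' performance.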

  14. Automatic Generation of Structural Building Descriptions from 3D Point Cloud Scans

    DEFF Research Database (Denmark)

    Ochmann, Sebastian; Vock, Richard; Wessel, Raoul

    2013-01-01

    We present a new method for automatic semantic structuring of 3D point clouds representing buildings. In contrast to existing approaches which either target the outside appearance like the facade structure or rather low-level geometric structures, we focus on the building’s interior, using indoor scans to derive high-level architectural entities like rooms and doors. Starting with a registered 3D point cloud, we probabilistically model the affiliation of each measured point to a certain room in the building. We solve the resulting clustering problem using an iterative algorithm that relies on the estimated visibilities between any two locations within the point cloud. With the segmentation into rooms at hand, we subsequently determine the locations and extents of doors between adjacent rooms. In our experiments, we demonstrate the feasibility of our method by applying it to synthetic as well...

  15. A One-Dimensional Particle-in-Cell Model of Plasma Build-Up in Vacuum Arcs

    CERN Document Server

    Timko, H; Kovermann, J; Taborelli, M; Nordlund, K; Descoeudres, A; Schneider, R; Calatroni, S; Matyash, K; Wuensch, W; Hansen, A; Grudiev, A

    2011-01-01

    Understanding the mechanism of plasma build-up in vacuum arcs is essential in many fields of physics. A one-dimensional particle-in-cell computer simulation model is presented, which models the plasma developing from a field emitter tip under electrical breakdown conditions, taking into account the relevant physical phenomena. As a starting point, only an external electric field and an initial enhancement factor of the tip are assumed. General requirements for plasma formation have been identified and formulated in terms of the initial local field and a critical neutral density. The dependence of plasma build-up on tip melting current, the evaporation rate of neutrals and the external circuit time constant has been investigated for copper, and simulations imply that arcing involves melting currents around 0.5–1 A/μm², evaporation of neutrals to electron field emission ratios in the regime 0.01–0.05, plasma build-up timescales on the order of ~1–10 ns and two different regimes depending on initial ...

  16. ECMOR 4. 4th European conference on the mathematics of oil recovery. Topic A: Geometrical characterization. Proceedings

    Energy Technology Data Exchange (ETDEWEB)

    1994-01-01

    This report collects the proceedings of a conference on the mathematics of oil recovery, with a focus on geometrical characterization. The topics of the proceedings are as follows: random functions and geological subsurfaces; modelling faults in reservoir simulation; building, managing, and history matching very large and complex grids, with examples from the Gullfaks Field (Norway); optimal gridding of stochastic models for scale-up; and combining Gaussian fields and fibre processes for modelling of sequence stratigraphic bounding surfaces. Five papers are included. 76 refs., 61 figs., 1 tab.

  17. Cooperated Bayesian algorithm for distributed scheduling problem

    Institute of Scientific and Technical Information of China (English)

    QIANG Lei; XIAO Tian-yuan

    2006-01-01

    This paper presents a new distributed Bayesian optimization algorithm (BOA) to overcome the efficiency problem when solving NP scheduling problems. The proposed approach integrates BOA into the co-evolutionary schema, which builds up a concurrent computing environment. A new search strategy is also introduced for the local optimization process. It integrates the reinforcement learning (RL) mechanism into the BOA search processes, and then uses the mixed probability information from BOA (post-probability) and RL (pre-probability) to enhance the cooperation between different local controllers, which improves the optimization ability of the algorithm. The experiment shows that the new algorithm does better in both optimization (2.2%) and convergence (11.7%), compared with classic BOA.

  18. MRI simulation: end-to-end testing for prostate radiation therapy using geometric pelvic MRI phantoms

    International Nuclear Information System (INIS)

    Sun, Jidi; Menk, Fred; Lambert, Jonathan; Martin, Jarad; Denham, James W; Greer, Peter B; Dowling, Jason; Rivest-Henault, David; Pichler, Peter; Parker, Joel; Arm, Jameen; Best, Leah

    2015-01-01

    To clinically implement MRI simulation or MRI-alone treatment planning requires comprehensive end-to-end testing to ensure an accurate process. The purpose of this study was to design and build a geometric phantom simulating a human male pelvis that is suitable for both CT and MRI scanning and use it to test geometric and dosimetric aspects of MRI simulation including treatment planning and digitally reconstructed radiograph (DRR) generation. A liquid-filled, pelvis-shaped phantom with simulated pelvic organs was scanned in a 3T MRI simulator with dedicated radiotherapy couch-top, laser bridge and pelvic coil mounts. A second phantom with the same external shape but with an internal distortion grid was used to quantify the distortion of the MR image. Both phantoms were also CT scanned as the gold standard for both geometry and dosimetry. Deformable image registration was used to quantify the MR distortion. Dose comparison was made using a seven-field IMRT plan developed on the CT scan with the fluences copied to the MR image and recalculated using bulk electron densities. Without correction, the maximum distortion of the MR compared with the CT scan was 7.5 mm across the pelvis, while this was reduced to 2.6 and 1.7 mm by the vendor’s 2D and 3D correction algorithms, respectively. Within the locations of the internal organs of interest, the distortion was <1.5 and <1 mm with the 2D and 3D correction algorithms, respectively. The dose at the prostate isocentre calculated on CT and MRI images differed by 0.01% (1.1 cGy). Positioning shifts were within 1 mm when setup was performed using MRI-generated DRRs compared to setup using CT DRRs. The MRI pelvic phantom allows end-to-end testing of the MRI simulation workflow with comparison to the gold-standard CT-based process. MRI simulation was found to be geometrically accurate with organ dimensions, dose distributions and DRR-based setup within acceptable limits compared to CT. (paper)

  19. Digital and discrete geometry: theory and algorithms

    CERN Document Server

    Chen, Li

    2014-01-01

    This book provides comprehensive coverage of the modern methods for geometric problems in the computing sciences. It also covers concurrent topics in data sciences, including geometric processing, manifold learning, Google search, cloud data, and R-tree for wireless networks and BigData. The author investigates digital geometry and its related constructive methods in discrete geometry, offering detailed methods and algorithms. The book is divided into five sections: basic geometry; digital curves, surfaces and manifolds; discretely represented objects; geometric computation and processing; and a

  20. SIFT based algorithm for point feature tracking

    Directory of Open Access Journals (Sweden)

    Adrian BURLACU

    2007-12-01

    Full Text Available In this paper a tracking algorithm for SIFT features in image sequences is developed. For each point feature extracted using the SIFT algorithm, a descriptor is computed using information from its neighborhood. Point features are then tracked throughout the image sequence by minimizing the distance between their descriptors. Experimental results, obtained from image sequences that capture the scaling of objects of different geometrical types, reveal the performance of the tracking algorithm.
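The descriptor-matching step described above can be sketched as a nearest-neighbor search over descriptor vectors. The snippet below is a minimal illustration in plain NumPy; the descriptor arrays and the ratio-test threshold are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Match each descriptor in desc_a to its nearest neighbor in desc_b,
    keeping only matches that pass a Lowe-style ratio test."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)  # Euclidean distances
        order = np.argsort(dists)
        nearest, second = order[0], order[1]
        # Accept only if the best match is clearly better than the runner-up.
        if dists[nearest] < ratio * dists[second]:
            matches.append((i, int(nearest)))
    return matches
```

Chaining such matches frame-to-frame yields the feature tracks used in the experiments.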

  1. Geometric phases for nonlinear coherent and squeezed states

    International Nuclear Information System (INIS)

    Yang Dabao; Chen Ying; Chen Jingling; Zhang Fulin

    2011-01-01

    The geometric phases for standard coherent states, which are widely used in quantum optics, have attracted considerable attention. Nevertheless, few physicists have considered the counterparts for nonlinear coherent states, which are useful in the description of the motion of a trapped ion. In this paper, the non-unitary and non-cyclic geometric phases for two nonlinear coherent states and one squeezed state are formulated, respectively. Moreover, some of their common properties are discussed, such as gauge invariance, non-locality and nonlinear effects. The nonlinear functions have dramatic impacts on the evolution of the corresponding geometric phases: they speed the evolution up or slow it down. This property may therefore have an application in controlling or measuring the geometric phase. For the squeezed case, when the squeezed parameter r → ∞, the limiting value of the geometric phase is also determined by a nonlinear function at a given time and angular velocity. In addition, the geometric phases for standard coherent and squeezed states are obtained under a particular condition. When the time evolution undergoes a period, the corresponding cyclic geometric phases are achieved as well. The distinction between the geometric phases of the two coherent states may be regarded as a geometric criterion.

  2. SU-E-J-94: Geometric and Dosimetric Evaluation of Deformation Image Registration Algorithms Using Virtual Phantoms Generated From Patients with Lung Cancer

    International Nuclear Information System (INIS)

    Shen, Z; Greskovich, J; Xia, P; Bzdusek, K

    2015-01-01

    Purpose: To generate virtual phantoms with clinically relevant deformation and use them to objectively evaluate geometric and dosimetric uncertainties of deformable image registration (DIR) algorithms. Methods: Ten lung cancer patients undergoing adaptive 3DCRT planning were selected. For each patient, a pair of planning CT (pCT) and replanning CT (rCT) were used as the basis for virtual phantom generation. Manually adjusted meshes were created for selected ROIs (e.g. PTV, lungs, spinal cord, esophagus, and heart) on pCT and rCT. The mesh vertices were input into a thin-plate spline algorithm to generate a reference displacement vector field (DVF). The reference DVF was used to deform pCT to generate a simulated replanning CT (srCT) that was closely matched to rCT. Three DIR algorithms (Demons, B-Spline, and intensity-based) were applied to these ten virtual phantoms. The images, ROIs, and doses were mapped from pCT to srCT using the DVFs computed by these three DIRs and compared to those mapped using the reference DVF. Results: The average Dice coefficients for selected ROIs were from 0.85 to 0.96 for Demons, from 0.86 to 0.97 for intensity-based, and from 0.76 to 0.95 for B-Spline. The average Hausdorff distances for selected ROIs were from 2.2 to 5.4 mm for Demons, from 2.3 to 6.8 mm for intensity-based, and from 2.4 to 11.4 mm for B-Spline. The average absolute dose errors for selected ROIs were from 0.2 to 0.6 Gy for Demons, from 0.1 to 0.5 Gy for intensity-based, and from 0.5 to 1.5 Gy for B-Spline. Conclusion: Virtual phantoms were modeled after patients with lung cancer and were clinically relevant for adaptive radiotherapy treatment replanning. Virtual phantoms with known DVFs serve as references and can provide a fair comparison when evaluating different DIRs. Demons and intensity-based DIRs were shown to have smaller geometric and dosimetric uncertainties than B-Spline. 
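The geometric metrics used in this evaluation, the Dice coefficient and the Hausdorff distance, can be computed as follows. This is a minimal sketch assuming binary masks and explicit point sets rather than the meshes and DVFs used in the study:

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient of two boolean masks:
    2 * |A ∩ B| / (|A| + |B|)."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())

def hausdorff(points_a, points_b):
    """Symmetric Hausdorff distance between two point sets (N x dim arrays):
    the largest distance from any point in one set to the other set."""
    d = np.linalg.norm(points_a[:, None, :] - points_b[None, :, :], axis=2)
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```

In practice the point sets would be sampled from the ROI surfaces mapped by each DIR algorithm and by the reference DVF.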

  3. Non-stoquastic Hamiltonians in quantum annealing via geometric phases

    Science.gov (United States)

    Vinci, Walter; Lidar, Daniel A.

    2017-09-01

    We argue that a complete description of quantum annealing implemented with continuous variables must take into account the non-adiabatic Aharonov-Anandan geometric phase that arises when the system Hamiltonian changes during the anneal. We show that this geometric effect leads to the appearance of non-stoquasticity in the effective quantum Ising Hamiltonians that are typically used to describe quantum annealing with flux qubits. We explicitly demonstrate the effect of this geometric non-stoquasticity when quantum annealing is performed with a system of one and two coupled flux qubits. The realization of non-stoquastic Hamiltonians has important implications from a computational complexity perspective, since it is believed that in many cases quantum annealing with stoquastic Hamiltonians can be efficiently simulated via classical algorithms such as Quantum Monte Carlo. It is well known that the direct implementation of non-stoquastic Hamiltonians with flux qubits is particularly challenging. Our results suggest an alternative path for the implementation of non-stoquasticity via geometric phases that can be exploited for computational purposes.

  4. Full-waveform data for building roof step edge localization

    Science.gov (United States)

    Słota, Małgorzata

    2015-08-01

    Airborne laser scanning data represent flat or gently sloped areas very well; to date, however, accurate breakline detection has been the main drawback of the technique. This issue becomes particularly important when modeling buildings, where accuracy higher than the footprint size is often required. This article covers several issues related to full-waveform data registered on building step edges. First, a full-waveform data simulator was developed and is presented in this paper. Second, the article provides a full description of the changes in echo amplitude, echo width and returned power caused by the presence of edges within the laser footprint. Additionally, two important properties of step edge echoes, peak shift and echo asymmetry, were noted and described. It was shown that these properties lead to incorrect echo positioning along the laser center line and can significantly reduce the accuracy of edge points. Because all points are aligned with the center of the beam regardless of the actual target position within the beam footprint, step edge points require geometric corrections. This article presents a novel algorithm for the refinement of step edge points. The main distinguishing advantage of the developed algorithm is that no additional data, such as emitted signal parameters, beam divergence, approximate edge geometry or scanning settings, are required. The proposed algorithm works only on georeferenced profiles of reflected laser energy. Another major advantage is the simplicity of the calculation, which allows for very efficient data processing. Additionally, the developed method of point correction allows for the accurate determination of points lying on edges and for edge point densification. For these reasons, fully automatic localization of building roof step edges from full-waveform LiDAR data, with accuracy higher than the size of the LiDAR footprint, is feasible.

  5. A method of the sensitivity analysis of build-up and decay of actinides

    International Nuclear Information System (INIS)

    Mitani, Hiroshi; Koyama, Kinji; Kuroi, Hideo

    1977-07-01

    To perform sensitivity analysis of the build-up and decay of actinides, the mathematical methods related to this problem have been investigated in detail. The application of the time-dependent perturbation technique and the Bateman method to sensitivity analysis is mainly studied. For this purpose, a basic equation and its adjoint equation for the build-up and decay of actinides are systematically solved by introducing Laplace and modified Laplace transforms and their convolution theorems. The mathematical method of sensitivity analysis is then formulated with the above technique, and its physical significance is also discussed. Finally, the application of the eigenvalue method, by which sensitivity coefficients can be calculated directly, is investigated. (auth.)
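As a minimal illustration of the Bateman solution on which such a sensitivity formulation rests, the snippet below gives the closed form for a two-member decay chain and a finite-difference sensitivity of the daughter population to the parent decay constant. The report's perturbation formalism is analytic; the numerical derivative here is only a stand-in:

```python
import math

def bateman_daughter(n1_0, lam1, lam2, t):
    """Number of daughter atoms at time t for the chain 1 -> 2 -> (stable),
    starting from n1_0 parent atoms and no daughters (Bateman solution),
    assuming lam1 != lam2."""
    return n1_0 * lam1 / (lam2 - lam1) * (math.exp(-lam1 * t) - math.exp(-lam2 * t))

def sensitivity_to_lam1(n1_0, lam1, lam2, t, h=1e-6):
    """Central finite-difference sensitivity of the daughter population
    to lam1 -- a numerical stand-in for the analytic perturbation-based
    coefficients developed in the report."""
    return (bateman_daughter(n1_0, lam1 + h, lam2, t)
            - bateman_daughter(n1_0, lam1 - h, lam2, t)) / (2 * h)
```

For longer chains the same idea applies term by term, or via the matrix exponential of the decay/transmutation matrix.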

  6. Online measurement for geometrical parameters of wheel set based on structure light and CUDA parallel processing

    Science.gov (United States)

    Wu, Kaihua; Shao, Zhencheng; Chen, Nian; Wang, Wenjie

    2018-01-01

    The wearing degree of the wheel set tread is one of the main factors that influence the safety and stability of a running train. The geometrical parameters of interest are mainly flange thickness and flange height. A line-structured laser light was projected onto the wheel tread surface, and the geometrical parameters were deduced from the profile image. An online image acquisition system was designed based on asynchronous reset of the CCD and a CUDA parallel processing unit, with image acquisition performed in hardware interrupt mode. A high-efficiency parallel segmentation algorithm based on CUDA is proposed. The algorithm first divides the image into smaller squares, then extracts the squares belonging to the target by a fusion of the k-means and STING clustering image segmentation algorithms. Segmentation time is less than 0.97 ms, a considerable acceleration ratio compared with serial calculation on the CPU, which greatly improves the real-time image processing capacity. When a wheel set is running at limited speed, the system, placed alongside the railway line, can measure the geometrical parameters automatically. The maximum measuring speed is 120 km/h.
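The clustering step can be illustrated with a plain sequential k-means on pixel intensities. The paper's actual algorithm fuses k-means with STING and runs in parallel on CUDA, so the sketch below is a simplified sequential stand-in:

```python
import numpy as np

def kmeans_1d(values, k=2, iters=20, seed=0):
    """Plain k-means on scalar intensities; returns (centers, labels).
    A sequential sketch of the clustering step only -- the paper's method
    fuses k-means with STING and parallelizes it on the GPU."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(values, size=k, replace=False).astype(float)
    for _ in range(iters):
        # Assign each value to the nearest center, then recompute centers.
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return centers, labels
```

On a bimodal intensity histogram (target squares vs. background) the two centers separate the laser profile from the rest of the image.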

  7. Geometric multipartite entanglement measures

    International Nuclear Information System (INIS)

    Paz-Silva, Gerardo A.; Reina, John H.

    2007-01-01

    Within the framework of constructions for quantifying entanglement, we build a natural scenario for the assembly of multipartite entanglement measures based on Hopf bundle-like mappings obtained through Clifford algebra representations. Then, given the non-factorizability of an arbitrary two-qubit density matrix, we give an alternate quantity that allows the construction of two types of entanglement measures based on their arithmetical and geometrical averages over all pairs of qubits in a register of size N, and thus fully characterize its degree and type of entanglement. We find that such an arithmetical average is both additive and strongly superadditive.

  8. Multiobjective optimization design of green building envelope material using a non-dominated sorting genetic algorithm

    International Nuclear Information System (INIS)

    Yang, Ming-Der; Lin, Min-Der; Lin, Yu-Hao; Tsai, Kang-Ting

    2017-01-01

    Highlights: • An effective envelope energy performance model (BEM) was developed. • We integrated NSGA-II with the BEM to optimize the green building envelope. • A tradeoff plan of green building design for three conflicting objectives was obtained. • The optimal envelope design efficiently reduced the construction cost of green building. - Abstract: To realize the goal of environmental sustainability, improving energy efficiency in buildings is a major priority worldwide. However, the practical design of green building envelopes for energy conservation is a highly complex optimization problem, and architects must make multiobjective decisions. In practice, methods such as multicriteria analyses, which entail choosing among a possibly large (but in practice limited) set of alternatives, are commonly employed. This study investigated the feasibility of applying a multiobjective optimal model on building envelope design (MOPBEM), which involved integrating a building envelope energy performance model with a multiobjective optimizer. The MOPBEM was established to provide a reference for green designs. A nondominated sorting genetic algorithm-II (NSGA-II) was used to obtain a tradeoff design set among three conflicting objectives, namely minimizing the envelope construction cost (ENVCOST), minimizing the envelope energy performance (ENVLOAD), and maximizing the window opening rate (WOPR). A real office building case was designed using the MOPBEM to identify its potential strengths and weaknesses. The results showed that a high ENVCOST must be expended to simultaneously satisfy a low ENVLOAD and a high WOPR. Various designs exhibited obvious cost reductions compared with the original architects' manual design, demonstrating the practicability of the MOPBEM.
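The core of NSGA-II is non-dominated sorting. A minimal sketch of extracting the first Pareto front over objective vectors might look like the following, with all objectives minimized (a maximized objective such as WOPR is negated first); the objective values themselves are placeholders, not outputs of the MOPBEM:

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly
    better in at least one (all objectives are minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset: the first front of NSGA-II's
    non-dominated sorting."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]
```

NSGA-II then repeats this sorting to build successive fronts and uses crowding distance within each front to maintain diversity.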

  9. Algorithms for Planar Graphs and Graphs in Metric Spaces

    DEFF Research Database (Denmark)

    Wulff-Nilsen, Christian

    structural properties that can be exploited. For instance, a road network or a wire layout on a microchip is typically (near-)planar and distances in the network are often defined w.r.t. the Euclidean or the rectilinear metric. Specialized algorithms that take advantage of such properties are often orders of magnitude faster than the corresponding algorithms for general graphs. The first and main part of this thesis focuses on the development of efficient planar graph algorithms. The most important contributions include a faster single-source shortest path algorithm, a distance oracle with subquadratic... for geometric graphs and graphs embedded in metric spaces. Roughly speaking, the stretch factor is a real value expressing how well a (geo-)metric graph approximates the underlying complete graph w.r.t. distances. We give improved algorithms for computing the stretch factor of a given graph and for augmenting...

  10. Development changes of geometric layout product, developed by means of computer aided design

    Directory of Open Access Journals (Sweden)

    С.Г. Кєворков

    2007-01-01

    Full Text Available This paper contains the results of developing a methodology for forming modifications in a product geometrical mockup made by means of a CAD system. The process of changing CAD data (assembly structures and details) and its influence on the product structure is considered. The algorithm for creating assembly versions, which builds a product structure with a given serial number, is analyzed. Algorithms for creating the CAD user environment, for restricting CAD objects and for cancelling CAD objects are developed.

  11. Geometric properties of Banach spaces and nonlinear iterations

    CERN Document Server

    Chidume, Charles

    2009-01-01

    Nonlinear functional analysis and applications is an area of study that has provided fascination for many mathematicians across the world. This monograph delves specifically into the topic of the geometric properties of Banach spaces and nonlinear iterations, a subject of extensive research over the past thirty years. Chapters 1 to 5 develop materials on convexity and smoothness of Banach spaces, associated moduli and connections with duality maps. Key results obtained are summarized at the end of each chapter for easy reference. Chapters 6 to 23 deal with an in-depth, comprehensive and up-to-date coverage of the main ideas, concepts and results on iterative algorithms for the approximation of fixed points of nonlinear nonexpansive and pseudo-contractive-type mappings. This includes detailed workings on solutions of variational inequality problems, solutions of Hammerstein integral equations, and common fixed points (and common zeros) of families of nonlinear mappings. Carefully referenced and full of recent,...

  12. A Brightness-Referenced Star Identification Algorithm for APS Star Trackers

    Science.gov (United States)

    Zhang, Peng; Zhao, Qile; Liu, Jingnan; Liu, Ning

    2014-01-01

    Star trackers are currently the most accurate spacecraft attitude sensors. As a result, they are widely used in remote sensing satellites. Since traditional charge-coupled device (CCD)-based star trackers have a limited sensitivity range and dynamic range, the matching process for a star tracker is typically not very sensitive to star brightness. For active pixel sensor (APS) star trackers, the intensity of an imaged star is valuable information that can be used in the star identification process. In this paper an improved brightness-referenced star identification algorithm is presented. This algorithm utilizes the k-vector search theory and uses the imaged stars' intensities to narrow the search scope and therefore increase the efficiency of the matching process. Based on different imaging conditions (slew, bright bodies, etc.) the developed matching algorithm operates in one of two identification modes: a three-star mode and a four-star mode. If reference bright stars (stars brighter than magnitude three) show up, the algorithm runs in the three-star mode and efficiency is further improved. The proposed method was compared with two other distinctive methods: the pyramid and geometric voting methods. All three methods were tested with simulation data and actual in-orbit data from the APS star tracker of ZY-3. Using a catalog composed of 1500 stars, the results show that without false stars the efficiency of this new method is 4∼5 times that of the pyramid method and 35∼37 times that of the geometric method. PMID:25299950
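The search that k-vector theory accelerates is a range query over a sorted catalog of inter-star angular distances. The sketch below illustrates that query using the standard-library `bisect` module as a simplified stand-in; a real k-vector implementation replaces the binary search with a precomputed index array and a linear fit, and the catalog values here are made up for illustration:

```python
import bisect

def build_catalog(angles):
    """Sorted catalog of inter-star angular distances (radians).
    In a true k-vector implementation, a precomputed index array over this
    sorted list replaces the binary searches used below."""
    return sorted(angles)

def range_query(catalog, measured, tol):
    """All catalog entries within +/- tol of a measured angular distance;
    candidate star pairs are then filtered further, e.g. by brightness."""
    lo = bisect.bisect_left(catalog, measured - tol)
    hi = bisect.bisect_right(catalog, measured + tol)
    return catalog[lo:hi]
```

The brightness-referenced refinement in the paper shrinks the candidate list returned by such queries before the three- or four-star geometric verification.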

  13. A scalable geometric multigrid solver for nonsymmetric elliptic systems with application to variable-density flows

    Science.gov (United States)

    Esmaily, M.; Jofre, L.; Mani, A.; Iaccarino, G.

    2018-03-01

    A geometric multigrid algorithm is introduced for solving nonsymmetric linear systems resulting from the discretization of the variable density Navier-Stokes equations on nonuniform structured rectilinear grids and high-Reynolds number flows. The restriction operation is defined such that the resulting system on the coarser grids is symmetric, thereby allowing for the use of efficient smoother algorithms. To achieve an optimal rate of convergence, the sequence of interpolation and restriction operations are determined through a dynamic procedure. A parallel partitioning strategy is introduced to minimize communication while maintaining the load balance between all processors. To test the proposed algorithm, we consider two cases: 1) homogeneous isotropic turbulence discretized on uniform grids and 2) turbulent duct flow discretized on stretched grids. Testing the algorithm on systems with up to a billion unknowns shows that the cost varies linearly with the number of unknowns. This O (N) behavior confirms the robustness of the proposed multigrid method regarding ill-conditioning of large systems characteristic of multiscale high-Reynolds number turbulent flows. The robustness of our method to density variations is established by considering cases where density varies sharply in space by a factor of up to 104, showing its applicability to two-phase flow problems. Strong and weak scalability studies are carried out, employing up to 30,000 processors, to examine the parallel performance of our implementation. Excellent scalability of our solver is shown for a granularity as low as 104 to 105 unknowns per processor. At its tested peak throughput, it solves approximately 4 billion unknowns per second employing over 16,000 processors with a parallel efficiency higher than 50%.
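The smooth-restrict-solve-prolong-smooth structure of a geometric multigrid cycle can be illustrated on the symmetric 1D Poisson problem. The paper's nonsymmetric variable-density operator, symmetrizing restriction, and dynamically chosen transfer operators are considerably more involved, so the following is only a structural sketch:

```python
import numpy as np

def jacobi(u, f, h, sweeps=3, omega=2/3):
    """Weighted-Jacobi smoothing for -u'' = f with zero Dirichlet BCs
    (updates u in place and returns it)."""
    for _ in range(sweeps):
        u[1:-1] = (1 - omega) * u[1:-1] + omega * 0.5 * (u[:-2] + u[2:] + h*h*f[1:-1])
    return u

def two_grid(u, f, h):
    """One two-grid cycle: pre-smooth, restrict the residual, approximately
    solve the coarse error equation, prolong the correction, post-smooth."""
    u = jacobi(u, f, h)
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] + (u[:-2] - 2*u[1:-1] + u[2:]) / (h*h)  # residual of -u'' = f
    rc = r[::2]                                               # injection restriction
    ec = jacobi(np.zeros_like(rc), rc, 2*h, sweeps=50)        # coarse "solve" by smoothing
    e = np.interp(np.arange(len(u)), np.arange(len(u))[::2], ec)  # linear prolongation
    return jacobi(u + e, f, h)
```

A full multigrid solver recurses instead of smoothing on the coarse grid, and production codes use full-weighting restriction rather than injection; both simplifications are made here for brevity.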

  14. Application of genetic algorithm in modeling on-wafer inductors for up to 110 Ghz

    Science.gov (United States)

    Liu, Nianhong; Fu, Jun; Liu, Hui; Cui, Wenpu; Liu, Zhihong; Liu, Linlin; Zhou, Wei; Wang, Quan; Guo, Ao

    2018-05-01

    In this work, the genetic algorithm has been introduced into parameter extraction for on-wafer inductors for up to 110 GHz millimeter-wave operation, and nine independent parameters of the equivalent circuit model are optimized together. With the genetic algorithm, the model with the optimized parameters gives better fitting accuracy than the preliminary parameters without optimization. In particular, the fitting accuracy of the Q value achieves a significant improvement after the optimization.

  15. Spatial Precision in Magnetic Resonance Imaging–Guided Radiation Therapy: The Role of Geometric Distortion

    Energy Technology Data Exchange (ETDEWEB)

    Weygand, Joseph, E-mail: jw2899@columbia.edu [Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); The University of Texas Graduate School of Biomedical Sciences at Houston, Houston, Texas (United States); Fuller, Clifton David [The University of Texas Graduate School of Biomedical Sciences at Houston, Houston, Texas (United States); Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Ibbott, Geoffrey S. [Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); The University of Texas Graduate School of Biomedical Sciences at Houston, Houston, Texas (United States); Mohamed, Abdallah S.R. [Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Department of Clinical Oncology and Nuclear Medicine, Alexandria University, Alexandria (Egypt); Ding, Yao [Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Yang, Jinzhong [Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); The University of Texas Graduate School of Biomedical Sciences at Houston, Houston, Texas (United States); Hwang, Ken-Pin [Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Wang, Jihong [Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); The University of Texas Graduate School of Biomedical Sciences at Houston, Houston, Texas (United States)

    2016-07-15

    Because magnetic resonance imaging–guided radiation therapy (MRIgRT) offers exquisite soft tissue contrast and the ability to image tissues in arbitrary planes, the interest in this technology has increased dramatically in recent years. However, intrinsic geometric distortion stemming from both the system hardware and the magnetic properties of the patient affects MR images and compromises the spatial integrity of MRI-based radiation treatment planning, given that for real-time MRIgRT, precision within 2 mm is desired. In this article, we discuss the causes of geometric distortion, describe some well-known distortion correction algorithms, and review geometric distortion measurements from 12 studies, while taking into account relevant imaging parameters. Eleven of the studies reported phantom measurements quantifying system-dependent geometric distortion, while 2 studies reported simulation data quantifying magnetic susceptibility–induced geometric distortion. Of the 11 studies investigating system-dependent geometric distortion, 5 reported maximum measurements less than 2 mm. The simulation studies demonstrated that magnetic susceptibility–induced distortion is typically smaller than system-dependent distortion but still nonnegligible, with maximum distortion ranging from 2.1 to 2.6 mm at a field strength of 1.5 T. As expected, anatomic landmarks containing interfaces between air and soft tissue had the largest distortions. The evidence indicates that geometric distortion reduces the spatial integrity of MRI-based radiation treatment planning and likely diminishes the efficacy of MRIgRT. Better phantom measurement techniques and more effective distortion correction algorithms are needed to achieve the desired spatial precision.

  16. Spatial Precision in Magnetic Resonance Imaging–Guided Radiation Therapy: The Role of Geometric Distortion

    International Nuclear Information System (INIS)

    Weygand, Joseph; Fuller, Clifton David; Ibbott, Geoffrey S.; Mohamed, Abdallah S.R.; Ding, Yao; Yang, Jinzhong; Hwang, Ken-Pin; Wang, Jihong

    2016-01-01

    Because magnetic resonance imaging–guided radiation therapy (MRIgRT) offers exquisite soft tissue contrast and the ability to image tissues in arbitrary planes, the interest in this technology has increased dramatically in recent years. However, intrinsic geometric distortion stemming from both the system hardware and the magnetic properties of the patient affects MR images and compromises the spatial integrity of MRI-based radiation treatment planning, given that for real-time MRIgRT, precision within 2 mm is desired. In this article, we discuss the causes of geometric distortion, describe some well-known distortion correction algorithms, and review geometric distortion measurements from 12 studies, while taking into account relevant imaging parameters. Eleven of the studies reported phantom measurements quantifying system-dependent geometric distortion, while 2 studies reported simulation data quantifying magnetic susceptibility–induced geometric distortion. Of the 11 studies investigating system-dependent geometric distortion, 5 reported maximum measurements less than 2 mm. The simulation studies demonstrated that magnetic susceptibility–induced distortion is typically smaller than system-dependent distortion but still nonnegligible, with maximum distortion ranging from 2.1 to 2.6 mm at a field strength of 1.5 T. As expected, anatomic landmarks containing interfaces between air and soft tissue had the largest distortions. The evidence indicates that geometric distortion reduces the spatial integrity of MRI-based radiation treatment planning and likely diminishes the efficacy of MRIgRT. Better phantom measurement techniques and more effective distortion correction algorithms are needed to achieve the desired spatial precision.

  17. Chitosan microspheres with an extracellular matrix-mimicking nanofibrous structure as cell-carrier building blocks for bottom-up cartilage tissue engineering

    Science.gov (United States)

    Zhou, Yong; Gao, Huai-Ling; Shen, Li-Li; Pan, Zhao; Mao, Li-Bo; Wu, Tao; He, Jia-Cai; Zou, Duo-Hong; Zhang, Zhi-Yuan; Yu, Shu-Hong

    2015-12-01

    Scaffolds for tissue engineering (TE) which closely mimic the physicochemical properties of the natural extracellular matrix (ECM) have been proven to advantageously favor cell attachment, proliferation, migration and new tissue formation. Recently, as a valuable alternative, a bottom-up TE approach utilizing cell-loaded micrometer-scale modular components as building blocks to reconstruct a new tissue in vitro or in vivo has been proved to demonstrate a number of desirable advantages compared with the traditional bulk scaffold based top-down TE approach. Nevertheless, micro-components with an ECM-mimicking nanofibrous structure are still very scarce and highly desirable. Chitosan (CS), an accessible natural polymer, has demonstrated appealing intrinsic properties and promising application potential for TE, especially the cartilage tissue regeneration. According to this background, we report here the fabrication of chitosan microspheres with an ECM-mimicking nanofibrous structure for the first time based on a physical gelation process. By combining this physical fabrication procedure with microfluidic technology, uniform CS microspheres (CMS) with controlled nanofibrous microstructure and tunable sizes can be facilely obtained. Especially, no potentially toxic or denaturizing chemical crosslinking agent was introduced into the products. Notably, in vitro chondrocyte culture tests revealed that enhanced cell attachment and proliferation were realized, and a macroscopic 3D geometrically shaped cartilage-like composite can be easily constructed with the nanofibrous CMS (NCMS) and chondrocytes, which demonstrate significant application potential of NCMS as the bottom-up cell-carrier components for cartilage tissue engineering.

  18. The effect of plasma minor-radius expansion in the current build-up phase of a large tokamak

    International Nuclear Information System (INIS)

    Kobayashi, Tomofumi; Tazima, Teruhiko; Tani, Keiji; Tamura, Sanae

    1977-03-01

    A plasma simulation code has been developed to study the plasma current build-up process in JT-60. The plasma simulation uses a model which represents well the overall plasma behavior of present-day tokamaks. The external electric circuit is taken into consideration in the simulation calculation. Emphasis is placed on the simulation of the minor-radius expansion of the plasma and the behavior of neutral particles in the plasma during current build-up. A calculation with typical parameters of JT-60 shows a weak skin distribution in the current density and the electron temperature if the minor radius of the plasma expands with build-up of the plasma current. (auth.)

  19. Does the QCD vacuum build up a colour chemical potential dynamically?

    International Nuclear Information System (INIS)

    Sailer, K.; Greiner, W.

    1998-01-01

    The one-loop effective theory is found for QCD assuming an overcritical homogeneous gluon vector potential background that corresponds to a non-vanishing colour chemical potential. It is found that the vacuum is unstable against building up a non-vanishing colour chemical potential for a sufficiently large number of flavours. (author)

  20. Build-up Factor Calculation for Ordinary Concrete, Baryte Concrete and Blast-furnace Sludge Concrete as γ Radiation Shielding

    International Nuclear Information System (INIS)

    Isman MT; Elisabeth Supriatni; Tochrul Binowo

    2002-01-01

    Calculations of the build-up factor of ordinary concrete, baryte concrete and blast-furnace sludge concrete have been carried out. The calculations were based on dose rate measurements of a Cs-137 source before and after passing through the shielding. The investigated variables were the concrete type, the thickness of the concrete and the relative position of the concrete. The concrete types were ordinary concrete, baryte concrete and blast-furnace sludge concrete. The thicknesses were 6, 12, 18, 24, 30 and 36 cm. The relative positions were close to the source and close to the detector. The results showed that the concrete type and position did not have a significant effect on the build-up factor, while the concrete thickness (r) and the attenuation coefficient (μ) did influence it: the higher the μr value, the higher the build-up factor. (author)
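The build-up factor implied by such measurements is the ratio of the measured dose rate behind the shield to the narrow-beam (uncollided) prediction. A one-line sketch, using the standard symbols rather than values from the paper:

```python
import math

def buildup_factor(dose_with_shield, dose_unshielded, mu, r):
    """Build-up factor B = D_measured / (D_0 * exp(-mu * r)): the ratio of
    the dose rate measured behind a shield of thickness r to the narrow-beam
    attenuation prediction; B >= 1 because of scattered radiation."""
    return dose_with_shield / (dose_unshielded * math.exp(-mu * r))
```

The observed μr dependence follows directly: thicker shields accumulate more scattered photons relative to the uncollided beam, so B grows with μr.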

  1. Non-convex polygons clustering algorithm

    Directory of Open Access Journals (Sweden)

    Kruglikov Alexey

    2016-01-01

    A clustering algorithm is proposed, to be used as a preliminary step in motion planning. It is tightly coupled to the applied problem statement, i.e. it uses parameters meaningful only with respect to that problem. Using the geometric properties of the polygons for clustering allows for better calculation times than general-purpose algorithms. As a result, a special form of map optimized for quick motion planning is constructed.

  2. A practical guide to experimental geometrical optics

    CERN Document Server

    Garbovskiy, Yuriy A

    2017-01-01

    A concise, yet deep introduction to experimental, geometrical optics, this book begins with fundamental concepts and then develops the practical skills and research techniques routinely used in modern laboratories. Suitable for students, researchers and optical engineers, this accessible text teaches readers how to build their own optical laboratory and to design and perform optical experiments. It uses a hands-on approach which fills a gap between theory-based textbooks and laboratory manuals, allowing the reader to develop their practical skills in this interdisciplinary field, and also explores the ways in which this knowledge can be applied to the design and production of commercial optical devices. Including supplementary online resources to help readers track and evaluate their experimental results, this text is the ideal companion for anyone with a practical interest in experimental geometrical optics.

  3. Parallel optimization algorithm for drone inspection in the building industry

    Science.gov (United States)

    Walczyński, Maciej; BoŻejko, Wojciech; Skorupka, Dariusz

    2017-07-01

    In this paper we present an approach to the Vehicle Routing Problem with Drones (VRPD) in the case of building inspection from the air. In an autonomous inspection process there is a need to determine the optimal route for the inspection drone. This is an especially important issue because of the very limited flight time of modern multicopters. The method of determining solutions for the Traveling Salesman Problem (TSP) described in this paper is based on a Parallel Evolutionary Algorithm (ParEA) with cooperative and independent approaches to communication between threads. This method, first described by Bożejko and Wodecki [1], is based on the observation that if some elements occupy the same positions in a number of permutations which are local minima, then those elements will occupy the same positions in the optimal solution of the TSP. Numerical experiments were made on the BEM computational cluster using the MPI library.
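
    The Bożejko-Wodecki observation the method builds on can be illustrated with a small helper (a generic sketch, not the paper's code): given several locally optimal tours represented as permutations, collect the positions on which they all agree, so those elements can be fixed when constructing new candidate tours.

```python
def common_positions(local_minima):
    """Return {position: city} for every position at which all given
    locally optimal tours (permutations of cities) place the same city."""
    first = local_minima[0]
    return {i: city for i, city in enumerate(first)
            if all(tour[i] == city for tour in local_minima[1:])}

# Three hypothetical locally optimal tours over cities 0..4:
tours = [[0, 3, 1, 2, 4],
         [0, 2, 1, 3, 4],
         [0, 1, 3, 2, 4]]
print(common_positions(tours))  # {0: 0, 4: 4}
```

    Positions 0 and 4 agree across all three tours, so cities 0 and 4 would be pinned there while the evolutionary threads explore the remaining positions.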

  4. Group leaders optimization algorithm

    Science.gov (United States)

    Daskin, Anmer; Kais, Sabre

    2011-03-01

    We present a new global optimization algorithm in which the influence of the leaders in social groups is used as an inspiration for an evolutionary technique designed around a group architecture. To demonstrate the efficiency of the method, a standard suite of single- and multi-dimensional optimization functions, along with the energies and geometric structures of Lennard-Jones clusters, are given, as well as the application of the algorithm to quantum circuit design problems. We show that, as an improvement over previous methods, the algorithm scales as N^2.5 for Lennard-Jones clusters of N particles. In addition, an efficient circuit design is shown for the two-qubit Grover search algorithm, a quantum algorithm providing quadratic speedup over its classical counterpart.

  5. Impact of eccentricity build-up and graveyard disposal Strategies on MEO navigation constellations

    Science.gov (United States)

    Radtke, Jonas; Domínguez-González, Raúl; Flegel, Sven K.; Sánchez-Ortiz, Noelia; Merz, Klaus

    2015-12-01

    With two constellations currently in or close to the build-up phase, in a few years the Medium Earth Orbit (MEO) region will be populated with four complete navigation systems at relatively close orbital altitudes: the American GPS, the Russian GLONASS, the European Galileo, and the Chinese BeiDou. To guarantee an appropriate visibility of constellation satellites from Earth, these constellations rely on certain defined orbits. Both the repeat pattern, which is basically defined by the semi-major axis and inclination, and the orbital planes, which are defined by the right ascension of the ascending node, are the determining values. To avoid overcrowding of the region of interest, the disposal of satellites after their end of life is recommended. However, for the MEO region, no internationally agreed mitigation guidelines exist. Because of their distance from Earth, ordinary disposal manoeuvres leading to a direct or delayed re-entry due to atmospheric drag are not feasible: the fuel masses needed for such manoeuvres are far above reasonable limits and available fuel budgets. Thus, additional approaches have to be applied. In general, two options exist: disposal to graveyard orbits or disposal to eccentricity build-up orbits. In the study performed, the key criterion for the graveyard strategy is that the disposed spacecraft must keep a safe minimum distance from the altitude of the active constellation on a long-term time scale of up to 200 years. This constraint imposes stringent requirements on the stability of the graveyard orbit. Similar disposals are also performed for high-LEO satellites and disposed GEO payloads. The eccentricity build-up strategy, on the other hand, uses resonant effects between the Earth's geopotential, the Sun and the Moon. Depending on the initial conditions, these can cause a large eccentricity build-up, which can finally lead to a re-entry of the satellite. 
In this paper, the effects of applying either the first or

  6. Geometric Nonlinear Computation of Thin Rods and Shells

    Science.gov (United States)

    Grinspun, Eitan

    2011-03-01

    We develop simple, fast numerical codes for the dynamics of thin elastic rods and shells, by exploiting the connection between physics, geometry, and computation. By building a discrete mechanical picture from the ground up, mimicking the axioms, structures, and symmetries of the smooth setting, we produce numerical codes that not only are consistent in a classical sense, but also reproduce qualitative, characteristic behavior of a physical system, such as exact preservation of conservation laws, even for very coarse discretizations. As two recent examples, we present discrete computational models of elastic rods and shells, with straightforward extensions to the viscous setting. Even at coarse discretizations, the resulting simulations capture characteristic geometric instabilities. The numerical codes we describe are used in experimental mechanics, cinema, and consumer software products. This is joint work with Miklós Bergou, Basile Audoly, Max Wardetzky, and Etienne Vouga. This research is supported in part by the Sloan Foundation, the NSF, Adobe, Autodesk, Intel, the Walt Disney Company, and Weta Digital.

  7. Electron-Cloud Build-Up: Theory and Data

    International Nuclear Information System (INIS)

    Furman, M.A.

    2010-01-01

    We present a broad-brush survey of the phenomenology, history and importance of the electron-cloud effect (ECE). We briefly discuss the simulation techniques used to quantify the electron-cloud (EC) dynamics. Finally, we present in more detail an effective theory to describe the EC density build-up in terms of a few effective parameters. For further details, the reader is encouraged to refer to the proceedings of many prior workshops, either dedicated to EC or with significant EC contents, including the entire 'ECLOUD' series. In addition, the proceedings of the various flavors of Particle Accelerator Conferences contain a large number of EC-related publications. The ICFA Beam Dynamics Newsletter series contains one dedicated issue, and several occasional articles, on EC. An extensive reference database is the LHC website on EC.

  8. A Tool for Crowdsourced Building Information Modeling Through Low-Cost Range Camera: Preliminary Demonstration and Potential

    Science.gov (United States)

    Capocchiano, F.; Ravanelli, R.; Crespi, M.

    2017-11-01

    Within the construction sector, Building Information Models (BIMs) are used more and more, thanks to the several benefits that they offer in the design of new buildings and the management of existing ones. Frequently, however, BIMs are not available for already built constructions; at the same time, range camera technology nowadays provides a cheap, intuitive and effective tool for automatically collecting the 3D geometry of indoor environments. It is thus essential to find new strategies able to perform the first step of the scan-to-BIM process, extracting the geometrical information contained in the 3D models that are so easily collected by range cameras. In this work, a new algorithm to extract planimetries from 3D models of rooms acquired by means of a range camera is therefore presented. The algorithm was tested on two rooms, characterized by different shapes and dimensions, whose 3D models were captured with the Occipital Structure Sensor™. The preliminary results are promising: the developed algorithm is able to model effectively the 2D shape of the investigated rooms, with an accuracy level within the range of 5-10 cm. It can potentially be used by non-expert users in the first step of BIM generation, when the building geometry is reconstructed, for collecting crowdsourced indoor information in the frame of BIM Volunteered Geographic Information (VGI) generation.

  9. Geometrical-optics approximation of forward scattering by coated particles.

    Science.gov (United States)

    Xu, Feng; Cai, Xiaoshu; Ren, Kuanfang

    2004-03-20

    By means of geometrical optics we present an approximation algorithm with which to accelerate the computation of the scattering intensity distribution within a forward angular range (0-60 degrees) for coated particles illuminated by a collimated incident beam. Phases of emerging rays are exactly calculated to improve the approximation precision. This method proves effective for transparent and weakly absorbent particles with size parameters larger than 75, but fails to give good approximation results at scattering angles at which refracted rays are absent. When the absorption coefficient of a particle is greater than 0.01, the geometrical optics approximation is effective only for small forward angles, typically less than 10 degrees or so.

  10. Creating a three level building classification using topographic and address-based data for Manchester

    Science.gov (United States)

    Hussain, M.; Chen, D.

    2014-11-01

    Buildings, the basic unit of an urban landscape, host most of its socio-economic activities and play an important role in the creation of urban land-use patterns. The spatial arrangement of different building types creates varied urban land-use clusters, which can provide an insight into the relationships between social, economic, and living spaces. The classification of such urban clusters can help in policy-making and resource management. In many countries, including the UK, no national-level cadastral database containing information on individual building types exists in the public domain. In this paper, we present a framework for inferring the functional types of buildings based on the analysis of their form (e.g. geometrical properties such as area and perimeter, and layout) and spatial relationships from a large topographic and address-based GIS database. Machine learning algorithms, along with exploratory spatial analysis techniques, are used to create the classification rules. The classification is extended to two further levels based on the functions (use) of buildings derived from address-based data. The developed methodology was applied to the Manchester metropolitan area using the Ordnance Survey's MasterMap®, a large-scale topographic and address-based dataset available for the UK.

  11. The Power of Flexibility: Autonomous Agents That Conserve Energy in Commercial Buildings

    Science.gov (United States)

    Kwak, Jun-young

    Agent-based systems for energy conservation are now a growing area of research in multiagent systems, with applications ranging from energy management and control on the smart grid, to energy conservation in residential buildings, to energy generation and dynamic negotiations in distributed rural communities. Contributing to this area, my thesis presents new agent-based models and algorithms aiming to conserve energy in commercial buildings. More specifically, my thesis provides three sets of algorithmic contributions. First, I provide online predictive scheduling algorithms to handle massive numbers of meeting/event scheduling requests considering flexibility, a novel concept for capturing generic user constraints while optimizing the desired objective. Second, I present a novel BM-MDP (Bounded-parameter Multi-objective Markov Decision Problem) model and robust algorithms for multi-objective optimization under uncertainty, both at planning and execution time. The BM-MDP model and its robust algorithms are useful in (re)scheduling events to achieve energy efficiency in the presence of uncertainty over users' preferences. Third, when multiple users contribute to energy savings, the fair division of credit for such savings, to incentivize users for their energy-saving activities, arises as an important question. I appeal to cooperative game theory, and specifically to the concept of the Shapley value, for this fair division. Unfortunately, scaling up this Shapley value computation is a major hindrance in practice. Therefore, I present novel approximation algorithms to efficiently compute the Shapley value based on sampling and partitions and to speed up the characteristic function computation. These new models have not only advanced the state of the art in multiagent algorithms, but have actually been successfully integrated within agents dedicated to energy efficiency: SAVES, TESLA and THINC. SAVES focuses on the day-to-day energy consumption of individuals and
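
    The sampling idea behind such Shapley-value approximations can be sketched as follows. This is a generic Monte Carlo estimator over random player orderings, not the thesis's actual algorithm, and the "energy saved" value function is invented for illustration:

```python
import random

def shapley_sampled(players, value, samples=2000, seed=0):
    """Monte Carlo Shapley estimate: instead of averaging marginal
    contributions over all n! player orderings, sample random orderings
    and average each player's marginal contribution value(S+p) - value(S)."""
    rng = random.Random(seed)
    phi = {p: 0.0 for p in players}
    order = list(players)
    for _ in range(samples):
        rng.shuffle(order)
        coalition = set()
        for p in order:
            before = value(coalition)
            coalition.add(p)
            phi[p] += value(coalition) - before
    return {p: total / samples for p, total in phi.items()}

# Toy superadditive game (assumed): savings grow with cooperation.
players = ["A", "B", "C"]
value = lambda coalition: len(coalition) ** 2
est = shapley_sampled(players, value)
# By symmetry, each player's true Shapley value is 9 / 3 = 3.
print(est)
```

    The estimator is "efficient" by construction: every sampled ordering distributes exactly value(all players), so the estimates always sum to the full coalition's value.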

  12. Geometric modeling in the problem of ball bearing accuracy

    Science.gov (United States)

    Glukhov, V. I.; Pushkarev, V. V.; Khomchenko, V. G.

    2017-06-01

    The manufacturing quality of ball bearings is an urgent problem for the machine-building industry. The aim of the research is to improve the accuracy of the geometric specifications of bearings, based on an evidence-based systematic approach and a method for adequately modeling the size, location and form deviations of the rings and of assembled ball bearings. The present work addresses the problem of identifying the geometric specifications of bearings and studying these specifications. The deviation from the symmetry plane of the rings and of the bearing assembly, and the mounting width, are among these specifications. A systematic approach to normalizing the values and tolerances of the geometric specifications of ball bearings in coordinate systems will improve the quality of bearings by optimizing and minimizing the number of specifications. The introduction of this systematic approach into the international standards on rolling bearings guarantees a significant increase in the accuracy of bearings and in the quality of the products where they are applied.

  13. Geometric Modeling and Reasoning of Human-Centered Freeform Products

    CERN Document Server

    Wang, Charlie C L

    2013-01-01

    The recent trend in user-customized product design requires the shape of products to be automatically adjusted according to the human body’s shape, so that people will feel more comfortable when wearing these products.  Geometric approaches can be used to design the freeform shape of products worn by people, which can greatly improve the efficiency of design processes in various industries involving customized products (e.g., garment design, toy design, jewel design, shoe design, and design of medical devices, etc.). These products are usually composed of very complex geometric shapes (represented by free-form surfaces), and are not driven by a parameter table but a digital human model with free-form shapes or part of human bodies (e.g., wrist, foot, and head models).   Geometric Modeling and Reasoning of Human-Centered Freeform Products introduces the algorithms of human body reconstruction, freeform product modeling, constraining and reconstructing freeform products, and shape optimization for improving...

  14. The Build-Up Course of Visuo-Motor and Audio-Motor Temporal Recalibration

    Directory of Open Access Journals (Sweden)

    Yoshimori Sugano

    2011-10-01

    Sensorimotor timing is recalibrated after a brief exposure to a delayed feedback of voluntary actions (temporal recalibration effect, TRE; Heron et al., 2009; Stetson et al., 2006; Sugano et al., 2010). We introduce a new paradigm, namely 'synchronous tapping' (ST), which allows us to investigate how the TRE builds up during adaptation. In each experimental trial, participants were repeatedly exposed to a constant lag (∼150 ms) between their voluntary action (pressing a mouse) and a feedback stimulus (a visual flash / an auditory click) 10 times. Immediately after that, they performed an ST task with the same stimulus as a pace signal (7 flashes / clicks). A subjective 'no-delay' condition (∼50 ms) served as control. The TRE manifested itself as a change in the tap-stimulus asynchrony that compensated for the exposed lag (e.g., after lag adaptation, the tap preceded the stimulus more than in the control condition) and built up quickly (∼3-6 trials, ∼23-45 sec) in both the visuo-motor and audio-motor domains. The audio-motor TRE was bigger and built up faster than the visuo-motor one. To conclude, the TRE is comparable between the visuo-motor and audio-motor domains, though they differ slightly in size and build-up rate.

  15. de Casteljau's Algorithm Revisited

    DEFF Research Database (Denmark)

    Gravesen, Jens

    1998-01-01

    It is demonstrated how all the basic properties of Bezier curves can be derived swiftly and efficiently without any reference to the Bernstein polynomials and essentially with only geometric arguments. This is achieved by viewing one step in de Casteljau's algorithm as an operator (the de Casteljau...
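
    The single step referred to above is just linear interpolation between consecutive control points; iterating it until one point remains evaluates the Bezier curve. A minimal sketch:

```python
def de_casteljau(control_points, t):
    """Evaluate a Bezier curve at parameter t by repeated linear
    interpolation (one de Casteljau step per pass) between consecutive
    control points, until a single point -- the curve point -- remains."""
    pts = [tuple(p) for p in control_points]
    while len(pts) > 1:
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]

# Quadratic Bezier with control points (0,0), (1,2), (2,0):
print(de_casteljau([(0, 0), (1, 2), (2, 0)], 0.5))  # (1.0, 1.0)
```

    The geometric reading is exactly the one the paper exploits: each pass is an affine operator on the control polygon, and no Bernstein polynomials are needed.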

  16. A novel algorithm for fast grasping of unknown objects using C-shape configuration

    Science.gov (United States)

    Lei, Qujiang; Chen, Guangming; Meijer, Jonathan; Wisse, Martijn

    2018-02-01

    Increasing grasping efficiency is very important for robots grasping unknown objects, especially in unfamiliar environments. To achieve this, a new algorithm is proposed based on a C-shape configuration. Specifically, the geometric model of the under-actuated gripper used is approximated as a C-shape. To obtain an appropriate graspable position, this C-shape configuration is fitted to the geometric model of an unknown object, which is constructed from a single-view partial point cloud. To examine the algorithm in simulation, a comparison of commonly used motion planners is made. The motion planner with the highest number of solved runs, the lowest computing time and the shortest path length is chosen to execute the grasps found by the grasping algorithm. The simulation results demonstrate that excellent grasping efficiency is achieved by adopting our algorithm. To validate the algorithm, experimental tests are carried out using a UR5 robot arm and an under-actuated gripper. The experimental results show that steady grasping actions are obtained. Hence, this research provides a novel algorithm for fast grasping of unknown objects.

  17. An efficient genetic algorithm for the design optimization of cold-formed steel portal frame buildings

    OpenAIRE

    Phan, Thanh Duoc; Lim, James; Tanyimboh, Tiku T.; Sha, Wei

    2013-01-01

    The design optimization of a cold-formed steel portal frame building is considered in this paper. The proposed genetic algorithm (GA) optimizer considers both the topology (i.e., frame spacing and pitch) and the cross-sectional sizes of the main structural members as the decision variables. Previous GAs in the literature were characterized by poor convergence, including slow progress, which usually results in excessive computation times and/or frequent failure to achieve an optimal or near-optimal sol...

  18. Geometric computations with interval and new robust methods applications in computer graphics, GIS and computational geometry

    CERN Document Server

    Ratschek, H

    2003-01-01

    This undergraduate and postgraduate text will familiarise readers with interval arithmetic and related tools, with which to gain reliable and validated results and logically correct decisions for a variety of geometric computations, plus the means for alleviating the effects of the errors. It also considers computations on geometric point-sets, which are neither robust nor reliable when processed with standard methods. The authors provide two effective tools for obtaining correct results: (a) interval arithmetic, and (b) ESSA, the new powerful algorithm which improves many geometric computations and makes th

  19. Project description: Foundations and tools for building well-behaved systems

    DEFF Research Database (Denmark)

    Katajainen, Jyrki; Elmasry, Amr; Elverkilde, Jon Daniel

    We aim at doing basic research on the theoretical foundations of how to build reliable, safe, and fast software systems, and at developing tools that make the construction and maintenance of such systems easier. The theoretical questions taken up are related to algorithms, exception safety, and memory...

  20. An analytical study of composite laminate lay-up using search algorithms for maximization of flexural stiffness and minimization of springback angle

    Science.gov (United States)

    Singh, Ranjan Kumar; Rinawa, Moti Lal

    2018-04-01

    The residual stresses arising in fiber-reinforced laminates during their curing in closed molds lead to dimensional changes in the composites after their removal from the molds and cooling. One of these dimensional changes of angle sections is called springback. Parameters such as the lay-up, stacking sequence, material system, cure temperature and thickness play an important role in it. In the present work, an attempt is made to optimize the lay-up and stacking sequence for maximization of flexural stiffness and minimization of the springback angle. Search algorithms are employed to obtain the best sequence through a repair strategy such as swap. A new search algorithm, termed the lay-up search algorithm (LSA), is also proposed, which is an extension of the permutation search algorithm (PSA). The efficacy of the PSA and LSA is tested on laminates with a range of lay-ups. A computer code implementing the above schemes is developed in MATLAB. Strategies for multi-objective optimization using search algorithms are also suggested and tested.
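
    The swap repair strategy can be sketched generically. The objective below is a stand-in for flexural stiffness (in laminate bending, a ply's contribution grows roughly with the cube of its distance from the midplane, so stiff 0° plies belong outermost); the relative ply moduli are assumed values for illustration, not the paper's data or its PSA/LSA code:

```python
def flexural_score(layup):
    """Hypothetical flexural-stiffness surrogate: weight each ply's
    assumed relative modulus by |distance from midplane| cubed."""
    mid = (len(layup) - 1) / 2
    modulus = {0: 1.0, 45: 0.5, 90: 0.2}  # assumed relative values
    return sum(modulus[angle] * abs(i - mid) ** 3
               for i, angle in enumerate(layup))

def swap_search(layup):
    """Swap-repair local search: keep exchanging ply pairs for as long
    as any single swap improves the objective (a PSA-style sketch)."""
    layup = list(layup)
    best = flexural_score(layup)
    improved = True
    while improved:
        improved = False
        for i in range(len(layup)):
            for j in range(i + 1, len(layup)):
                layup[i], layup[j] = layup[j], layup[i]
                score = flexural_score(layup)
                if score > best:
                    best, improved = score, True
                else:  # undo a non-improving swap
                    layup[i], layup[j] = layup[j], layup[i]
    return layup, best

layup, best = swap_search([90, 45, 0, 0, 45, 90])
print(layup, best)  # the stiff 0-degree plies migrate to the outside
```

    Because this surrogate is a sum of position-weight times ply-modulus terms, every swap-stable arrangement pairs the largest weights with the stiffest plies, so the search ends with the 0° plies outermost and the 90° plies at the midplane.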

  1. A GEOMETRICAL HEIGHT SCALE FOR SUNSPOT PENUMBRAE

    International Nuclear Information System (INIS)

    Puschmann, K. G.; Ruiz Cobo, B.; Martínez Pillet, V.

    2010-01-01

    Inversions of spectropolarimetric observations of penumbral filaments deliver the stratification of different physical quantities in an optical depth scale. However, without establishing a geometrical height scale, their three-dimensional geometrical structure cannot be derived. This is crucial in understanding the correct spatial variation of physical properties in the penumbral atmosphere and to provide insights into the mechanism capable of explaining the observed penumbral brightness. The aim of this work is to determine a global geometrical height scale in the penumbra by minimizing the divergence of the magnetic field vector and the deviations from static equilibrium as imposed by a force balance equation that includes pressure gradients, gravity, and the Lorentz force. Optical depth models are derived from the inversion of spectropolarimetric data of an active region observed with the Solar Optical Telescope on board the Hinode satellite. We use a genetic algorithm to determine the boundary condition for the inference of geometrical heights. The retrieved geometrical height scale permits the evaluation of the Wilson depression at each pixel and the correlation of physical quantities at each height. Our results fit into the uncombed penumbral scenario, i.e., a penumbra composed of flux tubes with channeled mass flow and with a weaker and more horizontal magnetic field as compared with the background field. The ascending material is hotter and denser than their surroundings. We do not find evidence of overturning convection or field-free regions in the inner penumbral area analyzed. The penumbral brightness can be explained by the energy transfer of the ascending mass carried by the Evershed flow, if the physical quantities below z = -75 km are extrapolated from the results of the inversion.

  2. Need for generic, innovative and geometric deliveries in developing self-sustaining capacity building in remote sensing

    Science.gov (United States)

    Gupta, R. K.; Balamanikavelu, P. M.; Vijayan, D.; Prasad, T. S.

    Everybody uses a bulb to illustrate an idea, but nobody shows where the current comes from. The majority of the remote sensing user community comes from the natural and social sciences, while remote sensing technology evolves from the physical and engineering sciences. To ensure that application/resource scientists inculcate and internalize remote sensing technology, the trainer needs to convey physical and engineering concepts in a geometric manner. Here, the steering for the transfer of knowledge (facts, procedures, concepts and principles) and skills (thinking, acting, reacting and interacting) needs to take the trainees from the known to the unknown, from the concrete to the abstract, from observation to theory and from the simple to the complex. In the initial stage of training/education, experiential learning through instructor-led exploration of thematic details in false colour composite (FCC) imagery, as well as in the individual black-and-white spectral bands, not only creates interest, confidence build-up and an orientation towards purposeful learning, but also helps the trainees overcome their inhibitions towards the physical and engineering fundamentals. The methodology adopted has to inculcate productive learning, emphasizing thinking and trial-and-error aspects, as opposed to reproductive learning based predominantly on being told and on imitation. The trainer's delivery needs to ensure dynamic, stimulating and effective discussions through a deluge of questions of an analytical, synthetic and evaluative nature. This ensures proactive participation from the trainees. A hands-on module leads to creative concretization of concepts. To keep the trainees inspired to learn in an autonomous mode during the post-training period, they need to consciously swim in the current and emerging knowledge pool during the training programme. This is achieved by assigning seminar delivery tasks to the trainees. 
During the delivery of seminar, peers and co-trainees drive the trainee to communicate the seminar content not only

  3. Aesthetic Closure of Maxillary and Mandibular Anterior Spaces Using Direct Composite Resin Build-Ups: A Case Report

    Directory of Open Access Journals (Sweden)

    Schick Simona-Georgiana

    2016-07-01

    The presence of multiple spaces in the anterior aesthetic zone can produce discomfort for patients, and their treatment can be difficult for dental professionals. A variety of treatment options are available, including orthodontic movement, indirect prosthetic restorations and direct composite resin build-ups. Among these, the closure of interdental spaces using composite build-ups combined with orthodontic treatment is considered the most conservative. This type of treatment has several advantages, such as maximum preservation of tooth substance (no tooth preparation, no need for anesthesia, no multiple time-consuming visits, no provisional restorations) and also comparably low costs. Clinical Consideration: This case report describes the clinical restorative procedure of direct composite resin build-ups for the closure of multiple anterior spaces.

  4. Geometric description of images as topographic maps

    CERN Document Server

    Caselles, Vicent

    2010-01-01

    This volume discusses the basic geometric contents of an image and presents a tree data structure to handle those contents efficiently. The nodes of the tree are derived from connected components of level sets of the intensity, while the edges represent inclusion information. Grain filters, morphological operators simplifying these geometric contents, are analyzed and several applications to image comparison and registration, and to edge and corner detection, are presented. The mathematically inclined reader may be most interested in Chapters 2 to 6, which generalize the topological Morse description to continuous or semicontinuous functions, while mathematical morphologists may more closely consider grain filters in Chapter 3. Computer scientists will find algorithmic considerations in Chapters 6 and 7, the full justification of which may be found in Chapters 2 and 4 respectively. Lastly, all readers can learn more about the motivation for this work in the image processing applications presented in Chapter 8...

  5. Geometric group theory

    CERN Document Server

    Bestvina, Mladen; Vogtmann, Karen

    2014-01-01

    Geometric group theory refers to the study of discrete groups using tools from topology, geometry, dynamics and analysis. The field is evolving very rapidly and the present volume provides an introduction to and overview of various topics which have played critical roles in this evolution. The book contains lecture notes from courses given at the Park City Math Institute on Geometric Group Theory. The institute consists of a set of intensive short courses offered by leaders in the field, designed to introduce students to exciting, current research in mathematics. These lectures do not duplicate standard courses available elsewhere. The courses begin at an introductory level suitable for graduate students and lead up to currently active topics of research. The articles in this volume include introductions to CAT(0) cube complexes and groups, to modern small cancellation theory, to isometry groups of general CAT(0) spaces, and a discussion of nilpotent genus in the context of mapping class groups and CAT(0) gro...

  6. Building up STEM education professional learning community in school setting: Case of Khon Kaen Wittayayon School

    Science.gov (United States)

    Thana, Aduldej; Siripun, Kulpatsorn; Yuenyong, Chokchai

    2018-01-01

    STEM education is a new issue in teaching and learning in the school setting. Building up a STEM education professional learning community may provide some suggestions for further collaborative work on STEM education from the ground up. This paper aims to clarify the building up of a STEM education learning community in the Khon Kaen Wittayayon (KKW) School setting. Participants included Khon Kaen University researchers and Khon Kaen Wittayayon School administrators and teachers. The methodology followed an interpretative paradigm. The tools of interpretation included participant observation, interviews and document analysis. Data were analyzed into categories of conditions for building up a STEM education professional learning community. The findings revealed that the actions of developing STEM learning activities and research raised some issues for the KKW STEM community's inquiry and improvement. The paper discusses what and how the community learns about sharing a vision of STEM education, the supportive physical and social conditions of KKW, shared STEM activities, and the benefits flowing from some key STEM teachers' ambition. The paper may have implications for supporting STEM education in the Thai school setting.

  7. Radio resource management using geometric water-filling

    CERN Document Server

    He, Peter; Zhou, Sheng; Niu, Zhisheng

    2014-01-01

    This brief introduces the fundamental theory and development of managing radio resources using a water-filling algorithm that can optimize system performance in wireless communication. Geometric Water-Filling (GWF) is a crucial underlying tool in emerging communication systems such as multiple input multiple output systems, cognitive radio systems, and green communication systems. Early chapters introduce emerging wireless technologies and provide a detailed analysis of water-filling. The brief investigates single user and multi-user issues of radio resource management, allocation of resources
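
    As a rough illustration of the water-filling idea behind GWF, the sketch below computes the conventional sorted-channel water-filling allocation; it is not the exact geometric recursion from the brief, and the channel gains and power budget are made-up values.

```python
def water_filling(gains, total_power):
    """Allocate power p_i = max(0, mu - 1/g_i) so that sum(p_i) == total_power."""
    inv = sorted(1.0 / g for g in gains)     # channel "step heights" 1/g_i, ascending
    mu = 0.0
    for k in range(len(inv), 0, -1):         # try the k best channels, largest k first
        mu = (total_power + sum(inv[:k])) / k
        if mu > inv[k - 1]:                  # water level sits above the k-th step
            break
    return [max(0.0, mu - 1.0 / g) for g in gains]

alloc = water_filling([1.0, 0.5, 0.1], total_power=1.0)
# the best channel absorbs the whole budget here: alloc == [1.0, 0.0, 0.0]
```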

  8. Impacts of traffic and rainfall characteristics on heavy metals build-up and wash-off from urban roads.

    Science.gov (United States)

    Mahbub, Parvez; Ayoko, Godwin A; Goonetilleke, Ashantha; Egodawatta, Prasanna; Kokot, Serge

    2010-12-01

    An investigation into the effects of changes in urban traffic characteristics due to rapid urbanisation, and of the predicted changes in rainfall characteristics due to climate change, on the build-up and wash-off of heavy metals was carried out in Gold Coast, Australia. The study sites encompassed three different urban land uses. Nine heavy metals commonly associated with traffic emissions were selected. The results were interpreted using multivariate data analysis and decision-making tools, such as principal component analysis (PCA), fuzzy clustering (FC), PROMETHEE, and GAIA. Initial analyses established high, low, and moderate traffic scenarios as well as low, low to moderate, moderate, high, and extreme rainfall scenarios for the build-up and wash-off investigations. GAIA analyses established that moderate to high traffic scenarios could affect the build-up, while moderate to high rainfall scenarios could affect the wash-off of heavy metals under changed conditions. However, in wash-off, metal concentrations in the 1-75 μm fraction were found to be independent of the changes to rainfall characteristics. In build-up, high traffic activities in commercial and industrial areas influenced the accumulation of heavy metal concentrations in the particulate size range from 75 to >300 μm, whereas metal concentrations in the finer size range of <300 μm can be targeted for removal of Ni, Cu, Pb, Cd, Cr, and Zn from build-up, while organic matter in the <300 μm fraction can be targeted for removal of Cd, Cr, Pb, and Ni from wash-off. Cu and Zn need to be removed as free ions from most fractions in wash-off.

  9. Three-Dimensional Reconstruction and Solar Energy Potential Estimation of Buildings

    Science.gov (United States)

    Chen, Y.; Li, M.; Cheng, L.; Xu, H.; Li, S.; Liu, X.

    2017-12-01

    In the context of the construction of low-carbon cities, green cities and eco-cities, the potential of airborne and mobile LiDAR should be explored in urban renewable energy research. As the main landscape elements in the urban environment, buildings have large regular envelopes that can receive a huge amount of solar radiation. In this study, a relatively complete calculation scheme for building roof and façade solar utilization potential is proposed, using three-dimensional building geometric feature information. To measure city-level building solar irradiance, precise three-dimensional building roof and façade models are first reconstructed from airborne and mobile LiDAR data, respectively. In order to obtain the precise geometric structure of building façades from mobile LiDAR data, a new method for structure detection and three-dimensional reconstruction of building façades is proposed. The method consists of three steps: preprocessing of façade points, detection of façade structure, and restoration and reconstruction of the building façade. As a result, the reconstruction method can effectively deal with missing areas caused by occlusion, viewpoint limitation, and uneven point density, realizing a highly complete 3D reconstruction of a building façade. Furthermore, window areas can be excluded for a more accurate estimation of solar utilization potential. The solar energy utilization potential of all building roofs and façades is then estimated using a solar irradiance model that combines an analysis of building shade and sky diffuse radiation, based on the geometrical structure of the buildings.

  10. A Recursive Decomposition Algorithm for 3D Assembly Geometric Constraint System with Closed-loops

    Institute of Scientific and Technical Information of China (English)

    黄学良; 李娜; 陈立平

    2013-01-01

    Numerical methods are usually employed to solve 3D assembly geometric constraint systems with closed loops, which cannot be decomposed by existing decomposition methods, but their inherent inefficiency and instability cannot be overcome. In this paper, based on an analysis of the structural constraints of serial kinematic chains and the topological structure of the geometric constraint closed-loop graph, a recursive decomposition algorithm for 3D geometric constraint systems with closed loops is proposed. The basic idea of the proposed algorithm is to introduce equivalent geometric constraint combinations to substitute the structural constraints of serial kinematic chains, and to separate out geometric constraint subsystems that can be solved independently from the geometric constraint system with closed loops. The proposed method can decompose most 3D geometric constraint closed-loop systems, which previously had to be solved by numerical methods, into a series of geometric constraint subsystems between two rigid bodies that can be solved by analytical or reasoning methods, so that computational efficiency and stability are improved dramatically. Finally, a typical example is given to validate the correctness and effectiveness of the proposed method.

  11. Extremely rare collapse and build-up of turbulence in stochastic models of transitional wall flows.

    Science.gov (United States)

    Rolland, Joran

    2018-02-01

    This paper presents a numerical and theoretical study of multistability in two stochastic models of transitional wall flows. An algorithm dedicated to the computation of rare events is adapted to these two stochastic models. The main focus is placed on a stochastic partial differential equation model proposed by Barkley. Three types of events are computed in a systematic and reproducible manner: (i) the collapse of isolated puffs and of domains initially containing their steady turbulent fraction; (ii) puff splitting; (iii) the build-up of turbulence from the laminar base flow under a noise perturbation of vanishing variance. For build-up events, an extreme realization of the vanishing-variance noise pushes the state from the laminar base flow to the most probable germ of turbulence, which in turn develops into a full-blown puff. For collapse events, the Reynolds number and length ranges of the two regimes of collapse of laminar-turbulent pipes, independent collapse or global collapse of puffs, are determined. The mean first passage time before each event is then systematically computed as a function of the Reynolds number r and pipe length L in the laminar-turbulent coexistence range of Reynolds number. In the case of isolated puffs, the faster-than-linear growth with Reynolds number of the logarithm of the mean first passage time T before collapse separates into two terms: one finds that ln(T) = A_p r - B_p, with A_p and B_p positive. Moreover, A_p and B_p are affine in the spatial integral of the turbulence intensity of the puff, with the same slope. In the case of pipes initially containing the steady turbulent fraction, the length L and Reynolds number r dependence of the mean first passage time T before collapse also separates: the author finds that T ≍ exp[L(Ar - B)], with A and B positive. The length and Reynolds number dependence of T are then discussed in view of large deviations theoretical approaches to the study of mean first passage times and multistability.
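
    The affine law ln(T) = A_p·r − B_p above is the kind of scaling one extracts by fitting ln T against r; a minimal sketch with invented coefficients (A_p, B_p and the r values are illustrative, not data from the paper):

```python
import numpy as np

A_p, B_p = 0.9, 1.5                        # assumed coefficients, illustration only
r = np.array([2.0, 2.5, 3.0, 3.5, 4.0])   # hypothetical Reynolds numbers
T = np.exp(A_p * r - B_p)                  # synthetic mean first passage times

# Fitting ln(T) against r recovers the slope A_p and the intercept -B_p
slope, intercept = np.polyfit(r, np.log(T), 1)
```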

  13. Prediction of moisture migration and pore pressure build-up in concrete at high temperatures

    International Nuclear Information System (INIS)

    Ichikawa, Y.; England, G.L.

    2004-01-01

    Prediction of moisture migration and pore pressure build-up in non-uniformly heated concrete is important for safe operation of concrete containment vessels in nuclear power reactors and for assessing the behaviour of fire-exposed concrete structures. (1) Changes in moisture content distribution in a concrete containment vessel during long-term operation should be investigated, since the durability and radiation shielding ability of concrete are strongly influenced by its moisture content. (2) The pressure build-up in a concrete containment vessel in a postulated accident should be evaluated in order to determine whether a venting system is necessary between liner and concrete to relieve the pore pressure. (3) When concrete is subjected to rapid heating during a fire, the concrete can suffer from spalling due to pressure build-up in the concrete pores. This paper presents a mathematical and computational model for predicting changes in temperature, moisture content and pore pressure in concrete at elevated temperatures. A pair of differential equations for one-dimensional heat and moisture transfer in concrete are derived from the conservation of energy and mass, and take into account the temperature-dependent release of gel water and chemically bound water due to dehydration. These equations are numerically solved by the finite difference method. In the numerical analysis, the pressure, density and dynamic viscosity of water in the concrete pores are calculated explicitly from a set of formulated equations. The numerical analysis results are compared with two different sets of experimental data: (a) long-term (531 days) moisture migration test under a steady-state temperature of 200 deg. C, and (b) short-term (114 min) pressure build-up test under transient heating. These experiments were performed to investigate the moisture migration and pressure build-up in the concrete wall of a reactor containment vessel at high temperatures. The former experiment simulated
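
    The finite-difference solution of the heat transfer part can be illustrated with an explicit 1D conduction scheme; this is a heat-only toy, not the paper's coupled heat-moisture model, and the wall dimensions and diffusivity are invented.

```python
import numpy as np

def heat_1d(T0, alpha, dx, dt, steps, T_left, T_right):
    """Explicit FTCS scheme for dT/dt = alpha * d2T/dx2 with fixed face temperatures."""
    r = alpha * dt / dx ** 2
    assert r <= 0.5, "explicit scheme needs alpha*dt/dx^2 <= 1/2 for stability"
    T = np.array(T0, dtype=float)
    T[0], T[-1] = T_left, T_right            # impose boundary temperatures
    for _ in range(steps):
        T[1:-1] += r * (T[2:] - 2.0 * T[1:-1] + T[:-2])
    return T

# 20 cm wall, hot face at 200 C, cold face at 20 C, initially uniform at 20 C
profile = heat_1d([20.0] * 21, alpha=1e-6, dx=0.01, dt=40.0, steps=500,
                  T_left=200.0, T_right=20.0)
```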

  14. An Integrated Start-Up Method for Pumped Storage Units Based on a Novel Artificial Sheep Algorithm

    Directory of Open Access Journals (Sweden)

    Zanbin Wang

    2018-01-01

    Full Text Available Pumped storage units (PSUs) are an important storage tool for power systems containing large-scale renewable energy, and their merit of rapid start-up enables PSUs to modulate and stabilize the power system. In this paper, PSU start-up strategies are studied and a new integrated start-up method is proposed for the purpose of achieving swift and smooth start-up. A two-phase closed-loop start-up strategy, composed of a switched proportional-integral (PI) and proportional-integral-derivative (PID) controller, is designed, and an integrated optimization scheme is proposed for synchronous optimization of the parameters in the strategy. To enhance optimization performance, a novel meta-heuristic called the Artificial Sheep Algorithm (ASA) is proposed and applied to the optimization task, after verification against seven popular meta-heuristic algorithms on 13 typical benchmark functions. A simulation model was built for a Chinese PSU and comparative experiments were conducted to evaluate the proposed integrated method. Results show that start-up performance is significantly improved on both the overshoot and start-up time indices, with up to 34% of the time consumption eliminated under different working conditions. These significant improvements in PSU start-up are promising for further application to real units.
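
    The two-phase PI/PID switching idea can be sketched on a toy first-order plant; the gains, switch threshold and plant model below are invented for illustration and are not the paper's tuned parameters.

```python
def start_up(setpoint=1.0, steps=300, dt=0.05):
    """Two-phase controller: PI while speed < 90% of setpoint, then PID near it."""
    KP, KI, KD = 2.0, 0.8, 0.3               # hypothetical gains
    speed, integ, prev_err = 0.0, 0.0, setpoint
    for _ in range(steps):
        err = setpoint - speed
        integ += err * dt
        deriv = (err - prev_err) / dt
        use_d = speed >= 0.9 * setpoint      # switch the D term on in phase two
        out = KP * err + KI * integ + (KD * deriv if use_d else 0.0)
        prev_err = err
        speed += dt * (out - 0.5 * speed)    # toy first-order unit dynamics
    return speed

final_speed = start_up()
```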

  15. From novel mathematics to efficient algorithms. Do we have proper SD foundation to build future?

    Energy Technology Data Exchange (ETDEWEB)

    Tkachov, Fyodor V. [INR RAS, Moscow (Russian Federation)

    2002-08-01

    Computer implementation of sophisticated algorithms for physics applications is greatly facilitated by the new generation of component-oriented SD technologies pioneered by Wirth's Oberon and embraced by the software industry (Sun's Java and Microsoft's C#). In a stark contrast with the deeply flawed C++, the Oberon technologies (specifically, the Component Pascal and the BlackBox development environment) offer an unparalleled SD platform for building scientific applications, from sophisticated data processing to demanding symbolic calculations. Specific examples are discussed. (author)

  16. Effect of heat build-up on carbon emissions in chimato compost piles

    African Journals Online (AJOL)

    A. Mlangeni

    atmospheric carbon compounds such as carbon dioxide (CO2) and methane (CH4) into soil organic carbon compounds (Biala, 2011; Gill et al., 2012; Biddlestone and Gray, 1987). Maturity and stability of compost are partly dependent on the type of feedstock, which influences compost pile moisture content, aeration and heat build-up. Well.

  17. Fractal supersymmetric QM, Geometric Probability and the Riemann Hypothesis

    CERN Document Server

    Castro, C

    2004-01-01

    The Riemann hypothesis (RH) states that the nontrivial zeros of the Riemann zeta-function are of the form $s_n = 1/2 + i\lambda_n$. Earlier work on the RH based on supersymmetric QM, whose potential was related to the Gauss-Jacobi theta series, provides the proper framework to construct a well-defined algorithm to compute the probability of finding a zero (an infinity of zeros) in the critical line. Geometric probability theory furnishes the answer to the very difficult question of whether the probability that the RH is true is indeed equal to unity or not. To test the validity of this geometric probabilistic framework for computing the probability that the RH is true, we apply it directly to the hyperbolic sine function $\sinh(s)$, which obeys a trivial analog of the RH (the HSRH). Its zeros are equally spaced on the imaginary axis, $s_n = 0 + in\pi$. The geometric probability of finding a zero (and an infinity of zeros) on the imaginary axis is exactly unity. We proceed with a fractal supersymme...

  18. Algorithming the Algorithm

    DEFF Research Database (Denmark)

    Mahnke, Martina; Uprichard, Emma

    2014-01-01

    Imagine sailing across the ocean. The sun is shining, vastness all around you. And suddenly [BOOM] you’ve hit an invisible wall. Welcome to the Truman Show! Ever since Eli Pariser published his thoughts on a potential filter bubble, this movie scenario seems to have become reality, just with slight changes: it’s not the ocean, it’s the internet we’re talking about, and it’s not a TV show producer, but algorithms that constitute a sort of invisible wall. Building on this assumption, most research is trying to ‘tame the algorithmic tiger’. While this is a valuable and often inspiring approach, we...

  19. A geometric viewpoint on generalized hydrodynamics

    Directory of Open Access Journals (Sweden)

    Benjamin Doyon

    2018-01-01

    Full Text Available Generalized hydrodynamics (GHD is a large-scale theory for the dynamics of many-body integrable systems. It consists of an infinite set of conservation laws for quasi-particles traveling with effective (“dressed” velocities that depend on the local state. We show that these equations can be recast into a geometric dynamical problem. They are conservation equations with state-independent quasi-particle velocities, in a space equipped with a family of metrics, parametrized by the quasi-particles' type and speed, that depend on the local state. In the classical hard rod or soliton gas picture, these metrics measure the free length of space as perceived by quasi-particles; in the quantum picture, they weigh space with the density of states available to them. Using this geometric construction, we find a general solution to the initial value problem of GHD, in terms of a set of integral equations where time appears explicitly. These integral equations are solvable by iteration and provide an extremely efficient solution algorithm for GHD.

  20. Control of the lighting system using a genetic algorithm

    Directory of Open Access Journals (Sweden)

    Čongradac Velimir D.

    2012-01-01

    Full Text Available The manufacturing, distribution and use of electricity are of fundamental importance for social life, and they have the biggest influence on the environment of any human activity. The energy needed for building lighting makes up 20-40% of total consumption. This paper presents the development of a mathematical model and genetic algorithm for the control of dimmable lighting, applied to the problems of regulating the level of internal lighting and increasing energy efficiency using daylight. A series of experiments using the optimization algorithm on the realized model confirmed very high savings in electricity consumption.
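
    A minimal sketch of how such a genetic algorithm might trade electric light against daylight, with invented luminaire outputs, daylight level and illuminance target (none of these values are from the paper):

```python
import random

# Toy setting: choose dimming levels in [0, 1] for N luminaires so that
# daylight plus electric light meets an illuminance target at minimum energy.
N, TARGET, DAYLIGHT = 4, 500.0, 180.0      # invented values (lux)
LUX = [120.0, 120.0, 90.0, 90.0]           # lux added by each luminaire at full power

def fitness(genome):
    lux = DAYLIGHT + sum(l * g for l, g in zip(LUX, genome))
    penalty = 10.0 * max(0.0, TARGET - lux)   # punish under-lighting heavily
    return sum(genome) + penalty              # sum of levels ~ energy use

def ga(pop_size=30, generations=200):
    random.seed(1)
    pop = [[random.random() for _ in range(N)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        pop = pop[:pop_size // 2]             # keep the fitter half
        while len(pop) < pop_size:
            a, b = random.sample(pop[:10], 2)  # parents from the elite
            cut = random.randrange(1, N)       # one-point crossover
            child = a[:cut] + b[cut:]
            i = random.randrange(N)            # gaussian mutation, clamped to [0, 1]
            child[i] = min(1.0, max(0.0, child[i] + random.gauss(0.0, 0.1)))
            pop.append(child)
    return min(pop, key=fitness)

best = ga()
```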

  1. Hybrid Geometric Calibration Method for Multi-Platform Spaceborne SAR Image with Sparse Gcps

    Science.gov (United States)

    Lv, G.; Tang, X.; Ai, B.; Li, T.; Chen, Q.

    2018-04-01

    Geometric calibration is able to provide high-accuracy geometric coordinates for spaceborne SAR images through accurate geometric parameters in the Range-Doppler model using ground control points (GCPs). However, it is very difficult to obtain GCPs covering large-scale areas, especially in mountainous regions. In addition, the traditional calibration method is only applicable to single-platform SAR images and cannot support hybrid geometric calibration of multi-platform images. To solve the above problems, a hybrid geometric calibration method for multi-platform spaceborne SAR images with sparse GCPs is proposed in this paper. First, we calibrate the master image, which contains the GCPs. Secondly, a point tracking algorithm is used to obtain tie points (TPs) between the master and slave images. Finally, we calibrate the slave images using the TPs as GCPs. We take the Beijing-Tianjin-Hebei region as an example to study the hybrid geometric calibration method, using 3 TerraSAR-X images, 3 TanDEM-X images and 5 GF-3 images covering more than 235 kilometers in the north-south direction. Geometric calibration of all images is completed using only 5 GCPs. GPS data extracted from a GNSS receiver are used to assess the planimetric accuracy after calibration. The results show that after geometric calibration with sparse GCPs the geometric positioning accuracy is 3 m for TSX/TDX images and 7.5 m for GF-3 images.

  2. A Collision-Free G2 Continuous Path-Smoothing Algorithm Using Quadratic Polynomial Interpolation

    Directory of Open Access Journals (Sweden)

    Seong-Ryong Chang

    2014-12-01

    Full Text Available Most path-planning algorithms are used to obtain a collision-free path without considering continuity. On the other hand, a continuous path is needed for stable movement. In this paper, the searched path is converted into a G2 continuous path using the modified quadratic polynomial and membership function interpolation (QPMI) algorithm. It is simple, unique and provides a good geometric interpretation. In addition, collision-checking and collision-improvement algorithms are proposed. The collision-checking algorithm checks a smoothed path for collisions. If collisions are detected, the collision-improvement algorithm modifies the colliding path into a collision-free one using a geometric method: the perpendicular line between a collision position and the colliding piecewise linear path is used, a sub-waypoint is added, and the QPMI algorithm is applied again. As a result, the smoothed path with collisions is converted into a collision-free smooth path without losing continuity.
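
    The QPMI algorithm itself is not reproduced here, but the flavor of replacing a sharp waypoint corner with a quadratic segment can be sketched with a quadratic Bezier blend (the blend ratio s is an assumption of this sketch, not a QPMI parameter):

```python
def smooth_corner(p0, p1, p2, samples=10, s=0.5):
    """Replace the sharp corner at waypoint p1 with a quadratic Bezier blend.

    The curve starts s of the way from p1 back toward p0 and ends s of the
    way from p1 toward p2, so it stays close to the original piecewise path.
    """
    a = tuple(p1[i] + s * (p0[i] - p1[i]) for i in range(2))
    b = tuple(p1[i] + s * (p2[i] - p1[i]) for i in range(2))
    pts = []
    for k in range(samples + 1):
        t = k / samples
        pts.append(tuple((1 - t) ** 2 * a[i] + 2 * (1 - t) * t * p1[i] + t ** 2 * b[i]
                         for i in range(2)))
    return pts

curve = smooth_corner((0.0, 0.0), (1.0, 0.0), (1.0, 1.0))
# curve[0] == (0.5, 0.0) and curve[-1] == (1.0, 0.5): the right angle is rounded
```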

  3. Descent in buildings (AM-190)

    CERN Document Server

    Mühlherr, Bernhard; Weiss, Richard M

    2015-01-01

    Descent in Buildings begins with the resolution of a major open question about the local structure of Bruhat-Tits buildings. The authors then put their algebraic solution into a geometric context by developing a general fixed point theory for groups acting on buildings of arbitrary type, giving necessary and sufficient conditions for the residues fixed by a group to form a kind of subbuilding or "form" of the original building. At the center of this theory is the notion of a Tits index, a combinatorial version of the notion of an index in the relative theory of algebraic groups. These results

  4. Quality Evaluation and Nonuniform Compression of Geometrically Distorted Images Using the Quadtree Distortion Map

    Directory of Open Access Journals (Sweden)

    Cristina Costa

    2004-09-01

    Full Text Available The paper presents an analysis of the effects of lossy compression algorithms applied to images affected by geometrical distortion. It will be shown that the encoding-decoding process results in a nonhomogeneous degradation of the geometrically corrected image, due to the different amount of information associated with each pixel. A distortion measure named the quadtree distortion map (QDM), able to quantify this aspect, is proposed. Furthermore, the QDM is exploited to achieve adaptive compression of geometrically distorted pictures, in order to ensure uniform quality in the final image. Tests are performed using the JPEG and JPEG2000 coding standards in order to quantitatively and qualitatively assess the performance of the proposed method.
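
    The quadtree idea behind the QDM can be sketched as a recursive split driven by a per-block distortion measure; the distortion function below is a stand-in, not the paper's definition:

```python
def build_qdm(x, y, size, distortion, threshold, leaves):
    """Recursively split a square block while its distortion exceeds the threshold."""
    if size == 1 or distortion(x, y, size) <= threshold:
        leaves.append((x, y, size))          # keep this block whole
        return
    half = size // 2
    for ox, oy in ((0, 0), (half, 0), (0, half), (half, half)):
        build_qdm(x + ox, y + oy, half, distortion, threshold, leaves)

# Stand-in distortion measure: blocks near the origin are more distorted
hot = lambda x, y, size: 10.0 / (1.0 + x + y)
leaves = []
build_qdm(0, 0, 8, hot, threshold=2.0, leaves=leaves)
```

    Distorted regions end up tiled with small blocks, uniform regions with large ones, and the leaf sizes tile the whole image exactly.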

  5. Preserving the acoustical heritage of historical buildings

    DEFF Research Database (Denmark)

    Rindel, Jens Holger

    2005-01-01

    The use of virtual reality and auralization techniques has opened new possibilities for the study of acoustical conditions in historic buildings. With reference to some recent and current projects, the paper describes the steps in the process and how new technology may be used to overcome some of the difficulties involved. If the building exists as a whole or partly, the first step is the collection of geometrical and architectural data from the site. The level of detail may be huge, e.g. if laser scanning is used for providing the geometrical data, and some degree of simplification may be necessary ... with the present state building. The next step is to identify which historic conditions should be reconstructed, especially architectural details, surface materials, obstacles, textiles, performing persons and audience, and typical sounds or music. The computer model is then modified according to these data ...

  6. Modelling heavy metals build-up on urban road surfaces for effective stormwater reuse strategy implementation.

    Science.gov (United States)

    Hong, Nian; Zhu, Panfeng; Liu, An

    2017-12-01

    Urban road stormwater is an alternative water resource that can mitigate water shortages worldwide. Heavy metals deposited (built up) on urban road surfaces can enter road stormwater runoff, undermining stormwater reuse safety. As heavy metal build-up loads show high variability in spatial distribution and are strongly influenced by surrounding land uses, it is essential to develop an approach to identify hot-spots where stormwater runoff could carry high heavy metal concentrations and hence cannot be reused unless properly treated. This study developed a robust modelling approach for estimating heavy metal build-up loads on urban roads from land-use fractions (the percentages of land uses within a given area) using an artificial neural network (ANN) model. Based on the modelling results, a series of heavy metal load spatial distribution maps and a comprehensive ecological risk map were generated. These maps provide a visualization platform for identifying priority areas where stormwater can be safely reused. Additionally, these maps can be utilized as an urban land use planning tool in the context of effective stormwater reuse strategy implementation. Copyright © 2017 Elsevier Ltd. All rights reserved.
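
    As a simple stand-in for the paper's ANN, a linear least-squares model mapping land-use fractions to build-up loads conveys the setup; all fractions and loads below are invented:

```python
import numpy as np

# Hypothetical data: each row is a road site's land-use fractions
# (residential, commercial, industrial); y is a measured metal build-up load.
X = np.array([[0.7, 0.2, 0.1],
              [0.2, 0.6, 0.2],
              [0.1, 0.2, 0.7],
              [0.5, 0.3, 0.2]])
y = np.array([0.8, 1.9, 2.6, 1.2])          # e.g. mg/m^2, made-up values

# Linear stand-in for the ANN: predicted load = w . land_use_fractions
w, *_ = np.linalg.lstsq(X, y, rcond=None)
predicted = X @ w
```

    The fitted weights can then be applied to the land-use fractions of unsampled road cells to map loads across a catchment.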

  7. Processing ultrasonic inspection data from multiple scan patterns for turbine rotor weld build-up evaluations

    Science.gov (United States)

    Guan, Xuefei; Rasselkorde, El Mahjoub; Abbasi, Waheed; Zhou, S. Kevin

    2015-03-01

    The study presents a data processing methodology for weld build-up inspection using multiple scan patterns. To achieve an overall high probability of detection for flaws with different orientations, an inspection procedure with three scan patterns is proposed: a radial-tangential longitudinal wave pattern, an axial-radial longitudinal wave pattern, and a tangential shear wave pattern. Fusion of the inspection data is implemented using volume reconstruction techniques: spatial-domain forward data mapping is performed for all sampling points. A conservative scheme handles the case where multiple sampling points map to one grid location, assigning the maximum value so that the location retains the largest equivalent reflector size. The methodology is demonstrated and validated on a realistic ring of weld build-up. Tungsten balls and bars embedded in the weld build-up during the manufacturing process represent natural flaws; flat-bottomed holes and side-drilled holes serve as artificial flaws. Automatic flaw identification and extraction are demonstrated. Results indicate that the inspection procedure with multiple scan patterns can identify all the artificial and natural flaws.

  8. A Geometric Dictionary Learning Based Approach for Fluorescence Spectroscopy Image Fusion

    Directory of Open Access Journals (Sweden)

    Zhiqin Zhu

    2017-02-01

    Full Text Available In recent years, sparse representation approaches have been integrated into multi-focus image fusion methods, and the fused images of sparse-representation-based methods show great performance. Constructing an informative dictionary is a key step for a sparsity-based image fusion method. In order to ensure a sufficient number of useful bases for sparse representation during dictionary construction, image patches from all source images are classified into groups based on geometric similarities. The key information of each image-patch group is extracted by principal component analysis (PCA) to build the dictionary. Using the constructed dictionary, image patches are converted into sparse coefficients by the simultaneous orthogonal matching pursuit (SOMP) algorithm to represent the source multi-focus images. Finally, the sparse coefficients are fused by the Max-L1 fusion rule and inverted to obtain the fused image. Due to the limitations of the microscope, fluorescence images cannot be fully focused; the proposed multi-focus image fusion solution is therefore applied to fluorescence imaging for generating all-in-focus images. Comparative experimental results confirm the feasibility and effectiveness of the proposed solution.
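
    The paper uses SOMP over jointly sparse patches; a single-signal orthogonal matching pursuit conveys the core sparse-coding step (this is plain OMP, not SOMP):

```python
import numpy as np

def omp(D, x, k):
    """Greedy k-sparse coding of signal x over dictionary D (columns are atoms)."""
    idx, residual = [], x.astype(float)
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(D.T @ residual))))   # most correlated atom
        atoms = D[:, idx]
        coef, *_ = np.linalg.lstsq(atoms, x, rcond=None)     # re-fit on chosen atoms
        residual = x - atoms @ coef
    code = np.zeros(D.shape[1])
    code[idx] = coef
    return code

# With an orthonormal dictionary, a 2-sparse signal is recovered exactly
D = np.eye(5)
x = np.array([3.0, 0.0, 2.0, 0.0, 0.0])
code = omp(D, x, k=2)
```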

  9. Shaping up: a geometric morphometric approach to assemblage ecomorphology.

    Science.gov (United States)

    Bower, L M; Piller, K R

    2015-09-01

    This study adopts an ecomorphological approach to test the utility of body shape as a predictor of niche relationships among a stream fish assemblage of the Tickfaw River (Lake Pontchartrain Basin) in southeastern Louisiana, U.S.A. To examine the potential influence of evolutionary constraints, analyses were performed with and without the influence of phylogeny. Fish assemblages were sampled throughout the year, and ecological data (habitat and trophic guild) and body shape (geometric morphometric) data were collected for each fish specimen. Multivariate analyses were performed to examine relationships and differences between body shape and ecological data. Results indicate that a relationship exists between body shape and trophic guild as well as flow regime, but no significant correlation between body shape and substratum was found. Body shape was a reliable indicator of position within assemblage niche space. © 2015 The Fisheries Society of the British Isles.

  10. Parallel Sn Sweeps on Unstructured Grids: Algorithms for Prioritization, Grid Partitioning, and Cycle Detection

    International Nuclear Information System (INIS)

    Plimpton, Steven J.; Hendrickson, Bruce; Burns, Shawn P.; McLendon, William III; Rauchwerger, Lawrence

    2005-01-01

    The method of discrete ordinates is commonly used to solve the Boltzmann transport equation. The solution in each ordinate direction is most efficiently computed by sweeping the radiation flux across the computational grid. For unstructured grids this poses many challenges, particularly when implemented on distributed-memory parallel machines where the grid geometry is spread across processors. We present several algorithms relevant to this approach: (a) an asynchronous message-passing algorithm that performs sweeps simultaneously in multiple ordinate directions, (b) a simple geometric heuristic to prioritize the computational tasks that a processor works on, (c) a partitioning algorithm that creates columnar-style decompositions for unstructured grids, and (d) an algorithm for detecting and eliminating cycles that sometimes exist in unstructured grids and can prevent sweeps from successfully completing. Algorithms (a) and (d) are fully parallel; algorithms (b) and (c) can be used in conjunction with (a) to achieve higher parallel efficiencies. We describe our message-passing implementations of these algorithms within a radiation transport package. Performance and scalability results are given for unstructured grids with up to 3 million elements (500 million unknowns) running on thousands of processors of Sandia National Laboratories' Intel Tflops machine and DEC-Alpha CPlant cluster
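
    Algorithm (d)'s task, detecting grid cells whose sweep dependencies form a cycle, can be illustrated with Kahn's topological sort, where any tasks left unordered lie on cycles; this is a generic sketch, not the paper's parallel algorithm:

```python
from collections import deque

def sweep_schedule(n, edges):
    """Topologically order sweep tasks; any nodes left over lie on dependency cycles."""
    adj = [[] for _ in range(n)]
    indeg = [0] * n
    for u, v in edges:                     # edge u -> v: task v waits on task u
        adj[u].append(v)
        indeg[v] += 1
    ready = deque(i for i in range(n) if indeg[i] == 0)
    order = []
    while ready:
        u = ready.popleft()
        order.append(u)
        for v in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                ready.append(v)
    on_cycle = sorted(set(range(n)) - set(order))   # these would stall the sweep
    return order, on_cycle

# 0 -> 1 -> 2 is sweepable; 3 <-> 4 form a cycle that must be broken first
order, on_cycle = sweep_schedule(5, [(0, 1), (1, 2), (3, 4), (4, 3)])
# order == [0, 1, 2], on_cycle == [3, 4]
```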

  11. Geometric covers, graph orientations, counter games

    DEFF Research Database (Denmark)

    Berglin, Edvin

    -directed graph is dynamic (can be altered by some outside actor), some orientations may need to be reversed in order to maintain the low out-degree. We present a new algorithm that is simpler than earlier work, yet matches or outperforms the efficiency of these results with very few exceptions. Counter games...... example is Line Cover, also known as Point-Line Cover, where a set of points in a geometric space are to be covered by placing a restricted number of lines. We present new FPT algorithms for the sub-family Curve Cover (which includes Line Cover), as well as for Hyperplane Cover restricted to R 3 (i...... are a type of abstract game played over a set of counters holding values, and these values may be moved between counters according to some set of rules. Typically they are played between two players: the adversary who tries to concentrate the greatest value possible in a single counter, and the benevolent...

  12. Start-up flow in a three-dimensional lid-driven cavity by means of a massively parallel direction splitting algorithm

    KAUST Repository

    Guermond, J. L.; Minev, P. D.

    2011-01-01

    The purpose of this paper is to validate a new highly parallelizable direction splitting algorithm. The parallelization capabilities of this algorithm are illustrated by providing a highly accurate solution for the start-up flow in a three

  13. Geometric projection filter: an efficient solution to the SLAM problem

    Science.gov (United States)

    Newman, Paul M.; Durrant-Whyte, Hugh F.

    2001-10-01

    This paper is concerned with the simultaneous localization and map building (SLAM) problem. The SLAM problem asks if it is possible for an autonomous vehicle to start in an unknown location in an unknown environment and then to incrementally build a map of this environment while simultaneously using this map to compute absolute vehicle location. Conventional approaches to this problem are plagued with a prohibitively large increase in computation with the size of the environment. This paper offers a new solution to the SLAM problem that is both consistent and computationally feasible. The proposed algorithm builds a map expressing the relationships between landmarks which is then transformed into landmark locations. Experimental results are presented employing the new algorithm on a subsea vehicle using a scanning sonar sensor.
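
    The core idea, estimating relationships between landmarks and then transforming them into landmark locations, can be illustrated with a deliberately simplified, noise-free sketch. This is not the paper's geometric projection filter (which maintains a consistent stochastic map); it merely anchors one landmark and propagates pairwise displacement measurements over the constraint graph:

```python
from collections import deque

def relative_to_absolute(measurements, anchor, anchor_pos=(0.0, 0.0)):
    """Propagate absolute 2-D landmark positions from pairwise displacements.

    measurements: dict (i, j) -> (dx, dy), the displacement from landmark
    i to landmark j. Returns dict landmark -> (x, y) with `anchor` fixed
    at `anchor_pos`; assumes the measurements are noise-free and connect
    every landmark to the anchor.
    """
    # Undirected adjacency list; traversing an edge backwards negates it.
    adj = {}
    for (i, j), (dx, dy) in measurements.items():
        adj.setdefault(i, []).append((j, dx, dy))
        adj.setdefault(j, []).append((i, -dx, -dy))

    pos = {anchor: anchor_pos}
    queue = deque([anchor])
    while queue:
        i = queue.popleft()
        x, y = pos[i]
        for j, dx, dy in adj.get(i, []):
            if j not in pos:
                pos[j] = (x + dx, y + dy)
                queue.append(j)
    return pos
```

    With noisy measurements the same relative-map structure would instead be solved in a least-squares or filtering framework, which is where the paper's contribution lies.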

  14. Salt bridges: geometrically specific, designable interactions.

    Science.gov (United States)

    Donald, Jason E; Kulp, Daniel W; DeGrado, William F

    2011-03-01

    Salt bridges occur frequently in proteins, providing conformational specificity and contributing to molecular recognition and catalysis. We present a comprehensive analysis of these interactions in protein structures by surveying a large database of protein structures. Salt bridges between Asp or Glu and His, Arg, or Lys display extremely well-defined geometric preferences. Several previously observed preferences are confirmed, and others that were previously unrecognized are discovered. Salt bridges are explored for their preferences for different separations in sequence and in space, geometric preferences within proteins and at protein-protein interfaces, co-operativity in networked salt bridges, inclusion within metal-binding sites, preference for acidic electrons, apparent conformational side chain entropy reduction on formation, and degree of burial. Salt bridges occur far more frequently between residues at close than distant sequence separations, but, at close distances, there remain strong preferences for salt bridges at specific separations. Specific types of complex salt bridges, involving three or more members, are also discovered. As we observe a strong relationship between the propensity to form a salt bridge and the placement of salt-bridging residues in protein sequences, we discuss the role that salt bridges might play in kinetically influencing protein folding and thermodynamically stabilizing the native conformation. We also develop a quantitative method to select appropriate crystal structure resolution and B-factor cutoffs. Detailed knowledge of these geometric and sequence dependences should aid de novo design and prediction algorithms. Copyright © 2010 Wiley-Liss, Inc.

  15. Effect of heat build-up on carbon emissions in chimato compost piles ...

    African Journals Online (AJOL)

    A study was conducted to determine impacts of heat build-up of chimato compost piles TD0, TD20, TD40, TD50, TD60, TD80 and TD100, made by blending maize stalks with 0, 20, 40, 50, 60, 80 and 100% Tithonia diversifolia, respectively, on carbon losses and emissions during composting. Compost piles temperatures ...

  16. A GLOBAL SOLUTION TO TOPOLOGICAL RECONSTRUCTION OF BUILDING ROOF MODELS FROM AIRBORNE LIDAR POINT CLOUDS

    Directory of Open Access Journals (Sweden)

    J. Yan

    2016-06-01

    This paper presents a global solution to building roof topological reconstruction from LiDAR point clouds. Starting with segmented roof planes from building LiDAR points, a BSP (binary space partitioning) algorithm is used to partition the bounding box of the building into volumetric cells, whose geometric features and topology are simultaneously determined. To resolve the inside/outside labelling problem of cells, a global energy function considering surface visibility and spatial regularization between adjacent cells is constructed and minimized via graph cuts. As a result, the cells are labelled as either inside or outside, where the planar surfaces between the inside and outside form the reconstructed building model. Two LiDAR data sets, of Yangjiang (China) and Wuhan University (China), are used in the study. Experimental results show that the completeness of reconstructed roof planes is 87.5%. Compared with existing data-driven approaches, the proposed approach is global: roof faces and edges as well as their topology are determined at one time via minimization of an energy function. Moreover, the approach is robust to partial absence of roof planes and tends to reconstruct roof models with visibility-consistent surfaces.

  17. History matching of transient pressure build-up in a simulation model using adjoint method

    Energy Technology Data Exchange (ETDEWEB)

    Ajala, I.; Haekal, Rachmat; Ganzer, L. [Technische Univ. Clausthal, Clausthal-Zellerfeld (Germany); Almuallim, H. [Firmsoft Technologies, Inc., Calgary, AB (Canada); Schulze-Riegert, R. [SPT Group GmbH, Hamburg (Germany)

    2013-08-01

    The aim of this work is the efficient, computer-assisted history matching of pressure build-up and pressure derivatives by small modifications to reservoir rock properties at the grid-block level. (orig.)

  18. Artefacts in geometric phase analysis of compound materials.

    Science.gov (United States)

    Peters, Jonathan J P; Beanland, Richard; Alexe, Marin; Cockburn, John W; Revin, Dmitry G; Zhang, Shiyong Y; Sanchez, Ana M

    2015-10-01

    The geometric phase analysis (GPA) algorithm is known as a robust and straightforward technique that can be used to measure lattice strains in high resolution transmission electron microscope (TEM) images. It is also attractive for analysis of aberration-corrected scanning TEM (ac-STEM) images that resolve every atom column, since it uses Fourier transforms and does not require real-space peak detection and assignment to appropriate sublattices. Here it is demonstrated that, in ac-STEM images of compound materials with compositionally distinct atom columns, an additional geometric phase is present in the Fourier transform. If the structure changes from one area to another in the image (e.g. across an interface), the change in this additional phase will appear as a strain in conventional GPA, even if there is no lattice strain. Strategies to avoid this pitfall are outlined. Copyright © 2015 Elsevier B.V. All rights reserved.
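
    The GPA principle can be sketched in one dimension: isolate a single lattice frequency g in the Fourier transform, inverse transform, and read the local lattice displacement u(x) from the phase via P(x) = -2πg·u(x). The following numpy sketch is our illustration of that relation, not the authors' implementation (the function name and windowing choice are ours):

```python
import numpy as np

def gpa_phase_1d(signal, g, x):
    """Geometric phase of the lattice frequency g (cycles per unit length).

    Masks the Fourier transform around the +g reflection, inverse
    transforms, and subtracts the reference phase 2*pi*g*x. For a
    lattice displaced by u(x), the result is P(x) = -2*pi*g*u(x),
    the basic GPA relation in one dimension.
    """
    n = len(signal)
    freqs = np.fft.fftfreq(n, d=x[1] - x[0])
    mask = np.abs(freqs - g) < g / 2          # window around +g only
    filtered = np.fft.ifft(np.fft.fft(signal) * mask)
    raw = np.unwrap(np.angle(filtered))
    return raw - 2.0 * np.pi * g * x
```

    A rigid shift u of a perfect lattice cos(2πg(x-u)) is recovered as a constant phase -2πgu; the artefact described in the abstract corresponds to an extra contribution to this phase that arises from the sublattice structure rather than from any displacement.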

  19. French experience to reduce radiation field build-up and improve nuclear fuel performance

    International Nuclear Information System (INIS)

    Thomazet, J.; Beslu, P.; Noe, M.; Stora, J.P.

    1983-01-01

    Over the last few years, considerable information has been obtained on primary coolant chemistry, activity build-up and nuclear fuel behavior. As of December 1982, twenty-three 900 MWe reactors were in operation in France and about 1.3 million rods had been loaded in power reactors, among which six regions of 17x17 fuel assemblies had successfully completed their third cycle of irradiation with a lead assembly burn-up of 37,000 MWd/MtU. Visual examination shows that crud deposited on fuel clads is mostly thin or nonexistent. This result is due to the appropriate B/Li coolant concentration control which has been applied in French reactors for several years. Correlatively, radiation field build-up is minimized and excessive external corrosion has never been observed. Nevertheless, for higher coolant temperature plants, where occurrence of nucleate boiling could increase crud deposition, and for load-follow and high burn-up operation, an extensive programme is being performed jointly by the Commissariat a l'Energie Atomique (CEA), Electricite de France, FRAMATOME and FRAGEMA to reduce the radiation field even further. This programme, described in the paper, includes: loop tests; on-site chemical and radiochemical surveys; radiation field measurements; on-site fuel examination, crud scraping, crud analysis and oxide thickness measurements; and hot cell examinations. Some key results are presented and discussed in this paper. (author)

  20. Source term evaluation model for high-level radioactive waste repository with decay chain build-up.

    Science.gov (United States)

    Chopra, Manish; Sunny, Faby; Oza, R B

    2016-09-18

    A source term model based on a two-component leach flux concept is developed for a high-level radioactive waste repository. The long-lived radionuclides associated with high-level waste may give rise to a build-up of activity because of radioactive decay chains. The ingrowth of progeny is incorporated in the model using the Bateman decay chain build-up equations. The model is applied to different radionuclides present in the high-level radioactive waste that form part of decay chains (the 4n to 4n + 3 series), and the activity of the parent and daughter radionuclides leaching out of the waste matrix is estimated. Two cases are considered: one in which only the parent is initially present in the waste, and another in which daughters are also initially present in the waste matrix. Incorporating the in situ production of daughter radionuclides in the source term is important for realistic estimates. It is shown that the inclusion of decay chain build-up is essential to avoid underestimating the radiological impact assessment of the repository. The model can be a useful tool for evaluating the source term of the radionuclide transport models used for the radiological impact assessment of high-level radioactive waste repositories.
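
    The Bateman solution referred to above can be written down directly. The sketch below is our illustration of the standard Bateman equations for the first of the two cases (only the parent present at t = 0), not the authors' code; it assumes all decay constants in the chain are distinct:

```python
import math

def bateman_atoms(lambdas, n1_0, t):
    """Number of atoms N_k(t) of each member of a decay chain 1 -> 2 -> ... -> n.

    lambdas: decay constants of the chain members (all distinct).
    n1_0: atoms of the parent at t = 0 (daughters start at zero).
    Implements N_k(t) = N1(0) * (prod_{i<k} lambda_i)
                      * sum_j exp(-lambda_j t) / prod_{i != j} (lambda_i - lambda_j).
    """
    n = len(lambdas)
    result = []
    for k in range(1, n + 1):
        prefactor = n1_0
        for i in range(k - 1):
            prefactor *= lambdas[i]
        total = 0.0
        for j in range(k):
            denom = 1.0
            for i in range(k):
                if i != j:
                    denom *= lambdas[i] - lambdas[j]
            total += math.exp(-lambdas[j] * t) / denom
        result.append(prefactor * total)
    return result
```

    The activity of member k then follows as A_k = λ_k N_k(t); the case with daughters initially present adds an analogous Bateman term for each initially populated member.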

  1. Semi-Automatic Registration of Airborne and Terrestrial Laser Scanning Data Using Building Corner Matching with Boundaries as Reliability Check

    Directory of Open Access Journals (Sweden)

    Liang Cheng

    2013-11-01

    Data registration is a prerequisite for the integration of multi-platform laser scanning in various applications. A new approach is proposed for the semi-automatic registration of airborne and terrestrial laser scanning data using buildings without eaves. First, an automatic calculation procedure for thresholds in the density of projected points (DoPP) method is introduced to extract boundary segments from terrestrial laser scanning data. A new algorithm, using a self-extending procedure, is developed to recover the extracted boundary segments, which then intersect to form the corners of buildings. The building corners extracted from airborne and terrestrial laser scanning are reliably matched through an automatic iterative process in which boundaries from the two datasets are compared as a reliability check. The experimental results illustrate that the proposed approach provides both high reliability and high geometric accuracy (average error of 0.44 m/0.15 m in the horizontal/vertical direction for corresponding building corners) for the final registration of airborne laser scanning (ALS) and tripod-mounted terrestrial laser scanning (TLS) data.

  2. Comparison of Different Classification Algorithms for the Detection of User's Interaction with Windows in Office Buildings

    DEFF Research Database (Denmark)

    Markovic, Romana; Wolf, Sebastian; Cao, Jun

    2017-01-01

    Occupant behavior in terms of interactions with windows and heating systems is seen as one of the main sources of discrepancy between predicted and measured heating, ventilation and air conditioning (HVAC) building energy consumption. Thus, this work analyzes the performance of several...... classification algorithms for detecting occupant's interactions with windows, while taking the imbalanced properties of the available data set into account. The tested methods include support vector machines (SVM), random forests, and their combination with dynamic Bayesian networks (DBN). The results will show...

  3. Time Series Analysis Using Geometric Template Matching.

    Science.gov (United States)

    Frank, Jordan; Mannor, Shie; Pineau, Joelle; Precup, Doina

    2013-03-01

    We present a novel framework for analyzing univariate time series data. At the heart of the approach is a versatile algorithm for measuring the similarity of two segments of time series called geometric template matching (GeTeM). First, we use GeTeM to compute a similarity measure for clustering and nearest-neighbor classification. Next, we present a semi-supervised learning algorithm that uses the similarity measure with hierarchical clustering in order to improve classification performance when unlabeled training data are available. Finally, we present a boosting framework called TDEBOOST, which uses an ensemble of GeTeM classifiers. TDEBOOST augments the traditional boosting approach with an additional step in which the features used as inputs to the classifier are adapted at each step to improve the training error. We empirically evaluate the proposed approaches on several datasets, such as accelerometer data collected from wearable sensors and ECG data.
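
    The abstract does not give the GeTeM similarity measure itself, so as a stand-in the sketch below performs the same kind of nearest-neighbour classification of time-series segments, but with a simple z-normalised Euclidean distance (a hypothetical substitute for the GeTeM measure):

```python
def znorm(seg):
    """Z-normalise a segment: zero mean and unit variance (or unchanged scale
    if the segment is constant)."""
    m = sum(seg) / len(seg)
    var = sum((v - m) ** 2 for v in seg) / len(seg)
    sd = var ** 0.5 or 1.0
    return [(v - m) / sd for v in seg]

def distance(a, b):
    """Euclidean distance between z-normalised segments of equal length."""
    za, zb = znorm(a), znorm(b)
    return sum((p - q) ** 2 for p, q in zip(za, zb)) ** 0.5

def classify_1nn(query, templates):
    """Label of the nearest template; templates is a list of (segment, label)."""
    return min(templates, key=lambda t: distance(query, t[0]))[1]
```

    Any segment-similarity measure, GeTeM included, can be dropped into `classify_1nn` in place of `distance`; the clustering and boosting layers described in the abstract are built on top of exactly this kind of pairwise comparison.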

  4. STAR Algorithm Integration Team - Facilitating operational algorithm development

    Science.gov (United States)

    Mikles, V. J.

    2015-12-01

    The NOAA/NESDIS Center for Satellite Applications and Research (STAR) provides technical support for the Joint Polar Satellite System (JPSS) algorithm development and integration tasks. Utilizing data from the S-NPP satellite, JPSS generates over thirty Environmental Data Records (EDRs) and Intermediate Products (IPs) spanning atmospheric, ocean, cryosphere, and land weather disciplines. The Algorithm Integration Team (AIT) brings technical expertise and support to product algorithms, specifically in testing and validating science algorithms in a pre-operational environment. The AIT verifies that new and updated algorithms function in the development environment, enforces established software development standards, and ensures that delivered packages are functional and complete. AIT facilitates the development of new JPSS-1 algorithms by implementing a review approach based on the Enterprise Product Lifecycle (EPL) process. Building on relationships established during the S-NPP algorithm development process and coordinating directly with science algorithm developers, the AIT has implemented structured reviews with self-contained document suites. The process has supported algorithm improvements for products such as ozone, active fire, vegetation index, and temperature and moisture profiles.

  5. Algorithms, architectures and information systems security

    CERN Document Server

    Sur-Kolay, Susmita; Nandy, Subhas C; Bagchi, Aditya

    2008-01-01

    This volume contains articles written by leading researchers in the fields of algorithms, architectures, and information systems security. The first five chapters address several challenging geometric problems and related algorithms. These topics have major applications in pattern recognition, image analysis, digital geometry, surface reconstruction, computer vision and robotics. The next five chapters focus on various optimization issues in VLSI design and test architectures, and in wireless networks. The last six chapters comprise scholarly articles on information systems security.

  6. Electron Cloud Build Up and Instability in the CLIC Damping Rings

    CERN Document Server

    Rumolo, G; Papaphilippou, Y

    2008-01-01

    Electron cloud can be formed in the CLIC positron damping ring and cause intolerable tune shift and beam instability. Build-up simulations with the Faktor2 code, developed at CERN, have been done to predict the cloud formation in the arcs and wigglers of the damping rings. HEADTAIL simulations have been used to study the effect of this electron cloud on the beam and assess the thresholds above which the electron cloud instability would set in.

  7. Start-up flow in a three-dimensional lid-driven cavity by means of a massively parallel direction splitting algorithm

    KAUST Repository

    Guermond, J. L.

    2011-05-04

    The purpose of this paper is to validate a new highly parallelizable direction splitting algorithm. The parallelization capabilities of this algorithm are illustrated by providing a highly accurate solution for the start-up flow in a three-dimensional impulsively started lid-driven cavity of aspect ratio 1×1×2 at Reynolds numbers 1000 and 5000. The computations are done in parallel (up to 1024 processors) on adapted grids of up to 2 billion nodes in three space dimensions. Velocity profiles are given at dimensionless times t=4, 8, and 12; at least four digits are expected to be correct at Re=1000. © 2011 John Wiley & Sons, Ltd.
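
    The direction-splitting idea, advancing a multi-dimensional problem through a sequence of one-dimensional implicit solves, can be illustrated on the 2-D heat equation with a classical Peaceman-Rachford (ADI) step. This is a serial sketch of the general technique, not the authors' Navier-Stokes algorithm; the point is that each half-step reduces to independent tridiagonal systems, which is what makes direction splitting so easy to parallelise:

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system Ax = d (a: sub-, b: main, c: super-diagonal;
    a[0] and c[-1] are unused). Standard Thomas elimination."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def adi_heat_step(u, r):
    """One Peaceman-Rachford step for u_t = u_xx + u_yy on an n x n grid.

    u: nested list with fixed zero boundary values; r = dt / (2 * h**2).
    Each half-step is implicit in one direction only, so the update is a
    collection of independent 1-D tridiagonal solves.
    """
    n = len(u)
    m = n - 2                          # interior points per grid line
    a, b, c = [-r] * m, [1.0 + 2.0 * r] * m, [-r] * m

    # Half-step 1: implicit in x, explicit in y.
    half = [row[:] for row in u]
    for j in range(1, n - 1):
        d = [u[j][i] + r * (u[j - 1][i] - 2.0 * u[j][i] + u[j + 1][i])
             for i in range(1, n - 1)]
        sol = thomas(a, b, c, d)
        for i in range(1, n - 1):
            half[j][i] = sol[i - 1]

    # Half-step 2: implicit in y, explicit in x.
    new = [row[:] for row in half]
    for i in range(1, n - 1):
        d = [half[j][i] + r * (half[j][i - 1] - 2.0 * half[j][i] + half[j][i + 1])
             for j in range(1, n - 1)]
        sol = thomas(a, b, c, d)
        for j in range(1, n - 1):
            new[j][i] = sol[j - 1]
    return new
```

    Because each row (and then each column) is an independent solve, the half-steps distribute naturally across processors, which is the property the paper exploits at a much larger scale.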

  8. Radiation Build-Up in Shielding of Low-Activity High-Energy Gamma Sources

    International Nuclear Information System (INIS)

    Helfi-Yuliati; Mukhlis-Akhadi

    2003-01-01

    Research to observe the radiation build-up factor (b) in aluminium (Al), iron (Fe) and lead (Pb) shielding for high-energy gamma radiation from low-activity 137Cs (Eγ = 662 keV) and 60Co (Eγ = 1332 keV) sources has been carried out. Al, with Z = 13, represents a metal of low atomic number; Fe, with Z = 26, a metal of medium atomic number; and Pb, with Z = 82, a metal of high atomic number. A low-activity source in this research is a source that becomes safe for workers once its dose rate has decreased to 3% of its initial value. The research was conducted by counting the radiation intensity behind shielding whose thickness varied from 1 to 5 times the half-value thickness (HVT). A NaI(Tl) detector connected to a multi-channel analyzer (MCA) was used for the counting. The calculation results show that all b values are close to unity (b ≈ 1) for all of the metals. Thus no radiation build-up factor is required in estimating the shielding thickness of these metals for low-activity, high-energy gamma sources. (author)
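
    The underlying broad-beam shielding relation is D = B · D0 · e^(-μx), where B is the build-up factor the study measures; with B ≈ 1 it reduces to narrow-beam (good-geometry) attenuation. A minimal sketch of that relation (illustrative values only, not the study's data):

```python
import math

def dose_rate_behind_shield(d0, mu, x, buildup=1.0):
    """Broad-beam dose rate D = B * D0 * exp(-mu * x) behind a shield.

    d0: unshielded dose rate, mu: linear attenuation coefficient,
    x: shield thickness, buildup: build-up factor B (B = 1 recovers
    narrow-beam attenuation).
    """
    return buildup * d0 * math.exp(-mu * x)

def half_value_thickness(mu):
    """Thickness that halves the narrow-beam intensity: HVT = ln(2) / mu."""
    return math.log(2.0) / mu
```

    Five half-value thicknesses reduce a narrow beam to 1/32 (about 3.1%) of its initial value, which matches the 3% "safe" criterion used to define a low-activity source above.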

  9. COMPARISON OF METHODS FOR GEOMETRIC CAMERA CALIBRATION

    Directory of Open Access Journals (Sweden)

    J. Hieronymus

    2012-09-01

    Methods for the geometric calibration of cameras in close-range photogrammetry are established and well investigated. The most common one is based on test fields with a well-known pattern, observed from different directions; the parameters of a distortion model are calculated using bundle-block-adjustment algorithms. This method works well for short focal lengths, but is considerably more problematic with long focal lengths, which would require very large test fields and surrounding space. To overcome this problem, another common calibration method, used in remote sensing, employs measurements with a collimator and a goniometer. A third calibration method uses diffractive optical elements (DOE) to project holograms of well-known patterns. In this paper these three calibration methods are compared empirically, especially in terms of accuracy. A camera has been calibrated with the methods mentioned above. All methods provide a set of distortion correction parameters as used by the photogrammetric software Australis, and the resulting parameter values are very similar for all investigated methods. The three sets of distortion parameters are cross-compared against all three calibration methods. This is achieved by inserting the obtained distortion parameters as fixed input into the calibration algorithms and adjusting only the exterior orientation. The RMS (root mean square) of the remaining image-coordinate residuals is taken as a measure of distortion correction quality. There are differences resulting from the different calibration methods; nevertheless the measure is small for every comparison, which means that all three calibration methods can be used for accurate geometric calibration.
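
    The dominant term in such distortion models is usually the radial polynomial Δr = k1·r³ + k2·r⁵ + … common in close-range photogrammetry. The sketch below applies a two-term radial correction to an image point; it is our illustration with generic parameter names, not the exact Australis parameter set:

```python
def radial_correction(x, y, k1, k2, xp=0.0, yp=0.0):
    """Correct an image point (x, y) for radial lens distortion.

    Applies the first two radial terms, shifting the point by
    (k1*r^2 + k2*r^4) times its offset from the principal point
    (xp, yp), i.e. dr = k1*r^3 + k2*r^5 along the radial direction.
    """
    dx, dy = x - xp, y - yp
    r2 = dx * dx + dy * dy
    scale = k1 * r2 + k2 * r2 * r2
    return x + dx * scale, y + dy * scale
```

    Cross-comparing calibrations as in the paper amounts to fixing k1, k2 (and the other model parameters) from one method and checking how well the corrected image coordinates fit observations made under another.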

  10. Certain integrable system on a space associated with a quantum search algorithm

    International Nuclear Information System (INIS)

    Uwano, Y.; Hino, H.; Ishiwatari, Y.

    2007-01-01

    In thinking up a Grover-type quantum search algorithm for an ordered tuple of multiqubit states, a gradient system associated with the negative von Neumann entropy is studied on the space of regular relative configurations of multiqubit states (SR2CMQ). The SR2CMQ emerges, through a geometric procedure, from the space of ordered tuples of multiqubit states for the quantum search. The aim of this paper is to give a brief report on the integrability of the gradient dynamical system together with the quantum information geometry of the underlying space, SR2CMQ, of that system

  11. Effect of curing mode on the hardness of dual-cured composite resin core build-up materials

    Directory of Open Access Journals (Sweden)

    César Augusto Galvão Arrais

    2010-06-01

    This study evaluated the Knoop hardness (KHN) values of two dual-cured composite resin core build-up materials and one resin cement exposed to different curing conditions. Two dual-cured core build-up composite resins (LuxaCore®-Dual, DMG; and FluoroCore®2, Dentsply Caulk) and one dual-cured resin cement (Rely X ARC, 3M ESPE) were used in the present study. The composite materials were placed into a cylindrical matrix (2 mm in height and 3 mm in diameter), and the specimens thus produced were either light-activated for 40 s (Optilux 501, Demetron Kerr) or allowed to self-cure for 10 min in the dark (n = 5). All specimens were then stored in humidity at 37°C for 24 h in the dark and subjected to KHN analysis. The results were submitted to two-way ANOVA and Tukey's post-hoc test at a pre-set alpha of 5%. All the light-activated groups exhibited higher KHN values than the self-cured ones (p = 0.00001), regardless of product. Among the self-cured groups, both composite resin core build-up materials showed higher KHN values than the dual-cured resin cement (p = 0.00001). LuxaCore®-Dual exhibited higher KHN values than FluoroCore®2 (p = 0.00001) when allowed to self-cure, while no significant differences in KHN values were observed among the light-activated products. The results suggest that dual-cured composite resin core build-up materials may be more reliable than dual-cured resin cements when curing light is not available.

  12. Iterative model-building, structure refinement, and density modification with the PHENIX AutoBuild Wizard

    Energy Technology Data Exchange (ETDEWEB)

    Los Alamos National Laboratory, Mailstop M888, Los Alamos, NM 87545, USA; Lawrence Berkeley National Laboratory, One Cyclotron Road, Building 64R0121, Berkeley, CA 94720, USA; Department of Haematology, University of Cambridge, Cambridge CB2 0XY, England; Terwilliger, T.C.; Grosse-Kunstleve, Ralf Wilhelm; Afonine, P.V.; Moriarty, N.W.; Zwart, P.H.; Hung, L.-W.; Read, R.J.; Adams, P.D.

    2007-04-29

    The PHENIX AutoBuild Wizard is a highly automated tool for iterative model-building, structure refinement and density modification using RESOLVE or TEXTAL model-building, RESOLVE statistical density modification, and phenix.refine structure refinement. Recent advances in the AutoBuild Wizard and phenix.refine include automated detection and application of NCS from models as they are built, extensive model completion algorithms, and automated solvent molecule picking. Model completion algorithms in the AutoBuild Wizard include loop-building, crossovers between chains in different models of a structure, and side-chain optimization. The AutoBuild Wizard has been applied to a set of 48 structures at resolutions ranging from 1.1 Å to 3.2 Å, resulting in a mean R-factor of 0.24 and a mean free R factor of 0.29. The R-factor of the final model is dependent on the quality of the starting electron density, and relatively independent of resolution.

  13. Concerning the sound insulation of building elements made up of light concretes. [acoustic absorption efficiency calculations

    Science.gov (United States)

    Giurgiu, I. I.

    1974-01-01

    The sound-insulating capacity of building elements made up of light concretes is considered. By differential analysis of the behavior of light-concrete building elements under incident acoustic energy, and on the basis of experimental measurements, correction coefficients are introduced into the basic formulas for calculating the sound-insulating capacity over the 100-3200 Hz frequency band.

  14. Influence of Pressure Build-Up Time of Compression Chamber on Improving the Operation Frequency of a Single-Piston Hydraulic Free-Piston Engine

    Directory of Open Access Journals (Sweden)

    Hai-bo Xie

    2013-01-01

    A single-piston hydraulic free-piston engine with a two-cylinder four-stroke diesel engine as its driver is introduced. The free-piston assembly takes a certain time to start moving after the pressure in the compression chamber begins to increase; the time difference between the pressure increase and the piston starting to move is defined as the pressure build-up time. The characteristics of the pressure build-up time and its influence on the performance of the free-piston engine are introduced and analyzed. Based on the basic law of dynamics of the free-piston assembly, the parameters that influence the pressure build-up time are analyzed, and improvements and optimization are then proposed to shorten it.

  15. Geometric singular perturbation analysis of systems with friction

    DEFF Research Database (Denmark)

    Bossolini, Elena

    This thesis is concerned with the application of geometric singular perturbation theory to mechanical systems with friction. The mathematical background on geometric singular perturbation theory, on the blow-up method, on non-smooth dynamical systems and on regularization is presented. Thereafter......, two mechanical problems with two different formulations of the friction force are introduced and analysed. The first mechanical problem is a one-dimensional spring-block model describing earthquake faulting. The dynamics of earthquakes is naturally a multiple timescale problem: the timescale...... scales. The action of friction is generally explained as the loss and restoration of linkages between the surface asperities at the molecular scale. However, the consequences of friction are noticeable at much larger scales, like hundreds of kilometers. By using geometric singular perturbation theory...

  16. Seismic performance for vertical geometric irregularity frame structures

    Science.gov (United States)

    Ismail, R.; Mahmud, N. A.; Ishak, I. S.

    2018-04-01

    This research highlights results for vertically geometrically irregular frame structures. The finite element analysis software LUSAS was used to analyse seismic performance, focusing in particular on an irregular frame type with differences in floor heights continued in the middle of the building. Building structures in Malaysia have been affected by earthquakes occurring in neighbouring countries such as Indonesia (Sumatera Island). In Malaysia, concrete, which has limited tension resistance, is widely used in building construction. Structural behaviour under horizontal and vertical static loads is commonly analysed using plane frame analysis. The case study of this research determines the stress and displacement in the seismic response of this type of irregular frame structure. The study is based on the seven-storey building of the Clinical Training Centre located in Sungai Buloh, Selayang, Selangor. Ground-motion data recorded from the large earthquake that occurred in Acheh, Indonesia on December 26, 2004 were used in this research. The results for stress and displacement from IMPlus seismic analysis in the LUSAS Modeller software, for the seismic response of a formwork frame system, indicate that the building can safely withstand the ground motion and remains in good condition across the range of seismic performance considered.

  17. SEMANTIC SEGMENTATION OF BUILDING ELEMENTS USING POINT CLOUD HASHING

    Directory of Open Access Journals (Sweden)

    M. Chizhova

    2018-05-01

    For the interpretation of point clouds, the semantic definition of segments extracted from point clouds or images is a common problem. Usually, the semantics of geometrically pre-segmented point cloud elements are determined using probabilistic networks and scene databases. The proposed semantic segmentation method is based on the psychological human interpretation of geometric objects, especially on fundamental rules of primary comprehension. Starting from these rules, buildings can be quite well and simply classified by a human operator (e.g. an architect) into different building types and structural elements (dome, nave, transept, etc.), including particular building parts that are visually detected. The key part of the procedure is a novel hashing-based method in which point cloud projections are transformed into binary pixel representations. The segmentation approach, demonstrated on the example of classical Orthodox churches, is suitable for other buildings and objects characterized by a particular typology in their construction (e.g. industrial objects in standardized environments with strict component design) that allows clear semantic modelling.

  18. The variants of an LOD of a 3D building model and their influence on spatial analyses

    Science.gov (United States)

    Biljecki, Filip; Ledoux, Hugo; Stoter, Jantien; Vosselman, George

    2016-06-01

    The level of detail (LOD) of a 3D city model indicates the model's grade and usability. However, there exist multiple valid variants of each LOD. As a consequence, the LOD concept is inconclusive as an instruction for the acquisition of 3D city models. For instance, the top surface of an LOD1 block model may be modelled at the eaves of a building or at its ridge height. Such variants, which we term geometric references, are often overlooked and are usually not documented in the metadata. Furthermore, the influence of a particular geometric reference on the performance of a spatial analysis is not known. In response to this research gap, we investigate a variety of LOD1 and LOD2 geometric references that are commonly employed, and perform numerical experiments to investigate their relative difference when used as input for different spatial analyses. We consider three use cases (estimation of the area of the building envelope, building volume, and shadows cast by buildings), and compute the deviations in a Monte Carlo simulation. The experiments, carried out with procedurally generated models, indicate that two 3D models representing the same building at the same LOD, but modelled according to different geometric references, may yield substantially different results when used in a spatial analysis. The outcome of our experiments also suggests that the geometric reference may have a bigger influence than the LOD, since an LOD1 with a specific geometric reference may yield a more accurate result than when using LOD2 models.
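
    The eaves-versus-ridge effect on a volume estimate is easy to quantify for an idealised gable-roof building: the true volume is the wall prism plus half the roof box, while the two LOD1 geometric references flatten the roof at either the eaves or the ridge height. A sketch under these simplifying assumptions (the function and its rectangular-footprint model are ours, not the paper's procedural models):

```python
def lod1_volume_deviation(width, length, h_eaves, h_ridge):
    """Relative volume error of two LOD1 geometric references.

    Models a gable-roof building with a rectangular footprint: the true
    volume is the wall prism up to the eaves plus half the box between
    eaves and ridge. Returns (error_eaves, error_ridge) as signed
    fractions of the true volume.
    """
    footprint = width * length
    true_vol = footprint * h_eaves + 0.5 * footprint * (h_ridge - h_eaves)
    v_eaves = footprint * h_eaves     # LOD1 block cut at eaves height
    v_ridge = footprint * h_ridge     # LOD1 block cut at ridge height
    return ((v_eaves - true_vol) / true_vol,
            (v_ridge - true_vol) / true_vol)
```

    For a 10 m x 10 m footprint with 6 m eaves and 8 m ridge, the two references err by one seventh of the true volume in opposite directions, which illustrates why the choice of geometric reference can matter as much as the nominal LOD.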

  19. MyShake: Building a smartphone seismic network

    Science.gov (United States)

    Kong, Q.; Allen, R. M.; Schreier, L.

    2014-12-01

    We are in the process of building up a smartphone seismic network. In order to build this network, we performed shake table tests to evaluate the performance of smartphones as seismic recording instruments. We also conducted noise floor tests to find the minimum earthquake signal we can record using smartphones. We added phone noise to strong motion data from past earthquakes, and used these as an analogue dataset to test algorithms and to understand the differences between using the smartphone network and the traditional seismic network. We also built a prototype system to trigger the smartphones from our server to record signals, which can be sent back to the server in near real time. The phones can also be triggered by our algorithm running locally on the phone: if an earthquake triggers the phones, the recorded signal will be sent back to the server. We expect to turn the prototype system into a real smartphone seismic network that works as a supplementary network to the existing traditional seismic network.

  20. Iterative model building, structure refinement and density modification with the PHENIX AutoBuild wizard

    International Nuclear Information System (INIS)

    Terwilliger, Thomas C.; Grosse-Kunstleve, Ralf W.; Afonine, Pavel V.; Moriarty, Nigel W.; Zwart, Peter H.; Hung, Li-Wei; Read, Randy J.; Adams, Paul D.

    2008-01-01

    The highly automated PHENIX AutoBuild wizard is described. The procedure can be applied equally well to phases derived from isomorphous/anomalous and molecular-replacement methods. The PHENIX AutoBuild wizard is a highly automated tool for iterative model building, structure refinement and density modification using RESOLVE model building, RESOLVE statistical density modification and phenix.refine structure refinement. Recent advances in the AutoBuild wizard and phenix.refine include automated detection and application of NCS from models as they are built, extensive model-completion algorithms and automated solvent-molecule picking. Model-completion algorithms in the AutoBuild wizard include loop building, crossovers between chains in different models of a structure and side-chain optimization. The AutoBuild wizard has been applied to a set of 48 structures at resolutions ranging from 1.1 to 3.2 Å, resulting in a mean R factor of 0.24 and a mean free R factor of 0.29. The R factor of the final model is dependent on the quality of the starting electron density and is relatively independent of resolution.

  1. Reconstruction of Building Outlines in Dense Urban Areas Based on LIDAR Data and Address Points

    Science.gov (United States)

    Jarzabek-Rychard, M.

    2012-07-01

    The paper presents a comprehensive method for the automated extraction and delineation of building outlines in densely built-up areas. A novel aspect of the outline reconstruction is the use of geocoded building address points. They give information about building location and thus greatly reduce the task complexity. The reconstruction process is executed on 3D point clouds acquired by an airborne laser scanner. The method consists of three steps: building detection, delineation and contour refinement. The algorithm is tested against a data set covering an old market town and its surroundings. The results are discussed and evaluated by comparison to reference cadastral data.
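A minimal sketch of how a geocoded address point can seed the detection step (illustrative only; the distance threshold is invented, and the delineation and refinement steps are not reproduced):

```python
import math

def detect_building(points, address_point, max_dist=15.0):
    """Keep only LiDAR returns within max_dist (metres, assumed) of the
    geocoded address point, as a toy stand-in for building detection."""
    ax, ay = address_point
    return [(x, y, z) for (x, y, z) in points
            if math.hypot(x - ax, y - ay) <= max_dist]

cloud = [(0.0, 0.0, 5.0), (3.0, 4.0, 6.0), (100.0, 100.0, 2.0)]
building = detect_building(cloud, (0.0, 0.0), max_dist=10.0)
```

The address point localises the search, which is the sense in which the paper says address points "highly reduce task complexity": candidate regions no longer have to be found over the whole scene.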

  2. Can the BMS Algorithm Decode Up to \\lfloor \\frac{d_G-g-1}{2}\\rfloor Errors? Yes, but with Some Additional Remarks

    Science.gov (United States)

    Sakata, Shojiro; Fujisawa, Masaya

    It is a well-known fact [7], [9] that the BMS algorithm with majority voting can decode up to half the Feng-Rao designed distance dFR. Since dFR is not smaller than the Goppa designed distance dG, that algorithm can correct up to \\lfloor \\frac{d_G-1}{2}\\rfloor errors. On the other hand, it has been considered to be evident that the original BMS algorithm (without voting) [1], [2] can correct up to \\lfloor \\frac{d_G-g-1}{2}\\rfloor errors similarly to the basic algorithm by Skorobogatov-Vladut. But, is it true? In this short paper, we show that it is true, although we need a few remarks and some additional procedures for determining the Groebner basis of the error locator ideal exactly. In fact, as the basic algorithm gives a set of polynomials whose zero set contains the error locators as a subset, it cannot always give the exact error locators, unless the syndrome equation is solved to find the error values in addition.

  3. Parallel Execution of Functional Mock-up Units in Buildings Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Ozmen, Ozgur [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Nutaro, James J. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); New, Joshua Ryan [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2016-06-30

    A Functional Mock-up Interface (FMI) defines a standardized interface to be used in computer simulations to develop complex cyber-physical systems. FMI implementation by a software modeling tool enables the creation of a simulation model that can be interconnected, or the creation of a software library called a Functional Mock-up Unit (FMU). This report describes an FMU wrapper implementation that imports FMUs into a C++ environment and uses an Euler solver that executes FMUs in parallel using Open Multi-Processing (OpenMP). The purpose of this report is to elucidate the runtime performance of the solver when a multi-component system is imported as a single FMU (for the whole system) or as multiple FMUs (for different groups of components as sub-systems). This performance comparison is conducted using two test cases: (1) a simple, multi-tank problem; and (2) a more realistic use case based on the Modelica Buildings Library. In both test cases, the performance gains are promising when each FMU consists of a large number of states and state events that are wrapped in a single FMU. Load balancing is demonstrated to be a critical factor in speeding up parallel execution of multiple FMUs.

  4. A geometric view on learning Bayesian network structures

    Czech Academy of Sciences Publication Activity Database

    Studený, Milan; Vomlel, Jiří; Hemmecke, R.

    2010-01-01

    Roč. 51, č. 5 (2010), s. 578-586 ISSN 0888-613X. [PGM 2008] R&D Projects: GA AV ČR(CZ) IAA100750603; GA MŠk(CZ) 1M0572; GA ČR GA201/08/0539 Grant - others:GA MŠk(CZ) 2C06019 Institutional research plan: CEZ:AV0Z10750506 Keywords : learning Bayesian networks * standard imset * inclusion neighborhood * geometric neighborhood * GES algorithm Subject RIV: BA - General Mathematics Impact factor: 1.679, year: 2010 http://library.utia.cas.cz/separaty/2010/MTR/studeny-0342804.pdf

  5. Research on geometric rectification of the Large FOV Linear Array Whiskbroom Image

    Science.gov (United States)

    Liu, Dia; Liu, Hui-tong; Dong, Hao; Liu, Xiao-bo

    2015-08-01

    To solve the geometric distortion problem of large FOV linear array whiskbroom images, a model of multi-centre central projection collinearity equations was established, taking into account the whiskbroom and linear CCD imaging features, and the principle of the distortion was analysed. Based on the rectification method with POS, we introduced the angular position sensor data of the servo system and restored the geometric imaging process exactly. An indirect rectification scheme aimed at linear array imaging with a best scanline searching method was adopted, and the matrices for calculating the exterior orientation elements were redesigned. We improved two iterative algorithms for this device, and compared and analysed them. The rectification of images from an airborne imaging experiment showed good results.

  6. Visualizing the Geometric Series.

    Science.gov (United States)

    Bennett, Albert B., Jr.

    1989-01-01

    Mathematical proofs often leave students unconvinced or without understanding of what has been proved, because they provide no visual-geometric representation. Presented are geometric models for the finite geometric series when r is a whole number, and the infinite geometric series when r is the reciprocal of a whole number. (MNS)
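For reference, the identities that the geometric models depict are the standard ones. For a whole-number ratio $r \neq 1$, the finite series has the closed form

```latex
\sum_{k=0}^{n-1} r^{k} = \frac{r^{n}-1}{r-1},
```

and when $r = 1/m$ for a whole number $m > 1$, the infinite series converges to

```latex
\sum_{k=0}^{\infty} \left(\frac{1}{m}\right)^{k}
  = \frac{1}{1 - 1/m}
  = \frac{m}{m-1}.
```

For example, with $m = 2$ the infinite sum is $2$, which is the value the familiar halving-a-square picture makes visible.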

  7. Geometric analysis

    CERN Document Server

    Bray, Hubert L; Mazzeo, Rafe; Sesum, Natasa

    2015-01-01

    This volume includes expanded versions of the lectures delivered in the Graduate Minicourse portion of the 2013 Park City Mathematics Institute session on Geometric Analysis. The papers give excellent high-level introductions, suitable for graduate students wishing to enter the field and experienced researchers alike, to a range of the most important areas of geometric analysis. These include: the general issue of geometric evolution, with more detailed lectures on Ricci flow and Kähler-Ricci flow, new progress on the analytic aspects of the Willmore equation as well as an introduction to the recent proof of the Willmore conjecture and new directions in min-max theory for geometric variational problems, the current state of the art regarding minimal surfaces in R^3, the role of critical metrics in Riemannian geometry, and the modern perspective on the study of eigenfunctions and eigenvalues for Laplace-Beltrami operators.

  8. Inverse Kinematics for Industrial Robots using Conformal Geometric Algebra

    Directory of Open Access Journals (Sweden)

    Adam L. Kleppe

    2016-01-01

    Full Text Available This paper shows how the recently developed formulation of conformal geometric algebra can be used for analytic inverse kinematics of two six-link industrial manipulators with revolute joints. The paper demonstrates that the solution of the inverse kinematics in this framework relies on the intersection of geometric objects like lines, circles, planes and spheres, which provides the developer with valuable geometric intuition about the problem. It is believed that this will be very useful for new robot geometries and other mechanisms like cranes and topside drilling equipment. The paper extends previous results on inverse kinematics using conformal geometric algebra by providing consistent solutions for the joint angles for the different configurations depending on shoulder left or right, elbow up or down, and wrist flipped or not. Moreover, it is shown how to relate the solution to the Denavit-Hartenberg parameters of the robot. The solutions have been successfully implemented and tested extensively over the whole workspace of the manipulators.

  9. Simultaneous correction of attenuation and geometric response in emission tomography applied to nuclear waste drums

    International Nuclear Information System (INIS)

    Thierry, Raphael

    1999-01-01

    Multi-photonic emission tomography is a non-destructive technique applied to the control of radioactive waste drums. The emitted gamma rays are detected in the range [50 keV, 2 MeV] by a high-purity germanium detector with high energy resolution, which makes it possible to establish a detailed list of the radionuclides contained within the drum. From different measurement points located in a transaxial plane of the drum, the activity distribution is computed by a reconstruction algorithm. An algebraic model of the physical process has been developed in order to correct the different degrading phenomena, in particular the attenuation and the geometric response of the detector. Attenuation through the materials constituting the drum is the preponderant phenomenon; ignoring it prevents accurate activity quantification. It is corrected using an attenuation map obtained with a transmission tomograph. The geometric response of the detector, which introduces blurring into the detection, is compensated by an analytic model. An adequate model of these phenomena is essential: it contributes greatly to image quality and quantification. The image reconstruction, which requires solving sparse linear systems, is carried out by iterative algorithms. Because of the 'ill-posed' nature of tomographic reconstruction, regularisation is necessary: introducing a priori information about the solution stabilises the methods. We chose to minimise the Maximum A Posteriori criterion, solved with half-quadratic regularisation, which preserves natural discontinuities and avoids global over-smoothing of the image. The method is evaluated on real phantoms and waste drums. Efficient sampling of the data is also considered. (author) [fr

  10. The keys of the kingdom as paradigm for building up the church in reformed church government

    Directory of Open Access Journals (Sweden)

    A. le R. du Plooy

    1998-06-01

    Full Text Available This article adopts an ecclesiological approach and concentrates on the prominent concepts the keys of the kingdom and building up the church. The article attempts to determine the significance those concepts may have for the government of the church and emphasises the close relationship between the keys of the kingdom and the building up of the church. According to Reformational viewpoints the administering of the keys serves the edification of the church. It becomes clear that the notae ecclesiae and the keys of the kingdom function as the basic elements of the church order and must be regarded as the basis or pillars upon which the church is built.

  11. Quasirandom geometric networks from low-discrepancy sequences

    Science.gov (United States)

    Estrada, Ernesto

    2017-08-01

    We define quasirandom geometric networks using low-discrepancy sequences, such as Halton, Sobol, and Niederreiter. The networks are built in d dimensions by considering the d-tuples of digits generated by these sequences as the coordinates of the vertices of the networks in a d-dimensional unit hypercube. Then, two vertices are connected by an edge if they are at a distance smaller than a connection radius. We investigate computationally 11 network-theoretic properties of two-dimensional quasirandom networks and compare them with analogous random geometric networks. We also study their degree distribution and their spectral density distributions. We conclude from this intensive computational study that, in terms of the uniformity of the distribution of the vertices in the unit square, the quasirandom networks look more random than the random geometric networks. We include an analysis of potential strategies for generating higher-dimensional quasirandom networks, where it is known that some of the low-discrepancy sequences are highly correlated. In this respect, we conclude that up to dimension 20, the use of scrambling, skipping and leaping strategies generates quasirandom networks with the desired properties of uniformity. Finally, we consider a diffusive process taking place on the nodes and edges of the quasirandom and random geometric graphs. We show that the diffusion time is shorter in the quasirandom graphs as a consequence of their larger structural homogeneity. In the random geometric graphs the diffusion produces clusters of concentration that make the process slower. Such clusters are a direct consequence of the heterogeneous and irregular distribution of the nodes in the unit square on which the generation of random geometric graphs is based.
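The construction described above can be sketched in a few lines for the two-dimensional case (illustrative; only the Halton sequence in bases 2 and 3 is used, and the connection radius is an arbitrary choice):

```python
import math

def halton(i, base):
    """i-th element of the van der Corput sequence in the given base."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def quasirandom_geometric_graph(n, radius):
    """2-D quasirandom geometric network: vertices at Halton points
    (bases 2 and 3); an edge joins any two vertices closer than radius."""
    pts = [(halton(i, 2), halton(i, 3)) for i in range(1, n + 1)]
    edges = [(u, v) for u in range(n) for v in range(u + 1, n)
             if math.dist(pts[u], pts[v]) < radius]
    return pts, edges

pts, edges = quasirandom_geometric_graph(200, 0.1)
mean_degree = 2 * len(edges) / len(pts)
```

Swapping `halton` for a uniform random generator yields the analogous random geometric graph, which is the comparison the paper carries out across its 11 network-theoretic properties.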

  12. On bivariate geometric distribution

    Directory of Open Access Journals (Sweden)

    K. Jayakumar

    2013-05-01

    Full Text Available Characterizations of the bivariate geometric distribution using univariate and bivariate geometric compounding are obtained. Autoregressive models with marginals distributed as bivariate geometric are developed. Various bivariate geometric distributions analogous to important bivariate exponential distributions, such as Marshall-Olkin’s bivariate exponential, Downton’s bivariate exponential and Hawkes’ bivariate exponential, are presented.

  13. From the ground up: building a minimally invasive aortic valve surgery program.

    Science.gov (United States)

    Nguyen, Tom C; Lamelas, Joseph

    2015-03-01

    Minimally invasive aortic valve replacement (MIAVR) is associated with numerous advantages including improved patient satisfaction, cosmesis, decreased transfusion requirements, and cost-effectiveness. Despite these advantages, little information exists on how to build a MIAVR program from the ground up. The steps to build a MIAVR program include compiling a multi-disciplinary team composed of surgeons, cardiologists, anesthesiologists, perfusionists, operating room (OR) technicians, and nurses. Once assembled, this team can then approach hospital administrators to present a cost-benefit analysis of MIAVR, emphasizing the importance of reduced resource utilization in the long-term to offset the initial financial investment that will be required. With hospital approval, training can commence to provide surgeons and other staff with the necessary knowledge and skills in MIAVR procedures and outcomes. Marketing and advertising of the program through the use of social media, educational conferences, grand rounds, and printed media will attract the initial patients. A dedicated website for the program can function as a "virtual lobby" for patients wanting to learn more. Initially, conservative selection criteria of cases that qualify for MIAVR will set the program up for success by avoiding complex co-morbidities and surgical techniques. During the learning curve phase of the program, patient safety should be a priority.

  14. Numerical Simulation of rivulet build up via lubrication equations

    Science.gov (United States)

    Suzzi, N.; Croce, G.

    2017-11-01

    A number of engineering problems involve the evolution of a thin layer of liquid over a non-wettable substrate. For example, CO2 chemical absorption is carried out in packed columns, where post-combustion CO2 flows up while liquid solvent falls down through a collection of corrugated sheets. Further applications include, among others, in-flight icing simulations, moisture condensation on de-humidifier fins, and fogging build-up and removal. Here, we present the development of an in-house code that numerically solves the 2D lubrication equation for a film flowing down an inclined plate. The disjoining pressure approach is followed in order to model both the contact line discontinuity and the surface wettability. With respect to the original implementation, the full modelling of the capillary pressure terms according to the Young-Laplace relation allows the investigation of contact angles close to π/2. The code is validated against literature numerical results obtained by a fully 3D approach (VOF), showing satisfying agreement despite a strong reduction in computational cost. Steady and unsteady wetting dynamics of a developing rivulet are investigated (and validated) under different load conditions and for different values of the contact angle.

  15. A robust human face detection algorithm

    Science.gov (United States)

    Raviteja, Thaluru; Karanam, Srikrishna; Yeduguru, Dinesh Reddy V.

    2012-01-01

    Human face detection plays a vital role in many applications such as video surveillance, managing a face image database, and human-computer interfaces, among others. This paper proposes a robust algorithm for face detection in still color images that works well even in a crowded environment. The algorithm uses a conjunction of skin color histogram, morphological processing and geometrical analysis for detecting human faces. To reinforce the accuracy of face detection, we further identify mouth and eye regions to establish the presence or absence of a face in a particular region of interest.
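A hedged sketch of the skin-colour stage (one common RGB rule-of-thumb, not the paper's histogram method; the morphological processing and geometrical analysis are not reproduced here):

```python
def skin_mask(pixels):
    """Classify pixels as skin (1) or not (0) using a simple RGB
    heuristic; `pixels` is a row-major grid of (r, g, b) triples."""
    return [[1 if (r > 95 and g > 40 and b > 20
                   and max(r, g, b) - min(r, g, b) > 15
                   and abs(r - g) > 15 and r > g and r > b)
             else 0
             for (r, g, b) in row]
            for row in pixels]

img = [
    [(210, 140, 120), (30, 30, 30)],   # skin-like pixel, dark background
    [(220, 150, 130), (10, 200, 10)],  # skin-like pixel, green background
]
m = skin_mask(img)
```

In a full pipeline, the binary mask would next be cleaned up with morphological opening/closing and the surviving blobs checked for face-like geometry, which is the role of the later stages in the paper's algorithm.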

  16. Optimal Design of a Centrifugal Compressor Impeller Using Evolutionary Algorithms

    Directory of Open Access Journals (Sweden)

    Soo-Yong Cho

    2012-01-01

    Full Text Available An optimization study was conducted on a centrifugal compressor. Eight design variables were chosen from the control points for the Bezier curves, which widely influence the geometric variation; four design variables were selected to optimize the flow passage between the hub and the shroud, and the other four design variables were used to improve the performance of the impeller blade. As the optimization algorithm, an artificial neural network (ANN) was adopted. Initially, a design of experiments was applied to set up the initial data space of the ANN, which was improved during the optimization process using a genetic algorithm. If a result of the ANN reached a higher level, that result was re-calculated by computational fluid dynamics (CFD) and was applied to develop a new ANN. The prediction difference between the ANN and CFD was consequently less than 1% after the 6th generation. Using this optimization technique, the computational time for the optimization was greatly reduced and the accuracy of the optimization algorithm was increased. The efficiency was improved by 1.4% without losing the pressure ratio, and Pareto-optimal solutions of the efficiency versus the pressure ratio were obtained through the 21st generation.

  17. An algorithm for 3D target scatterer feature estimation from sparse SAR apertures

    Science.gov (United States)

    Jackson, Julie Ann; Moses, Randolph L.

    2009-05-01

    We present an algorithm for extracting 3D canonical scattering features from complex targets observed over sparse 3D SAR apertures. The algorithm begins with complex phase history data and ends with a set of geometrical features describing the scene. The algorithm provides a pragmatic approach to initialization of a nonlinear feature estimation scheme, using regularization methods to deconvolve the point spread function and obtain sparse 3D images. Regions of high energy are detected in the sparse images, providing location initializations for scattering center estimates. A single canonical scattering feature, corresponding to a geometric shape primitive, is fit to each region via nonlinear optimization of fit error between the regularized data and parametric canonical scattering models. Results of the algorithm are presented using 3D scattering prediction data of a simple scene for both a densely-sampled and a sparsely-sampled SAR measurement aperture.

  18. Efficient parallel and out of core algorithms for constructing large bi-directed de Bruijn graphs

    Directory of Open Access Journals (Sweden)

    Vaughn Matthew

    2010-11-01

    Full Text Available Abstract Background Assembling genomic sequences from a set of overlapping reads is one of the most fundamental problems in computational biology. Algorithms addressing the assembly problem fall into two broad categories, based on the data structures which they employ. The first class uses an overlap/string graph and the second type uses a de Bruijn graph. However, with the recent advances in short read sequencing technology, de Bruijn graph based algorithms seem to play a vital role in practice. Efficient algorithms for building these massive de Bruijn graphs are very essential in large sequencing projects based on short reads. In an earlier work, an O(n/p) time parallel algorithm has been given for this problem. Here n is the size of the input and p is the number of processors. This algorithm enumerates all possible bi-directed edges which can overlap with a node and ends up generating Θ(nΣ) messages (Σ being the size of the alphabet). Results In this paper we present a Θ(n/p) time parallel algorithm with a communication complexity that is equal to that of parallel sorting and is not sensitive to Σ. The generality of our algorithm makes it very easy to extend it even to the out-of-core model, and in this case it has an optimal I/O complexity of Θ(n log(n/B)/(B log(M/B))) (M being the main memory size and B being the size of the disk block). We demonstrate the scalability of our parallel algorithm on an SGI/Altix computer. A comparison of our algorithm with the previous approaches reveals that our algorithm is faster, both asymptotically and practically. We demonstrate the scalability of our sequential out-of-core algorithm by comparing it with the algorithm used by VELVET to build the bi-directed de Bruijn graph. Our experiments reveal that our algorithm can build the graph with a constant amount of memory, which clearly outperforms VELVET. We also provide efficient algorithms for the bi-directed chain compaction problem. Conclusions The bi

  19. Efficient parallel and out of core algorithms for constructing large bi-directed de Bruijn graphs.

    Science.gov (United States)

    Kundeti, Vamsi K; Rajasekaran, Sanguthevar; Dinh, Hieu; Vaughn, Matthew; Thapar, Vishal

    2010-11-15

    Assembling genomic sequences from a set of overlapping reads is one of the most fundamental problems in computational biology. Algorithms addressing the assembly problem fall into two broad categories, based on the data structures which they employ. The first class uses an overlap/string graph and the second type uses a de Bruijn graph. However, with the recent advances in short read sequencing technology, de Bruijn graph based algorithms seem to play a vital role in practice. Efficient algorithms for building these massive de Bruijn graphs are very essential in large sequencing projects based on short reads. In an earlier work, an O(n/p) time parallel algorithm has been given for this problem. Here n is the size of the input and p is the number of processors. This algorithm enumerates all possible bi-directed edges which can overlap with a node and ends up generating Θ(nΣ) messages (Σ being the size of the alphabet). In this paper we present a Θ(n/p) time parallel algorithm with a communication complexity that is equal to that of parallel sorting and is not sensitive to Σ. The generality of our algorithm makes it very easy to extend it even to the out-of-core model, and in this case it has an optimal I/O complexity of Θ(n log(n/B)/(B log(M/B))) (M being the main memory size and B being the size of the disk block). We demonstrate the scalability of our parallel algorithm on an SGI/Altix computer. A comparison of our algorithm with the previous approaches reveals that our algorithm is faster, both asymptotically and practically. We demonstrate the scalability of our sequential out-of-core algorithm by comparing it with the algorithm used by VELVET to build the bi-directed de Bruijn graph. Our experiments reveal that our algorithm can build the graph with a constant amount of memory, which clearly outperforms VELVET. We also provide efficient algorithms for the bi-directed chain compaction problem. The bi-directed de Bruijn graph is a fundamental data structure for
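The underlying data structure can be sketched as follows (a sequential, in-memory toy; the papers' actual contribution, parallel and out-of-core construction with sorting-bounded communication, is not shown):

```python
def canonical(kmer):
    """A k-mer and its reverse complement label the same bi-directed node."""
    comp = {'A': 'T', 'C': 'G', 'G': 'C', 'T': 'A'}
    rc = ''.join(comp[c] for c in reversed(kmer))
    return min(kmer, rc)

def build_bidirected_de_bruijn(reads, k):
    """Nodes are canonical (k-1)-mers; every k-mer in a read contributes
    an edge between its prefix and suffix (k-1)-mers."""
    nodes, edges = set(), set()
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            u, v = canonical(kmer[:-1]), canonical(kmer[1:])
            nodes.update((u, v))
            edges.add((u, v))
    return nodes, edges

nodes, edges = build_bidirected_de_bruijn(["ACGTACGT"], 4)
```

Identifying each (k-1)-mer with its reverse complement is what makes the graph "bi-directed": a read and its complement strand traverse the same nodes, which is essential for assembling double-stranded DNA.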

  20. Non-tables look-up search algorithm for efficient H.264/AVC context-based adaptive variable length coding decoding

    Science.gov (United States)

    Han, Yishi; Luo, Zhixiao; Wang, Jianhua; Min, Zhixuan; Qin, Xinyu; Sun, Yunlong

    2014-09-01

    In general, context-based adaptive variable length coding (CAVLC) decoding in the H.264/AVC standard requires frequent access to unstructured variable length coding tables (VLCTs), and significant memory access is incurred. Heavy memory access causes high power consumption and time delays, which are serious problems for applications in portable multimedia devices. We propose a method for high-efficiency CAVLC decoding that uses a program instead of the VLCTs. The decoded codeword can be obtained without any table look-up or memory access. The experimental results show that the proposed algorithm achieves a 100% saving in memory accesses and a 40% saving in decoding time without degrading video quality. Additionally, the proposed algorithm shows better performance than conventional CAVLC decoding approaches, such as table look-up by sequential search, table look-up by binary search, Moon's method, and Kim's method.

  1. A method for optimizing the performance of buildings

    Energy Technology Data Exchange (ETDEWEB)

    Pedersen, Frank

    2006-07-01

    lower bounds, or they can be required to assume certain values. The optimization problem makes it possible to optimize virtually any aspect of the building performance; however, the primary focus of this study is on energy consumption, economy, and indoor environment. The performance measures regarding the energy and indoor environment are calculated using existing simulation software, with minor modifications. The cost of constructing the building is calculated using unit prices for construction jobs, which can be found in price catalogues. Simple algebraic expressions are used as models for these prices. The model parameters are found by data-fitting. In order to solve the optimization problem formulated earlier, a gradient-free sequential quadratic programming (SQP) filter algorithm is proposed. The algorithm does not require information about the first partial derivatives of the functions that define the optimization problem. This means that techniques such as finite difference approximations can be avoided, which reduces the time needed for solving the optimization problem. Furthermore, the algorithm uses so-called domain constraint functions in order to ensure that the input to the simulation software is feasible. Using this technique avoids performing time-consuming simulations for unrealistic design decisions. The algorithm is evaluated by applying it to a set of test problems with known solutions. The results indicate that the algorithm converges quickly and in a stable manner, as long as there are no active domain constraints; otherwise, convergence is either deteriorated or prevented. This case is described in the thesis. The proposed building optimization method uses the gradient-free SQP filter algorithm in order to solve the formulated optimization problem, which involves performance measures that are calculated using simulation software for buildings. The method is tested by applying it to a building design problem involving an office

  2. On algorithm for building of optimal α-decision trees

    KAUST Repository

    Alkhalid, Abdulaziz; Chikalov, Igor; Moshkov, Mikhail

    2010-01-01

    The paper describes an algorithm that constructs approximate decision trees (α-decision trees), which are optimal relative to one of the following complexity measures: depth, total path length or number of nodes. The algorithm uses dynamic programming.

  3. Design optimization of cold-formed steel portal frames taking into account the effect of building topology

    Science.gov (United States)

    Phan, Duoc T.; Lim, James B. P.; Sha, Wei; Siew, Calvin Y. M.; Tanyimboh, Tiku T.; Issa, Honar K.; Mohammad, Fouad A.

    2013-04-01

    Cold-formed steel portal frames are a popular form of construction for low-rise commercial, light industrial and agricultural buildings with spans of up to 20 m. In this article, a real-coded genetic algorithm is described that is used to minimize the cost of the main frame of such buildings. The key decision variables considered in the proposed algorithm consist of both the spacing and pitch of the frame as continuous variables, as well as the discrete section sizes. A routine performing the structural analysis and frame design of cold-formed steel sections is embedded into the genetic algorithm. The results show that the real-coded genetic algorithm handles the mixture of design variables effectively, with high robustness and consistency in achieving the optimum solution. All wind load combinations according to the Australian code are considered in this research. Results for frames with knee braces are also included, for which the optimization achieved even larger savings in cost.
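A minimal real-coded genetic algorithm with mixed continuous and discrete design variables can be sketched as follows (the cost function, variable bounds and GA settings below are hypothetical stand-ins; the paper's objective embeds structural analysis and Australian wind-load checks, which are not reproduced):

```python
import random

random.seed(1)

SECTIONS = [0, 1, 2, 3]                   # indices into a discrete catalogue

def cost(spacing, pitch, section):
    """Toy frame-cost surrogate with an obvious optimum for illustration."""
    return (spacing - 4.5) ** 2 + (pitch - 10.0) ** 2 + 0.5 * section

def random_individual():
    return [random.uniform(3, 8),          # frame spacing (m), continuous
            random.uniform(5, 20),         # roof pitch (deg), continuous
            random.choice(SECTIONS)]       # section size, discrete

def crossover(a, b):
    w = random.random()                    # blend the continuous genes
    return [w * a[0] + (1 - w) * b[0],
            w * a[1] + (1 - w) * b[1],
            random.choice([a[2], b[2]])]   # inherit one discrete gene

def mutate(ind, rate=0.2):
    if random.random() < rate:
        ind[0] = min(8, max(3, ind[0] + random.gauss(0, 0.5)))
    if random.random() < rate:
        ind[1] = min(20, max(5, ind[1] + random.gauss(0, 1.0)))
    if random.random() < rate:
        ind[2] = random.choice(SECTIONS)
    return ind

pop = [random_individual() for _ in range(30)]
for _ in range(60):
    pop.sort(key=lambda ind: cost(*ind))
    parents = pop[:10]                     # truncation selection (elitist)
    pop = parents + [mutate(crossover(*random.sample(parents, 2)))
                     for _ in range(20)]
best = min(pop, key=lambda ind: cost(*ind))
```

Blend crossover on the continuous genes alongside categorical inheritance of the discrete gene is what "real-coded" buys here: no binary encoding is needed for the mixed variable set.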

  4. Geometrically engineering the standard model: Locally unfolding three families out of E8

    International Nuclear Information System (INIS)

    Bourjaily, Jacob L.

    2007-01-01

    This paper extends and builds upon the results of [J. L. Bourjaily, arXiv:0704.0444], in which we described how to use the tools of geometrical engineering to deform geometrically engineered grand unified models into ones with lower symmetry. This top-down unfolding has the advantage that the relative positions of singularities giving rise to the many 'low-energy' matter fields are related by only a few parameters which deform the geometry of the unified model. And because the relative positions of singularities are necessary to compute the superpotential, for example, this is a framework in which the arbitrariness of geometrically engineered models can be greatly reduced. In [J. L. Bourjaily, arXiv:0704.0444], this picture was made concrete for the case of deforming the representations of an SU(5) model into their standard model content. In this paper we continue that discussion to show how a geometrically engineered 16 of SO(10) can be unfolded into the standard model, and how the three families of the standard model uniquely emerge from the unfolding of a single, isolated E8 singularity.

  5. Quantification of differences between nailfold capillaroscopy images with a scleroderma pattern and normal pattern using measures of geometric and algorithmic complexity.

    Science.gov (United States)

    Urwin, Samuel George; Griffiths, Bridget; Allen, John

    2017-02-01

    This study aimed to quantify and investigate differences in the geometric and algorithmic complexity of the microvasculature in nailfold capillaroscopy (NFC) images displaying a scleroderma pattern and those displaying a 'normal' pattern. 11 NFC images were qualitatively classified by a capillary specialist as indicative of 'clear microangiopathy' (CM), i.e. a scleroderma pattern, and 11 as 'not clear microangiopathy' (NCM), i.e. a 'normal' pattern. Pre-processing was performed, and fractal dimension (FD) and Kolmogorov complexity (KC) were calculated following image binarisation. FD and KC were compared between groups, and a k-means cluster analysis (n  =  2) on all images was performed, without prior knowledge of the group assigned to them (i.e. CM or NCM), using FD and KC as inputs. CM images had significantly reduced FD and KC compared to NCM images, and the cluster analysis displayed promising results that the quantitative classification of images into CM and NCM groups is possible using the mathematical measures of FD and KC. The analysis techniques used show promise for quantitative microvascular investigation in patients with systemic sclerosis.
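The fractal dimension measure referred to here is commonly estimated by box counting on the binarised image; the sketch below illustrates the idea (the scale set and the least-squares fit are generic choices, not necessarily those used in the study):

```python
import math

def box_counting_dimension(image, sizes=(1, 2, 4, 8)):
    """Estimate the fractal dimension of a binary image (list of 0/1 rows)
    by counting occupied boxes at several scales and fitting the slope of
    log N(s) against log(1/s)."""
    n = len(image)
    counts = []
    for s in sizes:
        boxes = set()
        for y in range(n):
            for x in range(n):
                if image[y][x]:
                    boxes.add((y // s, x // s))
        counts.append(len(boxes))
    xs = [math.log(1.0 / s) for s in sizes]
    ys = [math.log(c) for c in counts]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope

# A completely filled 16x16 square is a 2D object: dimension close to 2.
filled = [[1] * 16 for _ in range(16)]
assert abs(box_counting_dimension(filled) - 2.0) < 0.1
```

A sparser, more fragmented capillary network occupies fewer boxes at coarse scales, giving a lower slope, which is consistent with the reduced FD reported for the scleroderma-pattern images.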

  6. DOA Estimation of Cylindrical Conformal Array Based on Geometric Algebra

    Directory of Open Access Journals (Sweden)

    Minjie Wu

    2016-01-01

    Full Text Available Due to the variable curvature of the conformal carrier, the pattern of each element has a different direction. The traditional method of analyzing a conformal array is to use the Euler rotation angle and its matrix representation. However, this is computationally demanding, especially for irregular array structures. In this paper, we present a novel algorithm that combines geometric algebra with Multiple Signal Classification (MUSIC), termed GA-MUSIC, to solve the direction of arrival (DOA) for a cylindrical conformal array. On this basis, we derive the pattern and array manifold. Compared with existing algorithms, the proposed one avoids cumbersome matrix transformations and largely decreases the computational complexity. Simulation results verify the effectiveness of the proposed method.

  7. Mechanical Model of Geometric Cell and Topological Algorithm for Cell Dynamics from Single-Cell to Formation of Monolayered Tissues with Pattern

    KAUST Repository

    Kachalo, Sëma

    2015-05-14

    Geometric and mechanical properties of individual cells and interactions among neighboring cells are the basis of formation of tissue patterns. Understanding the complex interplay of cells is essential for gaining insight into embryogenesis, tissue development, and other emerging behavior. Here we describe a cell model and an efficient geometric algorithm for studying the dynamic process of tissue formation in 2D (e.g. epithelial tissues). Our approach improves upon previous methods by incorporating properties of individual cells as well as detailed description of the dynamic growth process, with all topological changes accounted for. Cell size, shape, and division plane orientation are modeled realistically. In addition, cell birth, cell growth, cell shrinkage, cell death, cell division, cell collision, and cell rearrangements are now fully accounted for. Different models of cell-cell interactions, such as lateral inhibition during the process of growth, can be studied in detail. Cellular pattern formation for monolayered tissues from arbitrary initial conditions, including that of a single cell, can also be studied in detail. Computational efficiency is achieved through the employment of a special data structure that ensures access to neighboring cells in constant time, without additional space requirement. We have successfully generated tissues consisting of more than 20,000 cells starting from 2 cells within 1 hour. We show that our model can be used to study embryogenesis, tissue fusion, and cell apoptosis. We give detailed study of the classical developmental process of bristle formation on the epidermis of D. melanogaster and the fundamental problem of homeostatic size control in epithelial tissues. Simulation results reveal significant roles of solubility of secreted factors in both the bristle formation and the homeostatic control of tissue size. Our method can be used to study broad problems in monolayered tissue formation. Our software is publicly

  8. Geometric Model of Induction Heating Process of Iron-Based Sintered Materials

    Science.gov (United States)

    Semagina, Yu V.; Egorova, M. A.

    2018-03-01

    The article studies the problem of building multivariable dependences from experimental data. A constructive method for solving the problem is presented in the form of equations of (n−1)-surface compartments of the extended Euclidean space E+n. The dimension of the space is taken equal to the sum of the number of parameters and factors of the model of the system being studied. The basis for building multivariable dependencies is a generalization to n-space of the approach used for surface compartments of 3D space. The surface is designed by the kinematic method, moving one geometric object along a certain trajectory. The proposed approach simplifies the building of the multifactorial empirical dependencies which describe the process being investigated.

  9. Computational fluid dynamics simulation of indoor climate in low energy buildings: Computational set up

    Directory of Open Access Journals (Sweden)

    Risberg Daniel

    2017-01-01

    Full Text Available In this paper, CFD was used to simulate the indoor climate in part of a low energy building. The focus of the work was on the computational set-up, such as grid size and boundary conditions, needed to solve indoor climate problems accurately. Future work is to model a complete building with reasonable calculation time and accuracy; a limited number of grid elements and knowledge of boundary settings are therefore essential. A grid edge size of around 0.1 m was sufficient to predict the climate according to a grid-independence study. Different turbulence models were compared, with only small differences in the indoor air velocities and temperatures. The models show that radiation between building surfaces has a large impact on the temperature field inside the building, with the largest differences at floor level. Simplifying the simulations by modelling the radiator as a surface in the outer wall of the room is appropriate for the calculations. The overall indoor climate is finally compared between three different cases of outdoor air temperature. The results show a good indoor climate for a low energy building throughout the year.

  10. Geometrical parton

    Energy Technology Data Exchange (ETDEWEB)

    Ebata, T [Tohoku Univ., Sendai (Japan). Coll. of General Education

    1976-06-01

    The geometrical distribution inferred from the inelastic cross section is assumed to be proportional to the partial waves. The precocious scaling and the Q²-dependence of various quantities are treated from the geometrical point of view. It is shown that the approximate conservation of the orbital angular momentum may be a very practical rule for understanding the helicity structure of various hadronic and electromagnetic reactions. The rule can be applied to inclusive reactions as well. The model is also applied to large angle processes. Through the discussion, it is suggested that many peculiar properties of the quark-parton can be ascribed to geometrical effects.

  11. Geometrical Modification of Learning Vector Quantization Method for Solving Classification Problems

    Directory of Open Access Journals (Sweden)

    Korhan GÜNEL

    2016-09-01

    Full Text Available In this paper, a geometrical scheme is presented to show how to overcome a problem arising from the use of the generalized delta learning rule within a competitive learning model. A theoretical methodology is introduced for describing the quantization of data via rotating prototype vectors on hyper-spheres. The proposed learning algorithm is tested and verified on different multidimensional datasets, including a binary-class dataset and two multiclass datasets from the UCI repository, as well as a multiclass dataset constructed by us. The proposed method is compared with some baseline learning vector quantization variants from the literature for all domains. A large number of experiments verify the performance of our proposed algorithm with acceptable accuracy and macro F1 scores.

  12. An Assisted Workflow for the Early Design of Nearly Zero Emission Healthcare Buildings

    Directory of Open Access Journals (Sweden)

    Hassan A. Sleiman

    2017-07-01

    Full Text Available Energy efficiency in buildings is one of the main goals of many governmental policies due to their high impact on carbon dioxide emissions in Europe. One of these targets is to reduce the energy consumption of healthcare buildings, which are known to be among the most energy-demanding building types. Although design decisions made at early design phases have a significant impact on the energy performance of the realized buildings, only a small portion of possible early designs is analyzed, which does not ensure an optimal building design. We propose an automated early design support workflow, accompanied by a set of tools, for achieving nearly zero emission healthcare buildings. It is intended to be used by decision makers during the early design phase. It starts with the user-defined brief and the design rules, which are the input for the Early Design Configurator (EDC). The EDC generates multiple design alternatives following an evolutionary algorithm while trying to satisfy user requirements and geometric constraints. The generated alternatives are then validated by means of an Early Design Validator (EDV), and early energy and cost assessments are made using two early assessment tools. A user-friendly dashboard is used to guide the user and to illustrate the workflow results, whereas the alternative chosen at the end of the workflow is considered the starting point for the next design phases. Our proposal has been implemented using Building Information Models (BIM) and validated by means of a case study on a healthcare building and several real demonstrations from different countries in the context of the European project STREAMER.

  13. Segmentation of Shadowed Buildings in Dense Urban Areas from Aerial Photographs

    Directory of Open Access Journals (Sweden)

    Junichi Susaki

    2012-03-01

    Full Text Available Segmentation of buildings in urban areas, especially dense urban areas, using remotely sensed images is highly desirable. However, segmentation results obtained with existing algorithms are unsatisfactory because of the unclear boundaries between buildings and the shadows cast by neighboring buildings. In this paper, an algorithm is proposed that successfully segments buildings from aerial photographs, including shadowed buildings in dense urban areas. To handle roofs having rough textures, digital numbers (DNs) are quantized into several quantum values. Quantization using several interval widths is applied during segmentation, and for each quantization, areas with homogeneous values are labeled in an image. Edges determined from the homogeneous areas obtained at each quantization are then merged, and frequently observed edges are extracted. By using a “rectangular index”, regions whose shapes are close to rectangular are selected as buildings. Experimental results show that the proposed algorithm generates more practical segmentation results than an existing algorithm does. The main factors in successful segmentation of shadowed roofs are (1) combination of different quantization results, (2) selection of buildings according to the rectangular index, and (3) edge completion by the inclusion of non-edge pixels that have a high probability of being edges. By utilizing these factors, the proposed algorithm optimizes the spatial filtering scale with respect to the size of building roofs in a locality. The proposed algorithm is considered useful for building segmentation for various purposes.
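The abstract does not give the formula for the “rectangular index”; one plausible form, sketched below, is the ratio of a region's area to the area of its bounding rectangle (axis-aligned here for simplicity), which approaches 1 for rectangular roofs:

```python
def rectangular_index(pixels):
    """Ratio of region area to the area of its axis-aligned bounding
    rectangle.

    pixels: set of (row, col) coordinates of one labeled region.
    Values near 1 indicate a nearly rectangular, roof-like shape.
    """
    rows = [r for r, _ in pixels]
    cols = [c for _, c in pixels]
    h = max(rows) - min(rows) + 1
    w = max(cols) - min(cols) + 1
    return len(pixels) / (h * w)

# A solid 4x4 square region fills its bounding rectangle exactly:
square = {(r, c) for r in range(4) for c in range(4)}
assert rectangular_index(square) == 1.0
# A thin diagonal region fills only a small fraction of it:
diagonal = {(i, i) for i in range(4)}
assert rectangular_index(diagonal) == 4 / 16
```

Thresholding such an index separates compact rectangular candidates (roofs) from irregular shadow or vegetation regions.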

  14. Fast algorithm of track reconstruction for the DELPHI TPC

    International Nuclear Information System (INIS)

    Maillard, J.

    1984-01-01

    We describe a simple geometrical method (polar inversion) to reconstruct tracks when the magnetic field is constant in magnitude and direction. The method uses geometrical properties of the trajectories. In the case of the DELPHI apparatus, track reconstruction is done using TPC information. After explaining the algorithm, we give results on ''GEANT''-simulated events using the ''Lund'' generator. Today we get a computer time of the order of 1.2 milliseconds on a CDC 7600 and an efficiency of 98% [fr
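The key property exploited by polar inversion can be checked in a few lines: a circle passing through the inversion centre maps to a straight line, so a curved track in a uniform field becomes a cheap line fit (a self-contained illustration, not the DELPHI code):

```python
import math

def invert(p):
    """Polar inversion about the origin: p -> p / |p|^2."""
    x, y = p
    r2 = x * x + y * y
    return (x / r2, y / r2)

# Points on a circle of radius 1 centred at (1, 0); the circle passes
# through the origin (the interaction vertex).
pts = [(1 + math.cos(t), math.sin(t)) for t in (0.3, 0.9, 1.5, 2.1, 2.7)]
inv = [invert(p) for p in pts]

# After inversion, every point lies on the straight line x = 0.5: the
# curved track has become a line, which is far cheaper to fit.
assert all(abs(x - 0.5) < 1e-9 for x, _ in inv)
```

Fitting a line in the inverted plane and mapping back recovers the circle parameters without an iterative circle fit.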

  15. A Hybrid Shared-Memory Parallel Max-Tree Algorithm for Extreme Dynamic-Range Images.

    Science.gov (United States)

    Moschini, Ugo; Meijster, Arnold; Wilkinson, Michael H F

    2018-03-01

    Max-trees, or component trees, are graph structures that represent the connected components of an image in a hierarchical way. Nowadays, many application fields rely on images with high dynamic range or floating-point values. Efficient sequential algorithms exist to build trees and compute attributes for images of any bit depth. However, we show that current parallel algorithms already perform poorly with integers at bit depths higher than 16 bits per pixel. We propose a parallel method combining the two worlds of flooding and merging max-tree algorithms. First, a pilot max-tree of a quantized version of the image is built in parallel using a flooding method. Later, this structure is used in a parallel leaf-to-root approach to compute the final max-tree efficiently and to drive the merging of the sub-trees computed by the threads. We present an analysis of the performance on both simulated and actual 2D images and 3D volumes. Execution times improve on those of the fastest sequential algorithm, and speed-up scales up to 64 threads.

  16. Assessment of radon build up pattern in a closed room with minimal ventilation disturbance

    International Nuclear Information System (INIS)

    Singh, M.K.; Patnaik, R.L.; Jha, V.N.; Ravi, P.M.; Tripathi, R.M.

    2016-01-01

    Radon is ubiquitous in nature. Its immediate source, 226Ra, is present in building materials and in the earth underneath due to the presence of natural uranium in the terrestrial environment. 222Rn gas continuously diffuses into the room atmosphere through pores, cracks and fissures, if any. Build-up of 222Rn is anticipated in a closed room lacking proper ventilation. A study was carried out to observe the build-up pattern of radon concentration, using two different measurement techniques, in a closed room of ESL, Jaduguda. The present paper summarizes the results of the 222Rn build-up study in a closed laboratory room over a period of 3 months

  17. Experimental investigation of a moving averaging algorithm for motion perpendicular to the leaf travel direction in dynamic MLC target tracking

    Energy Technology Data Exchange (ETDEWEB)

    Yoon, Jai-Woong; Sawant, Amit; Suh, Yelin; Cho, Byung-Chul; Suh, Tae-Suk; Keall, Paul [Department of Biomedical Engineering, College of Medicine, Catholic University of Korea, Seoul, Korea 131-700 and Research Institute of Biomedical Engineering, Catholic University of Korea, Seoul, 131-700 (Korea, Republic of); Department of Radiation Oncology, Stanford University, Stanford, California 94305 (United States); Department of Radiation Oncology, Stanford University, Stanford, California 94305 (United States) and Department of Radiation Oncology, Asan Medical Center, Seoul, 138-736 (Korea, Republic of); Department of Biomedical Engineering, College of Medicine, Catholic University of Korea, Seoul, 131-700 and Research Institute of Biomedical Engineering, Catholic University of Korea, Seoul, 131-700 (Korea, Republic of); Department of Radiation Oncology, Stanford University, Stanford, California 94305 (United States) and Radiation Physics Laboratory, Sydney Medical School, University of Sydney, 2006 (Australia)

    2011-07-15

    Purpose: In dynamic multileaf collimator (MLC) motion tracking with complex intensity-modulated radiation therapy (IMRT) fields, target motion perpendicular to the MLC leaf travel direction can cause beam holds, which increase beam delivery time by up to a factor of 4. As a means to balance delivery efficiency and accuracy, a moving average algorithm was incorporated into a dynamic MLC motion tracking system (i.e., moving average tracking) to account for target motion perpendicular to the MLC leaf travel direction. The experimental investigation of the moving average algorithm compared with real-time tracking and no compensation beam delivery is described. Methods: The properties of the moving average algorithm were measured and compared with those of real-time tracking (dynamic MLC motion tracking accounting for both target motion parallel and perpendicular to the leaf travel direction) and no compensation beam delivery. The algorithm was investigated using a synthetic motion trace with a baseline drift and four patient-measured 3D tumor motion traces representing regular and irregular motions with varying baseline drifts. Each motion trace was reproduced by a moving platform. The delivery efficiency, geometric accuracy, and dosimetric accuracy were evaluated for conformal, step-and-shoot IMRT, and dynamic sliding window IMRT treatment plans using the synthetic and patient motion traces. The dosimetric accuracy was quantified via a γ-test with a 3%/3 mm criterion. Results: The delivery efficiency ranged from 89 to 100% for moving average tracking, 26%-100% for real-time tracking, and 100% (by definition) for no compensation. The root-mean-square geometric error ranged from 3.2 to 4.0 mm for moving average tracking, 0.7-1.1 mm for real-time tracking, and 3.7-7.2 mm for no compensation. The percentage of dosimetric points failing the γ-test ranged from 4 to 30% for moving average tracking, 0%-23% for real-time tracking, and 10%-47% for no compensation

  18. Experimental investigation of a moving averaging algorithm for motion perpendicular to the leaf travel direction in dynamic MLC target tracking.

    Science.gov (United States)

    Yoon, Jai-Woong; Sawant, Amit; Suh, Yelin; Cho, Byung-Chul; Suh, Tae-Suk; Keall, Paul

    2011-07-01

    In dynamic multileaf collimator (MLC) motion tracking with complex intensity-modulated radiation therapy (IMRT) fields, target motion perpendicular to the MLC leaf travel direction can cause beam holds, which increase beam delivery time by up to a factor of 4. As a means to balance delivery efficiency and accuracy, a moving average algorithm was incorporated into a dynamic MLC motion tracking system (i.e., moving average tracking) to account for target motion perpendicular to the MLC leaf travel direction. The experimental investigation of the moving average algorithm compared with real-time tracking and no compensation beam delivery is described. The properties of the moving average algorithm were measured and compared with those of real-time tracking (dynamic MLC motion tracking accounting for both target motion parallel and perpendicular to the leaf travel direction) and no compensation beam delivery. The algorithm was investigated using a synthetic motion trace with a baseline drift and four patient-measured 3D tumor motion traces representing regular and irregular motions with varying baseline drifts. Each motion trace was reproduced by a moving platform. The delivery efficiency, geometric accuracy, and dosimetric accuracy were evaluated for conformal, step-and-shoot IMRT, and dynamic sliding window IMRT treatment plans using the synthetic and patient motion traces. The dosimetric accuracy was quantified via a γ-test with a 3%/3 mm criterion. The delivery efficiency ranged from 89 to 100% for moving average tracking, 26%-100% for real-time tracking, and 100% (by definition) for no compensation. The root-mean-square geometric error ranged from 3.2 to 4.0 mm for moving average tracking, 0.7-1.1 mm for real-time tracking, and 3.7-7.2 mm for no compensation. The percentage of dosimetric points failing the γ-test ranged from 4 to 30% for moving average tracking, 0%-23% for real-time tracking, and 10%-47% for no compensation. The delivery efficiency of
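The trailing moving average at the heart of the method can be sketched generically as follows (window length and trace values are illustrative; the clinical system's filter parameters are not stated in the abstract):

```python
from collections import deque

def moving_average(trace, window=5):
    """Trailing moving average of a 1D motion trace.

    Smoothing the perpendicular target position reduces how often the MLC
    aperture must be re-fitted, at the cost of some geometric accuracy --
    the efficiency/accuracy trade-off described in the record.
    """
    buf = deque(maxlen=window)
    smoothed = []
    for x in trace:
        buf.append(x)
        smoothed.append(sum(buf) / len(buf))
    return smoothed

# An oscillating trace: the average settles near the mean and suppresses
# the cycle-to-cycle excursions while still following any slow drift.
trace = [0, 4, 0, 4, 0, 4, 0, 4]
sm = moving_average(trace, window=4)
assert sm[1] == 2.0
assert max(sm[3:]) - min(sm[3:]) < max(trace) - min(trace)
```

A longer window gives fewer beam holds (higher efficiency) but a larger residual geometric error, matching the intermediate error figures reported above.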

  19. The Building Game: From Enumerative Combinatorics to Conformational Diffusion

    Science.gov (United States)

    Johnson-Chyzhykov, Daniel; Menon, Govind

    2016-08-01

    We study a discrete attachment model for the self-assembly of polyhedra called the building game. We investigate two distinct aspects of the model: (i) enumerative combinatorics of the intermediate states and (ii) a notion of Brownian motion for the polyhedral linkage defined by each intermediate that we term conformational diffusion. The combinatorial configuration space of the model is computed for the Platonic, Archimedean, and Catalan solids of up to 30 faces, and several novel enumerative results are generated. These represent the most exhaustive computations of this nature to date. We further extend the building game to include geometric information. The combinatorial structure of each intermediate yields a system of constraints specifying a polyhedral linkage and its moduli space. We use a random walk to simulate a reflected Brownian motion in each moduli space. Empirical statistics of the random walk may be used to define the rates of transition for a Markov process modeling the process of self-assembly.

  20. 3D Monitoring of Buildings Using TerraSAR-X InSAR, DInSAR and PolSAR Capacities

    Directory of Open Access Journals (Sweden)

    Flora Weissgerber

    2017-09-01

    Full Text Available The rapid expansion of cities increases the need for urban remote sensing for large-scale monitoring. This paper provides greater understanding of how TerraSAR-X (TSX) high-resolution capabilities make it possible to reach the spatial precision required to monitor individual buildings, through the use of a 4-year temporal stack of 100 images over Paris (France). Three different SAR modes are investigated for this purpose. First, a method involving a whole time series is proposed to measure realistic building heights. Then, we show that the small wavelength of TSX makes the interferometric products very sensitive to ordinary building deformation, and that daily deformation can be measured over an entire building with centimetric accuracy, without any a priori assumption on the deformation evolution, even when neglecting the impact of the atmosphere. Deformations of up to 4 cm were estimated for the Eiffel Tower and up to 1 cm for other, lower buildings. These deformations were analyzed and validated with weather and in situ local data. Finally, four TSX polarimetric images were used to investigate geometric and dielectric properties of buildings in a deterministic framework. Despite the resolution loss of this mode, the possibility of estimating the orientations of a building's structural elements and their relative complexity in spatial organization is demonstrated.

  1. The effect of electron collimator leaf shape on the build-up dose in narrow electron MLC fields

    International Nuclear Information System (INIS)

    Vatanen, T; Vaeaenaenen, A; Lahtinen, T; Traneus, E

    2009-01-01

    Previously, we have found that the build-up dose from abutting narrow electron beams formed with unfocussed electron multi-leaf collimator (eMLC) steel leaves was higher than with the respective open field. To investigate more closely the effect of leaf material and shape on the dose in the build-up region, straight and round (radius 1.5 cm) leaf ends, and leaf ends with a different front face angle α (leaf front face pointing towards the beam axis at an angle of 90 − α), made of steel, brass and tungsten, were modelled using the BEAMnrc code. Based on a treatment head simulation of a Varian 2100 C/D linac, depth-dose curves and profiles in water were calculated for narrow 6, 12 and 20 MeV eMLC beams (width 1.0 cm, length 10 cm) at source-to-surface distances (SSD) of 102 and 105 cm. The effects of leaf material and front face angle were evaluated based on electron fluence, angle and energy spectra. With a leaf front face angle of 15 deg., the dose in the build-up region of the 6 MeV field varied between 91 and 100%, while for straight and round leaf shapes the dose varied between 89 and 100%. The variation was between 94 and 100% for 12 and 20 MeV. For abutting narrow 6 MeV fields with total field size 5 × 10 cm², the build-up doses at 5 mm depth for the 15 deg. face angle and for straight and round leaf shapes were 96% and 86% (SSD 102 cm) and 89% and 85% (SSD 105 cm). With higher energies, the effect of eMLC leaf shape on the dose at 5 mm was slight (3-4% units with 12 MeV) and marginal with 20 MeV. The fluence, energy and angle spectra for total and leaf-scattered electrons were practically the same for the different leaf materials with 6 MeV. With high energies, the spectra for tungsten were more peaked due to lower leaf transmission. Compared with straight leaf ends, the 15 deg. face angle and round leaf ends led to a 1 mm (for 6 MeV) and between 1 and 5 mm (12 and 20 MeV at an SSD of 105 cm) decrease of therapeutic range and increase of the field size, respectively. However

  2. High performance dosimetry calculations using adapted ray-tracing

    International Nuclear Information System (INIS)

    Perrotte, Lancelot; Saupin, Guillaume

    2010-01-01

    When preparing interventions on nuclear sites, it is useful to study different scenarios in order to identify the most appropriate one for the operator(s). Virtual reality tools are a good way to simulate the potential scenarios, and very efficient computation times help the user study different complex scenarios by immediately evaluating the impact of any change. In the field of radiation protection, computation codes based on the straight-line attenuation method with build-up factors are often used. As with other approaches, the geometrical computations (finding all the intersections between radiation rays and scene objects) remain the bottleneck of the simulation. We present in this paper several optimizations used to speed up these geometrical computations, using innovative GPU ray-tracing algorithms. For instance, we manage to compute every intersection between 600 000 rays and a huge 3D industrial scene in a fraction of a second. Moreover, our algorithm works the same way for both static and dynamic scenes, allowing easier study of complex intervention scenarios (where everything moves: the operator(s), the shielding objects, the radiation sources).
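Once the ray-tracing step has produced the traversed thicknesses, the straight-line attenuation method with build-up factors reduces, per ray, to a simple exponential law; a minimal sketch (the function name is illustrative, and the build-up factor, in practice tabulated per material and shield thickness, is simply passed in):

```python
import math

def attenuated_dose_rate(d0, mu, thickness, buildup=1.0):
    """Straight-line attenuation with a build-up factor B:

        D = B * D0 * exp(-mu * d)

    d0:        unshielded dose rate at the detector point
    mu:        linear attenuation coefficient of the shield (1/cm)
    thickness: shield thickness traversed by the ray (cm)
    buildup:   build-up factor B accounting for scattered photons that
               still reach the detector (tabulated in real codes)
    """
    return buildup * d0 * math.exp(-mu * thickness)

# One tenth-value layer of shielding, thickness = ln(10)/mu, cuts the
# unscattered dose rate by a factor of 10 (with B = 1).
mu = 0.5
tvl = math.log(10) / mu
assert abs(attenuated_dose_rate(100.0, mu, tvl) - 10.0) < 1e-9
```

The cost of evaluating this law is negligible; the expensive part, as the abstract notes, is finding every ray/object intersection to obtain the thicknesses, which is what the GPU ray tracer accelerates.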

  3. Alice and Bob meet Banach the interface of asymptotic geometric analysis and quantum information theory

    CERN Document Server

    Aubrun, Guillaume

    2017-01-01

    The quest to build a quantum computer is arguably one of the major scientific and technological challenges of the twenty-first century, and quantum information theory (QIT) provides the mathematical framework for that quest. Over the last dozen or so years, it has become clear that quantum information theory is closely linked to geometric functional analysis (Banach space theory, operator spaces, high-dimensional probability), a field also known as asymptotic geometric analysis (AGA). In a nutshell, asymptotic geometric analysis investigates quantitative properties of convex sets, or other geometric structures, and their approximate symmetries as the dimension becomes large. This makes it especially relevant to quantum theory, where systems consisting of just a few particles naturally lead to models whose dimension is in the thousands, or even in the billions. Alice and Bob Meet Banach is aimed at multiple audiences connected through their interest in the interface of QIT and AGA: at quantum information resea...

  4. Fracture mechanics of hydroxyapatite single crystals under geometric confinement.

    Science.gov (United States)

    Libonati, Flavia; Nair, Arun K; Vergani, Laura; Buehler, Markus J

    2013-04-01

    Geometric confinement to the nanoscale, a concept that refers to the characteristic dimensions of structural features of materials at this length scale, has been shown to control the mechanical behavior of many biological materials or their building blocks, and such effects have also been suggested to play a crucial role in enhancing the strength and toughness of bone. Here we study the effect of geometric confinement on the fracture mechanism of hydroxyapatite (HAP) crystals that form the mineralized phase in bone. We report a series of molecular simulations of HAP crystals with an edge crack on the (001) plane under tensile loading, and we systematically vary the sample height whilst keeping the sample and the crack length constant. We find that by decreasing the sample height the stress concentration at the tip of the crack disappears for samples with a height smaller than 4.15 nm, below which the material shows a different failure mode characterized by a more ductile mechanism with much larger failure strains, and a strength approaching that of a flawless crystal. This study directly confirms an earlier suggestion of a flaw-tolerant state that appears under geometric confinement and may explain the mechanical stability of the reinforcing HAP platelets in bone. Copyright © 2012 Elsevier Ltd. All rights reserved.

  5. Optimization of Pressurizer Based on Genetic-Simplex Algorithm

    International Nuclear Information System (INIS)

    Wang, Cheng; Yan, Chang Qi; Wang, Jian Jun

    2014-01-01

    The pressurizer is one of the key components in a nuclear power system, and it is important to control its dimensions in the design through optimization techniques. In this work, a mathematical model of a vertical electric heating pressurizer was established. A new Genetic-Simplex Algorithm (GSA), which combines the genetic algorithm and the simplex algorithm, was developed to enhance the searching ability, and the modified and original algorithms were compared on benchmark functions. Furthermore, the optimization design of the pressurizer, taking minimization of volume and net weight as objectives, was carried out through GSA, considering thermal-hydraulic and geometric constraints. The results indicate that the mathematical model is suitable for the pressurizer and that the new algorithm is more effective than the traditional genetic algorithm. The optimization design shows obvious validity and can provide guidance for real engineering design

  6. Geometric Design Laboratory

    Data.gov (United States)

    Federal Laboratory Consortium — Purpose: The mission of the Geometric Design Laboratory (GDL) is to support the Office of Safety Research and Development in research related to the geometric design...

  7. An Integrated Simplification Approach for 3D Buildings with Sloped and Flat Roofs

    Directory of Open Access Journals (Sweden)

    Jinghan Xie

    2016-07-01

Full Text Available Simplification of three-dimensional (3D) buildings is critical to improve the efficiency of visualizing urban environments while ensuring realistic urban scenes. Moreover, it underpins the construction of multi-scale 3D city models (3DCMs), which can be applied to study various urban issues. In this paper, we design a generic yet effective approach for simplifying 3D buildings. Instead of relying on both semantic and geometric information, our approach is based solely on geometric information, as many 3D buildings still do not include semantic information. In addition, it provides an integrated means to treat 3D buildings with either sloped or flat roofs. The two case studies, one exploring the simplification of individual 3D buildings at varying levels of complexity and the other investigating the multi-scale simplification of a cityscape, show the effectiveness of our approach.

  8. Multi-objective optimization of a plate and frame heat exchanger via genetic algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Najafi, Hamidreza; Najafi, Behzad [K. N. Toosi University of Technology, Department of Mechanical Engineering, Tehran (Iran)

    2010-06-15

In the present paper, a plate and frame heat exchanger is considered. Multi-objective optimization using a genetic algorithm is performed in order to obtain the set of geometric design parameters that leads to minimum pressure drop and maximum overall heat transfer coefficient. Clearly, the considered objective functions conflict, and no single solution can satisfy both objectives simultaneously. The multi-objective optimization procedure yields a set of optimal solutions, called the Pareto front, each of which is a trade-off between the objectives and can be selected by the user according to the application and the project's limits. The presented work accounts for numerous geometric parameters in the presence of logical constraints. A sensitivity analysis is also carried out to study the effects of different geometric parameters on the considered objective functions. Modeling the system and implementing the multi-objective optimization via the genetic algorithm were performed in MATLAB. (orig.)
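The nondominated filtering at the heart of such a Pareto-front search can be illustrated in a few lines. The toy "designs" below are random points with two minimized objectives, not actual heat-exchanger geometries:

```python
import random

def pareto_front(points):
    """Return the nondominated subset for a 2-objective minimization problem."""
    front = []
    for p in points:
        dominated = any(
            q != p and q[0] <= p[0] and q[1] <= p[1] and (q[0] < p[0] or q[1] < p[1])
            for q in points
        )
        if not dominated:
            front.append(p)
    return sorted(front)

random.seed(0)
# Toy designs: (pressure drop, negated heat transfer coefficient), both minimized.
designs = [(random.random(), random.random()) for _ in range(200)]
front = pareto_front(designs)
print(len(front), "nondominated designs out of", len(designs))
```

Each point on the front improves one objective only at the expense of the other, which is exactly the trade-off the user selects from.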

  9. An algorithm for determining the rotation count of pulsars

    Science.gov (United States)

    Freire, Paulo C. C.; Ridolfi, Alessandro

    2018-06-01

We present here a simple, systematic method for determining the correct global rotation count of a radio pulsar, an essential step in the derivation of an accurate phase-coherent ephemeris. We then build on this method by developing a new algorithm for determining the global rotation count for pulsars with sparse timing data sets. This makes it possible to obtain phase-coherent ephemerides for pulsars for which this has been impossible until now. As an example, we do this for PSR J0024-7205aa, an extremely faint millisecond pulsar (MSP) recently discovered in the globular cluster 47 Tucanae. This algorithm has the potential to significantly reduce the number of observations and the amount of telescope time needed to follow up on new pulsar discoveries.
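The basic requirement of phase coherence, that every time-of-arrival corresponds to an integer number of rotations since a reference epoch, can be illustrated with a toy calculation (the spin frequency, turn numbers and noise level below are invented, and real timing must account for spin-down, dispersion and barycentric corrections):

```python
# Assigning integer rotation numbers to pulse times-of-arrival (TOAs),
# assuming the spin frequency is known well enough that the prediction
# error stays below half a rotation.
f0 = 173.687946  # spin frequency in Hz (invented value)
t_ref = 0.0      # reference epoch in seconds

# Simulated TOAs: exact integer turns plus small measurement noise.
true_turns = [0, 1042, 52_113, 1_873_220]
toas = [t_ref + n / f0 + 2e-6 * (-1) ** n for n in true_turns]

rotation_counts = [round(f0 * (t - t_ref)) for t in toas]
residuals = [f0 * (t - t_ref) - n for t, n in zip(toas, rotation_counts)]

print(rotation_counts)                 # recovered integer turn numbers
print(max(abs(r) for r in residuals))  # phase residuals, in turns
```

When the data set is sparse, the gaps between TOAs can exceed the half-turn safety margin, which is exactly the regime the record's new algorithm targets.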

  10. Geometrical themes inspired by the n-body problem

    CERN Document Server

    Herrera, Haydeé; Herrera, Rafael

    2018-01-01

    Presenting a selection of recent developments in geometrical problems inspired by the N-body problem, these lecture notes offer a variety of approaches to study them, ranging from variational to dynamical, while developing new insights, making geometrical and topological detours, and providing historical references. A. Guillot’s notes aim to describe differential equations in the complex domain, motivated by the evolution of N particles moving on the plane subject to the influence of a magnetic field. Guillot studies such differential equations using different geometric structures on complex curves (in the sense of W. Thurston) in order to find isochronicity conditions.   R. Montgomery’s notes deal with a version of the planar Newtonian three-body equation. Namely, he investigates the problem of whether every free homotopy class is realized by a periodic geodesic. The solution involves geometry, dynamical systems, and the McGehee blow-up. A novelty of the approach is the use of energy-balance in order t...

  11. Imaging Formation Algorithm of the Ground and Space-Borne Hybrid BiSAR Based on Parameters Estimation from Direct Signal

    Directory of Open Access Journals (Sweden)

    Qingjun Zhang

    2014-01-01

Full Text Available This paper proposes a novel image formation algorithm for bistatic synthetic aperture radar (BiSAR) in the configuration of a noncooperative transmitter and a stationary receiver, where traditional imaging algorithms fail because the necessary imaging parameters cannot be estimated from the limited information made available by the noncooperative data provider. In the new algorithm, the essential parameters for imaging, such as the squint angle, Doppler centroid, and Doppler chirp-rate, are estimated by full exploitation of the recorded direct signal (the signal that travels directly from the transmitter to the stationary receiver). The Doppler chirp-rate is retrieved by modeling the peak phase of the direct signal as a quadratic polynomial. The Doppler centroid frequency and the squint angle are derived from image contrast optimization. Range focusing with secondary range compression (SRC), range cell migration correction (RCMC), and azimuth focusing are then implemented. Finally, the proposed algorithm is validated by imaging of a BiSAR experiment configured with China's YAOGAN-10 SAR as the transmitter and the receiver platform located on a building at a height of 109 m in Jiangsu province. The experimental image with geometric correction shows good agreement with local Google images.
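One ingredient named in the abstract, recovering the Doppler chirp-rate by modeling a phase history as a quadratic polynomial, can be sketched with an ordinary least-squares fit. All numbers below are invented and the phase model is a simplified stand-in for the real direct-signal processing:

```python
import math

def solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with partial pivoting."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, 3):
            factor = M[r][col] / M[col][col]
            for c in range(col, 4):
                M[r][c] -= factor * M[col][c]
    x = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][c] * x[c] for c in range(r + 1, 3))) / M[r][r]
    return x

def quadratic_fit(ts, ys):
    """Least-squares fit y ~ a*t^2 + b*t + c via the 3x3 normal equations."""
    S = [sum(t ** k for t in ts) for k in range(5)]
    T = [sum(y * t ** k for t, y in zip(ts, ys)) for k in range(3)]
    A = [[S[4], S[3], S[2]], [S[3], S[2], S[1]], [S[2], S[1], S[0]]]
    return solve3(A, [T[2], T[1], T[0]])

# Synthetic peak-phase samples: phi(t) = phi0 + 2*pi*(f_dc*t + 0.5*k_a*t^2).
phi0, f_dc, k_a = 0.3, 120.0, -1850.0   # invented centroid and chirp-rate
ts = [i * 1e-3 for i in range(200)]
phis = [phi0 + 2 * math.pi * (f_dc * t + 0.5 * k_a * t * t) for t in ts]

a, b, c = quadratic_fit(ts, phis)
est_chirp = a / math.pi           # a = pi * k_a
est_centroid = b / (2 * math.pi)  # b = 2 * pi * f_dc
print(est_chirp, est_centroid)
```

On noise-free synthetic data the fit recovers the chirp-rate and centroid exactly; in practice the estimate is refined by the image-contrast optimization the abstract describes.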

  12. Scaling and allometry in the building geometries of Greater London

    Science.gov (United States)

    Batty, M.; Carvalho, R.; Hudson-Smith, A.; Milton, R.; Smith, D.; Steadman, P.

    2008-06-01

Many aggregate distributions of urban activities such as city sizes reveal scaling but hardly any work exists on the properties of spatial distributions within individual cities, notwithstanding considerable knowledge about their fractal structure. We redress this here by examining scaling relationships in a world city using data on the geometric properties of individual buildings. We first summarise how power laws can be used to approximate the size distributions of buildings, in analogy to city-size distributions which have been widely studied as rank-size and lognormal distributions following Zipf [ Human Behavior and the Principle of Least Effort (Addison-Wesley, Cambridge, 1949)] and Gibrat [ Les Inégalités Économiques (Librarie du Recueil Sirey, Paris, 1931)]. We then extend this analysis to allometric relationships between buildings in terms of their different geometric size properties. We present some preliminary analysis of building heights from the Emporis database, which suggests very strong scaling in world cities. The database for Greater London is then introduced, from which we extract 3.6 million buildings whose scaling properties we explore. We examine key allometric relationships between these different properties illustrating how building shape changes according to size, and we extend this analysis to the classification of buildings according to land use types. We conclude with an analysis of two-point correlation functions of building geometries which supports our non-spatial analysis of scaling.
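The rank-size analysis described above boils down to fitting a straight line in log-log space. A minimal sanity check on synthetic data (all numbers invented) looks like this:

```python
import math

# Toy rank-size check: for sizes drawn from an ideal Zipf law, log(size)
# against log(rank) is a straight line whose slope is the Zipf exponent.
sizes = [1000.0 / r for r in range(1, 301)]   # ideal Zipf, exponent -1
sizes.sort(reverse=True)                      # rank 1 = largest

xs = [math.log(rank) for rank in range(1, len(sizes) + 1)]
ys = [math.log(s) for s in sizes]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
    / sum((x - mx) ** 2 for x in xs)
print(slope)  # close to -1 for an ideal Zipf distribution
```

Real building-size data deviate from the ideal line, and the paper's point is precisely to characterize how.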

  13. A new approach for handling longitudinal count data with zero-inflation and overdispersion: poisson geometric process model.

    Science.gov (United States)

    Wan, Wai-Yin; Chan, Jennifer S K

    2009-08-01

In biomedical applications, time series of count data often exhibit correlated measurements, clustering and excessive zeros simultaneously, and ignoring such effects can lead to misleading conclusions about treatment outcomes. A generalized mixture Poisson geometric process (GMPGP) model and a zero-altered mixture Poisson geometric process (ZMPGP) model are developed from the geometric process model, which was originally proposed for modelling positive continuous data and was later extended to handle count data. These models are motivated by the evaluation of the trend in new tumour counts for bladder cancer patients as well as by the identification of useful covariates that affect the count level. The models are implemented using Bayesian methods with Markov chain Monte Carlo (MCMC) algorithms and are assessed using the deviance information criterion (DIC).

  14. Segmentation of Shadowed Buildings in Dense Urban Areas from Aerial Photographs

    OpenAIRE

    Susaki, Junichi

    2012-01-01

    Segmentation of buildings in urban areas, especially dense urban areas, by using remotely sensed images is highly desirable. However, segmentation results obtained by using existing algorithms are unsatisfactory because of the unclear boundaries between buildings and the shadows cast by neighboring buildings. In this paper, an algorithm is proposed that successfully segments buildings from aerial photographs, including shadowed buildings in dense urban areas. To handle roofs having rough text...

  15. Building up the standard gauge model of high energy physics. 11

    International Nuclear Information System (INIS)

    Rajasekaran, G.

    1989-01-01

    This chapter carefully builds up, step by step, the standard gauge model of particle physics based on the group SU(3) c x SU(2) x U(1). Spontaneous symmetry breaking via the Nambu-Goldstone mode, and then via the Higgs mode for gauge theories, are presented via examples, first for the Abelian U(1) and then for the non-Abelian SU(2) case. The physically interesting SU(2) x U(1) model is then taken up. The emergence of massive vector bosons is demonstrated. After this preparation, the 'standard model' of the late 60's prior to the gauge theory revolution, based on the V-A current-current weak interactions, minimal electromagnetism, and an unspecified strong interaction, all in quark-lepton language, is set up. It is then compared to the standard gauge model of SU(3) c x SU(2) x U(1). The compelling reasons for QCD as the gauge theory of strong interactions are spelt out. An introduction to renormalization group methods as the main calculational tool for QCD, asymptotic freedom, infrared problems, and physically motivated reasons for going beyond the standard model are presented. (author). 6 refs.; 19 figs.; 2 tabs

  16. Exact computation of the Voronoi Diagram of spheres in 3D, its topology and its geometric invariants

    DEFF Research Database (Denmark)

    Anton, François; Mioc, Darka; Santos, Marcelo

    2011-01-01

    In this paper, we are addressing the exact computation of the Delaunay graph (or quasi-triangulation) and the Voronoi diagram of spheres using Wu’s algorithm. Our main contribution is first a methodology for automated derivation of invariants of the Delaunay empty circumcircle predicate for spheres...... and the Voronoi vertex of four spheres, then the application of this methodology to get all geometrical invariants that intervene in this problem and the exact computation of the Delaunay graph and the Voronoi diagram of spheres. To the best of our knowledge, there does not exist a comprehensive treatment...... of the exact computation with geometrical invariants of the Delaunay graph and the Voronoi diagram of spheres. Starting from the system of equations defining the zero-dimensional algebraic set of the problem, we are following Wu’s algorithm to transform the initial system into an equivalent Wu characteristic...

  17. TOWARDS AN INTEGRATION OF GIS AND BIM DATA: WHAT ARE THE GEOMETRIC AND TOPOLOGICAL ISSUES?

    Directory of Open Access Journals (Sweden)

    K. Arroyo Ohori

    2017-10-01

Full Text Available Geographic information and building information modelling both model buildings and infrastructure, but the ways in which they are modelled are usually complementary, and BIM-GIS integration is widely considered a way forward for both domains. For one, more detailed BIM data can feed more general GIS data, and GIS data can provide the context that BIM data needs. While previous studies have focused on the theoretical aspects of such an integration at the schema level, in this paper we focus on explaining the geometric and topological issues we have found while trying to develop software to realise such an integration in practice and at the data level. In our preliminary results, which are presented here, we have found that many issues for such an integration remain: handling the geometric and topological problems in BIM models, dealing with bad georeferencing, and figuring out the best way to convert data between IFC and CityGML are all open issues.

  18. Towards AN Integration of GIS and Bim Data: what are the Geometric and Topological Issues?

    Science.gov (United States)

    Arroyo Ohori, K.; Biljecki, F.; Diakité, A.; Krijnen, T.; Ledoux, H.; Stoter, J.

    2017-10-01

Geographic information and building information modelling both model buildings and infrastructure, but the ways in which they are modelled are usually complementary, and BIM-GIS integration is widely considered a way forward for both domains. For one, more detailed BIM data can feed more general GIS data, and GIS data can provide the context that BIM data needs. While previous studies have focused on the theoretical aspects of such an integration at the schema level, in this paper we focus on explaining the geometric and topological issues we have found while trying to develop software to realise such an integration in practice and at the data level. In our preliminary results, which are presented here, we have found that many issues for such an integration remain: handling the geometric and topological problems in BIM models, dealing with bad georeferencing, and figuring out the best way to convert data between IFC and CityGML are all open issues.

  19. Radiation Build-Up Of High Energy Gamma In Shielding Of High Atomic Number

    International Nuclear Information System (INIS)

    Yuliati, Helfi; Akhadi, Mukhlis

    2000-01-01

Research to observe the effect of the radiation build-up factor (b) in iron (Fe) and lead (Pb) shielding for high energy gamma rays from 137Cs (E gamma: 662 keV) and 60Co (E gamma: 1332 keV) sources has been carried out. The research was conducted by counting the radiation intensity behind shielding whose thickness was varied from 1 to 5 times the half value thickness (HVT). A NaI(Tl) detector connected to a multichannel analyzer (MCA) was used for the counting. The calculation results show that all b values are close to 1 (b ∼ 1) for both Fe and Pb. Without inserting b into the calculation, the experiment gave HVT values for Fe at 662 and 1332 keV of (12.94 ± 0.03) mm and (17.33 ± 0.01) mm, with standard deviations of 0.2% and 0.06%, respectively. The HVT values for Pb at the same energies were (6.31 ± 0.03) mm and (11.86 ± 0.03) mm, with standard deviations of 0.48% and 0.25%, respectively. The HVT concept can thus be applied directly to estimate the shielding thickness of high atomic number materials for high energy gamma radiation, without a correction for the radiation build-up factor.

  20. An introduction to geometric computation

    International Nuclear Information System (INIS)

    Nievergelt, J.

    1991-01-01

Computational geometry has some appealing features that make it ideal for learning about algorithms and data structures: the problem statements are easily understood, intuitively meaningful, and mathematically rigorous; the problem statement, solution, and every step of the construction have natural visual representations that support abstract thinking and help in detecting errors of reasoning; and, finally, these algorithms are practical, because it is easy to come up with examples where they can be applied. Figs

  1. Scaling Up Coordinate Descent Algorithms for Large ℓ1 Regularization Problems

    Energy Technology Data Exchange (ETDEWEB)

    Scherrer, Chad; Halappanavar, Mahantesh; Tewari, Ambuj; Haglin, David J.

    2012-07-03

    We present a generic framework for parallel coordinate descent (CD) algorithms that has as special cases the original sequential algorithms of Cyclic CD and Stochastic CD, as well as the recent parallel Shotgun algorithm of Bradley et al. We introduce two novel parallel algorithms that are also special cases---Thread-Greedy CD and Coloring-Based CD---and give performance measurements for an OpenMP implementation of these.
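The sequential Cyclic CD baseline that the framework generalizes can be sketched for the l1-regularized least-squares (lasso) problem; the data, penalty weight and sweep count below are invented for the illustration:

```python
def soft_threshold(z, gamma):
    """Soft-thresholding operator, the proximal map of the l1 penalty."""
    if z > gamma:
        return z - gamma
    if z < -gamma:
        return z + gamma
    return 0.0

def lasso_cd(X, y, lam, sweeps=200):
    """Cyclic coordinate descent for min_w 0.5*||y - Xw||^2 + lam*||w||_1."""
    n, d = len(X), len(X[0])
    w = [0.0] * d
    for _ in range(sweeps):
        for j in range(d):
            # Partial residual that excludes coordinate j.
            r = [
                y[i] - sum(X[i][k] * w[k] for k in range(d) if k != j)
                for i in range(n)
            ]
            rho = sum(X[i][j] * r[i] for i in range(n))
            norm = sum(X[i][j] ** 2 for i in range(n))
            w[j] = soft_threshold(rho, lam) / norm
    return w

# Tiny example: the second feature is pure noise, so the l1 penalty zeroes it.
X = [[1.0, 0.1], [2.0, -0.2], [3.0, 0.1], [4.0, -0.1]]
y = [2.0, 4.1, 5.9, 8.0]   # roughly y = 2 * x1
w = lasso_cd(X, y, lam=0.5)
print(w)
```

The parallel variants in the record (Shotgun, Thread-Greedy, Coloring-Based) differ in which coordinates of this inner loop are updated concurrently.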

  2. Forecasting building energy consumption with hybrid genetic algorithm-hierarchical adaptive network-based fuzzy inference system

    Energy Technology Data Exchange (ETDEWEB)

    Li, Kangji [Institute of Cyber-Systems and Control, Zhejiang University, Hangzhou 310027 (China); School of Electricity Information Engineering, Jiangsu University, Zhenjiang 212013 (China); Su, Hongye [Institute of Cyber-Systems and Control, Zhejiang University, Hangzhou 310027 (China)

    2010-11-15

There are several ways to forecast building energy consumption, varying from simple regression to models based on physical principles. In this paper, a new method, namely, the hybrid genetic algorithm-hierarchical adaptive network-based fuzzy inference system (GA-HANFIS) model, is developed. In this model, a hierarchical structure decreases the rule base dimension. Both clustering and rule base parameters are optimized by GAs and neural networks (NNs). The model is applied to predict a hotel's daily air conditioning consumption over a period of 3 months. The results obtained by the proposed model are presented and compared with a regular NN method, indicating that the GA-HANFIS model outperforms NNs in terms of forecasting accuracy. (author)

  3. ROAD DETECTION BY NEURAL AND GENETIC ALGORITHM IN URBAN ENVIRONMENT

    Directory of Open Access Journals (Sweden)

    A. Barsi

    2012-07-01

Full Text Available In the urban object detection challenge organized by the ISPRS WG III/4, aerial images of high geometric and radiometric resolution over Vaihingen/Stuttgart, Germany are distributed. The acquired data set contains optical false color and near infrared images and airborne laserscanning data. The presented research focused exclusively on the optical imagery, so the elevation information was ignored. The road detection procedure has been built up of two main phases: a segmentation done by neural networks and a compilation made by genetic algorithms. The applied neural networks were support vector machines with a radial basis kernel function and self-organizing maps with hexagonal network topology and a Euclidean distance function for neighborhood management. The neural techniques were compared with a hyperbox classifier, known from statistical image classification practice. The compilation of the segmentation is realized by a novel application of the common genetic algorithm and by the differential evolution technique. The genes were implemented to detect the road elements by evaluating a special binary fitness function. The results have proven that the evolutionary technique can automatically find major road segments.

  4. A Decision Processing Algorithm for CDC Location Under Minimum Cost SCM Network

    Science.gov (United States)

    Park, N. K.; Kim, J. Y.; Choi, W. Y.; Tian, Z. M.; Kim, D. J.

The location of a CDC within a supply-chain network has become a matter of high concern these days. Existing methods for CDC location have mainly relied on manual spreadsheet calculations to achieve the goal of minimum logistics cost. This study focuses on the development of a new processing algorithm to overcome the limits of the present methods, and examines the suitability of this algorithm through a case study. The algorithm suggested by this study is based on the principle of optimization on the directed graph of the SCM model and utilizes the traditionally introduced minimum spanning tree (MST) and shortest-path-finding methods. As a result, this study helps to assess the suitability of the present SCM network and can serve as a criterion in the decision-making process for building up an optimal SCM network for future demand.
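The shortest-path ingredient mentioned above is typically Dijkstra's algorithm; a minimal sketch on an invented three-node network (node names and costs are hypothetical, not from the study) is:

```python
import heapq

def shortest_path_cost(graph, src, dst):
    """Dijkstra's algorithm over an adjacency dict {node: [(nbr, cost), ...]}."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            return d
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return float("inf")

# Invented mini-network: factory -> candidate CDCs -> customer zone.
graph = {
    "factory": [("cdc_a", 4.0), ("cdc_b", 2.0)],
    "cdc_a": [("zone", 1.0)],
    "cdc_b": [("zone", 5.0)],
}
print(shortest_path_cost(graph, "factory", "zone"))  # 5.0, via cdc_a
```

Running such a query for each candidate CDC location and comparing total network costs is the kind of evaluation a spreadsheet approach performs by hand.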

  5. The effect of photometric and geometric context on photometric and geometric lightness effects.

    Science.gov (United States)

    Lee, Thomas Y; Brainard, David H

    2014-01-24

    We measured the lightness of probe tabs embedded at different orientations in various contextual images presented on a computer-controlled stereo display. Two background context planes met along a horizontal roof-like ridge. Each plane was a graphic rendering of a set of achromatic surfaces with the simulated illumination for each plane controlled independently. Photometric context was varied by changing the difference in simulated illumination intensity between the two background planes. Geometric context was varied by changing the angle between them. We parsed the data into separate photometric effects and geometric effects. For fixed geometry, varying photometric context led to linear changes in both the photometric and geometric effects. Varying geometric context did not produce a statistically reliable change in either the photometric or geometric effects.

  6. A guidance and control algorithm for scent tracking micro-robotic vehicle swarms

    International Nuclear Information System (INIS)

    Dohner, J.L.

    1998-03-01

Cooperative micro-robotic scent tracking vehicles are designed to collectively sniff out locations of high scent concentrations in unknown, geometrically complex environments. These vehicles are programmed with guidance and control algorithms that allow cooperation among vehicles. In this paper a cooperative guidance and control algorithm for scent tracking micro-robotic vehicles is presented. This algorithm comprises a sensory compensation sub-algorithm using point source cancellation, a guidance sub-algorithm using gradient descent tracking, and a control sub-algorithm using proportional feedback. The concepts of social rank and point source cancellation are new concepts introduced herein. Simulation results for cooperative vehicle swarms are given, and limitations are discussed.

  7. A guidance and control algorithm for scent tracking micro-robotic vehicle swarms

    Energy Technology Data Exchange (ETDEWEB)

    Dohner, J.L. [Sandia National Labs., Albuquerque, NM (United States). Structural Dynamics Dept.

    1998-03-01

Cooperative micro-robotic scent tracking vehicles are designed to collectively sniff out locations of high scent concentrations in unknown, geometrically complex environments. These vehicles are programmed with guidance and control algorithms that allow cooperation among vehicles. In this paper a cooperative guidance and control algorithm for scent tracking micro-robotic vehicles is presented. This algorithm comprises a sensory compensation sub-algorithm using point source cancellation, a guidance sub-algorithm using gradient descent tracking, and a control sub-algorithm using proportional feedback. The concepts of social rank and point source cancellation are new concepts introduced herein. Simulation results for cooperative vehicle swarms are given, and limitations are discussed.
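The gradient descent tracking sub-algorithm mentioned in both records can be illustrated for a single vehicle in a toy scent field. The field, step size and source location are invented, and no claim is made that this matches the authors' multi-vehicle controller:

```python
import math

def scent(x, y):
    """Toy scent field: a single plume centred at the source (3, -2)."""
    return math.exp(-((x - 3.0) ** 2 + (y + 2.0) ** 2))

def track(x, y, step=0.2, iters=100, h=1e-4):
    """Move a fixed step along the numerically estimated scent gradient."""
    for _ in range(iters):
        gx = (scent(x + h, y) - scent(x - h, y)) / (2 * h)
        gy = (scent(x, y + h) - scent(x, y - h)) / (2 * h)
        g = math.hypot(gx, gy)
        if g < 1e-15:
            break  # flat field or exactly at the source
        x, y = x + step * gx / g, y + step * gy / g
    return x, y

x, y = track(0.0, 0.0)
print(x, y)  # ends close to the source at (3, -2)
```

In the cooperative setting, each vehicle would additionally subtract the contributions of sources already claimed by higher-ranked vehicles (the point source cancellation idea) before taking its gradient step.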

  8. Geometric feasibility of flexible cask transportation system for ITER

    Energy Technology Data Exchange (ETDEWEB)

    Lima, P; Ribeiro, M I; Aparicio, P [Instituto Superior Tecnico-Instituto de Sistemas e Robotica, Lisboa (Portugal)

    1998-07-01

    One of the remote operations that has to be carried out in the International Thermonuclear Experimental Reactor (ITER) is the transportation of sealed casks between the various ports of the Tokamak Building (TB) and the Hot Cell Building (HCB). The casks may contain different in-vessel components (e.g. blanket modules, divertors) and are designed for a maximum load of about 80 ton. To improve the safety and flexibility of ITER Remote Handling (RH) transport vehicles, the cask is not motorized by itself, but instead, a motorized platform carrying the cask was proposed. This paper addresses the geometric feasibility of the flexible cask transportation system, taking into account the vehicle kinematics. The feasibility issues studied include planning smooth paths to increase safety, the discussion of building constraints by the evaluation of the vehicle spanned areas when following a planned path, and the analysis of the clearance required to remove the platform from underneath the cask at different possible failure locations. Simulation results are presented for the recommended trajectory, the spanned area and the rescue manoeuvres at critical locations along the path. (authors)

  9. Geometric feasibility of flexible cask transportation system for ITER

    International Nuclear Information System (INIS)

    Lima, P.; Ribeiro, M.I.; Aparicio, P.

    1998-01-01

    One of the remote operations that has to be carried out in the International Thermonuclear Experimental Reactor (ITER) is the transportation of sealed casks between the various ports of the Tokamak Building (TB) and the Hot Cell Building (HCB). The casks may contain different in-vessel components (e.g. blanket modules, divertors) and are designed for a maximum load of about 80 ton. To improve the safety and flexibility of ITER Remote Handling (RH) transport vehicles, the cask is not motorized by itself, but instead, a motorized platform carrying the cask was proposed. This paper addresses the geometric feasibility of the flexible cask transportation system, taking into account the vehicle kinematics. The feasibility issues studied include planning smooth paths to increase safety, the discussion of building constraints by the evaluation of the vehicle spanned areas when following a planned path, and the analysis of the clearance required to remove the platform from underneath the cask at different possible failure locations. Simulation results are presented for the recommended trajectory, the spanned area and the rescue manoeuvres at critical locations along the path. (authors)

  10. Toward adaptive radiotherapy for head and neck patients: Uncertainties in dose warping due to the choice of deformable registration algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Veiga, Catarina, E-mail: catarina.veiga.11@ucl.ac.uk; Royle, Gary [Radiation Physics Group, Department of Medical Physics and Biomedical Engineering, University College London, London WC1E 6BT (United Kingdom); Lourenço, Ana Mónica [Radiation Physics Group, Department of Medical Physics and Biomedical Engineering, University College London, London WC1E 6BT, United Kingdom and Acoustics and Ionizing Radiation Team, National Physical Laboratory, Teddington TW11 0LW (United Kingdom); Mouinuddin, Syed [Department of Radiotherapy, University College London Hospital, London NW1 2BU (United Kingdom); Herk, Marcel van [Department of Radiation Oncology, The Netherlands Cancer Institute, Amsterdam 1066 CX (Netherlands); Modat, Marc; Ourselin, Sébastien; McClelland, Jamie R. [Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, London WC1E 6BT (United Kingdom)

    2015-02-15

    voxels within the treated volume failed a 2%pD DD-test (DD{sub 2%-pp}). Larger DD{sub 2%-pp} was found within the high dose gradient (21% ± 6%) and regions where the CBCT quality was poorer (28% ± 9%). The differences when estimating the mean and maximum dose delivered to organs-at-risk were up to 2.0%pD and 2.8%pD, respectively. Conclusions: The authors evaluated several DIR algorithms for CT-to-CBCT registrations. In spite of all methods resulting in comparable geometrical matching, the choice of DIR implementation leads to uncertainties in dose warped, particularly in regions of high gradient and/or poor imaging quality.

  11. Tendency of the 18-8 type corrosion-resistant steel to cracking in automatic building-up of copper and copper base alloys in argon

    International Nuclear Information System (INIS)

    Abramovich, V.R.; Andronik, V.A.

    1978-01-01

The tendency of 18-8 type corrosion-resistant steel to cracking during the automatic building-up of copper and bronze in argon was studied. The investigation was carried out on 0Kh18N10T steel in argon. It was established that the degree of copper penetration into the steel increases with the duration of contact between the 0Kh18N10T steel and liquid copper. Liquid copper and copper base alloys have a detrimental effect on the mechanical properties of the steel under external tensile load during contact. It is shown that in the building-up of copper base alloys on 0Kh18N10T steel, the tendency of the steel to cracking decreases with increasing stiffness of the surfaced weld metal plate and with decreasing building-up energy per unit length. The causes of macrocracking in the steel during the building-up of non-ferrous metals are explained, and technological procedures to avoid cracking are suggested.

  12. Implementation of the geometrical problem in CNC metal cutting machine

    Directory of Open Access Journals (Sweden)

    Erokhin V.V.

    2017-06-01

Full Text Available The article deals with the tasks of managing the production process (the technological process and technological equipment), with the most detailed analysis devoted to the implementation of the geometric problem in CNC machines. The influence of the solution of the geometric CNC problem on the accuracy of workpiece machining is analyzed through the implementation of a certain interpolation algorithm and the values of the discreteness of the movements of the working bodies of the CNC machine. A technique is given for forming a specified trajectory of motion of the machine's executive organ, by means of which the coordinated movement of the shaping coordinates must be ensured according to a certain law, depending on the specified trajectory of the cutting edge of the tool. The advantages and disadvantages of implementing interpolation in CNC systems by various methods are considered, with particular attention paid to combined methods of realizing interpolation.
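A concrete instance of the geometric problem discussed above is linear interpolation of a commanded tool path into discrete axis steps. The Bresenham-style sketch below handles only the first octant and uses an invented step resolution; it is an illustration of the principle, not the article's algorithm:

```python
def interpolate_line(x1, y1, steps_per_unit=100):
    """Digital linear interpolation: emit unit steps on two axes so the tool
    path approximates a straight line from (0, 0) to (x1, y1).
    Sketch restricted to the first octant, i.e. nx >= ny >= 0."""
    nx, ny = round(x1 * steps_per_unit), round(y1 * steps_per_unit)
    path, err = [(0, 0)], 0
    x = y = 0
    for _ in range(nx):
        x += 1
        err += ny
        if 2 * err >= nx:   # accumulated error exceeds half a step
            y += 1
            err -= nx
        path.append((x, y))
    return path

path = interpolate_line(0.05, 0.02)  # 5 x-steps, 2 y-steps at 100 steps/unit
print(path[-1])  # (5, 2)
```

The step resolution (`steps_per_unit`) plays the role of the drive discreteness discussed in the article: the interpolated path never deviates from the ideal line by more than half a step.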

  13. Computational Contact Mechanics Geometrically Exact Theory for Arbitrary Shaped Bodies

    CERN Document Server

    Konyukhov, Alexander

    2013-01-01

    This book contains a systematical analysis of geometrical situations  leading to  contact pairs -- point-to-surface, surface-to-surface, point-to-curve, curve-to-curve and curve-to-surface.  Each contact pair  is inherited with a special coordinate system based on its geometrical properties such as a Gaussian surface coordinate system or a Serret-Frenet curve coordinate system.  The formulation in a covariant form allows in a straightforward fashion to consider various constitutive relations for a  certain pair such as anisotropy for both frictional and structural parts. Then standard methods well known in computational contact mechanics such as penalty, Lagrange multiplier methods, combination of both and others  are formulated in these coordinate systems. Such formulations require then the powerful apparatus of differential geometry of surfaces and curves as well as of convex analysis. The final goals of such transformations are  then ready-for-implementation numerical algorithms within the finite e...

  14. Geometric model of pseudo-distance measurement in satellite location systems

    Science.gov (United States)

    Panchuk, K. L.; Lyashkov, A. A.; Lyubchinov, E. V.

    2018-04-01

    The existing mathematical model of pseudo-distance measurement in satellite location systems does not provide a precise solution of the problem, but rather an approximate one. The existence of such inaccuracy, as well as bias in the measurement of distance from satellite to receiver, results in errors on the level of several meters. The relevance of refining the current mathematical model is therefore obvious. The solution of the system of quadratic equations used in the current mathematical model is based on linearization. The objective of the paper is refinement of the current mathematical model and derivation of an analytical solution of the system of equations on its basis. In order to attain the objective, geometric analysis is performed and a geometric interpretation of the equations is given. As a result, an equivalent system of equations, which admits an analytical solution, is derived. An example of analytical solution implementation is presented. Application of the analytical solution algorithm to the problem of pseudo-distance measurement in satellite location systems improves the accuracy of such measurements.
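    For context, the linearized approach that the paper sets out to refine can be sketched as a Gauss-Newton iteration on the pseudorange equations rho_i = ||s_i - p|| + b, solving for receiver position p and clock bias b (in metres). The satellite geometry and tolerances below are toy values chosen for illustration, not real GNSS data:

```python
import math

def solve_pseudoranges(sats, rho, x0=(0.0, 0.0, 0.0, 0.0), iters=10):
    """Classic linearized solution of the pseudorange equations
    rho_i = ||s_i - p|| + b.  Gauss-Newton: linearize around the current
    estimate x = (px, py, pz, b) and solve the normal equations."""
    x = list(x0)
    for _ in range(iters):
        H, r = [], []                      # Jacobian rows and residuals
        for (sx, sy, sz), rho_i in zip(sats, rho):
            d = math.dist((sx, sy, sz), x[:3])
            H.append([(x[0] - sx) / d, (x[1] - sy) / d, (x[2] - sz) / d, 1.0])
            r.append(rho_i - (d + x[3]))
        # Normal equations (H^T H) dx = H^T r, Gaussian elimination w/ pivot
        A = [[sum(H[k][i] * H[k][j] for k in range(len(H))) for j in range(4)]
             for i in range(4)]
        b = [sum(H[k][i] * r[k] for k in range(len(H))) for i in range(4)]
        for i in range(4):
            p = max(range(i, 4), key=lambda k: abs(A[k][i]))
            A[i], A[p] = A[p], A[i]
            b[i], b[p] = b[p], b[i]
            for k in range(i + 1, 4):
                f = A[k][i] / A[i][i]
                A[k] = [a - f * ai for a, ai in zip(A[k], A[i])]
                b[k] -= f * b[i]
        dx = [0.0] * 4
        for i in range(3, -1, -1):
            dx[i] = (b[i] - sum(A[i][j] * dx[j] for j in range(i + 1, 4))) / A[i][i]
        x = [xi + di for xi, di in zip(x, dx)]
    return x
```

    The analytical solution derived in the paper avoids exactly this iteration; the sketch only shows the linearized baseline it replaces.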

  15. BIM, GIS and semantic models of cultural heritage buildings

    Directory of Open Access Journals (Sweden)

    Pavel Tobiáš

    2016-12-01

    Full Text Available Even though there has recently been great development in the use of building information models in the AEC (Architecture/Engineering/Construction) sector, the creation of models of existing buildings is still not very common. Cultural heritage documentation is still, in most cases, kept in the form of 2D drawings, and these drawings mostly contain only geometry without semantics, attributes or definitions of the relationships and hierarchies between particular building elements. All this additional information would, however, be very valuable for the tasks of cultural heritage preservation, e.g. for the facility management of heritage buildings or for reconstruction planning, and it would be desirable to manage all geometric and non-geometric information in a single 3D information model. This paper is based on the existing literature and focuses on historic building information modelling to provide information about the current state of the art. First, a summary of available software tools is introduced, considering not only BIM tools but also related GIS software. This is followed by a review of existing efforts worldwide and an evaluation of the facts found.

  16. Machine Learning Classification of Buildings for Map Generalization

    Directory of Open Access Journals (Sweden)

    Jaeeun Lee

    2017-10-01

    Full Text Available A critical problem in mapping data is the frequent updating of large data sets. To solve this problem, updating small-scale data on the basis of large-scale data is very effective. Various map generalization techniques, such as simplification, displacement, typification, elimination, and aggregation, must therefore be applied. In this study, we focused on the elimination and aggregation of the building layer, for which each building at the large scale was classified as “0-eliminated,” “1-retained,” or “2-aggregated.” Machine-learning classification algorithms were then used for classifying the buildings. Data from 1:1,000-scale and 1:25,000-scale digital maps obtained from the National Geographic Information Institute were used. We applied to these data various machine-learning classification algorithms, including naive Bayes (NB), decision tree (DT), k-nearest neighbor (k-NN), and support vector machine (SVM). The overall accuracies of the algorithms were satisfactory: DT, 88.96%; k-NN, 88.27%; SVM, 87.57%; and NB, 79.50%. Although elimination is a direct part of the proposed process, generalization operations such as simplification and aggregation of polygons must still be performed for buildings classified as retained and aggregated. Thus, these algorithms can be used for building classification and can serve as preparatory steps for building generalization.
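    As a minimal illustration of the classification step, the sketch below runs a plain k-nearest-neighbour vote over two hypothetical geometric features per building (footprint area and distance to the nearest neighbour). The labels follow the paper's 0/1/2 coding, but the feature choice and training values are invented for illustration:

```python
from collections import Counter
import math

# Hypothetical training data: (area_m2, dist_to_nearest_m) per building,
# labels as in the paper: 0 = eliminated, 1 = retained, 2 = aggregated.
TRAIN = [((40, 2.0), 0), ((55, 3.0), 0), ((400, 20.0), 1), ((350, 15.0), 1),
         ((80, 1.0), 2), ((95, 0.5), 2), ((60, 1.5), 2), ((500, 30.0), 1)]

def knn_classify(x, k=3):
    """Plain k-NN majority vote on Euclidean distance in feature space."""
    neighbours = sorted(TRAIN, key=lambda t: math.dist(t[0], x))[:k]
    return Counter(label for _, label in neighbours).most_common(1)[0][0]
```

    A real pipeline would scale the features first (here raw area dominates the distance) and use a library classifier; the toy version only shows the voting mechanics.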

  17. An optimisation algorithm for determination of treatment margins around moving and deformable targets

    International Nuclear Information System (INIS)

    Redpath, Anthony Thomas; Muren, Ludvig Paul

    2005-01-01

    Purpose: Determining treatment margins for inter-fractional motion of moving and deformable clinical target volumes (CTVs) remains a major challenge. This paper describes and applies an optimisation algorithm designed to derive such margins. Material and methods: The algorithm works by expanding the CTV, as determined from a pre-treatment or planning scan, to enclose the CTV positions observed during treatment. CTV positions during treatment may be obtained using, for example, repeat CT scanning and/or repeat electronic portal imaging (EPI). The algorithm can be applied both to individual patients and to a set of patients. The margins derived will minimise the excess volume outside the envelope that encloses all observed CTV positions (the CTV envelope). Initially, margins are set such that the envelope is more than adequately covered when the planning CTV is expanded. The algorithm uses an iterative method in which the margins are sampled randomly and are then either increased or decreased randomly. The algorithm is tested on a set of 19 bladder cancer patients who underwent weekly repeat CT scanning and EPI throughout their treatment course. Results: From repeated runs on individual patients, the algorithm produces margins within a range of ±2 mm that lie among the best results found with an exhaustive search approach, and that agree within 3 mm with margins determined by a manual approach on the same data. The algorithm could be used to determine margins to cover any specified geometrical uncertainty, and allows for the determination of reduced margins by relaxing the coverage criteria, for example by disregarding extreme CTV positions, an arbitrarily selected volume fraction of the CTV envelope, and/or patients with extreme geometrical uncertainties. Conclusion: An optimisation approach to margin determination is found to give reproducible results within the accuracy required. The major advantage with this algorithm is that it is completely empirical, and it is...
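    The iterative random sampling described above can be caricatured in a few lines. Here the planning CTV is reduced to an axis-aligned box, the observed CTV positions to points, and six directional margins are perturbed at random, accepting a perturbation only if the expanded box still covers every observed point with a smaller volume. All geometry, step sizes, and starting margins are illustrative assumptions, not the paper's implementation:

```python
import random

def optimise_margins(plan_box, observed_pts, iters=2000, seed=1):
    """Random-search margin optimiser (simplified sketch): expand the
    planning box by six directional margins [x_lo, x_hi, y_lo, y_hi,
    z_lo, z_hi] so it covers every observed CTV point, while minimising
    the expanded volume.  plan_box = ((xmin, xmax), (ymin, ymax), (zmin, zmax))."""
    rng = random.Random(seed)
    m = [20.0] * 6          # start with more-than-adequate coverage

    def covered(m):
        return all(lo - m[2*a] <= p[a] <= hi + m[2*a + 1]
                   for p in observed_pts
                   for a, (lo, hi) in enumerate(plan_box))

    def volume(m):
        v = 1.0
        for a, (lo, hi) in enumerate(plan_box):
            v *= (hi + m[2*a + 1]) - (lo - m[2*a])
        return v

    best = volume(m)
    for _ in range(iters):
        i = rng.randrange(6)                 # pick one margin at random...
        trial = m[:]
        trial[i] = max(0.0, trial[i] + rng.uniform(-1.0, 1.0))  # ...perturb it
        if covered(trial) and volume(trial) < best:   # accept only improvements
            m, best = trial, volume(trial)
    return m
```

    The coverage test plays the role of the CTV-envelope criterion; relaxing it (e.g. allowing a fraction of points outside) would yield the reduced margins discussed in the abstract.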

  18. Optimal Pile Arrangement for Minimizing Excess Pore Water Pressure Build-Up

    DEFF Research Database (Denmark)

    Barari, Amin; Saadati, Meysam; Ibsen, Lars Bo

    2013-01-01

    Numerical analysis of a pile group in liquefiable soil was considered to investigate the influence of pile spacing on excess pore pressure distribution and liquefaction potential. The analysis is conducted using a two-dimensional plane strain finite difference program considering a nonlinear constitutive model for sandy soil, strength and stiffness reduction, and pile-soil interaction. The Mohr-Coulomb constitutive model coupled with the Byrne pore pressure build-up model has been employed in the analysis. Numerical analysis results show that pile groups have a significant influence on the dynamic response of sandy soil, as they reduce the amount of excess pore pressure development during seismic shaking and may even prevent liquefaction.

  19. The effect of build-up cap materials on the response of an ionization chamber to 60Co gamma rays

    International Nuclear Information System (INIS)

    Rocha, M.P.O.; Almeida, C.E. de

    1993-01-01

    Knowledge of the effect of wall and build-up cap materials on ionization chamber response is necessary to determine absorbed dose in a medium using a calibration factor based on exposure or kerma in air. Attenuation and scattering effects of ⁶⁰Co gamma rays in the ionization chamber wall and build-up cap, as well as their non-equivalence to air, were studied with an OFS ionization chamber (Delrin wall) and a set of build-up caps specially built for this purpose. Results for a specific material were plotted as functions of wall and cap total thickness, extrapolated to zero wall thickness, then corrected for the mean centre of electron production in the wall (= 0.136 g cm⁻²). Correction factors for a specific thickness were analysed in relation to cap material, and relative responses were compared with values calculated using the AAPM, SEFM and IAEA formalisms for cap effects. A Monte Carlo calculation was performed to compare the experimental and theoretical values. Calculations showed agreement within 0.1% with experimental values and a wall effect of approximately 1.6%. (Author)

  20. Random broadcast on random geometric graphs

    Energy Technology Data Exchange (ETDEWEB)

    Bradonjic, Milan [Los Alamos National Laboratory; Elsasser, Robert [UNIV OF PADERBORN; Friedrich, Tobias [ICSI/BERKELEY; Sauerwald, Tomas [ICSI/BERKELEY

    2009-01-01

    In this work, we consider the random broadcast time on random geometric graphs (RGGs). The classic random broadcast model, also known as the push algorithm, is defined as follows: starting with one informed node, in each succeeding round every informed node chooses one of its neighbors uniformly at random and informs it. We consider the random broadcast time on RGGs, when with high probability: (i) the RGG is connected, (ii) there exists a giant component in the RGG. We show that the random broadcast time is bounded by O(√n + diam(component)), where diam(component) is the diameter of the entire graph or of the giant component, for regimes (i) and (ii), respectively. In other words, for both regimes, we derive the broadcast time to be Θ(diam(G)), which is asymptotically optimal.
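    The push model is simple to simulate. The sketch below builds a small RGG in the unit square and runs the protocol until the source's component is fully informed; the graph size, radius, and seed are arbitrary illustrative choices:

```python
import random, math

def push_broadcast_rounds(n=200, radius=0.2, seed=7):
    """Simulate the push protocol on a random geometric graph: n points in
    the unit square, edge iff distance <= radius.  Returns (rounds needed
    to inform the source's whole component, component size)."""
    rng = random.Random(seed)
    pts = [(rng.random(), rng.random()) for _ in range(n)]
    nbrs = [[j for j in range(n)
             if j != i and math.dist(pts[i], pts[j]) <= radius]
            for i in range(n)]
    # Component of node 0 (BFS) -- broadcast can only ever reach this set.
    comp, frontier = {0}, {0}
    while frontier:
        frontier = {j for i in frontier for j in nbrs[i]} - comp
        comp |= frontier
    informed, rounds = {0}, 0
    while len(informed) < len(comp):
        rounds += 1
        # Every informed node pushes to one uniformly random neighbour.
        informed |= {rng.choice(nbrs[i]) for i in informed if nbrs[i]}
    return rounds, len(comp)
```

    Since the informed set can at most double per round, the simulated round count always respects the log₂ lower bound, while the paper's O(√n + diam) result bounds it from above for these regimes.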

  1. Geometric manipulation of the quantum states of two-level atoms

    International Nuclear Information System (INIS)

    Tian, Mingzhen; Barber, Zeb W.; Fischer, Joe A.; Babbitt, Wm. Randall

    2004-01-01

    Manipulation of the quantum states of two-level atoms has been investigated using laser-controlled geometric phase change, which has the potential to build robust quantum logic gates for quantum computing. For a qubit based on two electronic transition levels of an atom, two basic quantum operations that can compose any universal single-qubit gate have been designed employing resonant laser pulses. An operation equivalent to a phase gate has been demonstrated using Tm³⁺ ions doped in a yttrium aluminum garnet crystal.

  2. MERGING AIRBORNE LIDAR DATA AND SATELLITE SAR DATA FOR BUILDING CLASSIFICATION

    Directory of Open Access Journals (Sweden)

    T. Yamamoto

    2015-05-01

    Full Text Available Frequent map revision is required in GIS applications such as disaster prevention and urban planning. In general, airborne photogrammetry and LiDAR measurements are applied to geometrical data acquisition for automated map generation and revision. However, these approaches capture only geometrical attributes, and ground surveys and manual editing work are ultimately required for attribute data classification. On the other hand, although geometrical data extraction from SAR data is difficult, SAR data have the potential to automate attribute data acquisition and classification. SAR data represent microwave reflections from the various surfaces of the ground and buildings, and many studies have used them to monitor disaster, vegetation, and urban activities. Moreover, new sensors such as ALOS2 PALSAR2 provide the opportunity to acquire higher-resolution data over urban areas. Therefore, in this study, we focus on the integration of airborne LiDAR data and satellite SAR data for building extraction and classification.

  3. A study of energy and effective atomic number dependence of the exposure build-up factors in biological samples

    International Nuclear Information System (INIS)

    Sidhu, G.S.; Singh, P.S.; Mudahar, G.S.

    2000-01-01

    A theoretical method is presented to determine gamma-radiation build-up factors in various biological materials. The gamma energy range is 0.015-15.0 MeV, with penetration depths of up to 40 mean free paths considered. The dependence of the exposure build-up factor on incident photon energy and the effective atomic number (Z_eff) has also been assessed. In a practical analysis of the dose burden to gamma-irradiated biological materials, the sophistication of Monte Carlo computer techniques would be applied, with associated detailed modelling. However, a feature of the theoretical method presented is its ability to make the consequences of the physics of the scattering process in biological materials more transparent. In addition, it can be quickly employed to give a first-pass dose estimate prior to a more detailed computer study. (author)
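    The abstract does not give the functional form used, but a standard parametrization for exposure build-up factors over this energy and depth range is the geometric-progression (G-P) fitting of ANSI/ANS-6.4.3, sketched below. The five G-P coefficients are tabulated per material and energy; the values used here are placeholders, not real data:

```python
import math

def gp_buildup(x, b, c, a, xk, d):
    """Geometric-progression (G-P) fitting form for gamma build-up factors
    (ANSI/ANS-6.4.3): x is the depth in mean free paths, (b, c, a, xk, d)
    are the energy- and material-dependent G-P coefficients."""
    K = (c * x**a
         + d * (math.tanh(x / xk - 2.0) - math.tanh(-2.0))
             / (1.0 - math.tanh(-2.0)))
    if abs(K - 1.0) < 1e-9:
        return 1.0 + (b - 1.0) * x           # limiting linear branch
    return 1.0 + (b - 1.0) * (K**x - 1.0) / (K - 1.0)
```

    With real tabulated coefficients, the build-up factor reduces to 1 at zero depth and grows monotonically with penetration depth, which is the behaviour the paper's analysis relies on.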

  4. Geometric group theory

    CERN Document Server

    Druţu, Cornelia

    2018-01-01

    The key idea in geometric group theory is to study infinite groups by endowing them with a metric and treating them as geometric spaces. This applies to many groups naturally appearing in topology, geometry, and algebra, such as fundamental groups of manifolds, groups of matrices with integer coefficients, etc. The primary focus of this book is to cover the foundations of geometric group theory, including coarse topology, ultralimits and asymptotic cones, hyperbolic groups, isoperimetric inequalities, growth of groups, amenability, Kazhdan's Property (T) and the Haagerup property, as well as their characterizations in terms of group actions on median spaces and spaces with walls. The book contains proofs of several fundamental results of geometric group theory, such as Gromov's theorem on groups of polynomial growth, Tits's alternative, Stallings's theorem on ends of groups, Dunwoody's accessibility theorem, the Mostow Rigidity Theorem, and quasiisometric rigidity theorems of Tukia and Schwartz. This is the f...

  5. Methodology to characterize a residential building stock using a bottom-up approach: a case study applied to Belgium

    Directory of Open Access Journals (Sweden)

    Samuel Gendebien

    2014-06-01

    Full Text Available In the last ten years, the development and implementation of measures to mitigate climate change have become of major importance. In Europe, the residential sector accounts for 27% of the final energy consumption [1], and therefore contributes significantly to CO2 emissions. Roadmaps towards energy-efficient buildings have been proposed [2]. In such a context, the detailed characterization of residential building stocks in terms of age, type of construction, insulation level, energy vector, and of evolution prospects appears to be a useful contribution to the assessment of the impact of implementation of energy policies. In this work, a methodology to develop a tree-structure characterizing a residential building stock is presented in the frame of a bottom-up approach that aims to model and simulate domestic energy use. The methodology is applied to the Belgian case for the current situation and up to 2030 horizon. The potential applications of the developed tool are outlined.

  6. Evaluation of Geometrical Modulation Transfer Function in Optical Lens System

    Directory of Open Access Journals (Sweden)

    Cheng-Mu Tsai

    2015-01-01

    Full Text Available This paper presents ray tracing algorithms to evaluate the geometrical modulation transfer function (GMTF) of an optical lens system. Two kinds of ray tracing can be applied to simulate the point spread function (PSF) in the image plane: paraxial ray tracing and real ray tracing. Paraxial ray tracing is used to calculate first-order properties such as the effective focal length (EFL) and the entrance pupil position at low computational cost. However, the PSF can carry a large error if it is simulated by paraxial ray tracing alone. Formulas for real ray tracing are therefore applied to the sagittal and tangential line spread functions (LSF). Algorithms are developed to demonstrate the simulation of the LSF. Finally, the GMTF is evaluated via the fast Fourier transform (FFT) of the LSF.
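    The final step, obtaining the MTF as the normalised Fourier transform of the LSF, can be sketched as follows. A naive DFT is used instead of an FFT for clarity, and the sampled Gaussian LSF in the test is purely illustrative:

```python
import math

def mtf_from_lsf(lsf, dx):
    """Geometrical MTF as the normalised magnitude of the discrete Fourier
    transform of a sampled line spread function (naive DFT for clarity).
    Returns (spatial frequencies in cycles/unit length, MTF values)."""
    n = len(lsf)
    mtf = []
    for k in range(n // 2 + 1):              # up to the Nyquist frequency
        re = sum(v * math.cos(2 * math.pi * k * i / n) for i, v in enumerate(lsf))
        im = -sum(v * math.sin(2 * math.pi * k * i / n) for i, v in enumerate(lsf))
        mtf.append(math.hypot(re, im))
    scale = mtf[0] or 1.0                    # normalise so MTF(0) = 1
    freqs = [k / (n * dx) for k in range(n // 2 + 1)]
    return freqs, [m / scale for m in mtf]
```

    Production code would of course use an FFT (O(n log n) instead of O(n²)); the numerical result is identical.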

  7. On geometric optics and surface waves for light scattering by spheres

    International Nuclear Information System (INIS)

    Liou, K.N.; Takano, Y.; Yang, P.

    2010-01-01

    A geometric optics approach including surface wave contributions has been developed for homogeneous and concentrically coated spheres. In this approach, a ray-by-ray tracing program was used for efficient computation of the extinction and absorption cross sections. The present geometric-optics surface-wave (GOS) theory for light scattering by spheres considers the surface wave contribution along the edge of a particle as a perturbation term to the geometric-optics core that includes Fresnel reflection-refraction and Fraunhofer diffraction. Accuracies of the GOS approach for spheres have been assessed through comparison with the results determined from the exact Lorenz-Mie (LM) theory in terms of the extinction efficiency, single-scattering albedo, and asymmetry factor in the size-wavelength ratio domain. In this quest, we have selected a range of real and imaginary refractive indices representative of water/ice and aerosol species and demonstrated close agreement between the results computed by GOS and LM. This provides the foundation to conduct physically reliable light absorption and scattering computations based on the GOS approach for aerosol aggregates associated with internal and external mixing states employing spheres as building blocks.

  8. Pixel-based approach for building heights determination by SAR radargrammetry

    Science.gov (United States)

    Dubois, C.; Thiele, A.; Hinz, S.

    2013-10-01

    Numerous advances have been made recently in photogrammetry, laser scanning, and remote sensing for the creation of 3D city models. More and more cities are interested in obtaining 3D city models, be it for urban planning purposes or for supporting public utility companies. In areas often affected by natural disasters, rapid updating of the 3D information may also be useful for helping rescue forces. The high resolutions that can be achieved by the new spaceborne SAR sensor generation enable the analysis of city areas at building level and make those sensors attractive for the extraction of 3D information. Moreover, they present the advantage of weather and sunlight independence, which makes them more practicable than optical data, in particular for tasks where rapid response is required. Furthermore, their short revisit time and the possibility of multi-sensor constellations enable several acquisitions within a few hours. This opens up the floor for new applications, especially radargrammetric applications, which consider acquisitions taken under different incidence angles. In this paper, we present a new approach for determining building heights, relying only on the radargrammetric analysis of building layover. Taking into account same-side acquisitions, we present the workflow of building height determination. Focus is set on geometric considerations, a pixel-based approach for disparity map calculation, and analysis of the building layover signature for different configurations in order to determine building height.

  9. Group-geometric methods in supergravity and superstring theories

    International Nuclear Information System (INIS)

    Castellani, L.

    1992-01-01

    The purpose of this paper is to give a brief and pedagogical account of the group-geometric approach to (super)gravity and superstring theories. The authors summarize the main ideas and apply them to selected examples. Group geometry provides a natural and unified formulation of gravity and gauge theories. The invariance of both is interpreted in terms of diffeomorphisms on a suitable group manifold. This geometrical framework has a fruitful output, in that it provides a systematic algorithm for the gauging of Lie algebras and the construction of (super)gravity or (super)string Lagrangians. The basic idea is to associate fundamental fields with the group generators. This is done by considering first a basis of tangent vectors on the group manifold. These vectors close on the same algebra as the abstract group generators. The dual basis, i.e. the vielbeins (cotangent basis of one-forms), is then identified with the set of fundamental fields. Thus, for example, the vielbein V^a and the spin connection ω^{ab} of ordinary Einstein-Cartan gravity are seen as the duals of the tangent vectors corresponding to translations and Lorentz rotations, respectively.

  10. Artefacts in geometric phase analysis of compound materials

    Energy Technology Data Exchange (ETDEWEB)

    Peters, Jonathan J.P., E-mail: j.j.p.peters@warwick.ac.uk [Department of Physics, University of Warwick, Gibbet Hill Road, Coventry CV4 7AL (United Kingdom); Beanland, Richard; Alexe, Marin [Department of Physics, University of Warwick, Gibbet Hill Road, Coventry CV4 7AL (United Kingdom); Cockburn, John W.; Revin, Dmitry G.; Zhang, Shiyong Y. [Department of Physics and Astronomy, University of Sheffield, Hounsfield Road, Sheffield S3 7RH (United Kingdom); Sanchez, Ana M., E-mail: a.m.sanchez@warwick.ac.uk [Department of Physics, University of Warwick, Gibbet Hill Road, Coventry CV4 7AL (United Kingdom)

    2015-10-15

    The geometric phase analysis (GPA) algorithm is known as a robust and straightforward technique that can be used to measure lattice strains in high resolution transmission electron microscope (TEM) images. It is also attractive for analysis of aberration-corrected scanning TEM (ac-STEM) images that resolve every atom column, since it uses Fourier transforms and does not require real-space peak detection and assignment to appropriate sublattices. Here it is demonstrated that, in ac-STEM images of compound materials with compositionally distinct atom columns, an additional geometric phase is present in the Fourier transform. If the structure changes from one area to another in the image (e.g. across an interface), the change in this additional phase will appear as a strain in conventional GPA, even if there is no lattice strain. Strategies to avoid this pitfall are outlined. - Highlights: • GPA is shown to produce incorrect strains when applied to images of compound materials. • A mathematical description is laid out for why GPA can produce artefacts. • The artefact is demonstrated using experimental and simulated data. • A ‘rule’ is set to avoid this artefact in GPA.

  11. Effects of systematic phase errors on optimized quantum random-walk search algorithm

    International Nuclear Information System (INIS)

    Zhang Yu-Chao; Bao Wan-Su; Wang Xiang; Fu Xiang-Qun

    2015-01-01

    This study investigates the effects of systematic errors in phase inversions on the success rate and number of iterations in the optimized quantum random-walk search algorithm. Using the geometric description of this algorithm, a model of the algorithm with phase errors is established, and the relationship between the success rate of the algorithm, the database size, the number of iterations, and the phase error is determined. For a given database size, we obtain both the maximum success rate of the algorithm and the required number of iterations when phase errors are present in the algorithm. Analyses and numerical simulations show that the optimized quantum random-walk search algorithm is more robust against phase errors than Grover’s algorithm. (paper)

  12. Bottom-up learning of hierarchical models in a class of deterministic POMDP environments

    Directory of Open Access Journals (Sweden)

    Itoh Hideaki

    2015-09-01

    Full Text Available The theory of partially observable Markov decision processes (POMDPs) is a useful tool for developing various intelligent agents, and learning hierarchical POMDP models is one of the key approaches for building such agents when the environments of the agents are unknown and large. To learn hierarchical models, bottom-up learning methods, in which learning takes place in a layer-by-layer manner from the lowest to the highest layer, are already extensively used in research fields such as hidden Markov models and neural networks. However, little attention has been paid to bottom-up approaches for learning POMDP models. In this paper, we present a novel bottom-up learning algorithm for hierarchical POMDP models and prove that, by using this algorithm, a perfect model (i.e., a model that can perfectly predict future observations) can be learned at least in a class of deterministic POMDP environments.

  13. Theory of affine projection algorithms for adaptive filtering

    CERN Document Server

    Ozeki, Kazuhiko

    2016-01-01

    This book focuses on theoretical aspects of the affine projection algorithm (APA) for adaptive filtering. The APA is a natural generalization of the classical, normalized least-mean-squares (NLMS) algorithm. The book first explains how the APA evolved from the NLMS algorithm, where an affine projection view is emphasized. By looking at those adaptation algorithms from such a geometrical point of view, we can find many of the important properties of the APA, e.g., the improvement of the convergence rate over the NLMS algorithm especially for correlated input signals. After the birth of the APA in the mid-1980s, similar algorithms were put forward by other researchers independently from different perspectives. This book shows that they are variants of the APA, forming a family of APAs. Then it surveys research on the convergence behavior of the APA, where statistical analyses play important roles. It also reviews developments of techniques to reduce the computational complexity of the APA, which are important f...
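    As a concrete instance of the adaptation rule the book starts from, here is the classical NLMS update (the order-1 special case of the APA) applied to system identification. The filter length, step size, and toy unknown system are illustrative assumptions:

```python
def nlms_identify(x, d, taps=4, mu=0.5, eps=1e-6):
    """NLMS adaptive filter (order-1 affine projection algorithm):
    adapt weights w so that w . [x[n], x[n-1], ...] tracks the desired
    signal d.  eps regularises the normalisation for small inputs."""
    w = [0.0] * taps
    for n in range(taps - 1, len(x)):
        x_vec = x[n - taps + 1:n + 1][::-1]            # newest sample first
        y = sum(wi * xi for wi, xi in zip(w, x_vec))   # filter output
        e = d[n] - y                                   # a-priori error
        norm = sum(xi * xi for xi in x_vec) + eps
        w = [wi + mu * e * xi / norm for wi, xi in zip(w, x_vec)]
    return w
```

    The order-K APA generalizes this by projecting onto the affine subspace defined by the K most recent input vectors rather than the single latest one, which is the source of its faster convergence for correlated inputs discussed in the book.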

  14. On geometrized gravitation theories

    International Nuclear Information System (INIS)

    Logunov, A.A.; Folomeshkin, V.N.

    1977-01-01

    General properties of geometrized gravitation theories are considered. Geometrization of the theory is realized only to the extent that necessarily follows from experiment (geometrization of the density of the matter Lagrangian only). For the general case, the gravitation field equations and the equations of motion for matter are formulated in different Riemann spaces. A covariant formulation of the energy-momentum conservation laws is given in an arbitrary geometrized theory. The noncovariant notion of ''pseudotensor'' is not required in formulating the conservation laws. It is shown that in the general case (i.e., when there is an explicit dependence of the matter Lagrangian density on the covariant derivatives) the symmetric energy-momentum tensor of the matter depends explicitly on the curvature tensor. Different geometrized theories that describe the known set of experimental facts are listed. The properties of one version of the quasilinear geometrized theory that describes the experimental facts are considered. In such a theory the fundamental static spherically symmetric solution has a singularity only at the coordinate origin. The theory permits the construction of a satisfactory model of the homogeneous nonstationary Universe.

  15. A method for optimizing the performance of buildings

    DEFF Research Database (Denmark)

    Pedersen, Frank

    2007-01-01

    This thesis describes a method for optimizing the performance of buildings. Design decisions made in early stages of the building design process have a significant impact on the performance of buildings, for instance, the performance with respect to energy consumption, economical aspects, and the indoor environment. The method is intended to support design decisions for buildings by combining methods for calculating the performance of buildings with numerical optimization methods. The method is able to find optimum values of decision variables representing different features of the building ... needed for solving the optimization problem. Furthermore, the algorithm uses so-called domain constraint functions in order to ensure that the input to the simulation software is feasible. Using this technique avoids performing time-consuming simulations for unrealistic design decisions. The algorithm ...

  16. Automatic generation of smart earthquake-resistant building system: Hybrid system of base-isolation and building-connection

    Directory of Open Access Journals (Sweden)

    M. Kasagi

    2016-02-01

    Full Text Available A base-isolated building may sometimes exhibit an undesirably large response to a long-duration, long-period earthquake ground motion, and a connected building system without base-isolation may show a large response to a near-fault (rather high-frequency) earthquake ground motion. To overcome both deficiencies, a new hybrid control system of base-isolation and building-connection is proposed and investigated. In this new hybrid building system, a base-isolated building is connected to a stiffer free wall with oil dampers. It has been demonstrated in preliminary research that the proposed hybrid system is effective for both near-fault (rather high-frequency) and long-duration, long-period earthquake ground motions and has sufficient redundancy and robustness for a broad range of earthquake ground motions. An automatic generation algorithm for this kind of smart base-isolation and building-connection hybrid structure is presented in this paper. It is shown that, while the proposed algorithm does not work well in a building without the connecting-damper system, it works well in the proposed smart hybrid system with the connecting-damper system.

  17. Geometric Accuracy Investigations of SEVIRI High Resolution Visible (HRV Level 1.5 Imagery

    Directory of Open Access Journals (Sweden)

    Sultan Kocaman Aksakal

    2013-05-01

    Full Text Available GCOS (Global Climate Observing System) is a long-term program for monitoring the climate, detecting changes, and assessing their impacts. Remote sensing techniques are increasingly being used for climate-related measurements. Imagery of the SEVIRI instrument on board the European geostationary satellites Meteosat-8 and Meteosat-9 is often used for the estimation of essential climate variables. In a joint project between the Swiss GCOS Office and ETH Zurich, the geometric accuracy and temporal stability of the 1-km resolution HRV channel imagery of SEVIRI have been evaluated over Switzerland. A set of tools and algorithms has been developed for the investigations. Statistical analysis and blunder detection have been integrated into the process for robust evaluation. The relative accuracy is evaluated by tracking large numbers of feature points in consecutive HRV images taken at 15-minute intervals. For the absolute accuracy evaluation, lakes in Switzerland and its surroundings are used as reference. Twenty lakes digitized from Landsat orthophotos are transformed into HRV images and matched via 2D translation terms at sub-pixel level. The algorithms are tested using HRV images taken on 24 days in 2008 (2 days per month). The results show that 2D shifts of up to 8 pixels are present in both relative and absolute terms.
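    The translation-matching step can be illustrated with an integer-pixel version of the idea: exhaustively score candidate (dy, dx) shifts of a mask against a reference by cross-correlation and keep the best. The sub-pixel refinement used in the study is omitted, and the array sizes and search window are arbitrary:

```python
def best_shift(ref, img, max_shift=3):
    """Find the integer (dy, dx) translation of `img` that best matches
    `ref` by exhaustive cross-correlation over a small search window
    (a toy, integer-pixel version of 2D translation matching)."""
    h, w = len(ref), len(ref[0])
    best, best_score = (0, 0), float("-inf")
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            # Correlate ref with img shifted by (dy, dx), clipping at borders.
            score = sum(ref[y][x] * img[y + dy][x + dx]
                        for y in range(h) for x in range(w)
                        if 0 <= y + dy < h and 0 <= x + dx < w)
            if score > best_score:
                best, best_score = (dy, dx), score
    return best
```

    Sub-pixel accuracy, as reported in the paper, would be obtained by interpolating the correlation surface around this integer peak.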

  18. Effect of geometric factors on the energy performance of high-rise office towers in Tianjin, China

    OpenAIRE

    Liu, Li; Wu, Di; Li, Xiaojun; Hou, Shanshan; Liu, Conghong; Jones, Phillip John

    2017-01-01

    To improve the energy efficiency of office buildings in Tianjin, we select a prototypical high-rise office tower as an example and focus on the effect of geometric factors on building energy performance. These factors include the orientation, plane shape, floor area, plane shape factor (the ratio of the plane length to the plane width, applicable only to a rectangle-shaped plane), floor height, floor number and window-to-wall ratio. The simulation is performed in DesignBuilder, which integrates a...

  19. Geometric metamorphosis.

    Science.gov (United States)

    Niethammer, Marc; Hart, Gabriel L; Pace, Danielle F; Vespa, Paul M; Irimia, Andrei; Van Horn, John D; Aylward, Stephen R

    2011-01-01

    Standard image registration methods do not account for changes in image appearance. Hence, metamorphosis approaches have been developed which jointly estimate a space deformation and a change in image appearance to construct a spatio-temporal trajectory smoothly transforming a source to a target image. For standard metamorphosis, geometric changes are not explicitly modeled. We propose a geometric metamorphosis formulation, which explains changes in image appearance by a global deformation, a deformation of a geometric model, and an image composition model. This work is motivated by the clinical challenge of predicting the long-term effects of traumatic brain injuries based on time-series images. This work is also applicable to the quantification of tumor progression (e.g., estimating its infiltrating and displacing components) and predicting chronic blood perfusion changes after stroke. We demonstrate the utility of the method using simulated data as well as scans from a clinical traumatic brain injury patient.

  20. GIS Data Modeling of a Regional Geological Structure by Integrating Geometric and Semantic Expressions

    Directory of Open Access Journals (Sweden)

    HE Handong

    2017-08-01

    Full Text Available Using GIS, data models of geology based on geometric descriptions and expressions are being developed. However, the role played by these data models in describing and expressing geological structure phenomena is limited. To improve the semantic information in geological GIS data models, this study adopts an object-oriented method that describes and expresses the geometric and semantic features of geological structure phenomena using geological objects, and designs a data model of regional geological structures that integrates geometry and semantics. Moreover, the study designs a semantic "vocabulary-explanation-graph" method for describing geological structure phenomena. Based on the semantic features of regional geological structures and a linear classification method, it divides the regional geological structure phenomena into 3 divisions, 10 groups and 33 classes, and defines the element set and element classes. It then builds the basic geometric network for geological elements based on the geometric and semantic relations among geological objects. Using the ArcGIS Diagrammer Geodatabase, the data model is verified on the regional geological structure of the Ning-Zhen Mountains, and the results indicate high practicability.

  1. Uncertainty analysis of pollutant build-up modelling based on a Bayesian weighted least squares approach

    International Nuclear Information System (INIS)

    Haddad, Khaled; Egodawatta, Prasanna; Rahman, Ataur; Goonetilleke, Ashantha

    2013-01-01

    Reliable pollutant build-up prediction plays a critical role in the accuracy of urban stormwater quality modelling outcomes. However, water quality data collection is resource demanding compared to streamflow data monitoring, where a greater quantity of data is generally available. Consequently, available water quality datasets span only relatively short time scales, unlike water quantity data, which constrains the ability to take due consideration of the variability associated with pollutant processes and natural phenomena. This in turn gives rise to uncertainty in the modelling outcomes, as research has shown that pollutant loadings on catchment surfaces and rainfall within an area can vary considerably over space and time scales. The assessment of model uncertainty is therefore an essential element of informed decision making in urban stormwater management. This paper presents the application of a range of regression approaches, namely ordinary least squares regression, weighted least squares regression and Bayesian weighted least squares regression, for the estimation of uncertainty associated with pollutant build-up prediction using limited datasets. The study outcomes confirmed that the use of ordinary least squares regression with fixed model inputs and limited observational data may not provide realistic estimates. The stochastic nature of the dependent and independent variables needs to be taken into consideration in pollutant build-up prediction. It was found that the Bayesian approach, combined with the Monte Carlo simulation technique, provides a powerful tool which attempts to make the best use of the available knowledge in prediction, and thereby presents a practical solution to counteract the limitations otherwise imposed on water quality modelling. - Highlights: ► Water quality data spans short time scales leading to significant model uncertainty. ► Assessment of uncertainty essential for informed decision making in water
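As a toy illustration of the contrast between ordinary and weighted least squares in build-up prediction, the sketch below fits a hypothetical power-law build-up model B = a·D^b (pollutant load vs. antecedent dry days) in log space, down-weighting the noisier long-dry-period observations. The data, model form and weights are invented; the paper's Bayesian WLS and Monte Carlo machinery are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical build-up data: pollutant load B (g/m^2) vs. antecedent dry days D,
# following a power-law build-up B = a * D**b with noise that grows with D.
D = np.linspace(1.0, 14.0, 40)
true_a, true_b = 1.5, 0.4
sigma = 0.05 * D                        # heteroscedastic noise level (assumed known)
B = true_a * D**true_b + rng.normal(0.0, sigma)

# Fit in log space: ln B = ln a + b * ln D
X = np.column_stack([np.ones_like(D), np.log(D)])
y = np.log(B)

# Ordinary least squares: every observation weighted equally.
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

# Weighted least squares: weights ~ 1/variance down-weight the noisy points.
w = 1.0 / sigma**2
sw = np.sqrt(w)
beta_wls, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)

a_wls, b_wls = np.exp(beta_wls[0]), beta_wls[1]
```

A Bayesian treatment would additionally place priors on (a, b) and propagate the posterior through Monte Carlo simulation to obtain predictive uncertainty bands.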

  2. Scale-up of nature’s tissue weaving algorithms to engineer advanced functional materials

    Science.gov (United States)

    Ng, Joanna L.; Knothe, Lillian E.; Whan, Renee M.; Knothe, Ulf; Tate, Melissa L. Knothe

    2017-01-01

    We are literally the stuff from which our tissue fabrics and their fibers are woven and spun. The arrangement of collagen, elastin and other structural proteins in space and time endows our tissues and organs with amazing resilience and multifunctional smart properties. For example, the periosteum, a soft tissue sleeve that envelops all nonarticular bony surfaces of the body, comprises an inherently “smart” material that gives hard bones added strength under high impact loads. Yet a paucity of scalable bottom-up approaches stymies the harnessing of smart tissues’ biological, mechanical and organizational detail to create advanced functional materials. Here, a novel approach is established to scale up the multidimensional fiber patterns of natural soft tissue weaves for rapid prototyping of advanced functional materials. First, second harmonic generation and two-photon excitation microscopy are used to map the microscopic three-dimensional (3D) alignment, composition and distribution of the collagen and elastin fibers of periosteum, the soft tissue sheath bounding all nonarticular bone surfaces in our bodies. Then, using engineering rendering software to scale up this natural tissue fabric, as well as multidimensional weaving algorithms, macroscopic tissue prototypes are created using a computer-controlled jacquard loom. The capacity to prototype scaled-up architectures of natural fabrics provides a new avenue to create advanced functional materials.

  3. A novel algorithm for automatic localization of human eyes

    Institute of Scientific and Technical Information of China (English)

    Liang Tao (陶亮); Juanjuan Gu (顾涓涓); Zhenquan Zhuang (庄镇泉)

    2003-01-01

    Based on geometrical facial features and image segmentation, we present a novel algorithm for automatic localization of human eyes in grayscale or color still images with complex backgrounds. First, a determination criterion for eye location is established from prior knowledge of geometrical facial features. Second, a range of threshold values that would separate eye blocks from others in a segmented face image (i.e., a binary image) is estimated. Third, with the progressive increase of the threshold by an appropriate step within that range, once two eye blocks appear in the segmented image, they are detected by the determination criterion of eye location. Finally, the 2D correlation coefficient is used as a symmetry similarity measure to check the factuality of the two detected eyes. To avoid background interference, skin color segmentation can be applied in order to enhance the accuracy of eye detection. The experimental results demonstrate the high efficiency and correct localization rate of the algorithm.
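The threshold-sweep idea can be sketched on a synthetic grayscale image: binarize at a progressively increasing threshold, label connected components, and stop as soon as exactly two blocks of plausible eye size appear. This is a hedged reconstruction from the abstract, not the authors' code; the area bounds and step size are illustrative.

```python
import numpy as np
from collections import deque

def label_components(binary):
    """4-connected component labeling via BFS (pure Python/NumPy)."""
    labels = np.zeros(binary.shape, dtype=int)
    h, w = binary.shape
    current = 0
    for i in range(h):
        for j in range(w):
            if binary[i, j] and labels[i, j] == 0:
                current += 1
                labels[i, j] = current
                q = deque([(i, j)])
                while q:
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and binary[ny, nx] \
                                and labels[ny, nx] == 0:
                            labels[ny, nx] = current
                            q.append((ny, nx))
    return labels, current

def find_eye_blocks(gray, t_min, t_max, step, min_area, max_area):
    """Sweep the threshold upward; return the first threshold at which exactly
    two dark blocks of plausible eye size appear, with their centroids."""
    for t in range(t_min, t_max + 1, step):
        binary = gray < t                # eyes are dark regions in a face image
        labels, n = label_components(binary)
        blocks = [np.argwhere(labels == k) for k in range(1, n + 1)]
        eyes = [b for b in blocks if min_area <= len(b) <= max_area]
        if len(eyes) == 2:
            return t, [tuple(b.mean(axis=0)) for b in eyes]
    return None, []
```

The paper's final symmetry check would then compare the two candidate blocks with a 2D correlation coefficient; that step is omitted here.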

  4. Geometric Properties of Grassmannian Frames for R^2 and R^3

    Directory of Open Access Journals (Sweden)

    Benedetto John J

    2006-01-01

    Full Text Available Grassmannian frames are frames satisfying a min-max correlation criterion. We translate a geometrically intuitive approach for two- and three-dimensional Euclidean space (R^2 and R^3) into a new analytic method which is used to classify many Grassmannian frames in this setting. The method and associated algorithm decrease the maximum frame correlation, and hence give rise to the construction of specific examples of Grassmannian frames. Many of the results are known by other techniques, and even more generally, so that this paper can be viewed as a tutorial. However, our analytic method is presented with the goal of developing it to address unresolved problems in d-dimensional Hilbert spaces, which serve as a setting for spherical codes, erasure channel modeling, and other aspects of communications theory.

  5. Continuous-variable geometric phase and its manipulation for quantum computation in a superconducting circuit.

    Science.gov (United States)

    Song, Chao; Zheng, Shi-Biao; Zhang, Pengfei; Xu, Kai; Zhang, Libo; Guo, Qiujiang; Liu, Wuxin; Xu, Da; Deng, Hui; Huang, Keqiang; Zheng, Dongning; Zhu, Xiaobo; Wang, H

    2017-10-20

    Geometric phase, associated with holonomy transformation in quantum state space, is an important quantum-mechanical effect. Besides fundamental interest, this effect has practical applications, among which geometric quantum computation is a paradigm, where quantum logic operations are realized through geometric phase manipulation that has some intrinsic noise-resilient advantages and may enable simplified implementation of multi-qubit gates compared to the dynamical approach. Here we report observation of a continuous-variable geometric phase and demonstrate a quantum gate protocol based on this phase in a superconducting circuit, where five qubits are controllably coupled to a resonator. Our geometric approach allows for one-step implementation of n-qubit controlled-phase gates, which represents a remarkable advantage compared to gate decomposition methods, where the number of required steps dramatically increases with n. Following this approach, we realize these gates with n up to 4, verifying the high efficiency of this geometric manipulation for quantum computation.

  6. Soft functions for generic jet algorithms and observables at hadron colliders

    Energy Technology Data Exchange (ETDEWEB)

    Bertolini, Daniele [Lawrence Berkeley National Laboratory, Berkeley, CA (United States). Theoretical Physics Group; California Univ., Berkeley, CA (United States). Berkeley Center for Theoretical Physics; Kolodrubetz, Daniel; Stewart, Iain W. [Massachusetts Institute of Technology, Cambridge, MA (United States). Center for Theoretical Physics; Duff, Neill [Los Alamos National Laboratory, NM (United States). Theoretical Div.; Massachusetts Institute of Technology, Cambridge, MA (United States). Center for Theoretical Physics; Pietrulewicz, Piotr; Tackmann, Frank J. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany). Theory Group; Waalewijn, Wouter J. [NIKHEF, Amsterdam (Netherlands). Theory Group; Amsterdam Univ. (Netherlands). Inst. for Theoretical Physics Amsterdam and Delta Inst. for Theoretical Physics

    2017-07-15

    We introduce a method to compute one-loop soft functions for exclusive N-jet processes at hadron colliders, allowing for different definitions of the algorithm that determines the jet regions and of the measurements in those regions. In particular, we generalize the N-jettiness hemisphere decomposition of T. T. Jouttenus et al. (2011) in a manner that separates the dependence on the jet boundary from the observables measured inside the jet and beam regions. Results are given for several factorizable jet definitions, including anti-kT, XCone, and other geometric partitionings. We calculate explicitly the soft functions for angularity measurements, including jet mass and jet broadening, in pp → L + 1 jet and explore the differences for various jet vetoes and algorithms. This includes a consistent treatment of rapidity divergences when applicable. We also compute analytic results for these soft functions in an expansion for a small jet radius R. We find that the small-R results, including corrections up to O(R^2), accurately capture the full behavior over a large range of R.

  7. Document turn-over analysis to determine need of NPP construction in build-up structures of reinforced concrete

    International Nuclear Information System (INIS)

    Vojpe, D.K.; Lyubavin, V.K.

    1986-01-01

    A document turn-over analysis to determine the need of NPP construction for build-up structures of reinforced concrete is carried out. Ways of improving the determination of the needs of the NPP construction board for the mentioned structures are pointed out

  8. A face recognition algorithm based on multiple individual discriminative models

    DEFF Research Database (Denmark)

    Fagertun, Jens; Gomez, David Delgado; Ersbøll, Bjarne Kjær

    2005-01-01

    In this paper, a novel algorithm for facial recognition is proposed. The technique combines the color texture and geometrical configuration provided by face images. Landmarks and pixel intensities are used by Principal Component Analysis and Fisher Linear Discriminant Analysis to associate...

  9. A Graph-Based Approach for 3D Building Model Reconstruction from Airborne LiDAR Point Clouds

    Directory of Open Access Journals (Sweden)

    Bin Wu

    2017-01-01

    Full Text Available 3D building model reconstruction is of great importance for environmental and urban applications. Airborne light detection and ranging (LiDAR is a very useful data source for acquiring detailed geometric and topological information of building objects. In this study, we employed a graph-based method based on hierarchical structure analysis of building contours derived from LiDAR data to reconstruct urban building models. The proposed approach first uses a graph theory-based localized contour tree method to represent the topological structure of buildings, then separates the buildings into different parts by analyzing their topological relationships, and finally reconstructs the building model by integrating all the individual models established through the bipartite graph matching process. Our approach provides a more complete topological and geometrical description of building contours than existing approaches. We evaluated the proposed method by applying it to the Lujiazui region in Shanghai, China, a complex and large urban scene with various types of buildings. The results revealed that complex buildings could be reconstructed successfully with a mean modeling error of 0.32 m. Our proposed method offers a promising solution for 3D building model reconstruction from airborne LiDAR point clouds.

  10. An Efficient Return Algorithm for Non-Associated Mohr-Coulomb Plasticity

    DEFF Research Database (Denmark)

    Clausen, Johan Christian; Damkilde, Lars; Andersen, Lars

    2005-01-01

    An efficient return algorithm for stress update in numerical plasticity computations is presented. The yield criterion must be linear in principal stress space and can be composed of any number of yield planes. Each of these yield planes can have an associated or non-associated flow rule. The stress return and the formation of the constitutive matrix are carried out in principal stress space, where the manipulations simplify and rely on geometrical arguments. The singularities arising at the intersection of yield planes are dealt with in a straightforward way, also based on geometrical arguments.
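A minimal sketch of a single-plane stress return in principal stress space, under the assumption of a linear yield function f(σ) = a·σ − k and a (possibly non-associated) plastic-potential gradient b; the paper's handling of multiple yield planes and corner singularities is omitted.

```python
import numpy as np

def principal_return(sigma_trial, a, b, k, D):
    """Single-plane return mapping in principal stress space.

    f = a . sigma - k is the linear yield function; b is the plastic-potential
    gradient (b = a recovers the associated flow rule); D is the elastic
    stiffness in principal space. If the trial stress violates f <= 0, it is
    returned along D @ b so that the updated stress lies exactly on the plane.
    """
    f = a @ sigma_trial - k
    if f <= 0.0:
        return sigma_trial                  # elastic step: no return needed
    Db = D @ b
    return sigma_trial - (f / (a @ Db)) * Db
```

By construction the returned stress satisfies a·σ − k = 0 exactly, since the correction removes the full yield-function violation f.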

  11. Multi-objective mixture-based iterated density estimation evolutionary algorithms

    NARCIS (Netherlands)

    Thierens, D.; Bosman, P.A.N.

    2001-01-01

    We propose an algorithm for multi-objective optimization using a mixture-based iterated density estimation evolutionary algorithm (MIDEA). The MIDEA algorithm is a probabilistic model-building evolutionary algorithm that constructs at each generation a mixture of factorized probability distributions.

  12. Statistical and Geometrical Way of Model Selection for a Family of Subdivision Schemes

    Institute of Scientific and Technical Information of China (English)

    Ghulam MUSTAFA

    2017-01-01

    The objective of this article is to introduce a generalized algorithm to produce the m-point n-ary approximating subdivision schemes (for any integers m, n ≥ 2). The proposed algorithm has been derived from uniform B-spline blending functions. In particular, we study statistical and geometrical/traditional methods for model selection and assessment when selecting a subdivision curve from the proposed family of schemes to model noisy and noise-free data. Moreover, we also discuss the deviation of subdivision curves generated by the proposed family of schemes from the convex polygonal curve. Furthermore, visual performances of the schemes are presented to compare numerically the Gibbs oscillations with the existing family of schemes.
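The simplest member of such a B-spline-derived family is the 2-point binary corner-cutting scheme (Chaikin's algorithm, which reproduces the quadratic B-spline in the limit). The sketch below illustrates that special case, not the paper's general m-point n-ary construction.

```python
import numpy as np

def chaikin(points, levels=1, closed=False):
    """2-point binary (corner-cutting) subdivision with masks
    (3/4, 1/4) and (1/4, 3/4): each edge P->Q is replaced by the two
    points 0.75*P + 0.25*Q and 0.25*P + 0.75*Q."""
    P = np.asarray(points, dtype=float)
    for _ in range(levels):
        if closed:
            Q = np.roll(P, -1, axis=0)       # edges wrap around the polygon
        else:
            Q = P[1:]                        # open polyline: n-1 edges
            P = P[:-1]
        A = 0.75 * P + 0.25 * Q
        B = 0.25 * P + 0.75 * Q
        out = np.empty((2 * len(A), P.shape[1]))
        out[0::2] = A                        # interleave the two new points
        out[1::2] = B
        P = out
    return P
```

Each subdivision level doubles the point count and rounds the corners; repeated application converges to a smooth limit curve.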

  13. A method of geometrical factors in the theory and interpretation of formation density logging

    International Nuclear Information System (INIS)

    Kozhevnikov, D.A.; Khathmullin, I.Ph.

    1990-01-01

    An interpretational model based on the ''radial geometrical factors concept'' is developed to describe the count rate of a formation density logging (FDL) multi-spaced tool. The model includes two metrological parameters for each detector-source pair of a multi-spaced probe: sensitivity to formation density, S, and radial sensitivity, a. Apart from its universal application, the algorithm also allows some diagnosis of the intermediate zone; that is, it can reveal zones of consolidation and fracturing. It is shown that empirical algorithms realizing different forms of ''spine and ribs'' charts may be derived from the general algorithm. There is a practical possibility of resolving problems associated with the vicinity of the borehole wall by means of a triple-spaced FDL tool; a corresponding algorithm and a metrological optimization procedure are given. The validity of the relations established is substantiated by physical measurements and by Monte Carlo modelling. (author)

  14. Margins for geometric uncertainty around organs at risk in radiotherapy

    International Nuclear Information System (INIS)

    McKenzie, Alan; Herk, Marcel van; Mijnheer, Ben

    2002-01-01

    Background and purpose: ICRU Report 62 suggests drawing margins around organs at risk (ORs) to produce planning organ at risk volumes (PRVs) to account for geometric uncertainty in the radiotherapy treatment process. This paper proposes an algorithm for drawing such margins, compares the recommended margin widths with examples from clinical practice, and discusses the limitations of the approach. Method: The PRV is defined so that, despite the geometric uncertainties, the dose calculated within the PRV by the treatment planning system can be used to represent the dose in the OR with a certain confidence level. A suitable level is where, in the majority of cases (90%), the dose-volume histogram of the PRV will not under-represent the high-dose components in the OR. In order to provide guidelines on how to do this in clinical practice, this paper distinguishes types of OR in terms of the tolerance doses relative to the prescription dose and suggests appropriate margins for serial-structure and parallel-structure ORs. Results: In some instances of large and parallel ORs, the clinician may judge that the complication risk in omitting a margin is acceptable. Otherwise, for all types of OR, systematic treatment preparation uncertainties may be accommodated by an OR→PRV margin width of 1.3Σ, where Σ is the standard deviation of the combined systematic (treatment preparation) uncertainties. In the case of serial ORs or small, parallel ORs, the effects of blurring caused by daily treatment execution errors (set-up and organ motion) should also be taken into account. Near a region of high dose, blurring tends to shift the isodoses away from the unblurred edge as shown on the treatment planning system by an amount that may be represented by 0.5σ. This margin may be used either to increase or to decrease the margin already calculated for systematic uncertainties, depending upon the size of the tolerance dose relative to the detailed planned dose.
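The margin recipe above can be condensed into a small helper. The sign convention for the 0.5σ term is exposed as a flag, since the paper states it may either increase or decrease the systematic margin depending on the tolerance dose relative to the planned dose; the function name and interface are illustrative only.

```python
def prv_margin(Sigma, sigma, widen_for_blurring=True):
    """OR -> PRV margin width (same units as Sigma and sigma, e.g. mm).

    Sigma: standard deviation of combined systematic (preparation) errors,
           accommodated by a 1.3*Sigma margin.
    sigma: standard deviation of daily execution errors (set-up, organ
           motion), whose blurring shifts isodoses by about 0.5*sigma.
    widen_for_blurring: whether the 0.5*sigma term enlarges (True) or
           shrinks (False) the systematic margin, per the clinical situation.
    """
    margin = 1.3 * Sigma
    margin += 0.5 * sigma if widen_for_blurring else -0.5 * sigma
    return margin
```

For example, with Σ = 2 mm and σ = 3 mm the widened margin is 1.3·2 + 0.5·3 = 4.1 mm.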

  15. Knowledge Based 3d Building Model Recognition Using Convolutional Neural Networks from LIDAR and Aerial Imageries

    Science.gov (United States)

    Alidoost, F.; Arefi, H.

    2016-06-01

    In recent years, with the development of high-resolution data acquisition technologies, many different approaches and algorithms have been presented to extract accurate and timely updated 3D models of buildings as a key element of city structures for numerous applications in urban mapping. In this paper, a novel model-based approach is proposed for automatic recognition of building roof models such as flat, gable, hip, and pyramid-hip roofs, based on deep structures for hierarchical learning of features extracted from both LiDAR and aerial ortho-photos. The main steps of this approach include building segmentation, feature extraction and learning, and finally building roof labeling in a supervised pre-trained Convolutional Neural Network (CNN) framework, yielding an automatic recognition system for various types of buildings over an urban area. In this framework, the height information provides invariant geometric features that help the convolutional neural network localize the boundary of each individual roof. A CNN is a kind of feed-forward neural network based on the multilayer perceptron concept, consisting of a number of convolutional and subsampling layers in an adaptable structure; it is widely used in pattern recognition and object detection applications. Since the training dataset is a small library of labeled models for different shapes of roofs, the computation time of learning can be decreased significantly using pre-trained models. The experimental results highlight the effectiveness of the deep learning approach in detecting and extracting the pattern of building roofs automatically, considering the complementary nature of height and RGB information.

  16. [Study of building quantitative analysis model for chlorophyll in winter wheat with reflective spectrum using MSC-ANN algorithm].

    Science.gov (United States)

    Liang, Xue; Ji, Hai-yan; Wang, Peng-xin; Rao, Zhen-hong; Shen, Bing-hui

    2010-01-01

    A preprocessing method, multiplicative scatter correction (MSC), was used to effectively reject noise in the original spectra produced by environmental physical factors. The principal components of the near-infrared spectra were then calculated by nonlinear iterative partial least squares (NIPALS) before building the back-propagation artificial neural network (BP-ANN) model, with the number of principal components determined by cross validation. The calculated principal components were used as the inputs of the artificial neural network model, which was used to find the relation between chlorophyll in winter wheat and the reflective spectrum and thereby predict the chlorophyll content. The correlation coefficient (r) of the calibration set was 0.9604, while the standard deviation (SD) and relative standard deviation (RSD) were 0.187 and 5.18%, respectively. The correlation coefficient (r) of the prediction set was 0.9600, and the SD and RSD were 0.145 and 4.21%, respectively. These results indicate that the MSC-ANN algorithm can effectively reject noise in the original spectra produced by environmental physical factors and can establish an accurate model to predict the chlorophyll content of living leaves, replacing the classical method and meeting the needs of fast analysis of agricultural products.
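The MSC preprocessing step admits a compact sketch: each spectrum is regressed against a reference (commonly the mean spectrum), and the fitted offset and slope are removed. This is a generic MSC implementation, not the authors' code.

```python
import numpy as np

def msc(spectra, reference=None):
    """Multiplicative scatter correction.

    Each spectrum x_i is modeled as x_i ~ a_i + b_i * ref (additive offset
    plus multiplicative scatter against the reference spectrum), and the
    corrected spectrum is (x_i - a_i) / b_i."""
    X = np.asarray(spectra, dtype=float)
    ref = X.mean(axis=0) if reference is None else np.asarray(reference, float)
    corrected = np.empty_like(X)
    for i, x in enumerate(X):
        b, a = np.polyfit(ref, x, 1)     # slope b_i, intercept a_i
        corrected[i] = (x - a) / b
    return corrected
```

If two spectra differ from the reference only by affine scatter effects, MSC maps both onto the reference exactly; the corrected spectra would then feed into NIPALS/BP-ANN as in the paper.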

  17. Humidity Build-Up in a Typical Electronic Enclosure Exposed to Cycling Conditions and Effect on Corrosion Reliability

    DEFF Research Database (Denmark)

    Conseil, Helene; Gudla, Visweswara Chakravarthy; Jellesen, Morten Stendahl

    2016-01-01

    The design of electronic device enclosures plays a major role in determining the humidity build-up inside the device as a response to varying external humidity. Therefore, the corrosion reliability of electronic devices is directly connected to the enclosure design. This paper describes ..., thermal mass, and port/opening size. The effect of the internal humidity build-up on corrosion reliability has been evaluated by measuring the leakage current (LC) on interdigitated test comb patterns, which are precontaminated with sodium chloride and placed inside the enclosure. The results showed that exposure to cycling temperature causes a significant change of internal water vapor concentration. The maximum humidity reached was a function of the opening size and the presence of thermal mass inside the enclosure. A pumping effect was observed due to the cycling temperature, and the increase...

  18. Characteristics of Recycled Concrete Aggregates from Precast Slab Block Buildings

    Science.gov (United States)

    Venkrbec, Václav; Nováková, Iveta; Henková, Svatava

    2017-10-01

    Precast slab block buildings (PSBB) occur typically and frequently in Central and Eastern Europe, as well as elsewhere in the world. Some of these buildings are currently used beyond their service life capacity. The utilization of recycled materials from these buildings, applying the principles of sustainable construction, will probably be significant in the following years. Documentation from the manufacturing processes of prefabricated blocks for precast slab block buildings is not available, and it is also difficult to verify the technological discipline exercised during the construction of these buildings. Therefore, the properties of recycled concrete aggregates (RCA) produced from construction and demolition waste (C&DW) of precast slab block buildings built between the 1950s and the 1990s are not sufficiently known. The demolition of these buildings is very rare today, but an increase in demolitions can be expected in the future. The use of RCA in new concrete requires verification/testing of the geometrical and physical properties of the RCA according to the EN 12 620+A1 standard. The aim of this contribution is to present a case study of the demolition of a slab block building with emphasis on RCA usage. The paper presents the results of tests according to European standards for determining selected geometrical and physical properties of the RCA, describing and evaluating tests such as determination of particle size distribution (sieve analysis), content of fine particles, and determination of density and water absorption. The results of the property testing of the RCA are compared with the properties of natural aggregate, and the general boundary conditions of the particular RCA tests are presented.

  19. Photovoltaic building sheathing element with anti-slide features

    Science.gov (United States)

    Keenihan, James R.; Langmaid, Joseph A.; Lopez, Leonardo C.

    2015-09-08

    The present invention is premised upon an assembly that includes at least a photovoltaic building sheathing element capable of being affixed on a building structure. The element includes a photovoltaic cell assembly; a body portion attached to one or more portions of the photovoltaic cell assembly; and at least a first and a second connector assembly capable of directly or indirectly electrically connecting the photovoltaic cell assembly to one or more adjoining devices; wherein the body portion includes one or more geometric features adapted to engage a vertically adjoining device before installation.

  20. Building Evacuation with Mobile Devices

    OpenAIRE

    Merkel, Sabrina

    2014-01-01

    The rapidly growing world population and increasingly dense settlements demand ever-larger and more complex buildings from today's engineers. In comparison to this technological progress, a building's equipment for emergency evacuation has hardly been developed further. This work presents a concept for a building evacuation system based on mobile devices. Furthermore, various algorithms for route planning with mobile devices and for indoor localization of mobile devices are addressed.

  1. Geometric Mixing, Peristalsis, and the Geometric Phase of the Stomach.

    Science.gov (United States)

    Arrieta, Jorge; Cartwright, Julyan H E; Gouillart, Emmanuelle; Piro, Nicolas; Piro, Oreste; Tuval, Idan

    2015-01-01

    Mixing fluid in a container at low Reynolds number--in an inertialess environment--is not a trivial task. Reciprocating motions merely lead to cycles of mixing and unmixing, so continuous rotation, as used in many technological applications, would appear to be necessary. However, there is another solution: movement of the walls in a cyclical fashion to introduce a geometric phase. We show using journal-bearing flow as a model that such geometric mixing is a general tool for using deformable boundaries that return to the same position to mix fluid at low Reynolds number. We then simulate a biological example: we show that mixing in the stomach functions because of the "belly phase," peristaltic movement of the walls in a cyclical fashion introduces a geometric phase that avoids unmixing.

  2. Geometric Mixing, Peristalsis, and the Geometric Phase of the Stomach.

    Directory of Open Access Journals (Sweden)

    Jorge Arrieta

    Full Text Available Mixing fluid in a container at low Reynolds number--in an inertialess environment--is not a trivial task. Reciprocating motions merely lead to cycles of mixing and unmixing, so continuous rotation, as used in many technological applications, would appear to be necessary. However, there is another solution: movement of the walls in a cyclical fashion to introduce a geometric phase. We show using journal-bearing flow as a model that such geometric mixing is a general tool for using deformable boundaries that return to the same position to mix fluid at low Reynolds number. We then simulate a biological example: we show that mixing in the stomach functions because of the "belly phase," peristaltic movement of the walls in a cyclical fashion introduces a geometric phase that avoids unmixing.

  3. Regularities of radiation defects build up on oxide materials surface; Zakonomernosti nakopleniya radiatsionnykh defektov na poverkhnosti oksidnykh materialov

    Energy Technology Data Exchange (ETDEWEB)

    Bitenbaev, M I; Polyakov, A I [Fiziko-Tekhnicheskij Inst., Almaty (Kazakhstan); Tuseev, T [Inst. Yadernoj Fiziki, Almaty (Kazakhstan)

    2005-07-01

    Analysis of experimental data from studies of radiation defects on different oxides (of silicon, beryllium, aluminium, and rare-earth elements) irradiated by photo-, gamma-, neutron- and alpha-radiation, protons, and helium ions shows that gas adsorption on surface centers and radiation-defect build-up in the metal oxide are correlated. These processes were described by equivalent kinetic equations for the analysis of radiation-defect build-up in the different metal oxides. The analysis revealed that the number of radiation defects increases monotonically up to a limiting value as the treatment temperature grows, and the rate constant of radical decay under ionizing radiation increases as well. The ranking of surface-defect density in the different oxides, which defines the absorbing activity of these materials, is: silicon oxide → beryllium oxide → aluminium oxide. It was thus found that silicon oxide is the optimal material for absorbing systems in terms of power intensity, and beryllium oxide in terms of adsorption efficiency.

  4. Role of urban surface roughness in road-deposited sediment build-up and wash-off

    Science.gov (United States)

    Zhao, Hongtao; Jiang, Qian; Xie, Wenxia; Li, Xuyong; Yin, Chengqing

    2018-05-01

    Urban road surface roughness is one of the most important factors in the estimation of surface runoff loads caused by road-deposited sediment (RDS) wash-off and in the design of its control measures. However, because of a lack of experimental data to distinguish the role of surface roughness, the effects of surface roughness on RDS accumulation and release are not clear. In this study, paired asphalt and concrete road surfaces and rainfall simulation designs were used to distinguish the role of surface roughness in RDS build-up and wash-off. Our results showed that typical asphalt surfaces often have greater depression depths than typical concrete surfaces, indicating that asphalt surfaces are relatively rougher than concrete surfaces. Asphalt surfaces retain a larger amount of RDS, a relatively higher percentage of coarser particles, larger RDS wash-off loads, and a lower wash-off percentage than concrete surfaces. Surface roughness affects the mobility of RDS of different particle sizes differently during rainfall runoff, and the settleable particles (44-149 μm) were the most notably influenced. Furthermore, the first flush phenomenon tended to be stronger on relatively smooth surfaces than on relatively rough surfaces. Overall, surface roughness plays an important role in the complete process of RDS build-up and wash-off on roads with different surface characteristics.

  5. Geometric processing workflow for vertical and oblique hyperspectral frame images collected using UAV

    Science.gov (United States)

    Markelin, L.; Honkavaara, E.; Näsi, R.; Nurminen, K.; Hakala, T.

    2014-08-01

    Remote sensing based on unmanned airborne vehicles (UAVs) is a rapidly developing field of technology. UAVs enable accurate, flexible, low-cost and multiangular measurements of 3D geometric, radiometric, and temporal properties of land and vegetation using various sensors. In this paper we present a geometric processing chain for a multiangular measurement system that is designed for measuring object directional reflectance characteristics in a wavelength range of 400-900 nm. The technique is based on a novel, lightweight spectral camera designed for UAV use. The multiangular measurement is conducted by collecting vertical and oblique area-format spectral images. End products of the geometric processing are image exterior orientations, 3D point clouds and digital surface models (DSM). These data are needed for the radiometric processing chain that produces reflectance image mosaics and multiangular bidirectional reflectance factor (BRF) observations. The geometric processing workflow consists of the following three steps: (1) determining approximate image orientations using Visual Structure from Motion (VisualSFM) software, (2) calculating improved orientations and sensor calibration using a method based on self-calibrating bundle block adjustment (standard photogrammetric software; this step is optional), and finally (3) creating dense 3D point clouds and DSMs using Photogrammetric Surface Reconstruction from Imagery (SURE) software, which is based on a semi-global matching algorithm and is capable of providing a point density corresponding to the pixel size of the image. We have tested the geometric processing workflow over various targets, including test fields, agricultural fields, lakes and complex 3D structures like forests.

  6. Diagnostic information system dynamics in the evaluation of machine learning algorithms for the supervision of energy efficiency of district heating-supplied buildings

    International Nuclear Information System (INIS)

    Kiluk, Sebastian

    2017-01-01

    Highlights: • Energy efficiency classification sustainability benefits from knowledge prediction. • Diagnostic classification can be validated with its dynamics and current data. • Diagnostic classification dynamics provides novelty extraction for knowledge update. • Data mining comparison can be performed with knowledge dynamics and uncertainty. • Diagnostic information refinement benefits from comparing classifiers' dynamics. - Abstract: Modern ways of exploring the diagnostic knowledge provided by data mining and machine learning raise some concern about the ways of evaluating the quality of output knowledge, usually represented by information systems. Especially in district heating, the stationarity of efficiency models, and thus the relevance of diagnostic classification system, cannot be ensured due to the impact of social, economic or technological changes, which are hard to identify or predict. Therefore, data mining and machine learning have become an attractive strategy for automatically and continuously absorbing such dynamics. This paper presents a new method of evaluation and comparison of diagnostic information systems gathered algorithmically in district heating efficiency supervision based on exploring the evolution of information system and analyzing its dynamic features. The process of data mining and knowledge discovery was applied to the data acquired from district heating substations’ energy meters to provide the automated discovery of diagnostic knowledge base necessary for the efficiency supervision of district heating-supplied buildings. The implemented algorithm consists of several steps of processing the billing data, including preparation, segmentation, aggregation and knowledge discovery stage, where classes of abstract models representing energy efficiency constitute an information system representing diagnostic knowledge about the energy efficiency of buildings favorably operating under similar climate conditions and

  7. Phylogenetic reconstruction using algebraic geometry; Reconstrucción filogenética usando geometría algebraica

    Directory of Open Access Journals (Sweden)

    Casanellas, Marta

    2010-12-01

    Full Text Available A new approach to phylogenetic reconstruction has been emerging in the last years. Given an evolutionary model for a set of species, the joint probability distribution of the nucleotides for these species satisfies some algebraic constraints called invariants. These invariants have theoretical and practical interest, since they can be used to infer phylogenies. In this paper, we explain how to use these invariants to design algorithms for phylogenetic reconstruction and we show how the application of tools and theoretical results coming from commutative algebra and algebraic geometry can improve the performance and the efficiency of these algorithms.

    A new approach to phylogenetic reconstruction based on algebraic geometry has been gaining strength in recent years. Given a fixed evolutionary model for a set of species, the theoretical distributions of the nucleotides of these species satisfy certain algebraic relations that we call invariants. These invariants are of theoretical and practical interest, since they can be used to infer phylogenies. In this article, we explain how to use the invariants to implement phylogenetic reconstruction algorithms, and we show how the use of techniques and theoretical results from commutative algebra and algebraic geometry can contribute to improving the effectiveness and efficiency of these algorithms.

  8. Geometric Positioning Accuracy Improvement of ZY-3 Satellite Imagery Based on Statistical Learning Theory

    Directory of Open Access Journals (Sweden)

    Niangang Jiao

    2018-05-01

    Full Text Available With the increasing demand for high-resolution remote sensing images for mapping and monitoring the Earth’s environment, geometric positioning accuracy improvement plays a significant role in the image preprocessing step. Based on statistical learning theory, we propose a new method to improve the geometric positioning accuracy without ground control points (GCPs). Multi-temporal images from the ZY-3 satellite are tested, and the bias-compensated rational function model (RFM) is applied as the block adjustment model in our experiment. An easy and stable weighting strategy and the fast iterative shrinkage-thresholding algorithm (FISTA), which is widely used in the field of compressive sensing, are improved and utilized to define the normal equation matrix and solve it. Then, the residual errors after traditional block adjustment are acquired and tested with the newly proposed inherent error compensation model based on statistical learning theory. The final results indicate that the geometric positioning accuracy of ZY-3 satellite imagery can be greatly improved with our proposed method.
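
    The fast iterative shrinkage-thresholding algorithm cited in the record is standard; the paper's weighting strategy and normal-equation details are not reproduced here. A minimal sketch for a generic l1-regularized least-squares problem, which is the setting FISTA is usually stated in:

```python
import numpy as np

def fista(A, b, lam=0.1, n_iter=100):
    """FISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y = x.copy()
    t = 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)
        z = y - grad / L                        # gradient step
        x_new = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)  # momentum extrapolation
        x, t = x_new, t_new
    return x
```

    The momentum sequence t is what distinguishes FISTA from plain ISTA and gives the faster O(1/k^2) convergence rate.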

  9. Online learning algorithm for ensemble of decision rules

    KAUST Repository

    Chikalov, Igor; Moshkov, Mikhail; Zielosko, Beata

    2011-01-01

    We describe an online learning algorithm that builds a system of decision rules for a classification problem. Rules are constructed according to the minimum description length principle by a greedy algorithm or using the dynamic programming approach.

  10. DOA estimation for conformal vector-sensor array using geometric algebra

    Science.gov (United States)

    Meng, Tianzhen; Wu, Minjie; Yuan, Naichang

    2017-12-01

    In this paper, the problem of direction of arrival (DOA) estimation is considered in the case of multiple polarized signals impinging on the conformal electromagnetic vector-sensor array (CVA). We focus on modeling the manifold holistically by a new mathematical tool called geometric algebra. Compared with existing methods, the presented one has two main advantages. Firstly, it acquires higher resolution by preserving the orthogonality of the signal components. Secondly, it avoids the cumbersome matrix operations while performing the coordinate transformations, and therefore, has a much lower computational complexity. Simulation results are provided to demonstrate the effectiveness of the proposed algorithm.

  11. Geometrical optical illusionists.

    Science.gov (United States)

    Wade, Nicholas J

    2014-01-01

    Geometrical optical illusions were given this title by Oppel in 1855. Variants on such small distortions of visual space were illustrated thereafter, many of which bear the names of those who first described them. Some original forms of the geometrical optical illusions are shown together with 'perceptual portraits' of those who described them. These include: Roget, Chevreul, Fick, Zöllner, Poggendorff, Hering, Kundt, Delboeuf, Mach, Helmholtz, Hermann, von Bezold, Müller-Lyer, Lipps, Thiéry, Wundt, Münsterberg, Ebbinghaus, Titchener, Ponzo, Luckiesh, Sander, Ehrenstein, Gregory, Heard, White, Shepard, and Lingelbach. The illusions are grouped under the headings of orientation, size, the combination of size and orientation, and contrast. Early theories of illusions, before geometrical optical illusions were so named, are mentioned briefly.

  12. Model-based recognition of 3-D objects by geometric hashing technique

    International Nuclear Information System (INIS)

    Severcan, M.; Uzunalioglu, H.

    1992-09-01

    A model-based object recognition system is developed for recognition of polyhedral objects. The system consists of feature extraction, modelling and matching stages. Linear features are used for object descriptions. Lines are obtained from edges using rotation transform. For modelling and recognition process, geometric hashing method is utilized. Each object is modelled using 2-D views taken from the viewpoints on the viewing sphere. A hidden line elimination algorithm is used to find these views from the wire frame model of the objects. The recognition experiments yielded satisfactory results. (author). 8 refs, 5 figs
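
    The geometric hashing step described above can be illustrated on 2-D point sets, which matches the record's use of 2-D views of each model. This is a generic sketch, not the paper's implementation: model points are hashed under every ordered basis pair in a similarity-invariant frame, and a scene is recognized by voting; the quantization step is an illustrative choice:

```python
from collections import defaultdict
import numpy as np

def basis_coords(points, i, j):
    """Express every point in the similarity-invariant frame of basis pair (i, j)."""
    pts = np.asarray(points, float)
    p0, p1 = pts[i], pts[j]
    origin = (p0 + p1) / 2.0
    ux = p1 - p0
    scale = np.linalg.norm(ux)
    ux = ux / scale
    uy = np.array([-ux[1], ux[0]])              # perpendicular unit vector
    return [(np.dot(p - origin, ux) / scale, np.dot(p - origin, uy) / scale)
            for p in pts]

def build_table(model, quant=0.1):
    """Hash each model point's invariant coordinates under every ordered basis pair."""
    table = defaultdict(list)
    for i in range(len(model)):
        for j in range(len(model)):
            if i == j:
                continue
            for u, v in basis_coords(model, i, j):
                table[(round(u / quant), round(v / quant))].append((i, j))
    return table

def recognize(table, scene, quant=0.1):
    """Vote for model basis pairs using one scene basis pair (0, 1);
    a full implementation would iterate over several scene bases."""
    votes = defaultdict(int)
    for u, v in basis_coords(scene, 0, 1):
        for basis in table.get((round(u / quant), round(v / quant)), []):
            votes[basis] += 1
    return max(votes, key=votes.get) if votes else None
```

    Because the frame is normalized by the basis length, the stored invariants survive rotation, translation and uniform scaling of the scene.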

  13. Optimal interconnection trees in the plane theory, algorithms and applications

    CERN Document Server

    Brazil, Marcus

    2015-01-01

    This book explores fundamental aspects of geometric network optimisation with applications to a variety of real world problems. It presents, for the first time in the literature, a cohesive mathematical framework within which the properties of such optimal interconnection networks can be understood across a wide range of metrics and cost functions. The book makes use of this mathematical theory to develop efficient algorithms for constructing such networks, with an emphasis on exact solutions.  Marcus Brazil and Martin Zachariasen focus principally on the geometric structure of optimal interconnection networks, also known as Steiner trees, in the plane. They show readers how an understanding of this structure can lead to practical exact algorithms for constructing such trees.  The book also details numerous breakthroughs in this area over the past 20 years, features clearly written proofs, and is supported by 135 colour and 15 black and white figures. It will help graduate students, working mathematicians, ...

  14. Unconstrained Finite Element for Geometrical Nonlinear Dynamics of Shells

    Directory of Open Access Journals (Sweden)

    Humberto Breves Coda

    2009-01-01

    Full Text Available This paper presents a positional FEM formulation to deal with the geometrically nonlinear dynamics of shells. The main objective is to develop a new FEM methodology based on the minimum potential energy theorem written in terms of nodal positions and generalized unconstrained vectors, not displacements and rotations. These characteristics are the novelty of the present work and avoid the use of large rotation approximations. A nondimensional auxiliary coordinate system is created, and the change of configuration function is written following two independent mappings from which the strain energy function is derived. This methodology is called positional and, as far as the authors' knowledge goes, is a new procedure for approximating geometrically nonlinear structures. In this paper a proof of the linear and angular momentum conservation property of the Newmark algorithm is provided for the total Lagrangian description. The proposed shell element is locking free for elastic stress-strain relations due to the presence of linear strain variation along the shell thickness. The curved, high-order element together with an implicit procedure to solve nonlinear equations guarantees precision in calculations. Momentum conservation, the locking-free behavior, and the frame invariance of the adopted mapping are numerically confirmed by examples.

  15. The evaluation of nylon and polyethylene as build-up material in a neutron therapy beam

    International Nuclear Information System (INIS)

    Hough, J.H.; Binns, P.J.

    1995-01-01

    In high-energy neutron beams a substantial amount of build-up material is required to irradiate biological samples under conditions of charged particle equilibrium. Ideally A-150 tissue-equivalent plastic is used for this purpose. This material is however not always readily available and hence the need for a substitute compound. The selected hydrocarbon should satisfy two requirements: the quality of the radiation on the distal side needs to be the same as that measured for A-150 plastic and the absorbed dose should remain consistent. A tissue-equivalent proportional counter operating at reduced pressure not only measures the absorbed dose accurately but provides a means for assessing the nature of a radiation field in terms of a secondary charged particle spectrum. Using build-up caps manufactured from nylon (type 6) and polyethylene, it is shown that the former is an acceptable substitute for A-150 plastic. The data further demonstrate that both the absorbed dose and the spectral character of the measured single-event distribution are altered when polyethylene is used and that these discrepancies are attributable to the higher hydrogen content of polyethylene. (Author)

  16. Cost and performance optimization of natural draft dry cooling towers using genetic algorithm. Paper no. IGEC-1-002

    International Nuclear Information System (INIS)

    Shokuhmand, H.; Ghaempanah, B.

    2005-01-01

    In this paper the cost-performance optimization of natural draft dry cooling towers with a specific kind of heat exchanger, known as Forgo T60, has been investigated. These cooling towers are used in combined and steam cycle power plants. The optimization has been done using a genetic algorithm. The objective function has two parts: minimizing the cost and maximizing the performance. In the first part the geometrical and operating parameters are defined, and for the second part the performance of the designed tower for different ambient temperatures during a year is calculated considering the characteristic curve of the turbine. The applied genetic algorithm has been tuned using the data of some working power cycles. The results show it is possible to find an optimum for all design parameters; however, the result depends strongly on how exact the cost analysis is. (author)
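
    The paper's objective function couples detailed cost and thermal-performance models that are not reproduced here, but the underlying optimizer can be sketched generically. A minimal real-coded genetic algorithm with tournament selection, uniform crossover, Gaussian mutation and elitism; all parameter values are illustrative:

```python
import random

def genetic_optimize(cost, bounds, pop_size=40, generations=60,
                     mutation_rate=0.1, elite=2):
    """Minimize cost(x) over a box given by bounds = [(lo, hi), ...]."""
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        next_pop = pop[:elite]                         # elitism: keep best designs
        while len(next_pop) < pop_size:
            p1 = min(random.sample(pop, 3), key=cost)  # tournament selection
            p2 = min(random.sample(pop, 3), key=cost)
            child = [a if random.random() < 0.5 else b for a, b in zip(p1, p2)]
            for i, (lo, hi) in enumerate(bounds):      # Gaussian mutation, clamped
                if random.random() < mutation_rate:
                    child[i] += random.gauss(0.0, 0.1 * (hi - lo))
                    child[i] = min(max(child[i], lo), hi)
            next_pop.append(child)
        pop = next_pop
    return min(pop, key=cost)
```

    In the cooling-tower setting, x would collect the geometrical and operating parameters and cost(x) would evaluate the combined cost-performance objective over the yearly ambient-temperature profile.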

  17. A review on control system algorithm for building automation systems

    CSIR Research Space (South Africa)

    Noubissie-Tientcheu, SI

    2016-09-01

    Full Text Available Buildings, with components such as heating, ventilation and air conditioning (HVAC) and lighting, constitute a large part of energy consumption in Southern Africa. A control system in a building reduces the energy consumption, according to different...

  18. The Build-Up to Eruptive Solar Events Viewed as the Development of Chiral Systems

    Science.gov (United States)

    Martin, S. F.; Panasenco, O.; Berger, M. A.; Engvold, O.; Lin, Y.; Pevtsov, A. A.; Srivastava, N.

    2012-12-01

    When we examine the chirality or observed handedness of the chromospheric and coronal structures involved in the long-term build-up to eruptive events, we find that they evolve in very specific ways to form two and only two sets of large-scale chiral systems. Each system contains spatially separated components with both signs of chirality, the upper portion having negative (positive) chirality and the lower part possessing positive (negative) chirality. The components within a system are a filament channel (represented partially by sets of chromospheric fibrils), a filament (if present), a filament cavity, sometimes a sigmoid, and always an overlying arcade of coronal loops. When we view these components as parts of large-scale chiral systems, we more clearly see that it is not the individual components of chiral systems that erupt but rather it is the approximate upper parts of an entire evolving chiral system that erupts. We illustrate the typical pattern of build-up to eruptive solar events first without and then including the chirality in each stage of the build-up. We argue that a complete chiral system has one sign of handedness above the filament spine and the opposite handedness in the barbs and filament channel below the filament spine. If the spine has handedness, the observations favor its having the handedness of the filament cavity and coronal loops above. As the separate components of a chiral system form, we show that the system appears to maintain a balance of right-handed and left-handed features, thus preserving an initial near-zero net helicity. We further argue that the chiral systems allow us to identify key sites of energy transformation and stored energy later dissipated in the form of concurrent CMEs, erupting filaments and solar flares. Each individual chiral system may produce many successive eruptive events above a single filament channel. Because major eruptive events apparently do not occur independent of, or outside of, these unique

  19. AN IMPROVED SNAKE MODEL FOR REFINEMENT OF LIDAR-DERIVED BUILDING ROOF CONTOURS USING AERIAL IMAGES

    Directory of Open Access Journals (Sweden)

    Q. Chen

    2016-06-01

    Full Text Available Building roof contours are considered very important geometric data, which have been widely applied in many fields, including but not limited to urban planning, land investigation, change detection and military reconnaissance. Currently, the demand for building contours at a finer scale (especially in urban areas) has been raised in a growing number of studies such as urban environment quality assessment, urban sprawl monitoring and urban air pollution modelling. LiDAR is known as an effective means of acquiring 3D roof points with high elevation accuracy. However, the precision of the building contour obtained from LiDAR data is restricted by its relatively low scanning resolution. With the use of texture information from high-resolution imagery, the precision can be improved. In this study, an improved snake model is proposed to refine the initial building contours extracted from LiDAR. First, an improved snake model is constructed with constraints on the deviation angle, image gradient, and area. Then, the nodes of the contour are moved within a certain range to find the best optimized result using a greedy algorithm. Considering both precision and efficiency, the candidate shift positions of the contour nodes are constrained, and the searching strategy for the candidate nodes is explicitly designed. The experiments on three datasets indicate that the proposed method for building contour refinement is effective and feasible. The average quality index is improved from 91.66% to 93.34%. The statistics of the evaluation results for every single building demonstrate that 77.0% of the total number of contours is updated with a higher quality index.
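
    The greedy optimization loop described above — moving each contour node within a constrained window to minimize a combined energy — can be sketched as follows. This is a generic greedy snake with curvature and image-gradient terms only; the paper's deviation-angle and area constraints are omitted, and the weights are illustrative:

```python
import numpy as np

def greedy_snake(contour, grad_mag, alpha=1.0, beta=1.0, radius=2, n_iter=50):
    """Greedy snake: each node moves to the position in a small window that
    minimizes curvature energy minus the image gradient magnitude."""
    pts = np.array(contour, dtype=float)
    n = len(pts)
    offsets = [(dr, dc) for dr in range(-radius, radius + 1)
                        for dc in range(-radius, radius + 1)]
    for _ in range(n_iter):
        moved = False
        for i in range(n):
            prev_p, next_p = pts[i - 1], pts[(i + 1) % n]
            def energy(p):
                smooth = np.sum((prev_p - 2.0 * p + next_p) ** 2)  # curvature term
                r = min(max(int(round(p[0])), 0), grad_mag.shape[0] - 1)
                c = min(max(int(round(p[1])), 0), grad_mag.shape[1] - 1)
                return alpha * smooth - beta * grad_mag[r, c]      # favour strong edges
            best = min((pts[i] + off for off in offsets), key=energy)
            if energy(best) < energy(pts[i]):
                pts[i] = best
                moved = True
        if not moved:          # converged: no node improved in this sweep
            break
    return pts
```

    In the refinement setting, contour would be the LiDAR-derived roof outline and grad_mag the gradient magnitude of the co-registered aerial image.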

  20. Real-time geometric scene estimation for RGBD images using a 3D box shape grammar

    Science.gov (United States)

    Willis, Andrew R.; Brink, Kevin M.

    2016-06-01

    This article describes a novel real-time algorithm for the purpose of extracting box-like structures from RGBD image data. In contrast to conventional approaches, the proposed algorithm includes two novel attributes: (1) it divides the geometric estimation procedure into subroutines having atomic incremental computational costs, and (2) it uses a generative "Block World" perceptual model that infers both concave and convex box elements from detection of primitive box substructures. The end result is an efficient geometry processing engine suitable for use in real-time embedded systems such as those on UAVs, where it is intended to be an integral component for robotic navigation and mapping applications.

  1. Geometric Constructions with the Computer.

    Science.gov (United States)

    Chuan, Jen-chung

    The computer can be used as a tool to represent and communicate geometric knowledge. With the appropriate software, a geometric diagram can be manipulated through a series of animation that offers more than one particular snapshot as shown in a traditional mathematical text. Geometric constructions with the computer enable the learner to see and…

  2. Markov chain algorithms: a template for building future robust low-power systems

    Science.gov (United States)

    Deka, Biplab; Birklykke, Alex A.; Duwe, Henry; Mansinghka, Vikash K.; Kumar, Rakesh

    2014-01-01

    Although computational systems are looking towards post CMOS devices in the pursuit of lower power, the expected inherent unreliability of such devices makes it difficult to design robust systems without additional power overheads for guaranteeing robustness. As such, algorithmic structures with inherent ability to tolerate computational errors are of significant interest. We propose to cast applications as stochastic algorithms based on Markov chains (MCs) as such algorithms are both sufficiently general and tolerant to transition errors. We show with four example applications—Boolean satisfiability, sorting, low-density parity-check decoding and clustering—how applications can be cast as MC algorithms. Using algorithmic fault injection techniques, we demonstrate the robustness of these implementations to transition errors with high error rates. Based on these results, we make a case for using MCs as an algorithmic template for future robust low-power systems. PMID:24842030
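
    Of the four example applications, sorting illustrates the idea most compactly: a Markov chain over permutations whose transitions compare a random adjacent pair, so that occasional faulty transitions perturb but do not derail convergence. A hedged sketch with an injected transition-error probability; the paper's exact chain construction may differ:

```python
import random

def mc_sort(items, sweeps=5000, error_rate=0.0):
    """Sort via a Markov chain: repeatedly pick a random adjacent pair and
    swap it if out of order. A faulty transition (probability error_rate)
    does the opposite, yet for small error rates the chain still
    concentrates near the sorted state."""
    a = list(items)
    for _ in range(sweeps):
        i = random.randrange(len(a) - 1)
        out_of_order = a[i] > a[i + 1]
        if random.random() < error_rate:      # simulated transition error
            out_of_order = not out_of_order
        if out_of_order:
            a[i], a[i + 1] = a[i + 1], a[i]
    return a
```

    With error_rate = 0 the sorted permutation is an absorbing state; with a small positive error_rate the chain instead hovers near it, which is the robustness property the record argues for.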

  3. Time domain simulation of the response of geometrically nonlinear panels subjected to random loading

    Science.gov (United States)

    Moyer, E. Thomas, Jr.

    1988-01-01

    The response of composite panels subjected to random pressure loads large enough to cause geometrically nonlinear responses is studied. A time domain simulation is employed to solve the equations of motion. An adaptive time stepping algorithm is employed to minimize intermittent transients. A modified algorithm for the prediction of response spectral density is presented which predicts smooth spectral peaks for discrete time histories. Results are presented for a number of input pressure levels and damping coefficients. Response distributions are calculated and compared with the analytical solution of the Fokker-Planck equations. RMS response is reported as a function of input pressure level and damping coefficient. Spectral densities are calculated for a number of examples.

  4. A Method to Optimize Geometric Errors of Machine Tool based on SNR Quality Loss Function and Correlation Analysis

    Directory of Open Access Journals (Sweden)

    Cai Ligang

    2017-01-01

    Full Text Available Instead of blindly improving the accuracy of a machine tool by increasing the precision of its key components in the production process, a method combining the SNR quality loss function with machine tool geometric error correlation analysis is adopted to optimize the geometric errors of a five-axis machine tool. Firstly, the homogeneous transformation matrix method is used to build the five-axis machine tool geometric error model. Secondly, the SNR quality loss function is used for cost modeling. Then, the machine tool accuracy optimization objective function is established based on the correlation analysis. Finally, ISIGHT combined with MATLAB is applied to optimize each error. The results show that this method is reasonable and makes it possible to relax the range of tolerance values, so as to reduce the manufacturing cost of machine tools.
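
    The homogeneous-transformation-matrix step can be sketched generically: each axis contributes a small-error HTM, and chaining them propagates the errors to the tool tip. This is a first-order small-angle sketch, not the paper's full five-axis model, and the error values below are illustrative:

```python
import numpy as np

def htm(dx=0.0, dy=0.0, dz=0.0, ex=0.0, ey=0.0, ez=0.0):
    """First-order homogeneous transformation matrix for small translational
    errors (dx, dy, dz) and small angular errors (ex, ey, ez) about x, y, z."""
    return np.array([[1.0, -ez,  ey,  dx],
                     [ ez, 1.0, -ex,  dy],
                     [-ey,  ex, 1.0,  dz],
                     [0.0, 0.0, 0.0, 1.0]])

# Chain the error matrices of two successive axes (values are illustrative):
T = htm(dx=2e-6, ez=1e-5) @ htm(dy=-1e-6, ex=2e-5)
p_nominal = np.array([0.0, 0.0, 0.1, 1.0])     # nominal tool-tip position (m)
error = (T @ p_nominal - p_nominal)[:3]        # resulting volumetric error
```

    The individual error components would then feed the SNR quality loss and correlation analysis when deciding which tolerances can be relaxed.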

  5. Gamma-ray energy buildup factor calculations and shielding effects of some Jordanian building structures

    Science.gov (United States)

    Sharaf, J. M.; Saleh, H.

    2015-05-01

    The shielding properties of three different construction styles, and building materials, commonly used in Jordan were evaluated using parameters such as attenuation coefficients, equivalent atomic number, penetration depth and energy buildup factor. The geometric progression (GP) method was used to calculate gamma-ray energy buildup factors of limestone, concrete, bricks, cement plaster and air for the energy range 0.05-3 MeV and penetration depths up to 40 mfp. It has been observed that among the examined building materials, limestone offers the highest values for equivalent atomic number and linear attenuation coefficient and the lowest values for penetration depth and energy buildup factor. The obtained buildup factors were used as basic data to establish the total equivalent energy buildup factors for three different multilayer construction styles using an iterative method. The three styles were then compared in terms of the fractional transmission of photons at different incident photon energies. It is concluded that, in case of a nuclear accident, large multistory buildings with five-layer exterior walls (style A) could attenuate radiation more effectively than small dwellings of any construction style.
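
    The GP method referenced above fits the buildup factor with five parameters per energy. A sketch of the standard GP form (as in ANSI/ANS-6.4.3); the parameter values in the test are illustrative, not the paper's fitted coefficients:

```python
import math

def gp_buildup(b, c, a, xk, d, x):
    """Energy buildup factor at depth x (in mean free paths) from GP fitting
    parameters (b, c, a, Xk, d), using the standard geometric-progression form."""
    K = (c * x ** a
         + d * (math.tanh(x / xk - 2.0) - math.tanh(-2.0)) / (1.0 - math.tanh(-2.0)))
    if abs(K - 1.0) < 1e-9:
        return 1.0 + (b - 1.0) * x       # limiting case K -> 1
    return 1.0 + (b - 1.0) * (K ** x - 1.0) / (K - 1.0)
```

    Here b is the buildup factor at 1 mfp and K controls how the geometric progression grows or saturates with depth; when K = 1 the buildup grows linearly.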

  6. Axial geometrical aberration correction up to 5th order with N-SYLC.

    Science.gov (United States)

    Hoque, Shahedul; Ito, Hiroyuki; Takaoka, Akio; Nishi, Ryuji

    2017-11-01

    We present N-SYLC (N-fold symmetric line currents) models to correct 5th order axial geometrical aberrations in electron microscopes. In our previous paper, we showed that 3rd order spherical aberration can be corrected by a 3-SYLC doublet. After that, mainly the 5th order aberrations remain to limit the resolution. In this paper, we extend the doublet to quadruplet models also including octupole and dodecapole fields for correcting these higher order aberrations, without introducing any new unwanted ones. We prove the validity of our models by analytical calculations. Also by computer simulations, we show that for a beam energy of 5 keV and an initial angle of 10 mrad at the corrector object plane, a beam size of less than 0.5 nm is achieved at the corrector image plane.

  7. Online learning algorithm for ensemble of decision rules

    KAUST Repository

    Chikalov, Igor

    2011-01-01

    We describe an online learning algorithm that builds a system of decision rules for a classification problem. Rules are constructed according to the minimum description length principle by a greedy algorithm or using the dynamic programming approach. © 2011 Springer-Verlag.

  8. Investigation of the Geometrical Distortions in the Nuclear Emulsion

    International Nuclear Information System (INIS)

    Batusov, Yu.A.; Rumyantseva, V.P.; Soroko, L.M.; Tereshchenko, V.V.

    1994-01-01

    The geometrical distortions in the nuclear emulsion were investigated by means of two devices: 1) a stereoscopic meso-optical Fourier transform microscope (MFTM) and 2) a traditional optical microscope (KSM-1) designed for precise measurements. The particle tracks were produced by primary oxygen nuclei with momentum 65.6 GeV/c and by secondary α-particles in various regions of the nuclear emulsion. The measurement errors were: 1.8' (angular minutes) for the orientation angle θ_xy; 2.7' for the dip angle θ_z; 0.3 μm for the transverse coordinate x; 0.1 μm for the longitudinal coordinate y; and 0.3 μm for the depth coordinate z. The effect of the global forced bending of the nuclear emulsion glass support was detected and estimated as dθ_z/dy = 2' per mm. To suppress the local geometrical distortions, a difference plot was calculated for two secondary α-particles travelling very close together (within ≤ 10 μm) over a distance of 6 mm. It was shown that this mode of the local geometrical distortions remains constant over mutual transverse distances up to 0.6 mm. By observing the zy-plots of four secondary α-particles we have isolated the rotating mode of the local geometrical distortions in the nuclear emulsion. 5 refs., 11 figs

  9. The effect of different initial densities of nematode (Meloidogyne javanica) on the build-up of Pasteuria penetrans population.

    Science.gov (United States)

    Darban, Daim Ali; Pathan, Mumtaz Ali; Bhatti, Abdul Ghaffar; Maitelo, Sultan Ahmed

    2005-02-01

    Pasteuria penetrans will build up faster where there is a high initial nematode density and can suppress root-knot nematode populations in the roots of tomato plants. The effect of different initial densities of the nematode (Meloidogyne javanica) (150, 750, 1500, 3000) and of P. penetrans infected-female densities (F1, F3; F0=control and AC=absolute control without nematode or P. penetrans inoculum) on the build-up of the Pasteuria population was investigated over four crop cycles. Two major points of interest were highlighted. First, within a confined soil volume, densities of P. penetrans can increase >100 times within 2 or 3 crop cycles. Second, from a relatively small amount of spore inoculum, infection of the host is very high. There were more infected females at the higher P. penetrans doses. The root growth data confirm the greater number of females in the controls, particularly at the higher inoculum densities in the third and fourth crops. P. penetrans generally caused the fresh root weights to be higher than those in the control. P. penetrans showed greater reduction of egg masses per plant at most densities. The effects of different initial densities of M. javanica and P. penetrans on the development of the pest and parasite populations were monitored. No attempt was made to return the P. penetrans spores to the pots after each crop, so the build-up in actual numbers of infected females and spores under natural conditions may be underestimated.

  10. Temperature rise and Heat build up inside a parked Car

    Science.gov (United States)

    Coady, Rose; Maheswaranathan, Ponn

    2001-11-01

    We have studied the heat build-up inside a parked car under the hot summer Sun. Inside and outside temperatures were monitored every ten seconds from 9 AM to about 4 PM for a 2000 Toyota Camry parked in a Winthrop University parking lot without any shade or trees. Two PASCO temperature sensors, one inside the car and the other outside, were used along with a PASCO-750 interface to collect the data. Data were collected under the following conditions while keeping track of the outside weather: fully closed windows, slightly open windows, halfway open windows, fully open windows, and with window shades inside and outside. Inside temperatures reached as high as 150 degrees Fahrenheit on a sunny day with an outside high temperature of about 100 degrees Fahrenheit. These results will be presented along with results from car cover and window tint manufacturers and suggestions to keep your car cool the next time you park it under the Sun.

  11. Muon reconstruction with a geometrical model in JUNO

    Science.gov (United States)

    Genster, C.; Schever, M.; Ludhova, L.; Soiron, M.; Stahl, A.; Wiebusch, C.

    2018-03-01

    The Jiangmen Underground Neutrino Observatory (JUNO) is a 20 kton liquid scintillator detector currently under construction near Kaiping in China. The physics program focuses on the determination of the neutrino mass hierarchy with reactor anti-neutrinos. For this purpose, JUNO is located 650 m underground at a distance of 53 km from two nuclear power plants. As a result, it is exposed to a muon flux that requires precise muon reconstruction to make a veto of cosmogenic backgrounds viable. Established muon tracking algorithms use time residuals to a track hypothesis. We developed an alternative muon tracking algorithm that utilizes the geometrical shape of the fastest light: it models the full shape of the first, direct light produced along the muon track. From the intersection with the spherical PMT array, the track parameters are extracted with a likelihood fit. The algorithm selects PMTs based on their first hit times and charges, and subsequently fits using timing information only. On a sample of through-going muons with a full simulation of the readout electronics, we report a spatial resolution of 20 cm in the distance from the detector's center and an angular resolution of 1.6° over the whole detector. Additionally, a dead time estimation is performed to measure the impact of the muon veto. Including the step of waveform reconstruction on top of the track reconstruction, a loss in exposure of only 4% can be achieved compared to the case of a perfect tracking algorithm. When including only the PMT time resolution, but no further electronics simulation or waveform reconstruction, the exposure loss is only 1%.
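
    The core idea of fitting track parameters to PMT first-hit times can be illustrated with a heavily simplified sketch. This is not JUNO's actual algorithm: the model below treats the first light as if it were emitted from a single point rather than along the full muon track, and the effective light speed, detector radius and all numbers are assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import minimize

C_EFF = 0.2  # m/ns, assumed effective light speed in the scintillator

def predicted_times(pmt_pos, vertex, t0):
    """First-hit time at each PMT if all light left `vertex` at time t0."""
    return t0 + np.linalg.norm(pmt_pos - vertex, axis=1) / C_EFF

def fit_vertex(pmt_pos, t_meas):
    """Least-squares fit of (x, y, z, t0) to measured first-hit times,
    a point-source stand-in for the full fastest-light track model."""
    def cost(p):
        r = t_meas - predicted_times(pmt_pos, p[:3], p[3])
        return np.sum(r * r)
    res = minimize(cost, x0=np.zeros(4), method="Nelder-Mead",
                   options={"maxiter": 5000, "xatol": 1e-8, "fatol": 1e-8})
    return res.x
```

    In the real algorithm the time prediction follows the geometrical shape of the first direct light produced along the whole track, and the fit extracts track parameters rather than a single emission vertex.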

  12. Reconstruction of Eroded and Visually Complicated Archaeological Geometric Patterns: Minaret Choli, Iraq

    Directory of Open Access Journals (Sweden)

    Rima Al Ajlouni

    2011-12-01

    Full Text Available Visually complicated patterns can be found in many cultural heritages of the world. Islamic geometric patterns are one example of such visually complicated archaeological ornaments. As long-lived artifacts, these patterns have gone through many phases of construction, damage, and repair and are constantly subject to erosion and vandalism. The task of reconstructing these visually complicated ornaments faces many practical challenges. The main challenge is posed by the fact that archaeological reality often deals with ornaments that are broken, incomplete or hidden. Recognizing faint traces of eroded or missing parts has proved extremely difficult, compounded by the need for specialized knowledge of the mathematical rules of the patterns' structure in order to regenerate the missing data. This paper presents a methodology for reconstructing deteriorated Islamic geometric patterns: predicting the features that are not observed and outputting a complete, accurate, measurable two-dimensional reconstructed model. The simulation process depends primarily on finding the parameters necessary to predict information at other locations, based on the relationships embedded in the existing data and on prior knowledge of these relations. The aim is to build up, from the fragmented data and from historic and general knowledge, a model of the reconstructed object. The proposed methodology successfully captured the accurate structural geometry of many of the deteriorated ornaments on the Minaret Choli, Iraq. In the case of extremely deteriorated samples, however, the methodology failed to recognize the correct geometry. The conceptual framework proposed by this paper can serve as a platform for developing professional tools for fast and efficient results.
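
    The principle of predicting unobserved features from relationships embedded in the surviving data can be sketched in a toy form: if a pattern is known to repeat with a fixed period, samples lost to erosion can be predicted from the surviving symmetry-equivalent positions. The 1-D setting, the period and the data below are invented for illustration and are far simpler than the 2-D symmetry analysis the paper describes.

```python
import numpy as np

def fill_by_periodicity(values, period):
    """Replace NaN (eroded) samples with the mean of surviving samples
    at the same position within the repeating unit."""
    out = values.copy()
    for phase in range(period):
        slot = out[phase::period]              # symmetry-equivalent positions
        good = slot[~np.isnan(slot)]
        slot[np.isnan(slot)] = good.mean()     # predict from survivors
    return out
```
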

  13. Modeling of genetic algorithms with a finite population

    NARCIS (Netherlands)

    C.H.M. van Kemenade

    1997-01-01

    textabstractCross-competition between non-overlapping building blocks can strongly influence the performance of evolutionary algorithms. The choice of the selection scheme can have a strong influence on the performance of a genetic algorithm. This paper describes a number of different genetic
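
    The role the selection scheme plays in a genetic algorithm can be made concrete with a minimal sketch. Everything here (the OneMax fitness, tournament selection, and all parameter values) is an illustrative assumption, not the models analysed in the paper.

```python
import random

def one_max(bits):
    """Toy fitness: number of ones in the bit string."""
    return sum(bits)

def tournament(pop, k=3):
    """Selection scheme: best of k randomly drawn individuals."""
    return max(random.sample(pop, k), key=one_max)

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(bits, rate=0.01):
    return [1 - b if random.random() < rate else b for b in bits]

def ga(n_bits=32, pop_size=50, generations=100):
    random.seed(1)
    pop = [[random.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop = [mutate(crossover(tournament(pop), tournament(pop)))
               for _ in range(pop_size)]
    return max(one_max(ind) for ind in pop)
```

    Swapping `tournament` for a different scheme (e.g. fitness-proportionate selection) changes the selection pressure and hence how strongly non-overlapping building blocks compete.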

  14. Transmuted Complementary Weibull Geometric Distribution

    Directory of Open Access Journals (Sweden)

    Ahmed Z. Afify

    2014-12-01

    Full Text Available This paper provides a new generalization of the complementary Weibull geometric distribution introduced by Tojeiro et al. (2014), using the quadratic rank transmutation map studied by Shaw and Buckley (2007). The new distribution is referred to as the transmuted complementary Weibull geometric distribution (TCWGD). The TCWG distribution includes as special cases the complementary Weibull geometric distribution (CWGD), the complementary exponential geometric distribution (CEGD), the Weibull distribution (WD) and the exponential distribution (ED). Various structural properties of the new distribution, including moments, quantiles, the moment generating function and the Rényi entropy, are derived. We propose the method of maximum likelihood for estimating the model parameters and obtain the observed information matrix. A real data set is used to compare the flexibility of the transmuted version with that of the complementary Weibull geometric distribution.
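
    For reference, the quadratic rank transmutation map of Shaw and Buckley builds the transmuted CDF $G(x)$ from a baseline CDF $F(x)$ as

```latex
G(x) = (1+\lambda)\,F(x) - \lambda\,F(x)^{2}, \qquad |\lambda| \le 1,
```

    so that $\lambda = 0$ recovers the baseline distribution; with $F$ taken as the complementary Weibull geometric CDF this yields the TCWGD studied here.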

  15. Control algorithms and applications of the wavefront sensorless adaptive optics

    Science.gov (United States)

    Ma, Liang; Wang, Bin; Zhou, Yuanshen; Yang, Huizhen

    2017-10-01

    Compared with a conventional adaptive optics (AO) system, the wavefront sensorless (WFSless) AO system does not need to measure and reconstruct the wavefront. It is simpler than conventional AO in system architecture and can be applied under complex conditions. Based on an analysis of the principle and system model of the WFSless AO system, wavefront correction methods are divided into two categories: model-free and model-based control algorithms. A WFSless AO system based on a model-free control algorithm treats the performance metric as a function of the control parameters and then uses a suitable optimization algorithm to improve that metric. The model-based control algorithms include modal control algorithms, nonlinear control algorithms and control algorithms based on geometrical optics. After a brief description of these typical control algorithms, hybrid methods combining model-free with model-based control algorithms are summarized, and the characteristics of the various control algorithms are compared and analyzed. We also discuss the extensive applications of WFSless AO systems in free-space optical communication (FSO), retinal imaging in the human eye, confocal microscopy, coherent beam combination (CBC) techniques and extended objects.
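
    One widely used model-free scheme of the kind described, stochastic parallel gradient descent (SPGD), perturbs all actuator commands in parallel with random dithers and nudges the commands along the estimated gradient of the metric. The sketch below is illustrative only: the quadratic metric is a toy stand-in for a real image-sharpness metric, and the gain, dither amplitude and iteration count are assumed values.

```python
import numpy as np

def spgd(metric, n_act, gain=0.5, delta=0.05, iters=2000, seed=0):
    """Maximize `metric` over n_act actuator commands with two-sided SPGD."""
    rng = np.random.default_rng(seed)
    u = np.zeros(n_act)                            # actuator commands
    for _ in range(iters):
        du = delta * rng.choice([-1.0, 1.0], size=n_act)  # parallel dither
        dJ = metric(u + du) - metric(u - du)       # two-sided metric probe
        u = u + gain * dJ * du                     # stochastic gradient ascent
    return u
```

    With a two-sided probe the expected update is proportional to the true metric gradient, which is why the scheme needs no wavefront sensor and no system model.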

  16. Frame-Based Facial Expression Recognition Using Geometrical Features

    Directory of Open Access Journals (Sweden)

    Anwar Saeed

    2014-01-01

    Full Text Available To improve human-computer interaction (HCI) to the level of human-human interaction, an efficient approach for human emotion recognition is required. These emotions could be fused from several modalities such as facial expression, hand gesture, acoustic data, and biophysiological data. In this paper, we address frame-based recognition of the universal human facial expressions (happiness, surprise, anger, disgust, fear, and sadness) with the help of several geometrical features. Unlike many other geometry-based approaches, the frame-based method does not rely on prior knowledge of a person-specific neutral expression; that knowledge is gained through human intervention and is not available in real scenarios. Additionally, we provide a method to investigate the performance of geometry-based approaches under various facial point localization errors. From an evaluation on two public benchmark datasets, we find that using only eight facial points we can achieve the state-of-the-art recognition rate, whereas the previous state-of-the-art geometry-based approach exploits features derived from 68 facial points and requires prior knowledge of the person-specific neutral expression. The expression recognition rate using geometrical features is adversely affected by errors in facial point localization, especially for expressions with subtle facial deformations.
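
    As a hypothetical illustration of frame-based geometric features, the sketch below computes a few distances from 2-D facial landmarks, normalized by the inter-ocular distance so that no person-specific neutral frame is needed. The landmark names and the particular choice of features are assumptions, not the paper's actual feature set.

```python
import math

def geometric_features(pts):
    """pts: dict mapping landmark names to (x, y) image coordinates."""
    def dist(a, b):
        return math.hypot(pts[a][0] - pts[b][0], pts[a][1] - pts[b][1])
    scale = dist("left_eye", "right_eye")   # inter-ocular normalization
    return {
        "mouth_width":  dist("mouth_left", "mouth_right") / scale,
        "mouth_open":   dist("lip_top", "lip_bottom") / scale,
        "brow_raise_l": dist("left_brow", "left_eye") / scale,
        "brow_raise_r": dist("right_brow", "right_eye") / scale,
    }
```

    Feature vectors like this, computed per frame, can then be fed to any standard classifier; localization error in any landmark propagates directly into the features, which is exactly the sensitivity the paper investigates.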

  17. Effect of tungsten-187 in primary coolant on dose rate build-up in Vandellos 2

    International Nuclear Information System (INIS)

    Fernandez Lillo, E.; Llovet, R.; Boronat, M.

    1994-01-01

    The present work proposes a relationship between the cobalt-60 activity deposited on piping and the relatively high levels of tungsten-187 in the coolant of Vandellos 2. The conclusions of this work are applicable to other plants, since it proposes a tool to estimate and quantify the contribution of Stellite to the generation of cobalt-60 and the radiation dose build-up. (authors). 7 figs., 6 refs

  18. Algorithmic Self

    DEFF Research Database (Denmark)

    Markham, Annette

    This paper takes an actor network theory approach to explore some of the ways that algorithms co-construct identity and relational meaning in contemporary use of social media. Based on intensive interviews with participants as well as activity logging and data tracking, the author presents a richly layered set of accounts to help build our understanding of how individuals relate to their devices, search systems, and social network sites. This work extends critical analyses of the power of algorithms in implicating the social self by offering narrative accounts from multiple perspectives. It also contributes an innovative method for blending actor network theory with symbolic interaction to grapple with the complexity of everyday sensemaking practices within networked global information flows.

  19. Starting design for use in variance exchange algorithms | Iwundu ...

    African Journals Online (AJOL)

    A new method of constructing the initial design for use in variance exchange algorithms is presented. The method chooses support points to go into the design as measures of distances of the support points from the centre of the geometric region and of permutation-invariant sets. The initial design is as close as possible to ...

  20. Modeling of Geometric Error in Linear Guide Way to Improve the vertical three-axis CNC Milling machine’s accuracy

    Science.gov (United States)

    Kwintarini, Widiyanti; Wibowo, Agung; Arthaya, Bagus M.; Yuwana Martawirya, Yatna

    2018-03-01

    The purpose of this study was to improve the accuracy of a three-axis vertical CNC milling machine through a general approach based on mathematical modeling of the machine tool's geometric errors. The inaccuracy of CNC machines can be caused by geometric errors, which are an important factor both during the manufacturing process and during the assembly phase, and which must be controlled in order to build machines with high accuracy. The accuracy of the three-axis vertical milling machine is improved by identifying the geometric errors and the error position parameters of the machine tool and arranging them in a mathematical model. The geometric error of the machine tool consists of twenty-one error parameters: nine linear error parameters, nine angular error parameters and three perpendicularity (squareness) error parameters. The mathematical model calculates the alignment and angular errors in the components supporting the machine motion, namely the linear guide way and the linear motion elements. The purpose of this modeling approach is the identification of geometric errors, which can serve as a reference during the design, assembly and maintenance stages to improve the accuracy of CNC machines. Mathematically modeling the geometric errors of CNC machine tools illustrates the relationship between the alignment error, position and angle on a linear guide way of a three-axis vertical milling machine.
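
    The usual way such error parameters are composed is via homogeneous transformation matrices. The sketch below shows this for a single linear axis under the small-angle approximation: three angular errors (roll, pitch, yaw) and three linear errors (positioning plus two straightness terms) form a 4x4 transform whose deviation from the identity gives the tool-tip error. The small-angle form is standard; the sample numbers are assumptions, not the paper's identified machine parameters.

```python
import numpy as np

def axis_error_matrix(dx, dy, dz, eps_x, eps_y, eps_z):
    """4x4 homogeneous transform for small linear errors (dx, dy, dz)
    and small angular errors (eps_x, eps_y, eps_z) of one linear axis."""
    return np.array([
        [1.0,    -eps_z,  eps_y,  dx],
        [eps_z,   1.0,   -eps_x,  dy],
        [-eps_y,  eps_x,  1.0,    dz],
        [0.0,     0.0,    0.0,   1.0],
    ])

def tool_tip_error(E, nominal_point):
    """Deviation of a nominal tool position under the error transform."""
    p = np.append(nominal_point, 1.0)
    return (E @ p)[:3] - nominal_point
```

    Chaining one such matrix per axis (plus the three squareness terms between axes) reproduces the twenty-one-parameter model described above, and shows directly how an angular error on the guide way is amplified by the distance to the tool tip.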