Numerical conformal mapping methods for exterior and doubly connected regions
Energy Technology Data Exchange (ETDEWEB)
DeLillo, T.K. [Wichita State Univ., KS (United States); Pfaltzgraff, J.A. [Univ. of North Carolina, Chapel Hill, NC (United States)
1996-12-31
Methods are presented and analyzed for approximating the conformal map from the exterior of the disk to the exterior of a smooth, simple closed curve and from an annulus to a bounded, doubly connected region with smooth boundaries. The methods are Newton-like methods for computing the boundary correspondences and conformal moduli, similar to Fornberg's method for the interior of the disk. We show that the linear systems are discretizations of the identity plus a compact operator and, hence, that the conjugate gradient method converges superlinearly.
A nodal expansion method using conformal mapping for hexagonal geometry
International Nuclear Information System (INIS)
Chao, Y.A.; Shatilla, Y.A.
1993-01-01
Hexagonal nodal methods adopting the same transverse integration process used for square nodal methods face the subtle theoretical problem that this process leads to highly singular nonphysical terms in the diffusion equation. Lawrence, in developing the DIF3D-N code, tried to approximate the singular terms with relatively simple polynomials. In the HEX-NOD code, Wagner ignored the singularities to simplify the diffusion equation and introduced compensating terms in the nodal equations to restore the nodal balance relation. More recently developed hexagonal nodal codes, such as HEXPEDITE and the hexagonal version of PANTHER, used methods similar to Wagner's. It will be shown that for light water reactor applications, these two different approximations significantly degraded the accuracy of the respective method as compared to the established square nodal methods. Alternatively, the method of conformal mapping was suggested to map a hexagon to a rectangle, with the unique feature of leaving the diffusion operator invariant, thereby fundamentally resolving the problems associated with transverse integration. This method is now implemented in the Westinghouse hexagonal nodal code ANC-H. In this paper we report on the results of comparing the three methods for a variety of problems via benchmarking against the fine-mesh finite difference code
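For orientation, the invariance property mentioned above can be stated compactly: under a conformal map w = f(z) taking the hexagon to a rectangle, the two-dimensional Laplacian transforms by a scalar factor only, so the diffusion operator keeps its form in the mapped coordinates. This is the standard identity (stated here for context, not quoted from the paper):

```latex
\nabla^{2}_{x,y}\,\phi = |f'(z)|^{2}\,\nabla^{2}_{u,v}\,\phi,
\qquad w = u + iv = f(z), \quad z = x + iy .
```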
DEFF Research Database (Denmark)
Reck, Kasper; Thomsen, Erik Vilain; Hansen, Ole
2011-01-01
The scalar wave equation, or Helmholtz equation, describes within a certain approximation the electromagnetic field distribution in a given system. In this paper we show how to solve the Helmholtz equation in complex geometries using conformal mapping and the homotopy perturbation method. The solution of the mapped Helmholtz equation is found by solving an infinite series of Poisson equations using two-dimensional Fourier series. The solution is entirely based on analytical expressions and is not mesh dependent. The analytical results are compared to a numerical (finite element method) solution.
Inversion theory and conformal mapping
Blair, David E
2000-01-01
It is rarely taught in an undergraduate or even graduate curriculum that the only conformal maps in Euclidean space of dimension greater than two are those generated by similarities and inversions in spheres. This is in stark contrast to the wealth of conformal maps in the plane. The principal aim of this text is to give a treatment of this paucity of conformal maps in higher dimensions. The exposition includes both an analytic proof in general dimension and a differential-geometric proof in dimension three. For completeness, enough complex analysis is developed to prove the abundance of conformal maps in the plane. In addition, the book develops inversion theory as a subject, along with the auxiliary theme of circle-preserving maps. A particular feature is the inclusion of a paper by Carathéodory with the remarkable result that any circle-preserving transformation is necessarily a Möbius transformation; not even the continuity of the transformation is assumed. The text is at the level of advanced undergr...
Conformal geometry and quasiregular mappings
Vuorinen, Matti
1988-01-01
This book is an introduction to the theory of spatial quasiregular mappings intended for the uninitiated reader. At the same time the book also addresses specialists in classical analysis and, in particular, geometric function theory. The text leads the reader to the frontier of current research and covers some of the most recent developments in the subject, previously scattered through the literature. A major role in this monograph is played by certain conformal invariants which are solutions of extremal problems related to extremal lengths of curve families. These invariants are then applied to prove sharp distortion theorems for quasiregular mappings. One of these extremal problems of conformal geometry generalizes a classical two-dimensional problem of O. Teichmüller. The novel feature of the exposition is the way in which conformal invariants are applied, and the sharp results obtained should be of considerable interest even in the two-dimensional particular case. This book combines the features of a textbook an...
A well-posed numerical method to track isolated conformal map singularities in Hele-Shaw flow
International Nuclear Information System (INIS)
Baker, G.; Siegel, M.; Tanveer, S.
1995-01-01
We present a new numerical method for calculating an evolving 2D Hele-Shaw interface when surface tension effects are neglected. In the case where the flow is directed from the less viscous fluid into the more viscous fluid, the motion of the interface is ill-posed; small deviations in the initial condition will produce significant changes in the ensuing motion. The situation is disastrous for numerical computation, as small roundoff errors can quickly lead to large inaccuracies in the computed solution. Our method of computation is most easily formulated using a conformal map from the fluid domain into a unit disk. The method relies on analytically continuing the initial data and equations of motion into the region exterior to the disk, where the evolution problem becomes well-posed. The equations are then numerically solved in the extended domain. The presence of singularities in the conformal map outside of the disk introduces specific structures along the fluid interface. Our method can explicitly track the location of isolated pole and branch point singularities, allowing us to draw connections between the development of interfacial patterns and the motion of singularities as they approach the unit disk. In particular, we are able to relate physical features such as finger shape, side-branch formation, and competition between fingers to the nature and location of the singularities. The usefulness of this method in studying the formation of topological singularities (self-intersections of the interface) is also pointed out. 47 refs., 10 figs., 1 tab
Conformable variational iteration method
Directory of Open Access Journals (Sweden)
Omer Acan
2017-02-01
In this study, we introduce the conformable variational iteration method, based on the newly defined fractional derivative called the conformable fractional derivative. The new method is applied to two fractional-order ordinary differential equations. To assess the solutions of this method, linear homogeneous and non-linear non-homogeneous fractional ordinary differential equations are selected. The results obtained are compared with the exact solutions, and their graphs are plotted to demonstrate the efficiency and accuracy of the method.
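For context, the conformable fractional derivative referenced above is commonly defined as T_α f(t) = lim_{ε→0} [f(t + ε t^(1−α)) − f(t)]/ε, which for differentiable f reduces to t^(1−α) f′(t). A minimal numerical sketch of this definition (function names are illustrative, not from the paper):

```python
def conformable_derivative(f, t, alpha, eps=1e-7):
    """Approximate the conformable fractional derivative
    T_alpha f(t) = lim_{e->0} (f(t + e*t**(1-alpha)) - f(t)) / e
    by a small-but-finite difference quotient."""
    return (f(t + eps * t**(1 - alpha)) - f(t)) / eps

# Check against the closed form: for f(t) = t**p,
# T_alpha f(t) = p * t**(p - alpha).
alpha, p, t = 0.5, 3.0, 2.0
numeric = conformable_derivative(lambda x: x**p, t, alpha)
exact = p * t**(p - alpha)
print(abs(numeric - exact) < 1e-3)  # True
```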
Ji, Jinghua; Luo, Jianhua; Lei, Qian; Bian, Fangfang
2017-05-01
This paper proposes an analytical method, based on the conformal mapping (CM) method, for accurate evaluation of the magnetic field and eddy current (EC) loss in fault-tolerant permanent-magnet (FTPM) machines. The modulation function applied in the CM method changes the open-slot structure into a fully closed-slot structure, whose air-gap flux density is easy to calculate analytically. Therefore, with the help of the Matlab Schwarz-Christoffel (SC) Toolbox, both the magnetic flux density and the EC density of the FTPM machine are obtained accurately. Finally, a time-stepped transient finite-element method (FEM) is used to verify the theoretical analysis, showing that the proposed method predicts the magnetic flux density and EC loss precisely.
Revisit the carpet cloak from optical conformal mapping
Li, Hui; Xu, Yadong; Wu, Qiannan; Chen, Huanyang
2013-01-01
The original carpet cloak [Phys. Rev. Lett. 101, 203901 (2008)] was designed by a numerical method, the quasi-conformal mapping. Therefore its refractive index profile was obtained numerically. In this letter, we propose a new carpet cloak based on the optical conformal mapping, with an analytical form of a refractive index profile, thereby facilitating future experimental designs.
Broadband illusion optical devices based on conformal mappings
Xiong, Zhan; Xu, Lin; Xu, Ya-Dong; Chen, Huan-Yang
2017-10-01
In this paper, we propose a simple method of illusion optics based on conformal mappings. By carefully developing designs with specific conformal mappings, one can make an object look like another with a significantly different shape. In addition, the illusion optical devices can work over a broad band of frequencies.
Understanding modern magnets through conformal mapping
International Nuclear Information System (INIS)
Halbach, K.
1989-10-01
I want to show, with the help of a number of examples, that conformal mapping is a unique and enormously powerful tool for thinking about, and solving, problems. Usually one has to write down only a few equations, and sometimes none at all! When I started getting involved in work for which conformal mapping seemed to be a powerful tool, I did not think that I would ever be able to use that technique successfully, because it seemed to require a nearly encyclopedic memory, an impression that was strengthened when I saw Kober's Dictionary of Conformal Representations. This attitude changed when I started to realize that, beyond the basics of the theory of a function of a complex variable, I needed to know only about a handful of conformal maps and procedures. Consequently, my second goal for this talk is to show that in most cases conformal mapping functions can be obtained by formulating the underlying physics appropriately. This means, in particular, that encyclopedic knowledge of conformal maps is not necessary for successful use of conformal mapping techniques. To demonstrate these facts I have chosen examples from an area of physics/engineering in which I am active, namely accelerator physics. In order to do that successfully I start with a brief introduction to high-energy charged-particle storage-ring technology, even though not all examples used in this paper to elucidate my points come directly from this particular field of accelerator technology
Testing conformal mapping with kitchen aluminum foil
Haas, S.; Cooke, D. A.; Crivelli, P.
2016-01-01
We report an experimental verification of conformal mapping with kitchen aluminum foil. This experiment can be reproduced in any laboratory by undergraduate students and it is therefore an ideal experiment to introduce the concept of conformal mapping. The original problem was the distribution of the electric potential in a very long plate. The correct theoretical prediction was recently derived by A. Czarnecki (Can. J. Phys. 92, 1297 (2014)).
Conformal maps between pseudo-Finsler spaces
Voicu, Nicoleta
The paper aims to initiate a systematic study of conformal mappings between Finsler spacetimes and, more generally, between pseudo-Finsler spaces. This is done by extending several results in pseudo-Riemannian geometry which are necessary for field-theoretical applications and by proposing a technique that reduces some problems involving pseudo-Finslerian conformal vector fields to their pseudo-Riemannian counterparts. Also, we point out, by constructing classes of examples, that conformal groups of flat (locally Minkowskian) pseudo-Finsler spaces can be much richer than both flat Finslerian and pseudo-Euclidean conformal groups.
International Nuclear Information System (INIS)
Rodríguez, Arezky H; Handy, Carlos R; Trallero-Giner, C
2004-01-01
The suitability of conformal transformation (CT) analysis, and the eigenvalue moment method (EMM), for determining the eigenenergies and eigenfunctions of a quantum particle confined within a lens geometry, is reviewed and compared to the recent results by Even and Loualiche (2003 J. Phys.: Condens. Matter 15 8465). It is shown that CT and EMM define two accurate and versatile analytical/computational methods relevant to lens shaped regions of varying geometrical aspect ratios. (reply)
Numerical investigation on exterior conformal mappings with application to airfoils
International Nuclear Information System (INIS)
Mohamad Rashidi Md Razali; Hu Laey Nee
2000-01-01
A numerical method is described for computing a conformal map from an exterior region onto the exterior of the unit disk. The numerical method is based on a boundary integral equation similar to the Kerzman-Stein integral equation for interior mapping. Some examples show that numerical results of high accuracy can be obtained provided that the boundaries are smooth. This numerical method has been applied to the mapping of airfoils. However, since the parametric representation of an airfoil is not known, a cubic spline interpolation method has been used. Some numerical examples with satisfactory results have been obtained for symmetrical and cambered airfoils. (Author)
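The classical analytic example of such an exterior map is the Joukowski transform, which sends a circle passing through ζ = 1 to an airfoil-like profile with a sharp trailing edge. A minimal sketch (the center offset is chosen arbitrarily for illustration; this is not the boundary-integral method of the paper):

```python
import numpy as np

def joukowski_airfoil(center, n=200):
    """Map a circle through zeta = 1 to an airfoil profile via the
    Joukowski transform z = zeta + 1/zeta. The radius is chosen so the
    circle passes through zeta = 1, which produces the sharp trailing edge."""
    radius = abs(1 - center)
    # Start the parametrization at zeta = 1 (the trailing edge).
    theta = np.angle(1 - center) + np.linspace(0.0, 2.0 * np.pi, n)
    zeta = center + radius * np.exp(1j * theta)
    return zeta + 1.0 / zeta

# A small offset of the center gives a cambered airfoil with thickness.
profile = joukowski_airfoil(-0.1 + 0.1j)
# The trailing edge zeta = 1 maps to z = 2.
print(abs(profile[0] - 2.0) < 1e-9)  # True
```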
Conformal mapping and convergence of Krylov iterations
Energy Technology Data Exchange (ETDEWEB)
Driscoll, T.A.; Trefethen, L.N. [Cornell Univ., Ithaca, NY (United States)
1994-12-31
Connections between conformal mapping and matrix iterations have been known for many years. The idea underlying these connections is as follows. Suppose the spectrum of a matrix or operator A is contained in a Jordan region E in the complex plane with 0 not an element of E. Let φ(z) denote a conformal map of the exterior of E onto the exterior of the unit disk, with φ(∞) = ∞. Then 1/|φ(0)| is an upper bound for the optimal asymptotic convergence factor of any Krylov subspace iteration. This idea can be made precise in various ways, depending on the matrix iterations, on whether A is finite or infinite dimensional, and on what bounds are assumed on the non-normality of A. This paper explores these connections for a variety of matrix examples, making use of a new MATLAB Schwarz-Christoffel Mapping Toolbox developed by the first author. Unlike the earlier Fortran Schwarz-Christoffel package SCPACK, the new toolbox computes exterior as well as interior Schwarz-Christoffel maps, making it easy to experiment with spectra that are not necessarily symmetric about an axis.
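For a spectrum contained in a real interval [a, b] with 0 < a < b, the exterior map φ is known in closed form (an inverse Joukowski transform), and the bound 1/|φ(0)| reproduces the familiar conjugate-gradient factor (√κ − 1)/(√κ + 1) with κ = b/a. A quick numerical check of this special case (a sketch, not from the paper):

```python
import math

def exterior_map_at_zero(a, b):
    """phi(z) = s + sqrt(s^2 - 1), with s = (2z - a - b)/(b - a), maps the
    exterior of [a, b] onto the exterior of the unit disk with phi(inf) = inf.
    Return 1/|phi(0)|, the Krylov convergence-factor bound at z = 0."""
    s = (0.0 - a - b) / (b - a)             # value of s at z = 0 (negative)
    phi0 = abs(s) + math.sqrt(s * s - 1.0)  # branch with |phi| > 1
    return 1.0 / phi0

a, b = 1.0, 10.0
kappa = b / a
cg_factor = (math.sqrt(kappa) - 1.0) / (math.sqrt(kappa) + 1.0)
print(abs(exterior_map_at_zero(a, b) - cg_factor) < 1e-12)  # True
```

The agreement is exact, since |φ(0)| = (√b + √a)/(√b − √a) for this interval.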
Conformal methods in general relativity
Valiente Kroon, Juan A
2016-01-01
This book offers a systematic exposition of conformal methods and how they can be used to study the global properties of solutions to the equations of Einstein's theory of gravity. It shows that combining these ideas with differential geometry can elucidate the existence and stability of the basic solutions of the theory. Introducing the differential geometric, spinorial and PDE background required to gain a deep understanding of conformal methods, this text provides an accessible account of key results in mathematical relativity over the last thirty years, including the stability of de Sitter and Minkowski spacetimes. For graduate students and researchers, this self-contained account includes useful visual models to help the reader grasp abstract concepts and a list of further reading, making this the perfect reference companion on the topic.
A conformal mapping approach to a root-clustering problem
International Nuclear Information System (INIS)
Melnikov, Gennady I; Dudarenko, Nataly A; Melnikov, Vitaly G
2014-01-01
This paper presents a new approach to matrix root-clustering in sophisticated and multiply connected regions of the complex plane. The parametric sweeping method and the concept of a closed forbidden region covered by a set of modified three-parameter Cassini regions are used. A conformal mapping approach is applied to formulate the main results of the paper. An application of the developed method to the problem of matrix root-clustering in a multiply connected region is shown as an illustration
Use of conformal mapping to describe MHD wave propagation
International Nuclear Information System (INIS)
Bulanov, S.V.; Pegoraro, F.
1993-01-01
A method is proposed for finding explicit exact solutions of the magnetohydrodynamic equations describing the propagation of magnetoacoustic waves in a plasma in a magnetic potential that depends on two spatial coordinates. This method is based on the use of conformal mappings to transform the wave equation into an equation describing the propagation of waves in a uniform magnetic field. The basic properties of magnetoacoustic and Alfven waves near the critical points, magnetic separatrices, and in configurations with magnetic islands are discussed. Expressions are found for the dimensionless parameters which determine the relative roles of the plasma pressure, nonlinearity, and dissipation near the critical points. 30 refs
Harmonic Riemannian Maps on Locally Conformal Kaehler Manifolds
Indian Academy of Sciences (India)
We study harmonic Riemannian maps on locally conformal Kaehler manifolds (lcK manifolds). We show that if a Riemannian holomorphic map between lcK manifolds is harmonic, then the Lee vector field of the domain belongs to the kernel of the Riemannian map under a condition. When the domain is Kaehler, we ...
The Relationship between Self-Assembly and Conformal Mappings
Duque, Carlos; Santangelo, Christian
The isotropic growth of a thin sheet has been used as a way to generate programmed shapes through controlled buckling. We discuss how conformal mappings, which are transformations that locally preserve angles, provide a way to quantify the area growth needed to produce a particular shape. A discrete version of the conformal map can be constructed from circle packings, which are maps between packings of circles whose contact network is preserved. This provides a link to the self-assembly of particles on curved surfaces. We performed simulations of attractive particles on a curved surface using molecular dynamics. The resulting particle configurations were used to generate the corresponding discrete conformal map, allowing us to quantify the degree of area distortion required to produce a particular shape by finding particle configurations that minimize the area distortion.
Conformal mapping calculation of railgun skin inductance
International Nuclear Information System (INIS)
Huerta, M.A.; Nearing, J.C.
1991-01-01
This paper considers the common rail arrangement consisting of two long, parallel, rectangular rails. The authors calculate the inductance per unit length L' in the short flight-time limit where the skin depth is much smaller than any rail dimension, the current is all on the rail surface, and the magnetic field does not penetrate the rails. The authors give the solution based on the Schwarz-Christoffel transformation that maps the boundaries of the problem into a simpler shape
Mapping the conformational free energy of aspartic acid in the gas phase and in aqueous solution.
Comitani, Federico; Rossi, Kevin; Ceriotti, Michele; Sanz, M Eugenia; Molteni, Carla
2017-04-14
The conformational free energy landscape of aspartic acid, a proteogenic amino acid involved in a wide variety of biological functions, was investigated as an example of the complexity that multiple rotatable bonds produce even in relatively simple molecules. To efficiently explore such a landscape, this molecule was studied in the neutral and zwitterionic forms, in the gas phase and in water solution, by means of molecular dynamics and the enhanced sampling method metadynamics with classical force-fields. Multi-dimensional free energy landscapes were reduced to bi-dimensional maps through the non-linear dimensionality reduction algorithm sketch-map to identify the energetically stable conformers and their interconnection paths. Quantum chemical calculations were then performed on the minimum free energy structures. Our procedure returned the low energy conformations observed experimentally in the gas phase with rotational spectroscopy [M. E. Sanz et al., Phys. Chem. Chem. Phys. 12, 3573 (2010)]. Moreover, it provided information on higher energy conformers not accessible to experiments and on the conformers in water. The comparison between different force-fields and quantum chemical data highlighted the importance of the underlying potential energy surface to accurately capture energy rankings. The combination of force-field based metadynamics, sketch-map analysis, and quantum chemical calculations was able to produce an exhaustive conformational exploration in a range of significant free energies that complements the experimental data. Similar protocols can be applied to larger peptides with complex conformational landscapes and would greatly benefit from the next generation of accurate force-fields.
Electromagnetic Problems Solving by Conformal Mapping: A Mathematical Operator for Optimization
Directory of Open Access Journals (Sweden)
Wesley Pacheco Calixto
2010-01-01
Having the property of modifying only the geometry of a polygonal structure while preserving its physical magnitudes, conformal mapping is an exceptional tool for solving electromagnetism problems with known boundary conditions. This work introduces a newly developed mathematical operator based on polynomial extrapolation. This operator can accelerate an optimization method applied in conformal mappings to determine the equipotential lines, the field lines, the capacitance, and the permeance of some polygonal-geometry electrical devices with an inner dielectric of permittivity ε. The results obtained in this work are compared with other simulations performed with the finite-element-method software Flux 2D.
The conformal method and the conformal thin-sandwich method are the same
International Nuclear Information System (INIS)
Maxwell, David
2014-01-01
The conformal method developed in the 1970s and the more recent Lagrangian and Hamiltonian conformal thin-sandwich methods are techniques for finding solutions of the Einstein constraint equations. We show that they are manifestations of a single conformal method: there is a straightforward way to convert back and forth between the parameters for these methods so that the corresponding solutions of the Einstein constraint equations agree. The unifying idea is the need to clearly distinguish tangent and cotangent vectors to the space of conformal classes on a manifold, and we introduce a vocabulary for working with these objects without reference to a particular representative background metric. As a consequence of these conceptual advantages, we demonstrate how to strengthen previous near-CMC (constant mean curvature) existence and non-existence theorems for the original conformal method to include metrics with scalar curvatures that change sign. (paper)
Moduli of families of curves for conformal and quasiconformal mappings
Vasil’ev, Alexander
2002-01-01
The monograph is concerned with the modulus of families of curves on Riemann surfaces and its applications to extremal problems for conformal, quasiconformal mappings, and the extension of the modulus onto Teichmüller spaces. The main part of the monograph deals with extremal problems for compact classes of univalent conformal and quasiconformal mappings. Many of them are grouped around two-point distortion theorems. Montel's functions and functions with fixed angular derivatives are also considered. The last portion of problems is directed to the extension of the modulus varying the complex structure of the underlying Riemann surface that sheds some new light on the metric problems of Teichmüller spaces.
Liu, Xiaofeng; Bai, Fang; Ouyang, Sisheng; Wang, Xicheng; Li, Honglin; Jiang, Hualiang
2009-03-31
Conformation generation is a ubiquitous problem in molecule modelling. Many applications require sampling the broad molecular conformational space or perceiving the bioactive conformers to ensure success. Numerous in silico methods have been proposed in an attempt to resolve the problem, ranging from deterministic to non-deterministic and from systematic to stochastic ones. In this work, we describe an efficient conformation sampling method named Cyndi, based on a multi-objective evolution algorithm. The conformational perturbation is subjected to evolutionary operations on the genome encoded with dihedral torsions. Various objectives are designated to render the generated Pareto-optimal conformers both energy-favoured and evenly scattered across the conformational space. An optional objective concerning the degree of molecular extension is added to achieve geometrically extended or compact conformations, which have been observed to impact molecular bioactivity (J Comput-Aided Mol Des 2002, 16: 105-112). Testing the performance of Cyndi against a test set of 329 small molecules reveals an average minimum RMSD of 0.864 Å to the corresponding bioactive conformations, indicating that Cyndi is highly competitive against other conformation generation methods. Meanwhile, its high speed (0.49 +/- 0.18 seconds per molecule) renders Cyndi a practical toolkit for conformational database preparation and facilitates subsequent pharmacophore mapping or rigid docking. A precompiled executable of Cyndi and the test set molecules in mol2 format are accessible in Additional file 1. On the basis of the MOEA algorithm, we present a new, highly efficient conformation generation method, Cyndi, and report the results of validation and performance studies comparing it with four other methods. The results reveal that Cyndi is capable of generating geometrically diverse conformers and outperforms the other four multiple-conformer generators in the case of
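The minimum (best-fit) RMSD used above to score generated conformers against bioactive references is typically computed by optimally superposing the two coordinate sets, e.g. with the Kabsch algorithm. A generic sketch of that computation (an illustration, not Cyndi's actual implementation):

```python
import numpy as np

def kabsch_rmsd(P, Q):
    """Minimum RMSD between two (n, 3) coordinate arrays after optimal
    translation and rotation (Kabsch algorithm)."""
    P = P - P.mean(axis=0)                   # remove translation
    Q = Q - Q.mean(axis=0)
    U, S, Vt = np.linalg.svd(P.T @ Q)        # covariance matrix SVD
    d = np.sign(np.linalg.det(U @ Vt))       # guard against reflections
    R = U @ np.diag([1.0, 1.0, d]) @ Vt      # optimal proper rotation
    return np.sqrt(np.mean(np.sum((P @ R - Q) ** 2, axis=1)))

# A rotated, translated copy of a conformer has minimum RMSD ~ 0.
rng = np.random.default_rng(0)
P = rng.standard_normal((10, 3))
angle = 0.3
Rz = np.array([[np.cos(angle), -np.sin(angle), 0.0],
               [np.sin(angle),  np.cos(angle), 0.0],
               [0.0, 0.0, 1.0]])
Q = P @ Rz + np.array([1.0, 2.0, 3.0])
print(kabsch_rmsd(P, Q) < 1e-8)  # True
```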
Dipole-magnet field models based on a conformal map
Directory of Open Access Journals (Sweden)
P. L. Walstrom
2012-10-01
In general, generating charged-particle transfer maps for conventional iron-pole-piece dipole magnets to third and higher order requires a model for the midplane field profile and its transverse derivatives (a soft-edge model) to high order, plus numerical integration of the map coefficients. An exact treatment of the problem for a particular magnet requires measured magnetic data. However, in the initial design of beam transport systems, users of charged-particle optics codes generally rely on magnet models built into the codes. Indeed, if maps to third order are adequate for the problem, an approximate analytic field model together with numerical integration of map coefficients can capture the important features of the transfer map. The model described in this paper is based on the fact that, except at very large distances from the magnet, the magnetic field of parallel pole-face magnets with constant pole-gap height and wide pole faces is basically two dimensional (2D). The field for all space outside of the pole pieces is given by a single (complex) analytic expression and includes a parameter that controls the rate of falloff of the fringe field. Since the field function is analytic in the complex plane outside of the pole pieces, it satisfies two basic requirements of a field model for higher-order map codes: it is infinitely differentiable at the midplane and is also a solution of the Laplace equation. It is apparently the only simple model available that combines an exponential approach to the central field with an inverse-cubic falloff of the field at large distances from the magnet in a single expression. The model is not intended for detailed fitting of magnetic field data, but for use in numerical map-generating codes for studying the effect of extended fringe fields on higher-order transfer maps. It is based on conformally mapping the area between the pole pieces to the upper half plane and placing current filaments on the pole faces.
Global mapping of DNA conformational flexibility on Saccharomyces cerevisiae.
Directory of Open Access Journals (Sweden)
Giulia Menconi
2015-04-01
Full Text Available In this study we provide the first comprehensive map of DNA conformational flexibility in Saccharomyces cerevisiae complete genome. Flexibility plays a key role in DNA supercoiling and DNA/protein binding, regulating DNA transcription, replication or repair. Specific interest in flexibility analysis concerns its relationship with human genome instability. Enrichment in flexible sequences has been detected in unstable regions of human genome defined fragile sites, where genes map and carry frequent deletions and rearrangements in cancer. Flexible sequences have been suggested to be the determinants of fragile gene proneness to breakage; however, their actual role and properties remain elusive. Our in silico analysis carried out genome-wide via the StabFlex algorithm, shows the conserved presence of highly flexible regions in budding yeast genome as well as in genomes of other Saccharomyces sensu stricto species. Flexible peaks in S. cerevisiae identify 175 ORFs mapping on their 3'UTR, a region affecting mRNA translation, localization and stability. (TA)n repeats of different extension shape the central structure of peaks and co-localize with polyadenylation efficiency element (EE) signals. ORFs with flexible peaks share common features. Transcripts are characterized by decreased half-life: this is considered peculiar of genes involved in regulatory systems with high turnover; consistently, their function affects biological processes such as cell cycle regulation or stress response. Our findings support the functional importance of flexibility peaks, suggesting that the flexible sequence may be derived by an expansion of canonical TAYRTA polyadenylation efficiency element. The flexible (TA)n repeat amplification could be the outcome of an evolutionary neofunctionalization leading to a differential 3'-end processing and expression regulation in genes with peculiar function. Our study provides a new support to the functional role of flexibility in
Global mapping of DNA conformational flexibility on Saccharomyces cerevisiae.
Menconi, Giulia; Bedini, Andrea; Barale, Roberto; Sbrana, Isabella
2015-04-01
In this study we provide the first comprehensive map of DNA conformational flexibility in Saccharomyces cerevisiae complete genome. Flexibility plays a key role in DNA supercoiling and DNA/protein binding, regulating DNA transcription, replication or repair. Specific interest in flexibility analysis concerns its relationship with human genome instability. Enrichment in flexible sequences has been detected in unstable regions of human genome defined fragile sites, where genes map and carry frequent deletions and rearrangements in cancer. Flexible sequences have been suggested to be the determinants of fragile gene proneness to breakage; however, their actual role and properties remain elusive. Our in silico analysis carried out genome-wide via the StabFlex algorithm, shows the conserved presence of highly flexible regions in budding yeast genome as well as in genomes of other Saccharomyces sensu stricto species. Flexible peaks in S. cerevisiae identify 175 ORFs mapping on their 3'UTR, a region affecting mRNA translation, localization and stability. (TA)n repeats of different extension shape the central structure of peaks and co-localize with polyadenylation efficiency element (EE) signals. ORFs with flexible peaks share common features. Transcripts are characterized by decreased half-life: this is considered peculiar of genes involved in regulatory systems with high turnover; consistently, their function affects biological processes such as cell cycle regulation or stress response. Our findings support the functional importance of flexibility peaks, suggesting that the flexible sequence may be derived by an expansion of canonical TAYRTA polyadenylation efficiency element. The flexible (TA)n repeat amplification could be the outcome of an evolutionary neofunctionalization leading to a differential 3'-end processing and expression regulation in genes with peculiar function. Our study provides a new support to the functional role of flexibility in genomes and a
Evolution families of conformal mappings with fixed points and the Löwner-Kufarev equation
International Nuclear Information System (INIS)
Goryainov, V V
2015-01-01
The paper is concerned with evolution families of conformal mappings of the unit disc to itself that fix an interior point and a boundary point. Conditions are obtained for the evolution families to be differentiable, and an existence and uniqueness theorem for an evolution equation is proved. A convergence theorem is established which describes the topology of locally uniform convergence of evolution families in terms of infinitesimal generating functions. The main result in this paper is the embedding theorem which shows that any conformal mapping of the unit disc to itself with two fixed points can be embedded into a differentiable evolution family of such mappings. This result extends the range of the parametric method in the theory of univalent functions. In this way the problem of the mutual change of the derivative at an interior point and the angular derivative at a fixed point on the boundary is solved for a class of mappings of the unit disc to itself. In particular, the rotation theorem is established for this class of mappings. Bibliography: 27 titles
Conformal maps and group contractions in nuclear structure
International Nuclear Information System (INIS)
Bonatsos, D.
2011-01-01
In mathematics, a conformal map is a function which preserves angles. We show how this procedure can be used in the framework of the Bohr Hamiltonian, leading to a Hamiltonian in a curved space, in which the mass depends on the nuclear deformation β, while it remains independent of the collective variable γ and the three Euler angles. This Hamiltonian is proved to be equivalent to that obtained using techniques of Supersymmetric Quantum Mechanics. Group contraction is a procedure in which a symmetry group is reduced to a group of lower symmetry in a certain limiting case. Examples are provided in the large boson number limit of the Interacting Boson Approximation (IBA) model by a) the contraction of the SU(3) algebra into the [R5]SO(3) algebra of the rigid rotator, consisting of the angular momentum operators forming SO(3), plus 5 mutually commuting quantities, the quadrupole operators, b) the contraction of the O(6) algebra into the [R5]SO(5) algebra of the γ-unstable rotator. We show how contractions can be used for constructing symmetry lines in the interior of the symmetry triangle of the IBA model. (author)
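The defining property quoted in this record, that a conformal map preserves angles wherever its derivative is nonzero, can be checked numerically. A minimal sketch (Python with NumPy; the map f(z) = z² and the test point are arbitrary illustrations, unrelated to the Bohr Hamiltonian construction above):

```python
import numpy as np

def angle_between(v1, v2):
    # signed angle between two complex directions in the plane
    return np.angle(v2 / v1)

f = lambda z: z**2           # holomorphic, f'(z0) != 0 away from the origin
z0 = 1.0 + 1.0j
h = 1e-6
# two tangent directions of curves crossing at z0
d1, d2 = np.exp(1j * 0.3), np.exp(1j * 1.1)
# images of the directions under f, via finite differences
w1 = (f(z0 + h * d1) - f(z0)) / h
w2 = (f(z0 + h * d2) - f(z0)) / h
before = angle_between(d1, d2)
after = angle_between(w1, w2)
```

The angle between the image directions matches the original crossing angle to within the finite-difference error, which is the angle-preservation property.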
DEFF Research Database (Denmark)
Frimurer, Thomas M.; Günther, Peter H.; Sørensen, Morten Dahl
1999-01-01
adiabatic mapping, conformational change, essential dynamics, free energy simulations, Kunitz type inhibitor
Yunus, A A M; Murid, A H M; Nasser, M M S
2014-02-08
This paper presents a boundary integral equation method with the adjoint generalized Neumann kernel for computing conformal mapping of unbounded multiply connected regions and its inverse onto several classes of canonical regions. For each canonical region, two integral equations are solved before one can approximate the boundary values of the mapping function. Cauchy-type integrals are used for computing the mapping function and its inverse for interior points. This method also works for regions with piecewise smooth boundaries. Three examples are given to illustrate the effectiveness of the proposed method.
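A classical closed-form illustration of mapping an unbounded region onto a slit canonical region (a textbook example, not the integral-equation method of this record) is the Joukowski map z + 1/z, which sends the exterior of the unit circle conformally onto the plane cut along the segment [-2, 2]:

```python
import numpy as np

# Joukowski map: exterior of the unit disk -> plane cut along [-2, 2]
J = lambda z: z + 1 / z

theta = np.linspace(0, 2 * np.pi, 400)
boundary = J(np.exp(1j * theta))     # image of the unit circle = the slit
# the circle collapses onto the real segment [-2, 2]
on_axis = np.allclose(boundary.imag, 0, atol=1e-12)
in_range = (boundary.real.min() >= -2 - 1e-9) and (boundary.real.max() <= 2 + 1e-9)
# a point outside the disk stays off the slit
exterior_image = J(2.0 + 1.5j)
```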
Directory of Open Access Journals (Sweden)
Fabio Stella
2013-09-01
Full Text Available An approach that combines Self-Organizing Maps, hierarchical clustering and network components is presented, aimed at comparing protein conformational ensembles obtained from multiple Molecular Dynamics simulations. As a first result, the original ensembles can be summarized by using only the representative conformations of the clusters obtained. In addition, the network components analysis allows one to discover and interpret the dynamic behavior of the conformations won by each neuron. The results showed the ability of this approach to efficiently derive a functional interpretation of the protein dynamics described by the original conformational ensemble, highlighting its potential as a support for protein engineering.
The hypersurfaces with conformal normal Gauss map in H^{n+1} and S_1^{n+1}
Directory of Open Access Journals (Sweden)
Shuguo Shi
2008-03-01
Full Text Available In this paper, we introduce the fourth fundamental forms for hypersurfaces in H^{n+1} and space-like hypersurfaces in S_1^{n+1}, and discuss the conformality of the normal Gauss map of the hypersurfaces in H^{n+1} and S_1^{n+1}. Particularly, we discuss the surfaces with conformal normal Gauss map in H^3 and S_1^3, and prove a duality property. We give a Weierstrass representation formula for space-like surfaces in S_1^3 with conformal normal Gauss map. We also state similar results for time-like surfaces in S_1^3. Some examples of surfaces in S_1^3 with conformal normal Gauss map are given, and a fully nonlinear equation of Monge-Ampère type for the graphs in S_1^3 with conformal normal Gauss map is derived.
Experimental conformational energy maps of proteins and peptides.
Balaji, Govardhan A; Nagendra, H G; Balaji, Vitukudi N; Rao, Shashidhar N
2017-06-01
We have presented an extensive analysis of the peptide backbone dihedral angles in the PDB structures and computed experimental Ramachandran plots for their distributions seen under various constraints on X-ray resolution, representativeness at different sequence identity percentages, and hydrogen bonding distances. These experimental distributions have been converted into isoenergy contour plots using the approach employed previously by F. M. Pohl. This has led to the identification of energetically favored minima in the Ramachandran (ϕ, ψ) plots, in which global minima are predominantly observed either in the right-handed α-helical or the polyproline II regions. Further, we have identified low energy pathways for transitions between various minima in the (ϕ, ψ) plots. We have compared the experimental plots with published theoretical plots obtained from both molecular mechanics and quantum mechanical approaches. In addition, we have developed and employed a root mean square deviation (RMSD) metric for isoenergy contours in various ranges, as a measure (in kcal·mol⁻¹) to compare any two plots and determine the extent of correlation and similarity between their isoenergy contours. In general, we observe a greater degree of compatibility with experimental plots for energy maps obtained from molecular mechanics methods compared to most quantum mechanical methods. The experimental energy plots we have investigated could be helpful in refining protein structures obtained from X-ray, NMR, and electron microscopy and in refining force field parameters to enable simulations of peptide and protein structures that have a higher degree of consistency with experiments. Proteins 2017; 85:979-1001. © 2017 Wiley Periodicals, Inc.
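Pohl-style conversion of a dihedral-angle distribution into isoenergy values amounts to a Boltzmann inversion of bin populations, with the zero level anchored at the most populated bin. A hedged sketch (Python; the dihedral data are synthetic and the bin count and temperature are illustrative choices, not the settings of this paper):

```python
import numpy as np

R, T = 1.987e-3, 300.0                 # gas constant in kcal/(mol*K), temperature in K
rng = np.random.default_rng(2)
# synthetic backbone dihedrals clustered near the right-handed alpha-helical region
phi = rng.normal(-63, 15, 50_000)
psi = rng.normal(-43, 15, 50_000)

counts, _, _ = np.histogram2d(phi, psi, bins=36, range=[[-180, 180]] * 2)
# Boltzmann inversion: the most populated bin defines the zero of energy
with np.errstate(divide="ignore"):
    energy = -R * T * np.log(counts / counts.max())   # kcal/mol; empty bins -> +inf
```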
CD2 probe infrared method for determining polymethylene chain conformation
International Nuclear Information System (INIS)
Maroncelli, M.; Strauss, H.L.; Snyder, R.G.
1985-01-01
The rocking mode frequency of a CD2 group substituted in a polymethylene chain is sensitive to conformation in the immediate vicinity of the CD2 group. This sensitivity forms the basis of a commonly used infrared method for determining site-specific conformation in polymethylene systems. In the present work, the CD2 probe method has been extended and quantified with the use of infrared data on model CD2-substituted n-alkanes. The frequency of the CD2 rocking band is determined primarily by the conformation of adjoining CC bonds, i.e., by tt, gt, and gg pairs. However, we have found that there are significant frequency shifts associated with other factors. These include the conformation of the next nearest CC bonds, both with the CD2 positioned at the end and in the interior of the chain, and chain length. In addition, the ratio of the absorptivities of the tt to gt bands has been established. These results enable the method to provide new details about the conformation of the chains in polymethylene systems and reliable estimates of the concentrations of specific kinds of short conformational sequences. 14 references, 6 figures, 2 tables
Conformational and functional analysis of molecular dynamics trajectories by Self-Organising Maps
Directory of Open Access Journals (Sweden)
Stella Fabio
2011-05-01
Full Text Available Abstract Background Molecular dynamics (MD) simulations are powerful tools to investigate the conformational dynamics of proteins that is often a critical element of their function. Identification of functionally relevant conformations is generally done by clustering the large ensemble of structures that are generated. Recently, Self-Organising Maps (SOMs) were reported to perform more accurately and provide more consistent results than traditional clustering algorithms in various data mining problems. We present a novel strategy to analyse and compare conformational ensembles of protein domains using a two-level approach that combines SOMs and hierarchical clustering. Results The conformational dynamics of the α-spectrin SH3 protein domain and six single mutants were analysed by MD simulations. The Cα's Cartesian coordinates of conformations sampled in the essential space were used as input data vectors for SOM training, then complete linkage clustering was performed on the SOM prototype vectors. A specific protocol to optimize a SOM for structural ensembles was proposed: the optimal SOM was selected by means of a Taguchi experimental design plan applied to different data sets, and the optimal sampling rate of the MD trajectory was selected. The proposed two-level approach was applied to single trajectories of the SH3 domain independently as well as to groups of them at the same time. The results demonstrated the potential of this approach in the analysis of large ensembles of molecular structures: the possibility of producing a topological mapping of the conformational space in a simple 2D visualisation, as well as of effectively highlighting differences in the conformational dynamics directly related to biological functions. Conclusions The use of a two-level approach combining SOMs and hierarchical clustering for conformational analysis of structural ensembles of proteins was proposed. It can easily be extended to other study cases and to
Conformational and functional analysis of molecular dynamics trajectories by Self-Organising Maps
2011-01-01
Background Molecular dynamics (MD) simulations are powerful tools to investigate the conformational dynamics of proteins that is often a critical element of their function. Identification of functionally relevant conformations is generally done by clustering the large ensemble of structures that are generated. Recently, Self-Organising Maps (SOMs) were reported to perform more accurately and provide more consistent results than traditional clustering algorithms in various data mining problems. We present a novel strategy to analyse and compare conformational ensembles of protein domains using a two-level approach that combines SOMs and hierarchical clustering. Results The conformational dynamics of the α-spectrin SH3 protein domain and six single mutants were analysed by MD simulations. The Cα's Cartesian coordinates of conformations sampled in the essential space were used as input data vectors for SOM training, then complete linkage clustering was performed on the SOM prototype vectors. A specific protocol to optimize a SOM for structural ensembles was proposed: the optimal SOM was selected by means of a Taguchi experimental design plan applied to different data sets, and the optimal sampling rate of the MD trajectory was selected. The proposed two-level approach was applied to single trajectories of the SH3 domain independently as well as to groups of them at the same time. The results demonstrated the potential of this approach in the analysis of large ensembles of molecular structures: the possibility of producing a topological mapping of the conformational space in a simple 2D visualisation, as well as of effectively highlighting differences in the conformational dynamics directly related to biological functions. Conclusions The use of a two-level approach combining SOMs and hierarchical clustering for conformational analysis of structural ensembles of proteins was proposed. It can easily be extended to other study cases and to conformational ensembles from
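The two-level scheme described in these records can be sketched in a few lines: an online SOM is trained on the input vectors, then complete-linkage clustering is run on the prototype vectors. A toy sketch (Python with NumPy/SciPy; the data, map size, and learning schedule are illustrative stand-ins, not the Taguchi-optimized protocol of the paper):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# toy "conformations": two well-separated states in a 6-D essential space
data = np.vstack([rng.normal(0, 0.3, (100, 6)), rng.normal(2, 0.3, (100, 6))])

# --- level 1: train a small SOM with online updates ---
rows, cols, dim = 4, 4, data.shape[1]
w = rng.normal(size=(rows * cols, dim))        # prototype vectors
grid = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
for t in range(2000):
    x = data[rng.integers(len(data))]
    bmu = np.argmin(((w - x) ** 2).sum(1))     # best-matching unit
    lr = 0.5 * np.exp(-t / 1000)               # decaying learning rate
    sigma = 2.0 * np.exp(-t / 1000)            # decaying neighbourhood radius
    nb = np.exp(-((grid - grid[bmu]) ** 2).sum(1) / (2 * sigma ** 2))
    w += lr * nb[:, None] * (x - w)

# --- level 2: complete-linkage clustering of the SOM prototypes ---
Z = linkage(w, method="complete")
labels = fcluster(Z, t=2, criterion="maxclust")
```

Each conformation can then be assigned the cluster of its best-matching prototype, giving the summarized ensemble the papers describe.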
Fluctuation Flooding Method (FFM) for accelerating conformational transitions of proteins
Harada, Ryuhei; Takano, Yu; Shigeta, Yasuteru
2014-03-01
A powerful conformational sampling method for accelerating structural transitions of proteins, the "Fluctuation Flooding Method" (FFM), is proposed. In FFM, cycles of the following steps enhance the transitions: (i) extraction of largely fluctuating snapshots along anisotropic modes obtained from trajectories of multiple independent molecular dynamics (MD) simulations and (ii) conformational re-sampling of the snapshots via re-generation of initial velocities when re-starting MD simulations. In an application to bacteriophage T4 lysozyme, FFM successfully accelerated the open-closed transition within a 6 ns simulation starting solely from the open state, whereas a 1-μs canonical MD simulation failed to sample such a rare event.
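Step (i) of an FFM cycle, extracting the most largely fluctuating snapshots along the dominant anisotropic modes, can be sketched with a principal-component projection (Python; the trajectory is a random stand-in for real MD data, and the mode and seed counts are illustrative assumptions):

```python
import numpy as np

def select_seeds(traj, n_seeds=4):
    """FFM-like step (i): pick the snapshots that fluctuate most
    along the dominant anisotropic (principal) mode of the trajectory."""
    X = traj - traj.mean(0)
    # principal modes via SVD of the centered trajectory
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    proj = X @ Vt[0]                    # projection on the largest mode
    order = np.argsort(np.abs(proj))    # most extreme frames last
    return traj[order[-n_seeds:]]

rng = np.random.default_rng(1)
traj = rng.normal(size=(500, 30))       # toy 500-frame, 30-coordinate trajectory
seeds = select_seeds(traj)
# step (ii) would restart MD from each seed with freshly drawn velocities
```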
Moment methods for nonlinear maps
International Nuclear Information System (INIS)
Pusch, G.D.; Atomic Energy of Canada Ltd., Chalk River, ON
1993-01-01
It is shown that Differential Algebra (DA) may be used to push moments of distributions through a map, at a computational cost per moment comparable to pushing a single particle. The algorithm is independent of order, and whether or not the map is symplectic. Starting from the known result that moment-vectors transform linearly - like a tensor - even under a nonlinear map, I suggest that the form of the moment transformation rule indicates that the moment-vectors are elements of the dual to DA-vector space. I propose several methods of manipulating moments and constructing invariants using DA. I close with speculations on how DA might be used to ''close the circle'' to solve the inverse moment problem, yielding an entirely DA-and-moment-based space-charge code. (Author)
On an application of conformal maps to inequalities for rational functions
International Nuclear Information System (INIS)
Dubinin, V N
2002-01-01
Using classical properties of conformal maps, we get new exact inequalities for rational functions with prescribed poles. In particular, we prove a new Bernstein-type inequality, an inequality for Blaschke products and a theorem that generalizes the Turan inequality for polynomials. The estimates obtained strengthen some familiar inequalities of Videnskii and Rusak. They are also related to recent results of Borwein, Erdelyi, Li, Mohapatra, Rodriguez, Aziz and others
Heat transfer analysis in internally-cooled fuel elements by means of a conformal mapping approach
International Nuclear Information System (INIS)
Sarmiento, G.S.; Laura, P.A.A.
1981-01-01
The present paper deals with an approximate solution of the steady-state heat conduction problem in internally cooled fuel elements of fast breeder reactors. Explicit expressions for the dimensionless temperature distribution in terms of the governing physical and geometrical parameters are determined by means of a coupled conformal mapping-variational approach. The results obtained are found to be in very good agreement with those calculated by means of a finite element code. (orig.)
Determination of α_s(M_τ^2): conformal mapping approach
Czech Academy of Sciences Publication Activity Database
Caprini, I.; Fischer, Jan
2011-01-01
Roč. 218, - (2011), s. 128-133 ISSN 0920-5632. [11th International Workshop on Tau Lepton Physics. Manchester, 13.09.2010-17.09.2010] R&D Projects: GA MŠk LC527; GA MŠk LA08015 Institutional research plan: CEZ:AV0Z10100502 Keywords : perturbative QCD expansion * conformal mapping * hadronic decay of the tau lepton Subject RIV: BF - Elementary Particles and High Energy Physics
On the conformal equivalence of harmonic maps and exponentially harmonic maps
International Nuclear Information System (INIS)
Hong Minchun.
1991-06-01
Suppose that (M,g) and (N,h) are compact smooth Riemannian manifolds without boundaries. If m = dim M ≥ 3 and Φ: (M,g) → (N,h) is exponentially harmonic, then there exists a smooth metric g-tilde conformally equivalent to g such that Φ: (M,g-tilde) → (N,h) is harmonic. (author). 7 refs
Bootstrapping conformal field theories with the extremal functional method.
El-Showk, Sheer; Paulos, Miguel F
2013-12-13
The existence of a positive linear functional acting on the space of (differences between) conformal blocks has been shown to rule out regions in the parameter space of conformal field theories (CFTs). We argue that at the boundary of the allowed region the extremal functional contains, in principle, enough information to determine the dimensions and operator product expansion (OPE) coefficients of an infinite number of operators appearing in the correlator under analysis. Based on this idea we develop the extremal functional method (EFM), a numerical procedure for deriving the spectrum and OPE coefficients of CFTs lying on the boundary (of solution space). We test the EFM by using it to rederive the low lying spectrum and OPE coefficients of the two-dimensional Ising model based solely on the dimension of a single scalar quasiprimary--no Virasoro algebra required. Our work serves as a benchmark for applications to more interesting, less known CFTs in the near future.
International Nuclear Information System (INIS)
Wang Yue; Wang Jian-Guo; Chen Zai-Gao
2015-01-01
Based on conformal construction of physical model in a three-dimensional Cartesian grid, an integral-based conformal convolutional perfectly matched layer (CPML) is given for solving the truncation problem of the open port when the enlarged cell technique conformal finite-difference time-domain (ECT-CFDTD) method is used to simulate the wave propagation inside a perfect electric conductor (PEC) waveguide. The algorithm has the same numerical stability as the ECT-CFDTD method. For the long-time propagation problems of an evanescent wave in a waveguide, several numerical simulations are performed to analyze the reflection error by sweeping the constitutive parameters of the integral-based conformal CPML. Our numerical results show that the integral-based conformal CPML can be used to efficiently truncate the open port of the waveguide. (paper)
Divergence-Conforming Discontinuous Galerkin Methods and $C^0$ Interior Penalty Methods
Kanschat, Guido; Sharma, Natasha
2014-01-01
© 2014 Society for Industrial and Applied Mathematics. In this paper, we show that recently developed divergence-conforming methods for the Stokes problem have discrete stream functions. These stream functions in turn solve a continuous interior
Statistical methods in physical mapping
International Nuclear Information System (INIS)
Nelson, D.O.
1995-05-01
One of the great success stories of modern molecular genetics has been the ability of biologists to isolate and characterize the genes responsible for serious inherited diseases like fragile X syndrome, cystic fibrosis and myotonic muscular dystrophy. This dissertation concentrates on constructing high-resolution physical maps. It demonstrates how probabilistic modeling and statistical analysis can aid molecular geneticists in the tasks of planning, execution, and evaluation of physical maps of chromosomes and large chromosomal regions. The dissertation is divided into six chapters. Chapter 1 provides an introduction to the field of physical mapping, describing the role of physical mapping in gene isolation and in past efforts at mapping chromosomal regions. The next two chapters review and extend known results on predicting progress in large mapping projects. Such predictions help project planners decide between various approaches and tactics for mapping large regions of the human genome. Chapter 2 shows how probability models have been used in the past to predict progress in mapping projects. Chapter 3 presents new results, based on stationary point process theory, for progress measures for mapping projects based on directed mapping strategies. Chapter 4 describes in detail the construction of an initial high-resolution physical map for human chromosome 19. This chapter introduces the probability and statistical models involved in map construction in the context of a large, ongoing physical mapping project. Chapter 5 concentrates on one such model, the trinomial model. This chapter contains new results on the large-sample behavior of this model, including distributional results, asymptotic moments, and detection error rates. In addition, it contains an optimality result concerning experimental procedures based on the trinomial model. The last chapter explores unsolved problems and describes future work
Statistical methods in physical mapping
Energy Technology Data Exchange (ETDEWEB)
Nelson, David O. [Univ. of California, Berkeley, CA (United States)
1995-05-01
One of the great success stories of modern molecular genetics has been the ability of biologists to isolate and characterize the genes responsible for serious inherited diseases like fragile X syndrome, cystic fibrosis and myotonic muscular dystrophy. This dissertation concentrates on constructing high-resolution physical maps. It demonstrates how probabilistic modeling and statistical analysis can aid molecular geneticists in the tasks of planning, execution, and evaluation of physical maps of chromosomes and large chromosomal regions. The dissertation is divided into six chapters. Chapter 1 provides an introduction to the field of physical mapping, describing the role of physical mapping in gene isolation and in past efforts at mapping chromosomal regions. The next two chapters review and extend known results on predicting progress in large mapping projects. Such predictions help project planners decide between various approaches and tactics for mapping large regions of the human genome. Chapter 2 shows how probability models have been used in the past to predict progress in mapping projects. Chapter 3 presents new results, based on stationary point process theory, for progress measures for mapping projects based on directed mapping strategies. Chapter 4 describes in detail the construction of an initial high-resolution physical map for human chromosome 19. This chapter introduces the probability and statistical models involved in map construction in the context of a large, ongoing physical mapping project. Chapter 5 concentrates on one such model, the trinomial model. This chapter contains new results on the large-sample behavior of this model, including distributional results, asymptotic moments, and detection error rates. In addition, it contains an optimality result concerning experimental procedures based on the trinomial model. The last chapter explores unsolved problems and describes future work.
Kamachi, Takashi; Yoshizawa, Kazunari
2016-02-22
A conformational search program for finding low-energy conformations of large noncovalent complexes has been developed. A quantitatively reliable semiempirical quantum mechanical PM6-DH+ method, which is able to describe noncovalent interactions accurately at a low computational cost, was employed, in contrast to conventional conformational search programs, in which molecular mechanical methods are usually adopted. Our approach is based on the low-mode method, whereby an initial structure is perturbed along one of its low-mode eigenvectors to generate new conformations. This method was applied to determine the most stable conformations of the transition states for enantioselective alkylation by the Maruoka and cinchona alkaloid catalysts and for Hantzsch ester hydrogenation of imines by chiral phosphoric acid. Besides successfully reproducing the previously reported most stable DFT conformations, the conformational search with semiempirical quantum mechanical calculations newly discovered a more stable conformation at a low computational cost.
Methods to Measure Map Readability
Harrie, Lars
2009-01-01
Creation of maps in real-time web services introduces challenges concerning map readability. Therefore we must introduce analytical measures controlling the readability. The aim of this study is to develop and evaluate analytical readability measures with the help of user tests.
A Continuation Method for Weakly Kannan Maps
Directory of Open Access Journals (Sweden)
Ariza-Ruiz David
2010-01-01
Full Text Available The first continuation method for contractive maps in the setting of a metric space was given by Granas. Later, Frigon extended Granas' theorem to the class of weakly contractive maps, and recently Agarwal and O'Regan have given the corresponding result for a certain type of quasicontractions which includes maps of Kannan type. In this note we introduce the concept of weakly Kannan maps and give a fixed point theorem, and then a continuation method, for this class of maps.
Accelerated convergence of perturbative QCD by conformal mappings in the Borel plane
International Nuclear Information System (INIS)
Caprini, I.; Fischer, J.
1998-01-01
The behaviour of the large order terms in perturbative QCD received much attention in recent years. The presence of instantons and certain classes of Feynman diagrams leads to increasing coefficients of the perturbative expansion of the QCD Green functions, making this series divergent and even Borel non-summable. In the present paper we adopt a definite prescription for the Borel summation and investigate the improvement of the low order expansion by using some information about the behaviour of the large order coefficients. We use the technique of conformal mappings to extend the convergence region of the Borel series, and exploit the behaviour of the Borel transform near the first renormalons. Our approach improves previous work where only the ultraviolet renormalons were considered. The polarization function, relevant for the hadronic τ decay, which allows the determination of the strong coupling constant a_s(m_τ^2), is used. We consider the Adler function D(s), i.e. the logarithmic derivative of the vacuum polarization for massless quarks, and its QCD perturbative expansion D(a_s) in terms of the running coupling a_s(-s). The first 3 coefficients D_n of the Adler function D(s) are known from explicit calculations, while for large n they are expected to have a factorial growth. By applying the Borel method with the Principal Value (PV) prescription to avoid the infrared renormalons, we write D(a_s) in terms of its Borel transform B(u). The Borel integral is given as a function of a_s for a model function resembling the Borel transform of the Adler function in the large-β_0 limit. The data obtained by truncating the expansion at N=3, which corresponds to the physical situation, are presented. Even at such low values of N our method gives very good results (the improvement increases with N, since the optimality is an asymptotic feature). Using this technique we calculated also the running coupling constant a_s(m_τ^2), for which we obtained the value 0
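A conformal variable change of the kind discussed here maps the Borel u-plane, cut along the ultraviolet (u ≤ -1) and infrared (u ≥ 2) renormalon rays of the Adler function, onto the unit disk, so that the mapped series converges in the whole cut plane. A sketch of one such map (Python; this particular formula is the "optimal" mapping used in the later Caprini-Fischer literature and is given as an illustration, not necessarily the exact variable of this paper):

```python
import numpy as np

def w(u):
    """Conformal map of the Borel plane cut along u <= -1 (UV renormalons)
    and u >= 2 (IR renormalons) onto the unit disk, with w(0) = 0."""
    a = np.sqrt(1 + u)       # branch point at the first UV renormalon u = -1
    b = np.sqrt(1 - u / 2)   # branch point at the first IR renormalon u = 2
    return (a - b) / (a + b)

origin = w(0)                                  # expansion point maps to the center
samples = [w(u) for u in (0.5 + 1j, -0.5 - 2j, 1.0 + 0.1j)]
```

Re-expanding the Borel transform B(u) in powers of w(u) is what extends the convergence region beyond the circle limited by the nearest renormalon.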
On the use of Schwarz-Christoffel conformal mappings to the grid generation for global ocean models
Xu, S.; Wang, B.; Liu, J.
2015-10-01
In this article we propose two grid generation methods for global ocean general circulation models. In contrast to conventional dipolar or tripolar grids, the proposed methods are based on Schwarz-Christoffel conformal mappings that map areas with user-prescribed, irregular boundaries to those with regular boundaries (e.g., disks, slits, etc.). The first method aims at improving existing dipolar grids. Compared with existing grids, the sample grid achieves a better trade-off between the enlargement of the latitudinal-longitudinal portion and an overall smooth transition in grid cell size. The second method addresses more modern and advanced grid design requirements arising from high-resolution and multi-scale ocean modeling. The generated grids could potentially achieve the alignment of grid lines to large-scale coastlines, enhanced spatial resolution in coastal regions, and easier computational load balance. Since the grids are orthogonal curvilinear, they can be easily utilized by the majority of ocean general circulation models that are based on finite differences and require grid orthogonality. The proposed grid generation algorithms can also be applied to grid generation for regional ocean modeling where a complex land-sea distribution is present.
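The orthogonality property that makes such grids usable in finite-difference ocean models can be checked numerically. The sketch below uses a simple analytic map in place of a genuine Schwarz-Christoffel map (which requires solving for accessory parameters); any conformal map sends the rectangular computational grid to an orthogonal curvilinear grid.

```python
def f(wv):
    # A simple analytic (hence conformal) map, standing in for the
    # numerically computed Schwarz-Christoffel maps of the paper.
    return wv + 0.2 * wv * wv

def orthogonality_defect(wv, h=1e-6):
    # Push the two coordinate directions of the computational plane
    # through f and measure how far their images are from orthogonal.
    t1 = (f(wv + h) - f(wv)) / h        # image of the u-direction
    t2 = (f(wv + 1j * h) - f(wv)) / h   # image of the v-direction
    dot = (t1 * t2.conjugate()).real    # Euclidean dot product in R^2
    return abs(dot) / (abs(t1) * abs(t2))

# Grid lines of a conformal map meet at right angles everywhere.
for wv in (0.3 + 0.4j, -1.0 + 0.2j, 0.05 - 0.7j):
    assert orthogonality_defect(wv) < 1e-4
print("conformal grid is orthogonal")
```

The defect vanishes (up to finite-difference error) at every interior point, which is exactly the property finite-difference OGCMs rely on.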
Mapping Mixed Methods Research: Methods, Measures, and Meaning
Wheeldon, J.
2010-01-01
This article explores how concept maps and mind maps can be used as data collection tools in mixed methods research to combine the clarity of quantitative counts with the nuance of qualitative reflections. Building on more traditional mixed methods approaches, this article details how pre/post concept maps can be used to design qualitative…
Schwarz-Christoffel Conformal Mapping based Grid Generation for Global Oceanic Circulation Models
Xu, Shiming
2015-04-01
We propose new grid generation algorithms for global ocean general circulation models (OGCMs). In contrast to conventional dipolar or tripolar grids based on analytical forms, the new algorithms are based on Schwarz-Christoffel (SC) conformal mappings with prescribed boundary information. While dealing with the conventional grid design problem of pole relocation, they also address more advanced issues of computational efficiency and the new requirements on OGCM grids arising from the recent trend of high-resolution and multi-scale modeling. The proposed grid generation algorithms could potentially achieve the alignment of grid lines to coastlines, enhanced spatial resolution in coastal regions, and easier computational load balance. Since the generated grids are still orthogonal curvilinear, they can be readily utilized in existing Bryan-Cox-Semtner type ocean models. The proposed methodology can also be applied to grid generation for regional ocean modeling when a complex land-ocean distribution is present.
Conformal Nets II: Conformal Blocks
Bartels, Arthur; Douglas, Christopher L.; Henriques, André
2017-08-01
Conformal nets provide a mathematical formalism for conformal field theory. Associated to a conformal net with finite index, we give a construction of the `bundle of conformal blocks', a representation of the mapping class groupoid of closed topological surfaces into the category of finite-dimensional projective Hilbert spaces. We also construct infinite-dimensional spaces of conformal blocks for topological surfaces with smooth boundary. We prove that the conformal blocks satisfy a factorization formula for gluing surfaces along circles, and an analogous formula for gluing surfaces along intervals. We use this interval factorization property to give a new proof of the modularity of the category of representations of a conformal net.
The "Set Map" Method of Navigation.
Tippett, Julian
1998-01-01
Explains the "set map" method of using the baseplate compass to solve walkers' navigational needs as opposed to the 1-2-3 method for taking a bearing. The map, with the compass permanently clipped to it, is rotated to the position in which its features have the same orientation as their counterparts on the ground. Includes directions and…
Chui, S. T.; Chen, Xinzhong; Liu, Mengkun; Lin, Zhifang; Zi, Jian
2018-02-01
We study the response of a conical metallic surface to an external electromagnetic (em) field by representing the fields in basis functions containing the integrable singularity at the tip of the cone. A fast analytical solution is obtained by the conformal mapping between the cone and a round disk. We apply our calculation to the scattering-type scanning near-field optical microscope (s-SNOM) and successfully quantify the elastic light scattering from a vibrating metallic tip over a uniform sample. We find that the field-induced charge distribution consists of localized terms at the tip and the base and an extended bulk term along the body of the cone far away from the tip. In recent s-SNOM experiments in the visible and infrared range (600 nm to 1 μm) the fundamental of the demodulated near-field signal is found to be much larger than the higher harmonics, whereas in the THz range (100 μm to 3 mm) the fundamental becomes comparable to the higher harmonics. We find that the localized tip charge dominates the contribution to the higher harmonics and becomes larger for the THz experiments, thus providing an intuitive understanding of the origin of the near-field signals. We demonstrate the application of our method by extracting a two-dimensional effective dielectric constant map from the s-SNOM image of a finite metallic disk, where the variation comes from the charge density induced by the em field.
More on the conformal mapping of quasi-local masses: the Hawking–Hayward case
International Nuclear Information System (INIS)
Hammad, Fayçal
2016-01-01
The conformal transformation of the Hawking–Hayward quasi-local mass is re-examined. It has been found recently that the conformal transformation of this mass exhibits the ‘wrong’ conformal factor compared to the way usual masses transform under conformal transformations of spacetime. We show, in analogy with what was found recently for the Misner–Sharp mass, that unlike its purely geometric definition, the Hawking–Hayward mass exhibits the ‘right’ conformal factor whenever expressed in terms of its material content via the field equations. The case of conformally invariant scalar–tensor theories of gravity is also examined. The equivalence between the Misner–Sharp mass and the Hawking–Hayward mass for spherically symmetric spacetimes manifests itself in identical peculiar behaviour under conformal transformations. (paper)
Method and apparatus for enhancing vortex pinning by conformal crystal arrays
Janko, Boldizsar; Reichhardt, Cynthia; Reichhardt, Charles; Ray, Dipanjan
2015-07-14
Disclosed is a method and apparatus for strongly enhancing vortex pinning by conformal crystal arrays. The conformal crystal array is constructed by a conformal transformation of a hexagonal lattice, producing a non-uniform structure with a gradient where the local six-fold coordination of the pinning sites is preserved, and with an arching effect. The conformal pinning arrays produce significantly enhanced vortex pinning over a much wider range of field than that found for other vortex pinning geometries with an equivalent number of vortex pinning sites, such as random, square, and triangular.
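The construction can be sketched as follows, assuming the usual recipe of applying a conformal (angle-preserving) map, here the complex logarithm, to a uniform triangular lattice occupying a half annulus; the lattice constant and radii below are illustrative only.

```python
import cmath, math

# Illustrative conformal pinning array: start from a uniform triangular
# (hexagonal) lattice restricted to a half annulus r_in <= |z| <= r_out,
# Im(z) >= 0, then apply the conformal map w = ln(z).  Angles, and hence
# the local sixfold coordination of the sites, are preserved, while the
# density of image points acquires a smooth gradient along Re(w).
a = 0.15                      # lattice constant (arbitrary units)
r_in, r_out = 1.0, 3.0
pts = []
for j in range(-40, 41):
    for k in range(0, 41):
        z = complex(a * (j + 0.5 * (k % 2)), a * k * math.sqrt(3) / 2)
        if r_in <= abs(z) <= r_out and z.imag >= 0:
            pts.append(cmath.log(z))

# Image points fill the rectangle [ln r_in, ln r_out] x [0, pi].
assert all(-1e-9 <= p.real <= math.log(r_out) + 1e-9 for p in pts)
assert all(-1e-9 <= p.imag <= math.pi + 1e-9 for p in pts)
print(len(pts), "pinning sites")
```

The gradient in site density along one direction, with sixfold local order preserved, is the structural feature the patent credits for the enhanced pinning.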
Possibilities and methods for mapping air pollution
Energy Technology Data Exchange (ETDEWEB)
LeBlanc, F
1971-01-01
For various reasons lichens seem to be much more sensitive to air pollution than flowering plants. Various methods to map the long-range effect of phytotoxicants on epiphytic lichens and mosses have been proposed. This paper outlines a few of these and proposes a new method. In Sudbury, Ontario, vegetation has been greatly affected by sulfur dioxide emanating from three huge smelters. The author shows that his map, based on the response of lichens, matches quite well with another map of the same area based on continuous SO2 monitoring. The advantage of the biological map is that it took two weeks to accumulate the required data, while the other one took ten years.
Saha, Suman; Das, Saptarshi; Das, Shantanu; Gupta, Amitava
2012-09-01
A novel conformal mapping based fractional order (FO) methodology is developed in this paper for tuning existing classical (integer order) Proportional Integral Derivative (PID) controllers, especially for sluggish and oscillatory second order systems. The conventional pole placement tuning via the Linear Quadratic Regulator (LQR) method is extended to open loop oscillatory systems as well. The locations of the open loop zeros of a fractional order PID (FOPID or PIλDμ) controller are approximated in this paper vis-à-vis an LQR tuned conventional integer order PID controller, to achieve an equivalent integer order PID control system. This approach eases the analog/digital realization of a FOPID controller with its integer order counterpart while preserving the advantages of the fractional order controller. It is shown that decreasing the integro-differential operators of the FOPID/PIλDμ controller pushes the open loop zeros of the equivalent PID controller towards regions of greater damping, tracing a trajectory of the controller zeros and dominant closed loop poles. This trajectory is termed the "M-curve". This phenomenon is used to design a two-stage tuning algorithm which significantly reduces the PID controller's effort compared with a single stage LQR based pole placement method at a desired closed loop damping and frequency.
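For a concrete, much-simplified instance of LQR-based PID pole placement, consider a double-integrator plant (a stand-in for the sluggish second-order systems above; the Q weights below are illustrative). For Q = diag(q1, q2) and R = 1 the algebraic Riccati equation has a closed-form solution, and the state-feedback gains map directly onto PD gains u = -Kp*e - Kd*de/dt.

```python
import math

def lqr_pd_gains(q1, q2):
    # Closed-form LQR solution for the double integrator x1' = x2,
    # x2' = u with Q = diag(q1, q2), R = 1: K = [sqrt(q1),
    # sqrt(q2 + 2*sqrt(q1))], read off as PD gains.
    kp = math.sqrt(q1)
    kd = math.sqrt(q2 + 2.0 * math.sqrt(q1))
    return kp, kd

def care_residual(q1, q2):
    # Verify A'P + PA - P B B' P + Q = 0 entrywise for
    # A = [[0,1],[0,0]], B = [0,1]'.
    p12 = math.sqrt(q1)
    p22 = math.sqrt(q2 + 2.0 * p12)
    p11 = p12 * p22
    r11 = q1 - p12 * p12
    r12 = p11 - p12 * p22
    r22 = 2.0 * p12 - p22 * p22 + q2
    return max(abs(r11), abs(r12), abs(r22))

kp, kd = lqr_pd_gains(4.0, 1.0)
assert care_residual(4.0, 1.0) < 1e-12
# Closed-loop polynomial s^2 + kd*s + kp is Hurwitz for any positive gains.
print(kp, kd)
```

The paper's contribution sits on top of such a tuning: the FOPID zeros are then placed relative to the LQR-tuned integer-order controller.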
Misra, M; Murakami, H; Tonouchi, M
2003-01-01
We have studied the tuning properties of a high-temperature superconducting (HTS) half-wavelength coplanar waveguide (CPW) resonator operating at 5 GHz. The tuning schemes are based on flip-chip bonding of an electrically tunable ferroelectric (FE) thin film and a mechanically movable low-loss single crystal on top of the resonator. Using the conformal mapping method, closed-form analytical expressions have been derived for a flip-chip bonded conductor-backed and top-shielded CPW transmission line. The obtained expressions are used to analyse the effect of the FE thin film volume and of the gap between the flip-chip and the CPW resonator on the tuning properties of the device. It has been found that large frequency modulation of the resonator produces impedance mismatch, which can considerably increase the insertion loss of high-performance HTS microwave devices. The analysis also suggests that, for electrically tunable devices, flip-chip bonded FE thin films on HTS CPW devices provide a relatively higher performance...
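The flavour of such closed-form conformal-mapping expressions can be seen in the textbook zero-thickness CPW on an infinitely thick substrate, a much simpler cousin of the flip-chip-loaded line analysed in the paper; the dimensions below are illustrative.

```python
import math

def ellipK(k):
    # Complete elliptic integral of the first kind via the
    # arithmetic-geometric mean: K(k) = pi / (2 * agm(1, sqrt(1 - k^2))).
    a, b = 1.0, math.sqrt(1.0 - k * k)
    while abs(a - b) > 1e-15:
        a, b = 0.5 * (a + b), math.sqrt(a * b)
    return math.pi / (2.0 * a)

def cpw_z0(S, W, eps_r):
    # Zero-thickness CPW (centre strip S, gaps W) on an infinitely
    # thick substrate: conformal mapping gives k = S/(S + 2W),
    # eps_eff = (eps_r + 1)/2, Z0 = 30*pi/sqrt(eps_eff) * K(k')/K(k).
    k = S / (S + 2.0 * W)
    kp = math.sqrt(1.0 - k * k)
    eps_eff = 0.5 * (eps_r + 1.0)
    return 30.0 * math.pi / math.sqrt(eps_eff) * ellipK(kp) / ellipK(k)

z = cpw_z0(10e-6, 6e-6, 11.7)   # 10 um strip, 6 um gaps on silicon
print(round(z, 1), "ohms")       # roughly 50 ohms
```

The paper's expressions extend this basic mapping with conductor backing, a top shield, and the flip-chip dielectric layers.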
Method II : The energy-momentum map
Broer, H.; Hoveijn, I.; Lunter, G.; Vegter, G.
2003-01-01
In this chapter we apply the energy–momentum map reduction method to the same class of systems as in Chap. 2, namely two degree-of-freedom systems with optional symmetry, near equilibrium and close to resonance. We calculate the tangent space and nondegeneracy conditions for the 1:2, 1:3 and 1:4
Divergence-Conforming Discontinuous Galerkin Methods and $C^0$ Interior Penalty Methods
Kanschat, Guido
2014-01-01
© 2014 Society for Industrial and Applied Mathematics. In this paper, we show that recently developed divergence-conforming methods for the Stokes problem have discrete stream functions. These stream functions in turn solve a continuous interior penalty problem for biharmonic equations. The equivalence is established for the most common methods in two dimensions based on interior penalty terms. Then, extensions of the concept to discontinuous Galerkin methods defined through lifting operators, for different weak formulations of the Stokes problem, and to three dimensions are discussed. Application of the equivalence result yields an optimal error estimate for the Stokes velocity without involving the pressure. Conversely, combined with a recent multigrid method for Stokes flow, we obtain a simple and uniform preconditioner for harmonic problems with simply supported and clamped boundary.
Introducing a method for mapping recreational experience
DEFF Research Database (Denmark)
Lindholst, Andrej Christian; Dempsey, Nicola; Burton, Mel
2013-01-01
‘…-mapping’, an innovative method of analysing and mapping positive recreational experiences in urban green spaces, is explored and piloted within the UK planning context. Originating in the Nordic countries, this on-site method can provide urban planners and designers with data about the extent to which specific green… spaces provide and support a range of recreational experiences. The exploration reported here is based on a short review of the method's background and an application in two test sites in Sheffield, South Yorkshire in early summer 2010. This paper critically appraises the application of …-mapping at smaller spatial scales and recommends further explorations within the UK planning context, as the method adds to existing open space assessment by providing a unique layer of information to analyse more fully the recreational qualities of urban green spaces.
Comparative characteristic of the methods of protein antigens epitope mapping
Directory of Open Access Journals (Sweden)
O. Yu. Galkin
2014-08-01
A comparative analysis of experimental methods for epitope mapping of protein antigens has been carried out. The vast majority of known techniques involve immunochemical study of the interaction of protein molecules or peptides with antibodies of the corresponding specificity. The most effective and widely applicable techniques are those that use synthetic and genetically engineered peptides. Over the past 30 years, these groups of methods have evolved considerably, up to full automation and the detection of antigenic determinants of various types (linear and conformational epitopes, and mimotopes). Most epitope-searching algorithms have been integrated into computer programs, which greatly facilitates the analysis of experimental data and makes it possible to create spatial models. Comparative epitope mapping can be used for applied problems; this less time-consuming method is based on the analysis of competition between different antibodies interacting with the same antigen. The physical methods for studying antigenic structure are X-ray analysis of antigen-antibody complexes, which may be applied only to crystallizable proteins, and nuclear magnetic resonance.
Conformational analysis and vibrational studies of ethylenediamine-d4, using DFT method
International Nuclear Information System (INIS)
Catikkas, B.
2010-01-01
In this work, conformational analysis and quantum chemical calculations of ethylenediamine-d4 were carried out. The geometry was optimized and the geometric parameters (bond lengths, bond angles and torsion angles) were calculated. The infrared and Raman frequencies of the fundamental modes of the most stable conformer were determined. Calculations were performed using the MPW1PW91/6-311+G(d,p) method with the Gaussian03 and GaussView 3.0 programs. The populations of the conformers were calculated. Vibrational assignments of the title molecule were made using Scaled Quantum Mechanical (SQM) analysis. Calculated values were compared with the experimental ones.
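The conformer-population step can be sketched as a Boltzmann weighting of computed conformer energies; the energy values below are invented for illustration, not taken from the paper.

```python
import math

R = 8.314462618e-3   # gas constant in kJ/(mol*K)
T = 298.15           # temperature in K

def populations(energies_kj_mol):
    # Relative Boltzmann populations of conformers from their
    # computed energies (referenced to the lowest-energy conformer).
    e0 = min(energies_kj_mol)
    w = [math.exp(-(e - e0) / (R * T)) for e in energies_kj_mol]
    s = sum(w)
    return [x / s for x in w]

# Three hypothetical conformer energies in kJ/mol.
p = populations([0.0, 2.5, 4.8])
print([round(x, 3) for x in p])
```

The lowest-energy conformer dominates, and the weights decay exponentially with the energy gap, which is why only the most stable conformer's spectrum is analysed in detail.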
Radiation Source Mapping with Bayesian Inverse Methods
Hykes, Joshua Michael
We present a method to map the spectral and spatial distributions of radioactive sources using a small number of detectors. Locating and identifying radioactive materials is important for border monitoring, accounting for special nuclear material in processing facilities, and in clean-up operations. Most methods to analyze these problems make restrictive assumptions about the distribution of the source. In contrast, the source-mapping method presented here allows an arbitrary three-dimensional distribution in space and a flexible group and gamma peak distribution in energy. To apply the method, the system's geometry and materials must be known. A probabilistic Bayesian approach is used to solve the resulting inverse problem (IP) since the system of equations is ill-posed. The probabilistic approach also provides estimates of the confidence in the final source map prediction. A set of adjoint flux, discrete ordinates solutions, obtained in this work by the Denovo code, are required to efficiently compute detector responses from a candidate source distribution. These adjoint fluxes are then used to form the linear model to map the state space to the response space. The test for the method is simultaneously locating a set of 137Cs and 60Co gamma sources in an empty room. This test problem is solved using synthetic measurements generated by a Monte Carlo (MCNP) model and using experimental measurements that we collected for this purpose. With the synthetic data, the predicted source distributions identified the locations of the sources to within tens of centimeters, in a room with an approximately four-by-four meter floor plan. Most of the predicted source intensities were within a factor of ten of their true value. The chi-square value of the predicted source was within a factor of five from the expected value based on the number of measurements employed. With a favorable uniform initial guess, the predicted source map was nearly identical to the true distribution
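The core linear-Gaussian inverse step can be sketched as follows; the response matrix, measurements, and variances below are invented for illustration (in the paper the rows of A come from adjoint Denovo flux solutions).

```python
# Minimal sketch of the Bayesian linear inverse step: detector
# responses d = A s + noise, with a Gaussian prior on the source
# vector s.  The posterior mean solves the regularized normal
# equations (A^T A / sig2 + I / tau2) s = A^T d / sig2.
A = [[1.0, 0.2],    # rows: detectors, cols: source voxels
     [0.3, 1.1],    # entries play the role of adjoint-flux responses
     [0.7, 0.7]]
d = [1.2, 1.4, 1.4]        # generated from the true source s = (1, 1)
sig2, tau2 = 0.01, 100.0   # noise and prior variances (illustrative)

# Normal equations for the 2-voxel problem, solved in closed form.
m11 = sum(r[0] * r[0] for r in A) / sig2 + 1.0 / tau2
m12 = sum(r[0] * r[1] for r in A) / sig2
m22 = sum(r[1] * r[1] for r in A) / sig2 + 1.0 / tau2
b1 = sum(r[0] * y for r, y in zip(A, d)) / sig2
b2 = sum(r[1] * y for r, y in zip(A, d)) / sig2
det = m11 * m22 - m12 * m12
s = ((m22 * b1 - m12 * b2) / det, (m11 * b2 - m12 * b1) / det)
print(s)   # close to (1.0, 1.0), up to a small prior shrinkage
```

The same posterior covariance machinery (here implicit in the normal-equation matrix) is what supplies the confidence estimates on the final source map.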
The calculations of small molecular conformation energy differences by density functional method
Topol, I. A.; Burt, S. K.
1993-03-01
The differences in the conformational energies of the gauche (G) and trans (T) conformers of 1,2-difluoroethane and of the myo- and scyllo-conformers of inositol have been calculated by the local density functional method (LDF approximation) with geometry optimization, using different sets of calculation parameters. It is shown that, in contrast to Hartree–Fock methods, density functional calculations reproduce the correct sign and magnitude of the gauche effect for 1,2-difluoroethane and of the energy difference between the two conformers of inositol. Normal vibrational analysis for 1,2-difluoroethane showed that harmonic frequencies calculated in the LDF approximation agree with experimental data to the accuracy typical of scaled large-basis-set Hartree–Fock calculations.
Directory of Open Access Journals (Sweden)
Rudy Clausen
2015-09-01
An important goal in molecular biology is to understand functional changes upon single-point mutations in proteins. Doing so through a detailed characterization of structure spaces and underlying energy landscapes is desirable but continues to challenge methods based on Molecular Dynamics. In this paper we propose a novel algorithm, SIfTER, which is based instead on stochastic optimization to circumvent the computational challenge of exploring the breadth of a protein's structure space. SIfTER is a data-driven evolutionary algorithm, leveraging experimentally-available structures of wildtype and variant sequences of a protein to define a reduced search space from where to efficiently draw samples corresponding to novel structures not directly observed in the wet laboratory. The main advantage of SIfTER is its ability to rapidly generate conformational ensembles, thus allowing mapping and juxtaposing landscapes of variant sequences and relating observed differences to functional changes. We apply SIfTER to variant sequences of the H-Ras catalytic domain, due to the prominent role of the Ras protein in signaling pathways that control cell proliferation, its well-studied conformational switching, and abundance of documented mutations in several human tumors. Many Ras mutations are oncogenic, but detailed energy landscapes have not been reported until now. Analysis of SIfTER-computed energy landscapes for the wildtype and two oncogenic variants, G12V and Q61L, suggests that these mutations cause constitutive activation through two different mechanisms. G12V directly affects binding specificity while leaving the energy landscape largely unchanged, whereas Q61L has pronounced, starker effects on the landscape. An implementation of SIfTER is made available at http://www.cs.gmu.edu/~ashehu/?q=OurTools. We believe SIfTER is useful to the community to answer the question of how sequence mutations affect the function of a protein, when there is an
Conformal growth method of ferroelectric materials for multifunctional composites
Bowland, Christopher Charles
Multifunctional composites are the next generation of composites and aim to simultaneously meet multiple performance objectives to create system-level performance enhancements. Current fiber-reinforced composites have offered improved efficiency and performance through weight reduction and increased strength. However, these composites satisfy singular performance objectives. Therefore, the concept of multifunctional composites was developed as an approach to create components in a system that serve multiple functions. These composites aim to reduce the required components in a system by integrating unifunctional components together, thus reducing the weight and complexity of the system as a whole. This work offers an approach to create multifunctional composites through the development of a structural, multifunctional fiber. This is achieved by synthesizing a ferroelectric material on the surface of carbon fiber. In this work, a two-step hydrothermal reaction is developed for synthesizing a conformal film of barium titanate (BaTiO3) on the surface of carbon fiber. A fundamental understanding of this hydrothermal process is developed on planar substrates, leading to processing parameters that result in epitaxial-type growth of highly aligned BaTiO3 nanowires. This work establishes the hydrothermal reaction as a powerful synthesis technique for generating nanostructured BaTiO3 on carbon fiber, creating a novel, multifunctional fiber. A reaction optimization process leads to the development of parameters that stabilize tetragonal-phase BaTiO3 without the need for subsequent heat treatments. The application potential of these fibers is illustrated with both single fibers and woven fabrics. Single fiber cantilever beams are fabricated and subjected to vibrations to determine their voltage output, with the ultimate goal of producing an air flow sensor. Carbon fiber reinforced composite integration is carried out by scaling up the hydrothermal reaction to
M. Rosa-Garrido (Manuel); Chapski, D.J. (Douglas J.); Schmitt, A.D. (Anthony D.); Kimball, T.H. (Todd H.); Karbassi, E. (Elaheh); Monte, E. (Emma); Balderas, E. (Enrique); Pellegrini, M. (Matteo); Shih, T.-T. (Tsai-Ting); Soehalim, E. (Elizabeth); D.A. Liem (David); Ping, P. (Peipei); N.J. Galjart (Niels); Ren, S. (Shuxun); Wang, Y. (Yibin); Ren, B. (Bing); Vondriska, T.M. (Thomas M.)
2017-01-01
BACKGROUND: Cardiovascular disease is associated with epigenomic changes in the heart; however, the endogenous structure of cardiac myocyte chromatin has never been determined. METHODS: To investigate the mechanisms of epigenomic function in the heart, genome-wide chromatin conformation
Zheng, Yongjie
2012-01-01
Software architecture plays an increasingly important role in complex software development. Its further application, however, is challenged by the fact that software architecture, over time, is often found not conformant to its implementation. This is usually caused by frequent development changes made to both artifacts. Against this background,…
Energy Technology Data Exchange (ETDEWEB)
Jaidane, S [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires
1968-04-01
These two methods allow the determination of the pole shape in magnets for a given field distribution in the air-gap. First method: the principle is to create the desired field law by means of current sheets whose density, expressed in polynomial form, can be adjusted. For a suitable distribution of these currents, the equipotential corresponding to the magnetic potential of the excitation coils is calculated. The pole profile of the H or C magnet, identified with this equipotential line, finally takes the place of the distribution of current sheets used in the calculation. The steel permeability is assumed to be infinite, and eddy-current effects are neglected in the case of variable fields. Second method: it consists in finding a conformal representation that maps the pole-profile plane onto the upper half of another plane where the equipotentials are two half-lines and the field problems are easier to solve. The steel permeability is again taken to be infinite and the coils are assumed to be far from the pole faces. This known method has been applied for comparison with the first one. (author)
Method and system for a network mapping service
Bynum, Leo
2017-10-17
A method and system for publishing a map includes providing access to a plurality of map data files or mapping services between at least one publisher and at least one subscriber; defining a map in a map context comprising parameters and descriptors sufficient to substantially duplicate the map by reference to mutually accessible data or mapping services; publishing the map to a channel in a table file on a server; accessing the channel by at least one subscriber; transmitting the map context from the server to the at least one subscriber; executing the map context by the at least one subscriber; and generating the map on display software associated with the at least one subscriber by reconstituting the map from the references and other data in the map context.
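A minimal sketch of the publish/subscribe flow described above, with invented field and channel names: the map travels as a lightweight "map context" of parameters plus references to mutually accessible services, not as rendered data.

```python
import json

def publish(channel_table, channel, layers, center, zoom):
    # Publisher side: serialize a map context (references + view
    # parameters) into a channel of the server's table.
    ctx = {"layers": layers, "center": center, "zoom": zoom}
    channel_table[channel] = json.dumps(ctx)

def subscribe(channel_table, channel):
    # Subscriber side: fetch and execute the map context.  A real
    # subscriber would now resolve each referenced service and
    # reconstitute the map in its display software.
    return json.loads(channel_table[channel])

server = {}   # stands in for the table file on the server
publish(server, "ops-briefing",
        layers=["wms://example/basemap", "wms://example/weather"],
        center=(38.9, -77.0), zoom=7)
ctx = subscribe(server, "ops-briefing")
print(ctx["zoom"], len(ctx["layers"]))   # 7 2
```

Because only references and parameters cross the wire, both sides must have access to the same underlying data or services, exactly as the claim requires.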
Directory of Open Access Journals (Sweden)
Gómez Marcela
2009-12-01
Background: Expressed sequence tags (ESTs) are an important source of gene-based markers such as those based on insertion-deletions (Indels) or single-nucleotide polymorphisms (SNPs). Several gel-based methods have been reported for the detection of sequence variants; however, they have not been widely exploited in common bean, an important legume crop of the developing world. The objectives of this project were to develop and map EST-based markers using analysis of single strand conformation polymorphisms (SSCPs), to create a transcript map for common bean, and to compare synteny of the common bean map with sequenced chromosomes of other legumes. Results: A set of 418 EST-based amplicons were evaluated for parental polymorphisms using the SSCP technique, and 26% of these presented a clear conformational or size polymorphism between Andean and Mesoamerican genotypes. The amplicon-based markers were then used for genetic mapping, with segregation analysis performed in the DOR364 × G19833 recombinant inbred line (RIL) population. A total of 118 new marker loci were placed into an integrated molecular map for common bean consisting of 288 markers. Of these, 218 were used for synteny analysis and 186 presented homology with segments of the soybean genome with an e-value lower than 7 × 10^-12. The synteny analysis with soybean showed a mosaic pattern of syntenic blocks, with most segments of any one common bean linkage group associated with two soybean chromosomes. The analysis with Medicago truncatula and Lotus japonicus presented fewer syntenic regions, consistent with the more distant phylogenetic relationship between the galegoid and phaseoloid legumes. Conclusion: The SSCP technique is a useful and inexpensive alternative to other SNP or Indel detection techniques for saturating the common bean genetic map with functional markers that may be useful in marker-assisted selection. In addition, the genetic markers based on ESTs allowed the construction
Galeano, Carlos H; Fernández, Andrea C; Gómez, Marcela; Blair, Matthew W
2009-12-23
Expressed sequence tags (ESTs) are an important source of gene-based markers such as those based on insertion-deletions (Indels) or single-nucleotide polymorphisms (SNPs). Several gel-based methods have been reported for the detection of sequence variants; however, they have not been widely exploited in common bean, an important legume crop of the developing world. The objectives of this project were to develop and map EST-based markers using analysis of single strand conformation polymorphisms (SSCPs), to create a transcript map for common bean and to compare synteny of the common bean map with sequenced chromosomes of other legumes. A set of 418 EST-based amplicons were evaluated for parental polymorphisms using the SSCP technique and 26% of these presented a clear conformational or size polymorphism between Andean and Mesoamerican genotypes. The amplicon-based markers were then used for genetic mapping with segregation analysis performed in the DOR364 × G19833 recombinant inbred line (RIL) population. A total of 118 new marker loci were placed into an integrated molecular map for common bean consisting of 288 markers. Of these, 218 were used for synteny analysis and 186 presented homology with segments of the soybean genome with an e-value lower than 7 × 10^-12. The synteny analysis with soybean showed a mosaic pattern of syntenic blocks with most segments of any one common bean linkage group associated with two soybean chromosomes. The analysis with Medicago truncatula and Lotus japonicus presented fewer syntenic regions consistent with the more distant phylogenetic relationship between the galegoid and phaseoloid legumes. The SSCP technique is a useful and inexpensive alternative to other SNP or Indel detection techniques for saturating the common bean genetic map with functional markers that may be useful in marker assisted selection. In addition, the genetic markers based on ESTs allowed the construction of a transcript map and given their high conservation
Case studies: Soil mapping using multiple methods
Petersen, Hauke; Wunderlich, Tina; Hagrey, Said A. Al; Rabbel, Wolfgang; Stümpel, Harald
2010-05-01
Soil is a non-renewable resource with fundamental functions such as filtering (e.g. water), storing (e.g. carbon), transforming (e.g. nutrients) and buffering (e.g. contamination). The degradation of soils is by now a well-known fact not only to scientists; decision makers in politics have also accepted it as a serious problem for several environmental aspects. National and international authorities have already worked out preservation and restoration strategies for soil degradation, though how to put these strategies into practice is still a matter of active research. Common to all strategies, however, is that a description of soil state and dynamics is required as a basic step. This includes collecting information from soils with methods ranging from direct soil sampling to remote applications. At an intermediate scale, mobile geophysical methods are applied, with the advantage of fast working progress but the disadvantage of site-specific calibration and interpretation issues. In the framework of the iSOIL project we present here some case studies of soil mapping performed using multiple geophysical methods. We present examples of combined field measurements with EMI, GPR, magnetic and gamma-spectrometric techniques carried out with the mobile multi-sensor system of Kiel University (Germany). Depending on soil type and actual environmental conditions, different methods yield information of different quality. By applying diverse methods we want to determine which method, or combination of methods, gives the most reliable information concerning soil state and properties. To investigate the influence of varying material we performed mapping campaigns on field sites with sandy, loamy and loessy soils. Classification of measured or derived attributes shows not only the lateral variability but also gives hints of variation in the vertical distribution of soil material. For all soils, of course, soil water content can be a critical factor concerning a successful
Zu, Ying; Mandelbaum, Rachel
2018-05-01
Recent studies suggest that the quenching properties of galaxies are correlated over several megaparsecs. The large-scale `galactic conformity' phenomenon around central galaxies has been regarded as a potential signature of `galaxy assembly bias' or `pre-heating', both of which interpret conformity as a result of direct environmental effects acting on galaxy formation. Building on the iHOD halo quenching framework developed in Zu and Mandelbaum, we discover that our fiducial halo mass quenching model, without any galaxy assembly bias, can successfully explain the overall environmental dependence and the conformity of galaxy colours in Sloan Digital Sky Survey, as measured by the mark correlation functions of galaxy colours and the red galaxy fractions around isolated primaries, respectively. Our fiducial iHOD halo quenching mock also correctly predicts the differences in the spatial clustering and galaxy-galaxy lensing signals between the more versus less red galaxy subsamples, split by the red-sequence ridge line at fixed stellar mass. Meanwhile, models that tie galaxy colours fully or partially to halo assembly bias have difficulties in matching all these observables simultaneously. Therefore, we demonstrate that the observed environmental dependence of galaxy colours can be naturally explained by the combination of (1) halo quenching and (2) the variation of halo mass function with environment - an indirect environmental effect mediated by two separate physical processes.
Zhu, D.; Zhu, H.; Luo, Y.; Chen, X.
2008-12-01
We use a new finite difference method (FDM) and the slip-weakening law to model the rupture dynamics of a non-planar fault embedded in a 3-D elastic medium with a free surface. The new FDM, based on a boundary-conforming grid, sets up the mapping equations between the curvilinear and Cartesian coordinates and transforms the irregular physical space to a regular computational space; it also employs a higher-order non-staggered DRP/opt MacCormack scheme of low dispersion and low dissipation, so that the high accuracy and stability of our rupture modeling are guaranteed. Compared with previous methods, we can not only compute the spontaneous rupture of an arbitrarily shaped fault but also model the influence of surface topography on the rupture process of an earthquake. To verify the feasibility of this method, we compared our results with previous results and found that they matched perfectly. Thanks to the boundary-conforming FDM, problems such as dynamic rupture with arbitrary dip, strike and rake over an arbitrarily curved plane can be handled, and supershear or subshear rupture can be simulated with different parameters such as the initial stresses and the critical slip displacement Dc. Besides, our rupture modeling is economical to implement owing to its high efficiency and does not suffer from displacement leakage. With the help of rupture inversions from field observations, this method is convenient for modeling the rupture processes and seismograms of natural earthquakes.
Modelling of bow-tie microstrip antennas using modified locally conformal FDTD method
George, J.
2000-01-01
An analysis of bow-tie microstrip antennas is presented based on the use of the modified locally conformal finite-difference time-domain (FDTD) method. This approach enables the number of cells along the antenna length and width to be chosen independently of the antenna central width, which helps to
Brand, B; Baes, C; Mayer, M; Reinsch, N; Seidenspinner, T; Thaller, G; Kühn, Ch
2010-03-01
Linkage, linkage disequilibrium, and combined linkage and linkage disequilibrium analyses were performed to map quantitative trait loci (QTL) affecting calving and conformation traits on Bos taurus autosome 18 (BTA18) in the German Holstein population. Six paternal half-sib families consisting of a total of 1,054 animals were genotyped on 28 genetic markers in the telomeric region on BTA18 spanning approximately 30 Mb. Calving traits, body type traits, and udder type traits were investigated. Using univariately estimated breeding values, maternal and direct effects on calving ease and stillbirth were analyzed separately for first- and further-parity calvings. The QTL initially identified by separate linkage and linkage disequilibrium analyses could be confirmed by a combined linkage and linkage disequilibrium analysis for udder composite index, udder depth, fore udder attachment, front teat placement, body depth, rump angle, and direct effects on calving ease and stillbirth. Concurrence of QTL peaks and a similar shape of restricted log-likelihood ratio profiles were observed between udder type traits and for body depth and calving traits, respectively. Association analyses were performed for markers flanking the most likely QTL positions by applying a mixed model including a fixed allele effect of the maternally inherited allele and a random polygenic effect. Results indicated that microsatellite marker DIK4234 (located at 53.3 Mb) is associated with maternal effects on stillbirth, direct effects on calving ease, and body depth. A comparison of effects for maternally inherited DIK4234 alleles indicated a favorable, positive correlation of maternal and direct effects on calving. Additionally, the association of maternally inherited DIK4234 marker alleles with body depth implied that conformation traits might provide the functional background of the QTL for calving traits. For udder type traits, the strong coincidence of QTL peaks and the position of the QTL in a
CrowdMapping: A Crowdsourcing-Based Terminology Mapping Method for Medical Data Standardization.
Mao, Huajian; Chi, Chenyang; Huang, Boyu; Meng, Haibin; Yu, Jinghui; Zhao, Dongsheng
2017-01-01
Standardized terminology is the prerequisite of data exchange in the analysis of clinical processes. However, data from different electronic health record systems are based on idiosyncratic terminology systems, especially when the data come from different hospitals and healthcare organizations. Terminology standardization is therefore necessary for medical data analysis. We propose a crowdsourcing-based terminology mapping method, CrowdMapping, to standardize the terminology in medical data. CrowdMapping uses a confidence model to determine how terminologies are mapped to a standard system, such as ICD-10. The model uses mappings from different health care organizations and evaluates the diversity of the mappings to derive a more sophisticated mapping rule. Further, the CrowdMapping model enables users to rate the mapping result and interact with the model evaluation. CrowdMapping is a work-in-progress system; we present initial results of mapping terminologies.
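The aggregation idea behind such a crowdsourced mapping can be sketched as a majority vote with an agreement score. This is a minimal illustration, not CrowdMapping's actual model; the local terms, the ICD-10 codes, and the scoring rule are all invented for the example.

```python
from collections import Counter

# Several (hypothetical) organizations each propose an ICD-10 code for a
# local term; codes and terms below are invented for illustration.
proposals = {
    'heart attack': ['I21', 'I21', 'I25', 'I21'],
    'sugar sickness': ['E11', 'E10', 'E11'],
}

def aggregate(votes):
    """Keep the majority code and report the fraction of agreement."""
    counts = Counter(votes)
    code, n = counts.most_common(1)[0]
    agreement = n / len(votes)   # crude stand-in for a confidence score
    return code, agreement

mapping = {term: aggregate(v) for term, v in proposals.items()}
```

A low agreement value would flag a term whose mapping rule needs human review, mirroring the diversity evaluation described in the abstract.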
Directory of Open Access Journals (Sweden)
Jiang Hualiang
2010-11-01
Full Text Available Abstract Background Conformational sampling for small molecules plays an essential role in the drug discovery research pipeline. Based on a multi-objective evolutionary algorithm (MOEA), we developed a conformational generation method called Cyndi in a previous study. In this work, in addition to the Tripos force field of the previous version, Cyndi was updated by incorporating the MMFF94 force field to assess the conformational energy more rationally. With the two force fields, and against a larger dataset of 742 bioactive conformations of small ligands extracted from the PDB, a comparative analysis was performed between the pure force-field-based method (FFBM) and the multiple-empirical-criteria-based method (MECBM) hybridized with the different force fields. Results Our analysis reveals that incorporating multiple empirical rules can significantly improve the accuracy of conformational generation. MECBM, which takes both empirical and force field criteria as the objective functions, can reproduce about 54% (within 1 Å RMSD) of the bioactive conformations in the 742-molecule test set, much higher than the pure force field method (FFBM, about 37%). On the other hand, MECBM achieved a more complete and efficient sampling of the conformational space, because the average size of the unique-conformation ensemble per molecule is about 6 times larger than that of FFBM, while the time needed for conformational generation is nearly the same as for FFBM. Furthermore, as a complementary comparison between the methods with and without empirical biases, we also tested the performance of the three conformational generation methods in MacroModel in combination with different force fields. Compared with the methods in MacroModel, MECBM is more competitive in retrieving the bioactive conformations in terms of accuracy but has a much lower computational cost. Conclusions By incorporating different energy terms with several empirical criteria, the MECBM method can produce more reasonable conformational
Further results for crack-edge mappings by ray methods
International Nuclear Information System (INIS)
Norris, A.N.; Achenbach, J.D.; Ahlberg, L.; Tittman, B.R.
1984-01-01
This chapter discusses further extensions of the local edge mapping method to the pulse-echo case and to configurations of water-immersed specimens and transducers. Crack edges are mapped by the use of arrival times of edge-diffracted signals. Topics considered include local edge mapping in a homogeneous medium, local edge mapping algorithms, local edge mapping through an interface, and edge mapping through an interface using synthetic data. Local edge mapping is iterative, with two or three iterations required for convergence
Mapping the Conformational Dynamics of E-selectin upon Interaction with its Ligands
Aleisa, Fajr A
2013-05-15
Selectins are key adhesion molecules responsible for initiating a multistep process that leads a cell out of the blood circulation and into a tissue or organ. The adhesion of cells (expressing ligands) to the endothelium (expressing the selectin, i.e., E-selectin) occurs through spatio-temporally regulated interactions that are mediated by multiple intra- and inter-cellular components. The mechanism of cell adhesion has been investigated primarily using ensemble-based experiments, which provide only indirect information about how individual molecules work in such a complex system. Recent developments in single-molecule (SM) fluorescence detection allow the visualization of individual molecules with good spatio-temporal resolution (nanometer spatial resolution and millisecond time resolution). Furthermore, advanced SM fluorescence techniques such as Förster Resonance Energy Transfer (FRET) and super-resolution microscopy provide unique opportunities to obtain information about the nanometer-scale conformational dynamics of proteins as well as the nano-scale architecture of biological samples. The state-of-the-art SM techniques are therefore powerful tools for investigating complex biological systems such as the mechanism of cell adhesion. In this project, several constructs of fluorescently labeled E-selectin will be used to study the conformational dynamics of E-selectin binding to its ligand(s) using SM-FRET and a combination of SM-FRET and force microscopy. These studies will be beneficial for fully understanding the mechanistic details of cell adhesion and migration, using the established model system of hematopoietic stem cell (HSC) adhesion to selectin-expressing endothelial cells (such as the E-selectin-expressing endothelial cells in the bone marrow).
Mapcurves: a quantitative method for comparing categorical maps.
William W. Hargrove; M. Hoffman Forrest; Paul F. Hessburg
2006-01-01
We present Mapcurves, a quantitative goodness-of-fit (GOF) method that unambiguously shows the degree of spatial concordance between two or more categorical maps. Mapcurves graphically and quantitatively evaluates the degree of fit among any number of maps and quantifies a GOF for each polygon, as well as for the entire map. The Mapcurves method indicates a perfect fit even if...
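The core Mapcurves score can be sketched on raster data: for each category of one map, every overlap with a category of the other map is weighted by the overlap fraction in both maps, so a map compared with itself scores a perfect 1.0. This is a minimal sketch assuming the published GOF formula, not the authors' code, and the toy raster is invented.

```python
import numpy as np

def mapcurves_gof(map_a, map_b):
    """Per-category goodness of fit between two categorical rasters.

    For each category c of map_a, sums (overlap/area_c) * (overlap/area_d)
    over all categories d of map_b.
    """
    gof = {}
    for c in np.unique(map_a):
        in_c = (map_a == c)
        area_c = in_c.sum()
        total = 0.0
        for d in np.unique(map_b):
            in_d = (map_b == d)
            overlap = (in_c & in_d).sum()
            total += (overlap / area_c) * (overlap / in_d.sum())
        gof[c] = total
    return gof

# Tiny invented raster; comparing a map with itself yields GOF = 1 per category.
demo = np.array([[0, 0, 1],
                 [0, 1, 1]])
gof_self = mapcurves_gof(demo, demo)
```

Because the score is built from overlap fractions only, it is independent of the category labels and of the total map resolution, which is what makes the comparison "unambiguous" across maps with different legends.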
Frank, Martin
2015-01-01
Complex carbohydrates usually have a large number of rotatable bonds and consequently a large number of theoretically possible conformations can be generated (combinatorial explosion). The application of systematic search methods for conformational analysis of carbohydrates is therefore limited to disaccharides and trisaccharides in a routine analysis. An alternative approach is to use Monte-Carlo methods or (high-temperature) molecular dynamics (MD) simulations to explore the conformational space of complex carbohydrates. This chapter describes how to use MD simulation data to perform a conformational analysis (conformational maps, hydrogen bonds) of oligosaccharides and how to build realistic 3D structures of large polysaccharides using Conformational Analysis Tools (CAT).
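A conformational map of the kind described can be built from MD data by histogramming a pair of torsion angles and converting occupancy to a relative free energy. The sketch below uses synthetic random angles in place of a real trajectory; the angle distributions and the 300 K energy scale are illustrative, not taken from CAT.

```python
import numpy as np

# Hypothetical glycosidic phi/psi time series (degrees); random data stands
# in for angles extracted from an MD trajectory.
rng = np.random.default_rng(0)
phi = rng.normal(-60.0, 15.0, 5000)
psi = rng.normal(60.0, 20.0, 5000)

# Conformational map: 2-D histogram over torsion space, converted to a
# relative free energy F = -kT ln(P / Pmax); kT ~ 2.494 kJ/mol at 300 K.
counts, xedges, yedges = np.histogram2d(
    phi, psi, bins=36, range=[[-180, 180], [-180, 180]])
kT = 2.494
with np.errstate(divide='ignore'):
    free_energy = -kT * np.log(counts / counts.max())
```

Unvisited bins end up at infinite free energy, and the most populated bin defines the zero of the map, so minima on `free_energy` correspond directly to the preferred linkage conformations.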
A method to correct coordinate distortion in EBSD maps
International Nuclear Information System (INIS)
Zhang, Y.B.; Elbrønd, A.; Lin, F.X.
2014-01-01
Drift during electron backscatter diffraction mapping leads to coordinate distortions in the resulting orientation maps, which affect, in some cases significantly, the accuracy of analysis. A method, the thin plate spline, is introduced and tested to correct such coordinate distortions in the maps after the electron backscatter diffraction measurements. The accuracy of the correction as well as theoretical and practical aspects of using the thin plate spline method are discussed in detail. By comparison with other correction methods, it is shown that the thin plate spline method is the most efficient at correcting different local distortions in electron backscatter diffraction maps. - Highlights: • A new method is suggested to correct nonlinear spatial distortion in EBSD maps. • The method corrects EBSD maps more precisely than presently available methods. • Errors of less than 1–2 pixels are typically obtained. • Direct quantitative analysis of dynamic data is possible after this correction
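A thin-plate-spline correction of this kind can be sketched with SciPy's radial basis function interpolator: fit the spline on control points whose distorted and true positions are both known, then evaluate it on every pixel coordinate of the map. The control-point coordinates below are invented, and this uses SciPy's generic implementation rather than the authors' own.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Control points: positions measured in the distorted EBSD map and their
# known true positions (e.g. from a reference image); values are invented.
distorted = np.array([[0., 0.], [10., 1.], [1., 10.], [11., 12.], [5., 6.]])
true_pos  = np.array([[0., 0.], [10., 0.], [0., 10.], [10., 10.], [4.5, 5.]])

# Thin-plate-spline map from distorted to true coordinates (x and y at once).
tps = RBFInterpolator(distorted, true_pos, kernel='thin_plate_spline')

# Apply the correction to every pixel coordinate of a 12 x 12 map.
grid = np.stack(np.meshgrid(np.arange(12.), np.arange(12.)),
                axis=-1).reshape(-1, 2)
corrected = tps(grid)
```

Because the thin plate spline interpolates the control points exactly while minimizing bending energy in between, it can absorb the smooth local drift distortions the abstract describes without needing a global affine model.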
A time-delayed method for controlling chaotic maps
International Nuclear Information System (INIS)
Chen Maoyin; Zhou Donghua; Shang Yun
2005-01-01
Combining the repetitive learning strategy and the optimality principle, this Letter proposes a time-delayed method to control chaotic maps. This method can effectively stabilize unstable periodic orbits within chaotic attractors in the sense of least mean square. Numerical simulations of some chaotic maps verify the effectiveness of this method
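The general idea of time-delayed control of a map can be sketched with a Pyragas-style feedback term u_n = K(x_n - x_(n-1)) added to the chaotic logistic map: the term vanishes on the target orbit, so the stabilized fixed point is that of the original map. This is an illustration of delayed feedback in general, not the Letter's specific least-mean-square scheme; the gain K = 0.7 comes from a rough linear stability estimate at r = 3.9.

```python
# Chaotic logistic map x -> r x (1 - x) with delayed feedback K (x_n - x_{n-1}).
r, K = 3.9, 0.7
x_prev = x = 0.7
for _ in range(1000):
    x_prev, x = x, r * x * (1.0 - x) + K * (x - x_prev)

# Unstable fixed point of the uncontrolled map, x* = 1 - 1/r.
x_star = 1.0 - 1.0 / r
```

At K = 0 the orbit wanders chaotically over the attractor; with the feedback switched on it spirals into x*, and the control signal K(x_n - x_(n-1)) decays to zero, which is the hallmark of a non-invasive delayed-feedback stabilization.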
Lu, Chao; Li, Xubin; Wu, Dongsheng; Zheng, Lianqing; Yang, Wei
2016-01-12
In aqueous solution, solute conformational transitions are governed by an intimate interplay of the fluctuations of solute-solute, solute-water, and water-water interactions. To promote molecular fluctuations and enhance the sampling of essential conformational changes, a common strategy is to construct an expanded Hamiltonian through a series of Hamiltonian perturbations and thereby broaden the distribution of certain interactions of focus. Due to the lack of active sampling of the configurational response to Hamiltonian transitions, it is challenging for common expanded Hamiltonian methods to robustly explore solvent-mediated rare conformational events. The orthogonal space sampling (OSS) scheme, as exemplified by the orthogonal space random walk and orthogonal space tempering methods, provides a general framework for the synchronous acceleration of slow configurational responses. To sample conformational transitions in aqueous solution more effectively, in this work we devised a generalized orthogonal space tempering (gOST) algorithm. Specifically, in the Hamiltonian perturbation part, a solvent-accessible-surface-area-dependent term is introduced to implicitly perturb near-solute water-water fluctuations; more importantly, in the orthogonal space response part, the generalized force order parameter is extended to a two-dimensional order parameter set, in which the essential solute-solvent and solute-solute components are treated separately. The gOST algorithm is evaluated through a molecular dynamics simulation study of the explicitly solvated deca-alanine (Ala10) peptide. On the basis of a fully automated sampling protocol, the gOST simulation enabled repetitive folding and unfolding of the solvated peptide within a single continuous trajectory and allowed detailed construction of the Ala10 folding/unfolding free energy surfaces. The gOST result reveals that solvent cooperative fluctuations play a pivotal role in Ala10 folding/unfolding transitions. In addition, our assessment
Functional methods and mappings of dissipative quantum systems
International Nuclear Information System (INIS)
Baur, H.
2006-01-01
In the first part of this work we extract the algebraic structure behind the method of the influence functional in the context of dissipative quantum mechanics. Special emphasis is put on the transition from a quantum mechanical description to a classical one, since it allows a deeper understanding of the measurement process. This is tightly connected with the transition from a microscopic to a macroscopic world, where the former is described by the rules of quantum mechanics whereas the latter follows the rules of classical mechanics. In addition we show how the results of the influence functional method can be interpreted as a stochastic process, which in turn allows an easy comparison with the well-known time development of a quantum mechanical system by use of the Schroedinger equation. In the following we examine the tight-binding approximation of models whose Hamiltonian shows discrete eigenstates in position space and in which transitions between those states are suppressed, so that propagation is described either by tunneling or by thermal activation. In the framework of dissipative quantum mechanics this leads to a tremendous simplification of the effective description of the system: instead of looking at the full history of all paths in the path integral description, we only have to look at all possible jump times and the corresponding sets of weights for the jump directions, which is much easier to handle both analytically and numerically. In addition we deal with the mapping and the connection of dissipative quantum mechanical models to models in quantum field theory, and in particular in statistical field theory. As an example we mention conformal invariance in two dimensions, which always becomes relevant if a statistical system has only local interactions and is invariant under scaling. (orig.)
A multi-scale method of mapping urban influence
Timothy G. Wade; James D. Wickham; Nicola Zacarelli; Kurt H. Riitters
2009-01-01
Urban development can impact environmental quality and ecosystem services well beyond urban extent. Many methods to map urban areas have been developed and used in the past, but most have simply tried to map existing extent of urban development, and all have been single-scale techniques. The method presented here uses a clustering approach to look beyond the extant...
Ivanov, Ilia N [Knoxville, TN; Simpson, John T [Clinton, IN
2012-01-24
A method of making a large area conformable shape structure comprises drawing a plurality of tubes to form a plurality of drawn tubes, and cutting the plurality of drawn tubes into cut drawn tubes of a predetermined shape. The cut drawn tubes have a first end and a second end along the longitudinal direction of the cut drawn tubes. The method further comprises conforming the first end of the cut drawn tubes into a predetermined curve to form the large area conformable shape structure, wherein the cut drawn tubes contain a material.
Backtracking Method of Coloring Administrative Maps Considering Visual Perception Rules
Directory of Open Access Journals (Sweden)
WEI Zhiwei
2018-03-01
Full Text Available Color design in administrative maps should incorporate and balance area configuration, color harmony, and users' purposes. Based on visual perception rules, this paper quantifies color harmony, color contrast and perceptual balance in the coloring of administrative maps, and a model is suggested to evaluate the coloring quality after a color template is selected. A backtracking method based on area balance is then proposed to assign colors to the areas. Experiments show that this method meets the visual perception rules well when coloring administrative maps, and can be used in later map design.
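The backtracking step can be sketched as classic constraint-based map coloring: try colors for each region in turn, reject any choice that clashes with an already-colored neighbour, and backtrack when no color fits. A simple "prefer lower color index" rule stands in for the paper's perceptual scoring, and the adjacency data is invented.

```python
# Invented adjacency of four administrative regions; neighbours must differ.
adjacency = {
    'A': ['B', 'C'],
    'B': ['A', 'C', 'D'],
    'C': ['A', 'B', 'D'],
    'D': ['B', 'C'],
}
colors = {}

def backtrack(regions):
    """Assign one of four colors to each region by depth-first backtracking."""
    if not regions:
        return True
    region, rest = regions[0], regions[1:]
    for color in range(4):          # lower index = preferred template color
        if all(colors.get(nb) != color for nb in adjacency[region]):
            colors[region] = color
            if backtrack(rest):
                return True
            del colors[region]      # undo and try the next color
    return False

backtrack(list(adjacency))
```

In the paper's setting the loop over candidate colors would be ordered by the harmony/contrast/balance score of the partial coloring rather than by index, but the backtracking skeleton is the same.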
Optimization of the uniformity of a metal flow during continuous extrusion by the Conform method
Lyubanova, A. Sh.; Gorokhov, Yu. V.; Solopko, I. V.; Ziborov, A. Yu.
2010-03-01
The scheme of plastic deformation of a billet in a container is considered as part of continuous extrusion by the Conform method. A mathematical model of the motion of a viscoplastic Bingham liquid is used to determine the metal velocity distribution in the plastic-deformation zone. As a result, the optimum angle between the longitudinal axes of the die and container is estimated. This angle is found to be one of the main factors affecting the nonuniformity of deformation when a metal flows into the die. The calculated results are compared to experimental data.
Mapped Fourier Methods for stiff problems in toroidal geometry
Guillard , Herve
2014-01-01
Fourier spectral or pseudo-spectral methods are usually extremely efficient for periodic problems. However, this efficiency is lost if the solutions have zones of rapid variation or internal layers. For these cases, a large number of Fourier modes is required, and this makes the Fourier method impractical in many cases. This work investigates the use of the mapped Fourier method as a way to circumvent this problem. The mapped Fourier method uses, instead of the usual Fourier interpolant, the compositio...
A method to correct coordinate distortion in EBSD maps
DEFF Research Database (Denmark)
Zhang, Yubin; Elbrønd, Andreas Benjamin; Lin, Fengxiang
2014-01-01
Drift during electron backscatter diffraction mapping leads to coordinate distortions in resulting orientation maps, which affects, in some cases significantly, the accuracy of analysis. A method, thin plate spline, is introduced and tested to correct such coordinate distortions in the maps after the electron backscatter diffraction measurements. The accuracy of the correction as well as theoretical and practical aspects of using the thin plate spline method is discussed in detail. By comparing with other correction methods, it is shown that the thin plate spline method is most efficient to correct...
Janssen, Bä rbel; Kanschat, Guido
2011-01-01
A multilevel method on adaptive meshes with hanging nodes is presented, and the additional matrices appearing in the implementation are derived. Smoothers of overlapping Schwarz type are discussed; smoothing is restricted to the interior of the subdomains refined to the current level; thus it has optimal computational complexity. When applied to conforming finite element discretizations of elliptic problems and Maxwell equations, the method's convergence rates are very close to those for the nonadaptive version. Furthermore, the smoothers remain efficient for high order finite elements. We discuss the implementation in a general finite element code using the example of the deal.II library. © 2011 Society for Industrial and Applied Mathematics.
Directory of Open Access Journals (Sweden)
Chao-Yie Yang
Full Text Available The interleukin-1 receptor (IL-1R) is the founding member of the interleukin 1 receptor family, which activates the innate immune response by its binding to cytokines. Reports have shown that dysregulation of cytokine production leads to aberrant immune cell activation, which contributes to auto-inflammatory disorders and diseases. Current therapeutic strategies focus on utilizing antibodies or chimeric cytokine biologics. The large protein-protein interaction interface between cytokine receptor and cytokine poses a challenge in identifying binding sites for small-molecule inhibitor development. Based on the significant conformational change of the IL-1R type 1 (IL-1R1) ectodomain upon binding to different ligands observed in crystal structures, we hypothesized that transient small-molecule binding sites may exist when IL-1R1 undergoes conformational transitions and are thus suitable for inhibitor development. Here, we employed accelerated molecular dynamics (MD) simulation to efficiently sample the conformational space of the IL-1R1 ectodomain. Representative IL-1R1 ectodomain conformations determined from hierarchical cluster analysis were analyzed with the SiteMap program, which led to the identification of small-molecule binding sites at the protein-protein interaction interface and of allosteric modulator locations. A cosolvent mapping analysis using phenol as the probe molecule further confirms the allosteric modulator site as a binding hotspot. The eight highest-ranked fragment molecules identified from in silico screening at the modulator site were evaluated by MD simulations. Four of them restricted the IL-1R1 dynamical motion to the inactive conformational space. The strategy from this study, subject to in vitro experimental validation, can be useful for identifying small-molecule compounds that target the allosteric modulator sites of IL-1R and prevent IL-1R from binding to cytokines by trapping IL-1R in inactive conformations.
Linear Algebraic Method for Non-Linear Map Analysis
International Nuclear Information System (INIS)
Yu, L.; Nash, B.
2009-01-01
We present a newly developed method to analyze some non-linear dynamics problems such as the Henon map using a matrix analysis method from linear algebra. Choosing the Henon map as an example, we analyze the spectral structure, the tune-amplitude dependence, the variation of tune and amplitude during the particle motion, etc., using the method of Jordan decomposition which is widely used in conventional linear algebra.
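A small companion calculation illustrates the starting point of such a matrix analysis: locate a fixed point of the map and read the local behaviour off the eigenvalues of the Jacobian there. The sketch below uses the classic dissipative Hénon map with the standard chaotic parameters a = 1.4, b = 0.3, which are illustrative and not necessarily the paper's choice; the Jordan-decomposition machinery itself is not reproduced.

```python
import numpy as np

a, b = 1.4, 0.3

def henon(x, y):
    """One iteration of the Henon map."""
    return 1.0 - a * x * x + y, b * x

# Fixed point: substituting y = b x gives a x^2 + (1 - b) x - 1 = 0.
x_fp = (-(1.0 - b) + np.sqrt((1.0 - b) ** 2 + 4.0 * a)) / (2.0 * a)
y_fp = b * x_fp

# Jacobian at the fixed point; an eigenvalue of modulus > 1 marks the
# unstable direction, and the product of eigenvalues equals det J = -b.
J = np.array([[-2.0 * a * x_fp, 1.0],
              [b, 0.0]])
eigvals = np.linalg.eigvals(J)
```

For nonlinear features such as the tune-amplitude dependence discussed in the abstract, this linearization is only the first term; the paper's Jordan-decomposition approach tracks how the spectral structure deforms away from the fixed point.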
Directory of Open Access Journals (Sweden)
Frauendiener Jörg
2000-08-01
Full Text Available The notion of conformal infinity has a long history within the research in Einstein's theory of gravity. Today, ``conformal infinity'' is related to almost all other branches of research in general relativity, from quantisation procedures to abstract mathematical issues to numerical applications. This review article attempts to show how this concept gradually and inevitably evolved out of physical issues, namely the need to understand gravitational radiation and isolated systems within the theory of gravitation, and how it lends itself very naturally to the solution of radiation problems in numerical relativity. The fundamental concept of null-infinity is introduced. Friedrich's regular conformal field equations are presented and various initial value problems for them are discussed. Finally, it is shown that the conformal field equations provide a very powerful method within numerical relativity to study global problems such as gravitational wave propagation and detection.
Frauendiener, Jörg
2004-01-01
The notion of conformal infinity has a long history within the research in Einstein's theory of gravity. Today, "conformal infinity" is related to almost all other branches of research in general relativity, from quantisation procedures to abstract mathematical issues to numerical applications. This review article attempts to show how this concept gradually and inevitably evolved from physical issues, namely the need to understand gravitational radiation and isolated systems within the theory of gravitation, and how it lends itself very naturally to the solution of radiation problems in numerical relativity. The fundamental concept of null-infinity is introduced. Friedrich's regular conformal field equations are presented and various initial value problems for them are discussed. Finally, it is shown that the conformal field equations provide a very powerful method within numerical relativity to study global problems such as gravitational wave propagation and detection.
Directory of Open Access Journals (Sweden)
Frauendiener Jörg
2004-01-01
Full Text Available The notion of conformal infinity has a long history within the research in Einstein's theory of gravity. Today, 'conformal infinity' is related to almost all other branches of research in general relativity, from quantisation procedures to abstract mathematical issues to numerical applications. This review article attempts to show how this concept gradually and inevitably evolved from physical issues, namely the need to understand gravitational radiation and isolated systems within the theory of gravitation, and how it lends itself very naturally to the solution of radiation problems in numerical relativity. The fundamental concept of null-infinity is introduced. Friedrich's regular conformal field equations are presented and various initial value problems for them are discussed. Finally, it is shown that the conformal field equations provide a very powerful method within numerical relativity to study global problems such as gravitational wave propagation and detection.
Directory of Open Access Journals (Sweden)
Alice Jernigan
2015-01-01
Full Text Available Capillary electrophoresis single-strand conformational polymorphism (CE-SSCP) was explored as a fast and inexpensive method to differentiate both prokaryotic (blue-green) and eukaryotic (green and brown) algae. A selection of two blue-green algae (Nostoc muscorum and Anabaena inaequalis), five green algae (Chlorella vulgaris, Oedogonium foveolatum, Mougeotia sp., Scenedesmus quadricauda, and Ulothrix fimbriata), and one brown alga (Ectocarpus sp.) was examined, and the CE-SSCP electropherogram “fingerprints” were compared to each other for two variable regions of either the 16S or 18S rDNA gene. The electropherogram patterns were remarkably stable and consistent for each particular species. The patterns were unique to each species, although some common features were observed between the different types of algae. CE-SSCP could be a useful method for monitoring changes in an algal species over time as potential shifts in species composition occur.
International Nuclear Information System (INIS)
Lu Yong; Song, Paul Y.; Li Shidong; Spelbring, Danny R.; Vijayakumar, Srinivasan; Haraf, Daniel J.; Chen, George T.Y.
1995-01-01
Purpose: To develop a method of analyzing rectal surface area irradiated and rectal complications in prostate conformal radiotherapy. Methods and Materials: Dose-surface histograms of the rectum, which give the rectal surface area irradiated to any given dose, were calculated for a group of 27 patients treated with a four-field box technique to a total (tumor minimum) dose ranging from 68 to 70 Gy. Occurrences of rectal toxicities as defined by the Radiation Therapy Oncology Group (RTOG) were recorded and examined in terms of dose and rectal surface area irradiated. For a specified end point of rectal complication, the complication probability was analyzed as a function of dose delivered to a fixed rectal area, and as a function of area receiving a fixed dose. Lyman's model of normal tissue complication probability (NTCP) was used to fit the data. Results: The observed occurrences of rectal complications appear to depend on the rectal surface area irradiated to a given dose level. The patient distribution of each toxicity grade exhibits a maximum as a function of percentage surface area irradiated, and the maximum moves to higher values of percentage surface area as the toxicity grade increases. The dependence of the NTCP for the specified end point on dose and percentage surface area irradiated was fitted to Lyman's NTCP model with a set of parameters. The curvature of the NTCP as a function of the surface area suggests that the rectum is a parallel structured organ. Conclusions: The described method of analyzing rectal surface area irradiated yields interesting insight into understanding rectal complications in prostate conformal radiotherapy. Application of the method to a larger patient data set has the potential to facilitate the construction of a full dose-surface-complication relationship, which would be most useful in guiding clinical practice.
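Lyman's NTCP model referred to above can be sketched in a few lines: the complication probability is the normal cumulative distribution of t = (D - TD50(v)) / (m TD50(v)), with the volume (here surface-area fraction) dependence TD50(v) = TD50(1) v^(-n). The parameter values below (TD50, m, n) are placeholders for illustration, not the fitted values from this study.

```python
import math

def lyman_ntcp(dose, v, td50_1=80.0, m=0.15, n=0.1):
    """Lyman NTCP for uniform dose `dose` (Gy) to fractional area `v` in (0, 1].

    Parameter values are illustrative placeholders, not fitted data.
    """
    td50_v = td50_1 * v ** (-n)               # tolerance dose shifts with area
    t = (dose - td50_v) / (m * td50_v)
    # Normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))
```

A small exponent n makes TD50 nearly independent of the irradiated fraction (serial-organ behaviour), while a large n makes the tolerance dose rise steeply as the area shrinks, which is the parallel-organ behaviour the authors infer for the rectum.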
Datasprints as a method for Controversy Mapping
DEFF Research Database (Denmark)
Jensen, Torben Elgaard; Munk, Anders Kristian; Bach, Daniel
2017-01-01
A datasprint is an intensive 3-5 day workshop that brings together humanistic researchers, data experts, and stakeholders from a selected field. Together, the participants visualize and analyse a collection of data sets, which have been prepared before the datasprint. In the beginning of a datasprint, stakeholders present their understandings and views of the field in question. Following this, the workshop participants explore how the prepared data may shed new light on the field. The final products of a datasprint are prototypes of analyses or digital products that form the basis for future collaboration between the partners. Since 2015, DIGHUMLAB has sponsored a special interest group in controversy mapping. Datasprints have proved to be a very productive format for controversy mapping and for creating dialogue and joint projects between humanistic researchers.
System and method for image mapping and visual attention
Peters, II, Richard A. (Inventor)
2011-01-01
A method is described for mapping dense sensory data to a Sensory Ego Sphere (SES). Methods are also described for finding and ranking areas of interest in the images that form a complete visual scene on an SES. Further, attentional processing of image data is best done by performing attentional processing on individual full-size images from the image sequence, mapping each attentional location to the nearest node, and then summing all attentional locations at each node.
A Mapping Method of SLAM Based on Look-up Table
Wang, Z.; Li, J.; Wang, A.; Wang, J.
2017-09-01
In recent years, several V-SLAM (Visual Simultaneous Localization and Mapping) approaches have appeared, showing impressive reconstructions of the world. However, these maps are built with far more than the required information; this limitation stems from processing the whole of each key-frame. In this paper we present, for the first time, a mapping method based on a look-up table (LUT) for visual SLAM that can improve mapping effectively. Because the method relies on extracting features in each cell of a divided image, it obtains a camera pose that is more representative of the whole key-frame. The tracking direction of key-frames is obtained by counting the parallax directions of feature points. The LUT stores, for each tracking direction, the cells needed for mapping, which reduces redundant information in the key-frame and makes mapping more efficient. The results show that a better map with less noise is built in less than one-third of the time. We believe the capacity of the LUT to build maps efficiently makes it a good choice for the community to investigate in scene-reconstruction problems.
Janssen, Bärbel
2011-01-01
A multilevel method on adaptive meshes with hanging nodes is presented, and the additional matrices appearing in the implementation are derived. Smoothers of overlapping Schwarz type are discussed; smoothing is restricted to the interior of the subdomains refined to the current level, and thus has optimal computational complexity. When applied to conforming finite element discretizations of elliptic problems and Maxwell equations, the method's convergence rates are very close to those for the nonadaptive version. Furthermore, the smoothers remain efficient for high-order finite elements. We discuss the implementation in a general finite element code using the example of the deal.II library. © 2011 Society for Industrial and Applied Mathematics.
Method to planarize three-dimensional structures to enable conformal electrodes
Nikolic, Rebecca J; Conway, Adam M; Graff, Robert T; Reinhardt, Catherine; Voss, Lars F; Shao, Qinghui
2012-11-20
Methods for fabricating three-dimensional PIN structures having conformal electrodes are provided, as well as the structures themselves. The structures include a first layer and an array of pillars with cavity regions between the pillars. A first end of each pillar is in contact with the first layer. A segment is formed on the second end of each pillar. The cavity regions are filled with a fill material, which may be a functional material such as a neutron-sensitive material. The fill material covers each segment. A portion of the fill material is etched back to produce an exposed portion of the segment. A first electrode is deposited onto the fill material and each exposed segment, thereby forming a conductive layer that provides a common contact to each exposed segment. A second electrode is deposited onto the first layer.
International Nuclear Information System (INIS)
Gorobchenko, O.A.; Nikolov, O.T.; Gatash, S.V.
2006-01-01
In this article, the influence of γ-irradiation and temperature on albumin and fibrinogen conformation and on the dielectric properties of protein solutions has been studied by the microwave dielectric method. The values of both the real part ε' (dielectric permittivity) and the imaginary part ε'' (dielectric losses) of the complex dielectric permittivity of aqueous solutions of bovine serum albumin and human fibrinogen have been obtained as functions of temperature and γ-irradiation dose. The time of dielectric relaxation of water molecules in the protein solutions was calculated, and the hydration of the albumin and fibrinogen molecules was determined. The temperature dependencies of hydration are non-monotonic and have a number of characteristic features at temperatures of 30-34 and 44-47 °C for serum albumin, and 24 and 32 °C for fibrinogen
Estrada, T; Zhang, B; Cicotti, P; Armen, R S; Taufer, M
2012-07-01
We present a scalable and accurate method for classifying protein-ligand binding geometries in molecular docking. Our method is a three-step process: the first step encodes the geometry of a three-dimensional (3D) ligand conformation into a single 3D point in the space; the second step builds an octree by assigning an octant identifier to every single point in the space under consideration; and the third step performs an octree-based clustering on the reduced conformation space and identifies the most dense octant. We adapt our method for MapReduce and implement it in Hadoop. The load-balancing, fault-tolerance, and scalability in MapReduce allow screening of very large conformation spaces not approachable with traditional clustering methods. We analyze results for docking trials for 23 protein-ligand complexes for HIV protease, 21 protein-ligand complexes for Trypsin, and 12 protein-ligand complexes for P38alpha kinase. We also analyze cross docking trials for 24 ligands, each docking into 24 protein conformations of the HIV protease, and receptor ensemble docking trials for 24 ligands, each docking in a pool of HIV protease receptors. Our method demonstrates significant improvement over energy-only scoring for the accurate identification of native ligand geometries in all these docking assessments. The advantages of our clustering approach make it attractive for complex applications in real-world drug design efforts. We demonstrate that our method is particularly useful for clustering docking results using a minimal ensemble of representative protein conformational states (receptor ensemble docking), which is now a common strategy to address protein flexibility in molecular docking. Copyright © 2012 Elsevier Ltd. All rights reserved.
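The octant-encoding and clustering steps described above can be sketched in a few lines. This is a minimal single-machine illustration, not the paper's Hadoop/MapReduce pipeline, and the string-of-digits octant identifier (3 bits per subdivision level) is an assumed encoding:

```python
from collections import Counter

def octant_id(point, bounds, depth):
    """Encode a 3D point as an octant identifier string by recursively
    subdividing the bounding box `depth` times (3 bits per level)."""
    (xmin, ymin, zmin), (xmax, ymax, zmax) = bounds
    digits = []
    for _ in range(depth):
        cx, cy, cz = (xmin + xmax) / 2, (ymin + ymax) / 2, (zmin + zmax) / 2
        bx, by, bz = point[0] >= cx, point[1] >= cy, point[2] >= cz
        digits.append(str(bx << 2 | by << 1 | bz))
        # Descend into the chosen half-interval on each axis.
        xmin, xmax = (cx, xmax) if bx else (xmin, cx)
        ymin, ymax = (cy, ymax) if by else (ymin, cy)
        zmin, zmax = (cz, zmax) if bz else (zmin, cz)
    return "".join(digits)

def densest_octant(points, bounds, depth=3):
    """Cluster points by octant id and return (id, count) of the densest octant."""
    counts = Counter(octant_id(p, bounds, depth) for p in points)
    return counts.most_common(1)[0]
```

In the MapReduce formulation, `octant_id` would be the map step (key = identifier) and the count/argmax would be the reduce step.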
METHODS OF SIMULATION ON THE MAP OF ETHNOGEOGRAPHICAL KNOWLEDGE
Directory of Open Access Journals (Sweden)
A. M. Saraeva
2017-01-01
The article deals with the features of the spatial representation of the location of objects and phenomena on the Earth. One type of “cartographic representation” is modeling on the contour map, and the advantages of this method are revealed. Modeling techniques allow one to include ethnogeographic data in the characteristics of a territory and to reflect them on the contour map. The basis of ethnogeographic modeling is the identification and depiction of elements of the material and spiritual culture of peoples by means of conventional signs. Comparison of these elements, their superimposition with respect to each other, and their comparison with geographic maps allow us to determine the interrelations and dependencies of the phenomena. Modeling on contour maps is a basic method of learning in geography: on the one hand, it creates a cartographic image of the studied territory; on the other, it facilitates the creation of “visual supports” on the map. When modeling on contour maps, students first enter the basic geographical names, which serve as base knowledge. Then, by purposefully analyzing and comparing the thematic maps of the atlas or textbook, the students reflect specific ethnogeographic knowledge on the contour maps. As a result, the contour maps acquire “their own face” rather than becoming a simple copy of the maps of an atlas or textbook. The features of the effect of this technique on the formation of spatial representations of the studied object have also been analyzed. Thanks to the cartographic model, one can maintain a constant cognitive interest in the material studied. Modeling on the contour map allows one to present the structure of the links between the elements of the ethnogeographical material. The basis of ethnogeographic modeling on the contour map is the identification and mapping of elements of the material and spiritual culture of
Conformational study of glyoxal bis(amidinohydrazone) by ab initio methods
Mannfors, B.; Koskinen, J. T.; Pietilä, L.-O.
1997-08-01
We report the first ab initio molecular orbital study on the ground state of the endiamine tautomer of glyoxal bis(amidinohydrazone) (or glyoxal bis(guanylhydrazone), GBG) free base. The calculations were performed at the following levels of theory: Hartree-Fock, second-order Møller-Plesset perturbation theory and density functional theory (B-LYP and B3-LYP) as implemented in the Gaussian 94 software. The standard basis set 6-31G(d) was found to be sufficient. The default fine grid of Gaussian 94 was used in the density functional calculations. Molecular properties, such as optimized structures, total energies and the electrostatic potential derived (CHELPG) atomic charges, were studied as functions of C-C and N-N conformations. The lowest energy conformation was found to be all-trans, in agreement with the experimental solid-state structure. The second conformer with respect to rotation around the central C-C bond was found to be the cis conformer with an MP2//HF energy of 4.67 kcal mol⁻¹. For rotation around the N-N bond the energy increased monotonically from the trans conformation to the cis conformation, the cis energy being very high, 22.01 kcal mol⁻¹ (MP2//HF). The atomic charges were shown to be conformation dependent, and the bond charge increments and especially the conformational changes of the bond charge increments were found to be easily transferable between structurally related systems.
Belaghzal, Houda; Dekker, Job; Gibcus, Johan H
2017-07-01
Chromosome conformation capture-based methods such as Hi-C have become mainstream techniques for the study of the 3D organization of genomes. These methods convert chromatin interactions reflecting topological chromatin structures into digital information (counts of pair-wise interactions). Here, we describe an updated protocol for Hi-C (Hi-C 2.0) that integrates recent improvements into a single protocol for efficient and high-resolution capture of chromatin interactions. This protocol combines chromatin digestion with frequently cutting enzymes to obtain kilobase (kb) resolution. It also includes steps to reduce random ligation and the generation of uninformative molecules, such as unligated ends, to improve the amount of valid intra-chromosomal read pairs. This protocol allows for obtaining information on conformational structures such as compartments and topologically associating domains, as well as high-resolution conformational features such as DNA loops. Copyright © 2017 Elsevier Inc. All rights reserved.
Directory of Open Access Journals (Sweden)
Martyna Krejmer-Rabalska
2017-12-01
Baculoviruses have been used as biopesticides for decades. Recently, due to the excessive use of chemical pesticides, there is a need to find new agents that may be useful in biological protection. Sometimes a few isolates or species are discovered in one host. In the past few years, many new baculovirus species have been isolated from environmental samples and thoroughly characterized, and thanks to next-generation sequencing methods their genomes are being deposited in the GenBank database. Next-generation sequencing (NGS) methodology is the most certain way of detection, but it has many disadvantages. During our studies, we have developed a method based on Polymerase Chain Reaction (PCR) followed by Multitemperature Single Stranded Conformational Polymorphism (MSSCP) which allows new granulovirus isolates to be distinguished in only a few hours and at low cost. On the basis of phylogenetic analysis of betabaculoviruses, representative species have been chosen. An alignment of the highly conserved genes granulin and late expression factor-9 was performed, and degenerate primers were designed to amplify the most variable short DNA fragments flanked by the most conserved sequences. Afterwards, the products of the PCR reaction were analysed by the MSSCP technique. In our opinion, the proposed method may be used for screening new isolates derived from environmental samples.
International Nuclear Information System (INIS)
Caudrelier, Jean-Michel; Vial, Stephane; Gibon, David; Kulik, Carine; Fournier, Charles; Castelain, Bernard; Coche-Dequeant, Bernard; Rousseau, Jean
2003-01-01
Purpose: Three-dimensional (3D) volume determination is one of the most important problems in conformal radiation therapy. Techniques of volume determination from tomographic medical imaging are usually based on two-dimensional (2D) contour definition with the result dependent on the segmentation method used, as well as on the user's manual procedure. The goal of this work is to describe and evaluate a new method that reduces the inaccuracies generally observed in the 2D contour definition and 3D volume reconstruction process. Methods and Materials: This new method has been developed by integrating the fuzziness in the 3D volume definition. It first defines semiautomatically a minimal 2D contour on each slice that definitely contains the volume and a maximal 2D contour that definitely does not contain the volume. The fuzziness region in between is processed using possibility functions in possibility theory. A volume of voxels, including the membership degree to the target volume, is then created on each slice axis, taking into account the slice position and slice profile. A resulting fuzzy volume is obtained after data fusion between multiorientation slices. Different studies have been designed to evaluate and compare this new method of target volume reconstruction and a classical reconstruction method. First, target definition accuracy and robustness were studied on phantom targets. Second, intra- and interobserver variations were studied on radiosurgery clinical cases. Results: The absolute volume errors are less than or equal to 1.5% for phantom volumes calculated by the fuzzy logic method, whereas the values obtained with the classical method are much larger than the actual volumes (absolute volume errors up to 72%). With increasing MRI slice thickness (1 mm to 8 mm), the phantom volumes calculated by the classical method are increasing exponentially with a maximum absolute error up to 300%. In contrast, the absolute volume errors are less than 12% for phantom
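The idea of a fuzzy band between a minimal contour (definitely inside the volume) and a maximal contour (definitely outside) can be illustrated with a toy membership function. The linear ramp and the alpha-cut below are illustrative assumptions only, not the possibility functions actually used in the paper:

```python
def fuzzy_membership(r, r_min, r_max):
    """Membership degree of a voxel at distance r from the contour centre:
    1 inside the minimal contour, 0 outside the maximal contour, and a
    linear possibility ramp in the fuzzy band in between (an assumed shape)."""
    if r <= r_min:
        return 1.0
    if r >= r_max:
        return 0.0
    return (r_max - r) / (r_max - r_min)

def fuzzy_volume(memberships, voxel_volume, alpha=0.5):
    """Crisp volume estimate via an alpha-cut: count voxels whose
    membership exceeds alpha, scaled by the per-voxel volume."""
    return sum(1 for m in memberships if m > alpha) * voxel_volume
```

A full implementation would additionally fuse membership volumes computed along several slice orientations before taking the cut.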
A Progressive Buffering Method for Road Map Update Using OpenStreetMap Data
Directory of Open Access Journals (Sweden)
Changyong Liu
2015-07-01
Web 2.0 enables two-way interaction between servers and clients. GPS receivers have become available to more citizens and are commonly found in vehicles and smart phones, enabling individuals to record their trajectory data, share them on the Internet, and edit them online. OpenStreetMap (OSM) makes it possible for citizens to contribute to the acquisition of geographic information. This paper studies the use of OSM data to find newly mapped or built roads that do not exist in a reference road map and to create an updated version of that map. For this purpose, we propose a progressive buffering method for determining an optimal buffer radius to detect the new roads in the OSM data. In the next step, the detected new roads are merged into the reference road map geometrically, topologically, and semantically. Experiments with OSM data and reference road maps over an area of 8494 km² in the city of Wuhan, China, and five of its 5 km × 5 km areas are conducted to demonstrate the feasibility and effectiveness of the method. It is shown that the OSM data can add 11.96%, or a total of 2008.6 km, of new roads to the reference road maps with an average precision of 96.49% and an average recall of 97.63%.
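The buffer-based detection step can be sketched as follows. This is a simplified planar version under stated assumptions: roads are straight segments, and a segment counts as "new" when both endpoints fall outside the buffer of every reference segment; the paper's actual criterion and geometry handling are richer than this:

```python
import math

def pt_seg_dist(p, a, b):
    """Euclidean distance from point p to the segment a-b."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    if dx == dy == 0:
        return math.hypot(px - ax, py - ay)
    # Clamp the projection parameter to stay on the segment.
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def new_segments(osm_segs, ref_segs, radius):
    """OSM segments whose endpoints both lie outside every reference buffer."""
    def outside(p):
        return all(pt_seg_dist(p, a, b) > radius for a, b in ref_segs)
    return [s for s in osm_segs if outside(s[0]) and outside(s[1])]

def progressive_buffer(osm_segs, ref_segs, radii):
    """Count surviving 'new' segments at each candidate radius; an elbow in
    this curve suggests the optimal buffer radius."""
    return [(r, len(new_segments(osm_segs, ref_segs, r))) for r in radii]
```

With real data, the buffers would be built with a GIS library rather than point-to-segment distances.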
Hackbusch, Sven
This dissertation encompasses work related to synthetic methods for the formation of ester linkages in organic compounds, as well as the investigation of the conformational influence of the ester functional group on the flexibility of inter-saccharide linkages, specifically, and the solution phase structure of ester-containing carbohydrate derivatives, in general. Stereoselective reactions are an important part of the field of asymmetric synthesis and an understanding of their underlying mechanistic principles is essential for rational method development. Here, the exploration of a diastereoselective O-acylation reaction on a trans-2-substituted cyclohexanol scaffold is presented, along with possible reasons for the observed reversal of stereoselectivity dependent on the presence or absence of an achiral amine catalyst. In particular, this work establishes a structure-activity relationship with regard to the trans-2-substituent and its role as a chiral auxiliary in the reversal of diastereoselectivity. In the second part, the synthesis of various ester-linked carbohydrate derivatives, and their conformational analysis is presented. Using multidimensional NMR experiments and computational methods, the compounds' solution-phase structures were established and the effect of the ester functional group on the molecules' flexibility and three-dimensional (3D) structure was investigated and compared to ether or glycosidic linkages. To aid in this, a novel Karplus equation for the C(sp2)OCH angle in ester-linked carbohydrates was developed on the basis of a model ester-linked carbohydrate. This equation describes the sinusoidal relationship between the C(sp2)OCH dihedral angle and the corresponding 3JCH coupling constant that can be determined from a J-HMBC NMR experiment. The insights from this research will be useful in describing the 3D structure of naturally occurring and lab-made ester-linked derivatives of carbohydrates, as well as guiding the de novo-design of
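A Karplus-type relationship of the kind described, linking a dihedral angle to a 3J coupling constant, can be written down directly. The coefficients A, B, C below are placeholders for illustration, not the fitted values from the dissertation's C(sp2)OCH equation:

```python
import math

def karplus_3jch(theta_deg, A=7.0, B=-1.0, C=0.5):
    """Generic Karplus relationship
        3J(theta) = A*cos^2(theta) + B*cos(theta) + C
    relating a dihedral angle (degrees) to a 3J coupling constant (Hz).
    A, B, C are placeholder coefficients; a fitted equation would
    supply its own constants."""
    th = math.radians(theta_deg)
    return A * math.cos(th) ** 2 + B * math.cos(th) + C
```

Inverting such a curve against a measured 3JCH from a J-HMBC experiment yields candidate dihedral angles (the sinusoidal form generally gives more than one solution per coupling value).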
Flood Hazard Mapping by Applying Fuzzy TOPSIS Method
Han, K. Y.; Lee, J. Y.; Keum, H.; Kim, B. J.; Kim, T. H.
2017-12-01
There are many technical methods for integrating various factors in flood hazard mapping. The purpose of this study is to suggest a methodology for integrated flood hazard mapping using MCDM (Multi-Criteria Decision Making). MCDM problems involve a set of alternatives that are evaluated on the basis of conflicting and incommensurate criteria. In this study, to apply MCDM to assessing flood risk, maximum flood depth, maximum velocity, and maximum travel time are taken as criteria, and the element units are taken as alternatives. Finding the efficient alternative closest to an ideal value is an appropriate way to assess the flood risk of many element units (alternatives) based on various flood indices. Therefore TOPSIS, the most commonly used MCDM scheme, is adopted to create the flood hazard map. The indices for flood hazard mapping (maximum flood depth, maximum velocity, and maximum travel time) carry uncertainty because the simulation results vary with the flood scenario and topographical conditions. This ambiguity in the criteria can cause uncertainty in the flood hazard map. To handle it, fuzzy logic, which can represent ambiguous expressions, is introduced. In this paper, we produced a flood hazard map for levee-breach overflow using the fuzzy TOPSIS technique. We identified the areas with the highest hazard grade in the resulting integrated flood hazard map, and the produced map can be compared with the existing flood risk maps. We expect that applying the suggested flood hazard mapping methodology to the production of current flood risk maps will yield maps that also consider the priorities of hazard areas, containing more varied and important information than before. Keywords: flood hazard map; levee breach analysis; 2D analysis; MCDM; fuzzy TOPSIS
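The crisp TOPSIS core that the fuzzy variant builds on can be sketched as below. Rows are alternatives (e.g. grid cells), columns are the three criteria; the weights and the benefit/cost flags are assumed inputs, not values from the paper:

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives by relative closeness to the ideal solution.
    matrix: rows = alternatives, cols = criteria.
    benefit[j]: True if larger values of criterion j mean 'more hazardous'."""
    ncols = len(weights)
    # Vector normalization, then weighting.
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(ncols)]
    v = [[weights[j] * row[j] / norms[j] for j in range(ncols)] for row in matrix]
    # Ideal and anti-ideal points per criterion.
    ideal = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*v))]
    anti = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*v))]
    scores = []
    for row in v:
        d_pos = math.sqrt(sum((row[j] - ideal[j]) ** 2 for j in range(ncols)))
        d_neg = math.sqrt(sum((row[j] - anti[j]) ** 2 for j in range(ncols)))
        scores.append(d_neg / (d_pos + d_neg))  # closeness coefficient in [0, 1]
    return scores
```

The fuzzy extension replaces the crisp matrix entries with fuzzy numbers and the distances with fuzzy distance measures, but the normalize/ideal/closeness skeleton is the same.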
A Hybrid Vision-Map Method for Urban Road Detection
Directory of Open Access Journals (Sweden)
Carlos Fernández
2017-01-01
A hybrid vision-map system is presented to solve the road detection problem in urban scenarios. The standardized use of machine learning techniques in classification problems has been merged with digital navigation map information to increase system robustness. The objective of this paper is to create a new environment perception method to detect the road in urban environments, fusing stereo vision with digital maps by detecting road appearance and road limits such as lane markings or curbs. Deep learning approaches make a system tightly coupled to its training set. Even though our approach is based on machine learning techniques, the features are calculated from different sources (GPS, map, curbs, etc.), making our system less dependent on the training set.
Directory of Open Access Journals (Sweden)
Ryszard Gonczarek
2015-01-01
We show that, by applying the conformal transformation method, strongly correlated superconducting systems can be discussed in terms of a Fermi liquid with a variable density-of-states function. Within this approach, it is possible to formulate and carry out a purely analytical study based on a set of fundamental equations. After presenting the mathematical structure of the s-wave superconducting gap and other quantitative characteristics of superconductors, we evaluate and discuss the integrals inherent in the fundamental equations describing superconducting systems. The results presented here extend the approach formulated by Abrikosov and Maki, which was restricted to the first-order expansion. A few infinite families of integrals are derived and allow us to express the fundamental equations by means of analytical formulas. These can then be exploited in order to find quantitative characteristics of superconducting systems by the method of successive approximations. We show that the results can be applied in studies of high-Tc superconductors and other superconducting materials of the new generation.
Methods for enhancing mapping of thermal fronts in oil recovery
Lee, D.O.; Montoya, P.C.; Wayland, J.R. Jr.
1984-03-30
A method for enhancing the resistivity contrasts of a thermal front in an oil recovery production field as measured by the controlled source audio frequency magnetotelluric (CSAMT) technique is disclosed. This method includes the steps of: (1) preparing a CSAMT-determined topological resistivity map of the production field; (2) introducing a solution of a dopant material into the production field at a concentration effective to alter the resistivity associated with the thermal front, said dopant material having a high cation exchange capacity which might be selected from the group consisting of montmorillonite, illite, and chlorite clays, said material being soluble in the connate water of the production field; (3) preparing a CSAMT-determined topological resistivity map of the production field while said dopant material is moving therethrough; and (4) mathematically comparing the maps from step (1) and step (3) to determine the location of the thermal front. This method is effective with the steam flood, fire flood and water flood techniques.
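Step (4), comparing the baseline and doped resistivity maps, might look like the sketch below on gridded data. The simple per-cell difference threshold is an assumed comparison rule, chosen only to illustrate the idea:

```python
def locate_front(baseline, doped, threshold):
    """Compare two gridded resistivity maps (same shape, lists of rows) and
    return the (row, col) cells where the dopant changed resistivity by more
    than `threshold` -- the inferred position of the thermal front."""
    front = []
    for i, (b_row, d_row) in enumerate(zip(baseline, doped)):
        for j, (b, d) in enumerate(zip(b_row, d_row)):
            if abs(d - b) > threshold:
                front.append((i, j))
    return front
```

Repeating the comparison against maps taken at successive times would trace the front's movement through the field.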
A new method for mapping perceptual biases across visual space.
Finlayson, Nonie J; Papageorgiou, Andriani; Schwarzkopf, D Samuel
2017-08-01
How we perceive the environment is not stable and seamless. Recent studies found that how a person qualitatively experiences even simple visual stimuli varies dramatically across different locations in the visual field. Here we use a method we developed recently that we call multiple alternatives perceptual search (MAPS) for efficiently mapping such perceptual biases across several locations. This procedure reliably quantifies the spatial pattern of perceptual biases and also of uncertainty and choice. We show that these measurements are strongly correlated with those from traditional psychophysical methods and that exogenous attention can skew biases without affecting overall task performance. Taken together, MAPS is an efficient method to measure how an individual's perceptual experience varies across space.
Techniques and methods to guarantee Bologna-conform higher education in GNSS
Mayer, M.
2012-04-01
The Bologna Declaration aims for student-centered, outcome-related, and competence-based teaching. In order to fulfill these demands, deep-level learning techniques should be used to meet the needs of adult-compatible and self-determined learning. The presentation will summarize selected case studies carried out in the framework of the lecture course "Introduction into GNSS positioning" at the Geodetic Institute of the Karlsruhe Institute of Technology (Karlsruhe, Germany). The lecture course is a compulsory part of the Bachelor study course "Geodesy and Geoinformatics" and also a supplementary module of the Bachelor study course "Geophysics". Within the lecture course, basic knowledge and basic principles of Global Navigation Satellite Systems, such as GPS, are imparted. The course was migrated from a classically designed geodetic lecture course, which consisted of a well-adapted combination of teacher-centered classroom lectures and practical training (e.g., field exercises). The recent Bologna-conform blended learning concept supports and motivates students to learn more sustainably using online and classroom learning methods. Therefore, an appropriate combination is applied of classroom lectures (students and teacher give lectures), practical training (students select topics individually), and online learning (ILIAS, a learning management system, is used as data, result, and communication platform). The framing didactical method is based on the so-called anchored instruction approach. Within this approach, an up-to-date scientific GNSS-related paper dealing with the large-scale geodetic project "Fehmarn Belt Fixed Link" is used as the anchor. The students have to read the paper individually at the beginning of the semester. This enables them to identify many previously unknown GNSS-related facts, from which questions can be formulated. The lecture course deals with these questions in order to answer them. At the end of the
Maadooliat, Mehdi; Gao, Xin; Huang, Jianhua Z.
2012-01-01
Despite considerable progress in the past decades, protein structure prediction remains one of the major unsolved problems in computational biology. Angular-sampling-based methods have been extensively studied recently due to their ability to capture the continuous conformational space of protein structures. The literature has focused on using a variety of parametric models of the sequential dependencies between angle pairs along the protein chains. In this article, we present a thorough review of angular-sampling-based methods by assessing three main questions: What is the best distribution type to model the protein angles? What is a reasonable number of components in a mixture model that should be considered to accurately parameterize the joint distribution of the angles? and What is the order of the local sequence-structure dependency that should be considered by a prediction method? We assess the model fits for different methods using bivariate lag-distributions of the dihedral/planar angles. Moreover, the main information across the lags can be extracted using a technique called Lag singular value decomposition (LagSVD), which considers the joint distribution of the dihedral/planar angles over different lags using a nonparametric approach and monitors the behavior of the lag-distribution of the angles using singular value decomposition. As a result, we developed graphical tools and numerical measurements to compare and evaluate the performance of different model fits. Furthermore, we developed a web-tool (http://www.stat.tamu.edu/~madoliat/LagSVD) that can be used to produce informative animations. © The Author 2012. Published by Oxford University Press.
Directory of Open Access Journals (Sweden)
Mingzhong Gao
Rectangular caverns are increasingly used in underground engineering projects, and the failure mechanism of rectangular cavern wall rock is significantly different as a result of the cross-sectional shape and variations in wall stress distributions. However, the conventional computational method involves a long-winded computational process and yields multiple displacement solutions for the internal rectangular wall rock. This paper uses a Laurent-series complex method to obtain a mapping function expression based on complex variable function theory and conformal transformation. This method is combined with the Schwarz-Christoffel method to calculate the mapping function coefficients and to determine the rectangular cavern wall rock deformation. Using the inverse mapping concept, the mapping relation between the polar coordinate system within plane ς and the corresponding unique plane coordinate point inside the cavern wall rock is discussed, which addresses the disadvantage of multiple solutions when mapping from the plane to the polar coordinate system. The theoretical formula is used to calculate wall rock boundary deformation and displacement field nephograms inside the wall rock for a given cavern height and width. A comparison with ANSYS numerical software results suggests that the theoretical solution and the numerical solution exhibit identical trends, demonstrating the method's validity. The method greatly improves computing accuracy and reduces the difficulty of solving for cavern boundary and internal wall rock displacements, providing a theoretical guide for controlling cavern wall rock deformation failure.
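Evaluating a truncated Laurent-series mapping function of the kind described is straightforward with complex arithmetic. The coefficients below are placeholders; in the paper they would come from the Schwarz-Christoffel fit to the rectangular cross-section:

```python
import cmath

def laurent_map(zeta, coeffs):
    """Evaluate a truncated Laurent-series mapping function
        z(zeta) = c0*zeta + c1/zeta + c2/zeta**2 + ...
    with coeffs = [c0, c1, c2, ...] (placeholder values, not fitted ones)."""
    return coeffs[0] * zeta + sum(c / zeta ** k for k, c in enumerate(coeffs[1:], 1))

def boundary_points(coeffs, n=64):
    """Image of the unit circle under the map: the cavern boundary,
    sampled at n equally spaced angles."""
    return [laurent_map(cmath.exp(2j * cmath.pi * i / n), coeffs) for i in range(n)]
```

With fitted coefficients, `boundary_points` traces the (rounded) rectangular cavern outline, and evaluating `laurent_map` for |zeta| > 1 maps points of the exterior annulus into the surrounding wall rock.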
Optical and Physical Methods for Mapping Flooding with Satellite Imagery
Fayne, Jessica; Bolten, John; Lakshmi, Venkat; Ahamed, Aakash
2016-01-01
Flood and surface water mapping is becoming increasingly necessary, as extreme flooding events worldwide can damage crop yields and contribute to billions of dollars in economic damages, as well as social effects including fatalities and destroyed communities (Xiao et al. 2004; Kwak et al. 2015; Mueller et al. 2016). Utilizing earth observing satellite data to map standing water from space is indispensable to flood mapping for disaster response, mitigation, prevention, and warning (McFeeters 1996; Brakenridge and Anderson 2006). Since the early 1970s (Landsat; USGS 2013), researchers have been able to remotely sense surface processes such as extreme flood events to help offset some of these problems. Researchers have demonstrated countless methods, and modifications of those methods, to help increase knowledge of areas at risk and areas that are flooded using remote sensing data from optical and radar systems, as well as free publicly available and costly commercial datasets.
Santos Basin Geological Structures Mapped by Cross-gradient Method
Jilinski, P.; Fontes, S. L.
2011-12-01
Introduction: We mapped regional-scale geological structures located in the offshore zone of the Santos Basin, on the southeast Brazilian coast. The region is dominated by the transition zone from oceanic to continental crust. Our objective was to determine the imprint of deeper crustal structures from the correlation between bathymetric, gravity and magnetic anomaly maps. The region is extensively studied for oil and gas deposits, including large tectonic sub-salt traps. Our method is based on gradient directions and the product of their magnitudes. We calculate angular differences and the cross-product to assess the correlation between properties and to map structures. Theory and Method: We used angular differences and the cross-product to determine correlated regions between bathymetric, free-air gravity and magnetic anomaly maps. This gradient-based method focuses on the borders of anomalies and uses their morphological properties to assess the correlation between their sources. We generated maps of angle and cross-product distributions to locate correlated regions. Regional-scale potential field maps of FA and MA are a reflection of the overlaying and overlapping effects of adjacent structures. Our interest was in quantifying and characterizing the relation between the shapes of magnetic anomalies and gravity anomalies. Results: The resulting maps show strong correlation between bathymetry and the gravity anomaly, and between bathymetry and the magnetic anomaly, for large structures including the Serra do Mar, the shelf, the continental slope and the rise. All maps display the regional dominance of NE-SW geological structure alignment parallel to the shore. Of special interest are structures transgressing this tendency. The magnetic, gravity anomaly and bathymetry angle maps show a large correlated region over the shelf zone and smaller-scale NE-SW banded structures over the abyssal plain. From our interpretation, the large band of inverse correlation adjacent to the shore is generated by the gravity effect of the Serra do Mar. Disrupting structures including
Mapped Chebyshev Pseudo-Spectral Method for Dynamic Aero-Elastic Problem of Limit Cycle Oscillation
Im, Dong Kyun; Kim, Hyun Soon; Choi, Seongim
2018-05-01
A mapped Chebyshev pseudo-spectral method is developed as one of the Fourier-spectral approaches and solves nonlinear PDE systems for unsteady flows and the dynamic aero-elastic problem in a given time interval, where the flows or elastic motions can be periodic, nonperiodic, or periodic with an unknown frequency. The method uses Chebyshev polynomials of the first kind as the basis functions and redistributes the standard Chebyshev-Gauss-Lobatto collocation points more evenly by a conformal mapping function for improved numerical stability. The method makes several contributions. It can be an order of magnitude more efficient than conventional finite-difference-based, time-accurate computation, depending on the complexity of the solutions and the number of collocation points. The method reformulates the dynamic aero-elastic problem in spectral form for coupled analysis of aerodynamics and structures, which can be effective for design optimization of unsteady and dynamic problems. A limit cycle oscillation (LCO) is chosen for the validation, and a new method to determine the LCO frequency is introduced based on the minimization of a second derivative of the aero-elastic formulation. Two examples of limit cycle oscillation are tested: a nonlinear, one degree-of-freedom mass-spring-damper system and a two degrees-of-freedom oscillating airfoil under pitch and plunge motions. Results show good agreement with those of conventional time-accurate simulations and wind tunnel experiments.
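The node-redistribution idea can be illustrated with the Kosloff-Tal-Ezer arcsine map, one common conformal mapping for Chebyshev points; the paper's particular mapping function may differ.

```python
import numpy as np

def cheb(N):
    """Standard Chebyshev-Gauss-Lobatto nodes and differentiation matrix
    (the classical 'cheb' construction)."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    cvec = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(cvec, 1.0 / cvec) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))                      # negative-sum trick
    return D, x

# Kosloff-Tal-Ezer map y = arcsin(alpha*x)/arcsin(alpha): alpha -> 1
# spreads the clustered CGL nodes toward even spacing.
N, alpha = 16, 0.5
D, x = cheb(N)
y = np.arcsin(alpha * x) / np.arcsin(alpha)          # mapped collocation points
dydx = alpha / (np.arcsin(alpha) * np.sqrt(1.0 - (alpha * x) ** 2))
Dy = np.diag(1.0 / dydx) @ D                         # chain rule gives d/dy
f = np.sin(np.pi * y)
err = np.max(np.abs(Dy @ f - np.pi * np.cos(np.pi * y)))
print(err)                                           # spectrally small
```

The mapped differentiation matrix retains spectral accuracy while relaxing the severe node clustering at the interval ends, which is the stability benefit the abstract refers to.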
A Karnaugh-Map based fingerprint minutiae extraction method
Directory of Open Access Journals (Sweden)
Sunil Kumar Singla
2010-07-01
Fingerprint is one of the most promising of all the biometric techniques and has been used for personal authentication for a long time because of its wide acceptance and reliability. Features (minutiae) are extracted from the fingerprint in question and compared with the features already stored in the database for authentication. Crossing number (CN) is the most commonly used minutiae extraction method for fingerprints. In this paper, a new Karnaugh-Map-based fingerprint minutiae extraction method is proposed and discussed. In the proposed algorithm, the 8 neighbors of a pixel in a 3×3 window are arranged as the 8 bits of a byte and the corresponding hexadecimal (hex) value is calculated. These hex values are simplified using the standard Karnaugh-Map (K-map) technique to obtain the minimized logical expression. Experiments conducted on the FVC2002/Db1_a database reveal that the developed method performs better than the crossing number (CN) method.
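The crossing-number rule referred to above, together with the packing of the 8 neighbours into a byte (whose hex value feeds the K-map simplification), can be sketched as follows; the clockwise neighbour ordering is one standard convention, not necessarily the paper's.

```python
def neighbours(window):
    """The 8 neighbours of the centre of a 3x3 binary window, read
    clockwise from the top-left pixel (one standard convention)."""
    w = window
    return [w[0][0], w[0][1], w[0][2], w[1][2],
            w[2][2], w[2][1], w[2][0], w[1][0]]

def crossing_number(window):
    """CN = 0.5 * sum |P_i - P_(i+1)| over the cyclic neighbour sequence;
    CN == 1 marks a ridge ending, CN == 3 a bifurcation."""
    p = neighbours(window)
    return sum(abs(p[i] - p[(i + 1) % 8]) for i in range(8)) // 2

def neighbour_byte(window):
    """Pack the 8 neighbours into one byte -- the hex value that the
    proposed method simplifies with a Karnaugh map."""
    return sum(bit << i for i, bit in enumerate(neighbours(window)))

ridge_ending = [[0, 0, 0],
                [0, 1, 1],
                [0, 0, 0]]
bifurcation  = [[1, 0, 1],
                [0, 1, 0],
                [0, 1, 0]]
print(crossing_number(ridge_ending),          # 1 (ridge ending)
      crossing_number(bifurcation),           # 3 (bifurcation)
      hex(neighbour_byte(ridge_ending)))      # 0x8
```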
Progress in Geo-Electrical Methods for Hydrogeological Mapping?
DEFF Research Database (Denmark)
Schrøder, Niels
2014-01-01
In most of the 20th century the geo-electrical methods were primarily used for groundwater exploration, and the application of the methods was normally followed by a borehole, and a moment of truth. In this process the use of DC (direct current) soundings has been developed to a high grade of excellence. In the last 25 years the geo-electrical methods have been used more in connection with groundwater protection and planning, and new methods, such as transient electromagnetic (TEM) soundings, have been developed that provide more measurements per hour. In Denmark this change is very explicit, and a paper … The test area was earlier mapped by DC soundings, so it is possible to test the methods against each other. It is concluded that well-performed DC soundings with a Schlumberger configuration still provide the best basis for hydrogeological mapping.
Poltev, V I; Anisimov, V M; Sanchez, C; Deriabina, A; Gonzalez, E; Garcia, D; Rivas, F; Polteva, N A
2016-01-01
It is generally accepted that the important characteristic features of the Watson-Crick duplex originate from the molecular structure of its subunits. However, it still remains to elucidate what properties of each subunit are responsible for the significant characteristic features of the DNA structure. Computations of desoxydinucleoside monophosphate complexes with Na-ions using density functional theory revealed a pivotal role of the DNA conformational properties of single-chain minimal fragments in the development of the unique features of the Watson-Crick duplex. We found that the directionality of the sugar-phosphate backbone and the preferable ranges of its torsion angles, combined with the difference between purines and pyrimidines in ring bases, define the dependence of the three-dimensional structure of the Watson-Crick duplex on nucleotide base sequence. In this work, we extended these density functional theory computations to the minimal fragments of the DNA duplex, complementary desoxydinucleoside monophosphate complexes with Na-ions. Using several computational methods and various functionals, we performed a search for energy minima of the BI-conformation for complementary desoxydinucleoside monophosphate complexes with different nucleoside sequences. Two sequences were optimized using an ab initio method at the MP2/6-31++G** level of theory. The analysis of torsion angles, sugar ring puckering and mutual base positions of the optimized structures demonstrates that the conformational characteristic features of complementary desoxydinucleoside monophosphate complexes with Na-ions remain within BI ranges and become closer to the corresponding characteristic features of Watson-Crick duplex crystals. Qualitatively, the main characteristic features of each studied complementary desoxydinucleoside monophosphate complex remain invariant when different computational methods are used, although the quantitative values of some conformational parameters could vary, lying within the
A comparison of multidimensional scaling methods for perceptual mapping
Bijmolt, T.H.A.; Wedel, M.
Multidimensional scaling has been applied to a wide range of marketing problems, in particular to perceptual mapping based on dissimilarity judgments. The introduction of methods based on the maximum likelihood principle is one of the most important developments. In this article, the authors compare
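As a simple non-likelihood baseline for perceptual mapping, classical (Torgerson) metric MDS recovers a low-dimensional configuration directly from a dissimilarity matrix; the maximum-likelihood methods the article compares build on this idea.

```python
import numpy as np

# Classical (Torgerson) metric MDS: double-centre the squared
# dissimilarities and take the leading eigenvectors.
def classical_mds(D, ndim=2):
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n       # centring matrix
    B = -0.5 * J @ (D ** 2) @ J               # double-centred Gram matrix
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1][:ndim]     # largest eigenvalues first
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0.0))

# Recover a 2-D "perceptual map" from exact Euclidean dissimilarities.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0], [3.0, 1.0]])
D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
X = classical_mds(D)
D_hat = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
print(np.max(np.abs(D_hat - D)))   # near zero: distances reproduced
```

The recovered configuration matches the original up to rotation, reflection and translation, which is all a perceptual map requires.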
Building maps to search the web: the method Sewcom
Directory of Open Access Journals (Sweden)
Corrado Petrucco
2002-01-01
Seeking information on the Internet is becoming a necessity at school, at work and in every social sphere. Unfortunately, the difficulties inherent in the use of search engines and the unconscious use of inefficient cognitive approaches limit their effectiveness. In this respect, a method called SEWCOM is presented that lets you create conceptual maps through interaction with search engines.
Food Mapping: A Psychogeographical Method for Raising Food Consciousness
Wight, R. Alan; Killham, Jennifer
2014-01-01
Food mapping is a new, participatory, interdisciplinary pedagogical approach to learning about our modern food systems. This method is inspired by the Situationist International's practice of the "dérive" and draws from the discourses of critical geography, the food movement's research on food deserts, and participatory action…
Methodical Aspects of Applying Strategy Map in an Organization
Directory of Open Access Journals (Sweden)
Piotr Markiewicz
2013-06-01
One of the important aspects of strategic management is the instrumental aspect, comprising a rich set of methods and techniques used at particular stages of the strategic management process. The object of interest in this study is the development of views on, and the implementation of, strategy as an element of strategic management, together with its instruments in the form of methods and techniques. The method commonly used in strategy implementation and in measuring progress is the Balanced Scorecard (BSC). The method was created as a result of the project "Measuring performance in the Organization of the future", begun in 1990 and completed by a team under the supervision of David Norton (Kaplan, Norton 2002). The developed method was used first of all to evaluate performance by decomposing a strategy into four perspectives and identifying measures of achievement. In the mid-1990s the method was improved by enriching it, first of all, with a strategy map, which reflects the process of transition of intangible assets into tangible financial effects (Kaplan, Norton 2001). A strategy map enables the illustration of cause-and-effect relationships between processes in all four perspectives and performance indicators at the level of the organization. The purpose of this study is to present the methodical conditions of using strategy maps in the strategy implementation process in organizations of different natures.
International Nuclear Information System (INIS)
Gao, Hao
2016-01-01
For treatment planning during intensity modulated radiation therapy (IMRT) or volumetric modulated arc therapy (VMAT), beam fluence maps can first be optimized via fluence map optimization (FMO) under the given dose prescriptions and constraints, to conformally deliver the radiation dose to the targets while sparing the organs-at-risk, and then segmented into deliverable MLC apertures via leaf or arc sequencing algorithms. This work develops an efficient algorithm for FMO based on the alternating direction method of multipliers (ADMM). Here we consider FMO with a least-squares cost function and non-negative fluence constraints; the solution algorithm is based on ADMM, which is efficient and simple to implement. In addition, an empirical method for optimizing the ADMM parameter is developed to improve the robustness of the ADMM algorithm. The ADMM-based FMO solver was benchmarked against a quadratic programming method based on the interior-point (IP) method using the CORT dataset. The comparison results suggested that the ADMM solver achieved similar plan quality, with a slightly smaller total objective function value than IP. A simple-to-implement ADMM-based FMO solver with empirical parameter optimization is proposed for IMRT and VMAT. (paper)
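The ADMM scheme for a least-squares objective with non-negativity constraints can be sketched on a toy problem. The random matrices below merely stand in for the dose-influence matrix and prescription, and the paper's empirical parameter tuning is not reproduced.

```python
import numpy as np

def admm_nnls(A, b, rho=1.0, iters=300):
    """ADMM for  min 0.5*||A x - b||^2  subject to  x >= 0,
    the least-squares / non-negativity structure used for FMO."""
    n = A.shape[1]
    AtA, Atb = A.T @ A, A.T @ b
    L = np.linalg.cholesky(AtA + rho * np.eye(n))   # factor once, reuse
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    for _ in range(iters):
        rhs = Atb + rho * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))   # x-update
        z = np.maximum(0.0, x + u)                          # project onto x >= 0
        u = u + x - z                                       # dual update
    return z

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 15))        # stand-in "dose influence" matrix
b = rng.standard_normal(40)              # stand-in prescription vector
x = admm_nnls(A, b)
print(x.min() >= 0.0)                    # True: fluences are non-negative
```

Because the quadratic subproblem is factored once, each iteration costs only two triangular solves and a projection, which is why ADMM scales well for FMO-sized problems.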
A finite integration method for conformal, structured-grid, electromagnetic simulation
International Nuclear Information System (INIS)
Cooke, S.J.; Shtokhamer, R.; Mondelli, A.A.; Levush, B.
2006-01-01
We describe a numerical scheme for solving Maxwell's equations in the frequency domain on a conformal, structured, non-orthogonal, multi-block mesh. By considering Maxwell's equations in a volume parameterized by dimensionless curvilinear coordinates, we obtain a set of tensor equations that are a continuum analogue of common circuit equations, and that separate the metrical and metric-free parts of Maxwell's equations and the material constitutive relations. We discretize these equations using a new formulation that treats the electric field and magnetic induction using simple basis-function representations to obtain a discrete form of Faraday's law of induction, but that uses finite integral representations for the displacement current and magnetic field to obtain a discrete form of Ampere's law, as in the finite integration technique [T. Weiland, A discretization method for the solution of Maxwell's equations for six-component fields, Electron. Commun. (AEÜ) 31 (1977) 116; T. Weiland, Time domain electromagnetic field computation with finite difference methods, Int. J. Numer. Model: Electron. Netw. Dev. Field 9 (1996) 295-319]. We thereby derive new projection operators for the discrete tensor material equations and obtain a compact numerical scheme for the discrete differential operators. This scheme is shown to exhibit significantly reduced numerical dispersion when compared to the standard linear finite element method. We take advantage of the mesh structure on a block-by-block basis to implement these numerical operators efficiently, and achieve computational speed with modest memory requirements when compared to explicit sparse matrix storage. Using the Jacobi-Davidson [G.L.G. Sleijpen, H.A. van der Vorst, A Jacobi-Davidson iteration method for linear eigenvalue problems, SIAM J. Matrix Anal. Appl. 17 (2) (1996) 401-425; S.J. Cooke, B. Levush, Eigenmode solution of 2-D and 3-D electromagnetic cavities containing absorbing materials using the Jacobi
A taxonomy of behaviour change methods: an Intervention Mapping approach
Kok, Gerjo; Gottlieb, Nell H.; Peters, Gjalt-Jorn Y.; Mullen, Patricia Dolan; Parcel, Guy S.; Ruiter, Robert A.C.; Fernández, María E.; Markham, Christine; Bartholomew, L. Kay
2015-01-01
ABSTRACT In this paper, we introduce the Intervention Mapping (IM) taxonomy of behaviour change methods and its potential to be developed into a coding taxonomy. That is, although IM and its taxonomy of behaviour change methods are not in fact new, because IM was originally developed as a tool for intervention development, this potential was not immediately apparent. Second, in explaining the IM taxonomy and defining the relevant constructs, we call attention to the existence of parameters fo...
Methodical bases of perceptual mapping of printing industry companies
Directory of Open Access Journals (Sweden)
Kalinin Pavel
2017-01-01
This study examines the methodological foundations of perceptual mapping for printing industry enterprises. The research has a practical focus, which affects the choice of its methodological framework. The authors use such scientific research methods as analysis of cause-effect relationships, synthesis, problem analysis, expert evaluation and image visualization. In this paper, the authors present their assessment of the competitive environment of the major printing industry companies in Kirov oblast; the assessment employs perceptual mapping enabled by Minitab 14. This technique can be used by experts in the field of marketing and branding to assess the competitive environment in any market. The object of research is the printing industry in Kirov oblast. The most important conclusion of this study is that in perceptual mapping, all the parameters are integrated in a single system and provide a more objective view of the company's market situation.
Hao, Xiao-Hu; Zhang, Gui-Jun; Zhou, Xiao-Gen; Yu, Xu-Feng
2016-01-01
To address the problem of searching the protein conformational space in ab initio protein structure prediction, a novel method using abstract convex underestimation (ACUE), based on the framework of an evolutionary algorithm, was proposed. Computing such conformations, essential to associate structural and functional information with gene sequences, is challenging due to the high dimensionality and rugged energy surface of the protein conformational space. As a consequence, the dimension of the protein conformational space should be reduced to a proper level. In this paper, the high-dimensional original conformational space was converted into a feature space whose dimension is considerably reduced by a feature extraction technique, and the underestimate space could be constructed according to abstract convex theory. Thus, the entropy effect caused by searching in the high-dimensional conformational space could be avoided through such conversion. The tight lower-bound estimate information was obtained to guide the searching direction, and the invalid searching area, in which the global optimal solution is not located, could be eliminated in advance. Moreover, instead of expensively calculating the energy of conformations in the original conformational space, the estimate value is employed to judge whether a conformation is worth exploring, reducing the evaluation time and thereby making the computational cost lower and the searching process more efficient. Additionally, fragment assembly and the Monte Carlo method are combined to generate a series of metastable conformations by sampling in the conformational space. The proposed method provides a novel technique to solve the searching problem of the protein conformational space. Twenty small-to-medium structurally diverse proteins were tested, and the proposed ACUE method was compared with ItFix, HEA, Rosetta and the developed method LEDE without underestimate information. Test results show that the ACUE method can more rapidly and more
International Nuclear Information System (INIS)
Hannam, Mark D.; Evans, Charles R.; Cook, Gregory B.; Baumgarte, Thomas W.
2003-01-01
We consider combining two important methods for constructing quasiequilibrium initial data for binary black holes: the conformal thin-sandwich formalism and the puncture method. The former seeks to enforce stationarity in the conformal three-metric and the latter attempts to avoid internal boundaries, like minimal surfaces or apparent horizons. We show that these two methods make partially conflicting requirements on the boundary conditions that determine the time slices. In particular, it does not seem possible to construct slices that are quasistationary and that avoid physical singularities while simultaneously being connected by an everywhere positive lapse function, a condition which must hold if internal boundaries are to be avoided. Some relaxation of these conflicting requirements may yield a soluble system, but some of the advantages that were sought in combining these approaches will be lost.
Adaptive multiresolution method for MAP reconstruction in electron tomography
Energy Technology Data Exchange (ETDEWEB)
Acar, Erman, E-mail: erman.acar@tut.fi [Department of Signal Processing, Tampere University of Technology, P.O. Box 553, FI-33101 Tampere (Finland); BioMediTech, Tampere University of Technology, Biokatu 10, 33520 Tampere (Finland); Peltonen, Sari; Ruotsalainen, Ulla [Department of Signal Processing, Tampere University of Technology, P.O. Box 553, FI-33101 Tampere (Finland); BioMediTech, Tampere University of Technology, Biokatu 10, 33520 Tampere (Finland)
2016-11-15
3D image reconstruction with electron tomography poses problems due to the severely limited range of projection angles and the low signal-to-noise ratio of the acquired projection images. Maximum a posteriori (MAP) reconstruction methods have been successful in compensating for the missing information and suppressing noise with their intrinsic regularization techniques. There are two major problems in MAP reconstruction methods: (1) selection of the regularization parameter that controls the balance between the data fidelity and the prior information, and (2) long computation time. One aim of this study is to provide an adaptive solution to the regularization parameter selection problem without additional knowledge about the imaging environment and the sample. The other aim is to realize the reconstruction using sequences of resolution levels to shorten the computation time. The reconstructions were analyzed in terms of accuracy and computational efficiency using a simulated biological phantom and publicly available experimental datasets of electron tomography. The numerical and visual evaluations of the experiments show that the adaptive multiresolution method can provide more accurate results than the weighted back projection (WBP), simultaneous iterative reconstruction technique (SIRT), and sequential MAP expectation maximization (sMAPEM) methods. The method is also superior to sMAPEM in terms of computation time and usability, since it can reconstruct 3D images significantly faster without requiring any parameter to be set by the user. - Highlights: • An adaptive multiresolution reconstruction method is introduced for electron tomography. • The method provides more accurate results than conventional reconstruction methods. • The missing wedge and noise problems can be compensated by the method efficiently.
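The role of the regularization parameter in a MAP-style reconstruction can be seen in miniature with a Gaussian (Tikhonov) prior; the paper's priors and adaptive parameter selection are more sophisticated, and the matrices below are synthetic stand-ins for a tomographic projector.

```python
import numpy as np

# MAP reconstruction in miniature: recover x from a few noisy
# "projections" y = A x + n by minimizing
#     ||A x - y||^2 + lam * ||x||^2        (Gaussian prior)
# The projector A is a random stand-in; real ET projectors are sparse.
rng = np.random.default_rng(2)
n = 30
A = rng.standard_normal((12, n))            # severely underdetermined system
x_true = np.zeros(n)
x_true[[4, 12, 20]] = [1.0, -0.5, 0.8]
y = A @ x_true + 0.01 * rng.standard_normal(12)

def map_recon(lam):
    """Closed-form MAP estimate for the Gaussian-prior model."""
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

for lam in (1e-4, 1e-1, 1e2):
    err = np.linalg.norm(map_recon(lam) - x_true)
    print(f"lam={lam:g}  error={err:.3f}")   # error varies with lam
```

Sweeping `lam` makes the selection problem concrete: too small and the missing-data null space is unconstrained, too large and the prior overwhelms the measurements, which is exactly the balance the adaptive method automates.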
Acoustic methods for cavitation mapping in biomedical applications
Wan, M.; Xu, S.; Ding, T.; Hu, H.; Liu, R.; Bai, C.; Lu, S.
2015-12-01
In recent years, cavitation has been increasingly utilized in a wide range of applications in the biomedical field. Monitoring the spatial-temporal evolution of cavitation bubbles is of great significance for efficiency and safety in biomedical applications. In this paper, several acoustic methods for cavitation mapping, proposed or modified on the basis of existing work, are presented. The proposed novel ultrasound line-by-line/plane-by-plane method can depict the cavitation bubble distribution with high spatial and temporal resolution and may be developed into a potential standard 2D/3D cavitation field mapping method. The modified ultrafast active cavitation mapping, based upon plane wave transmission and reception as well as bubble wavelet and pulse inversion techniques, can appreciably enhance the cavitation-to-tissue ratio in tissue and further assist in monitoring cavitation-mediated therapy with good spatial and temporal resolution. The methods presented in this paper will be a foundation to promote the research and development of cavitation imaging in non-transparent media.
Topology Optimization of Passive Micromixers Based on Lagrangian Mapping Method
Directory of Open Access Journals (Sweden)
Yuchen Guo
2018-03-01
This paper presents an optimization-based design method for passive micromixers for immiscible fluids, i.e., the case where the Péclet number is infinitely large. Based on the topology optimization method, an optimization model is constructed to find the optimal layout of the passive micromixers. Differing from topology optimization methods with a Eulerian description of the convection-diffusion dynamics, the proposed method considers the extreme case, where the mixing is dominated completely by convection with negligible diffusion. In this method, the mixing dynamics is modeled by the mapping method, a Lagrangian description that can deal with the convection-dominated case. Several numerical examples are presented to demonstrate the validity of the proposed method.
Apparatus and method for mapping an area of interest
Staab, Torsten A.; Cohen, Daniel L.; Feller, Samuel [Fairfax, VA
2009-12-01
An apparatus and method are provided for mapping an area of interest using polar or Cartesian coordinates. The apparatus includes a range finder, an azimuth angle measuring device to provide a heading, and an inclinometer to provide the angle of inclination of the range finder as it relates to primary reference points and points of interest. A computer is provided to receive signals from the range finder, inclinometer and azimuth angle measuring device, to record location data and to calculate relative locations between one or more points of interest and one or more primary reference points. The method includes mapping an area of interest to locate points of interest relative to one or more primary reference points and storing the information in the desired manner. The device may optionally also include an illuminator which can be used to paint the area of interest to indicate both points of interest and primary reference points during and/or after data acquisition.
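The geometry underlying such a device, converting a range, azimuth, and inclination reading into Cartesian offsets from the instrument, can be sketched as follows; the axis conventions are an assumption, not taken from the patent.

```python
import math

def to_cartesian(r, azimuth_deg, inclination_deg):
    """Convert a range-finder reading (range, azimuth from north,
    inclination above horizontal) to local Cartesian offsets.
    Axis convention assumed here: x east, y north, z up."""
    az = math.radians(azimuth_deg)
    inc = math.radians(inclination_deg)
    horiz = r * math.cos(inc)             # horizontal distance component
    return (horiz * math.sin(az),         # east offset
            horiz * math.cos(az),         # north offset
            r * math.sin(inc))            # vertical offset

# A point 10 m away, due east (azimuth 90 deg), 30 deg above horizontal:
x, y, z = to_cartesian(10.0, 90.0, 30.0)
print(round(x, 3), round(y, 3), round(z, 3))   # 8.66 0.0 5.0
```

Relative locations between points of interest and reference points then reduce to vector differences of these offsets.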
Interferometric methods for mapping static electric and magnetic fields
DEFF Research Database (Denmark)
Pozzi, Giulio; Beleggia, Marco; Kasama, Takeshi
2014-01-01
The mapping of static electric and magnetic fields using electron probes with a resolution and sensitivity sufficient to reveal nanoscale features in materials requires the use of phase-sensitive methods such as the shadow technique, coherent Foucault imaging and the Transport of Intensity equation. We review theoretical models that form the basis of the quantitative interpretation of electron holographic data, the application of electron holography to a variety of samples (including electric fields associated with p–n junctions in semiconductors and quantized magnetic flux in superconductors), and the model-independent determination of the locations and magnitudes of field sources (electric charges and magnetic dipoles) directly from electron holographic data.
The higher order flux mapping method in large size PHWRs
International Nuclear Information System (INIS)
Kulkarni, A.K.; Balaraman, V.; Purandare, H.D.
1997-01-01
A new higher-order method is proposed for obtaining the flux map using a single set of expansion modes. In this procedure, one can make use of the difference between the predicted values of the detector readings and their actual values to determine the strength of the local fluxes around the detector sites. The local fluxes arise from constant perturbation changes (both extrinsic and intrinsic) taking place in the reactor. (author)
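The underlying expansion-mode idea, fitting mode strengths to in-core detector readings, can be sketched as a least-squares problem; the 1-D sine modes and synthetic readings below are illustrative stand-ins, not actual PHWR flux modes.

```python
import numpy as np

# Synthetic 1-D stand-in for flux mapping: expand
#   flux(x) = sum_m a_m * psi_m(x)
# and fit the mode strengths a_m to detector readings by least squares.
rng = np.random.default_rng(1)
x_det = np.linspace(0.05, 0.95, 12)                  # detector sites
modes = [lambda x, m=m: np.sin((m + 1) * np.pi * x) for m in range(4)]

a_true = np.array([1.0, 0.3, -0.2, 0.05])            # "true" mode strengths
readings = sum(a * f(x_det) for a, f in zip(a_true, modes))
readings = readings + 0.001 * rng.standard_normal(x_det.size)  # detector noise

Psi = np.column_stack([f(x_det) for f in modes])     # design matrix
a_fit, *_ = np.linalg.lstsq(Psi, readings, rcond=None)
print(np.max(np.abs(a_fit - a_true)))                # small: modes recovered
```

The higher-order refinement described in the abstract would then correct this global fit with local flux contributions inferred from the residuals at the detector sites.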
Arjunan, V; Rani, T; Santhanam, R; Mohan, S
2012-10-01
The FT-IR and FT-Raman spectra of the H-bond-inner conformer of 2,3-epoxypropanol have been recorded in the regions 3700-400 and 3700-100 cm(-1), respectively. The spectra were interpreted in terms of fundamental modes, combination and overtone bands. A normal coordinate analysis was carried out to confirm the precision of the assignments. The structures of the H-bond-inner and H-bond-outer1 conformers were optimised and the structural characteristics were determined by density functional theory (DFT) using B3LYP and MP2 methods with 6-31G** and 6-311++G** basis sets. The vibrational frequencies were calculated with all these methods and compared with the experimental frequencies, yielding good agreement between observed and calculated frequencies. The electronic properties, HOMO and LUMO energies, were determined by the time-dependent DFT (TD-DFT) approach. Copyright © 2012 Elsevier B.V. All rights reserved.
Friedrich, Lucas
2017-12-29
This work presents an entropy stable discontinuous Galerkin (DG) spectral element approximation for systems of non-linear conservation laws with general geometric (h) and polynomial order (p) non-conforming rectangular meshes. The crux of the proofs presented is that the nodal DG method is constructed with the collocated Legendre-Gauss-Lobatto nodes. This choice ensures that the derivative/mass matrix pair is a summation-by-parts (SBP) operator such that entropy stability proofs from the continuous analysis are discretely mimicked. Special attention is given to the coupling between nonconforming elements as we demonstrate that the standard mortar approach for DG methods does not guarantee entropy stability for non-linear problems, which can lead to instabilities. As such, we describe a precise procedure and modify the mortar method to guarantee entropy stability for general non-linear hyperbolic systems on h/p non-conforming meshes. We verify the high-order accuracy and the entropy conservation/stability of fully non-conforming approximation with numerical examples.
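The summation-by-parts property that underpins the entropy-stability proofs can be checked directly for the smallest LGL operator (three collocation points, polynomial degree N = 2):

```python
import numpy as np

# Legendre-Gauss-Lobatto collocation for N = 2 (three points).  The pair
# (M, D) satisfies the summation-by-parts (SBP) property
#     M D + (M D)^T = B,   B = diag(-1, 0, 1),
# the discrete analogue of integration by parts used in the entropy
# stability proofs for collocated DG.
x = np.array([-1.0, 0.0, 1.0])            # LGL nodes
w = np.array([1.0 / 3, 4.0 / 3, 1.0 / 3]) # LGL quadrature weights
M = np.diag(w)                            # diagonal mass (norm) matrix
D = np.array([[-1.5,  2.0, -0.5],         # Lagrange differentiation matrix
              [-0.5,  0.0,  0.5],
              [ 0.5, -2.0,  1.5]])
B = np.diag([-1.0, 0.0, 1.0])             # boundary matrix

Q = M @ D
print(np.allclose(Q + Q.T, B))            # True: SBP holds
print(np.allclose(D @ x**2, 2 * x))       # True: exact for polynomials
```

It is precisely this algebraic identity that the paper's modified mortar coupling must preserve across non-conforming interfaces for the continuous entropy estimates to carry over discretely.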
DEFF Research Database (Denmark)
Rist, Wolfgang; Jørgensen, Thomas J D; Roepstorff, Peter
2003-01-01
Stress conditions such as heat shock alter the transcriptional profile in all organisms. In Escherichia coli, upon temperature up-shift the heat shock transcription factor, sigma 32, out-competes the housekeeping sigma factor, sigma 70, for binding to core RNA polymerase and initiates heat shock gene transcription. To investigate possible heat-induced conformational changes in sigma 32, we performed amide hydrogen (H/D) exchange experiments under optimal growth and heat shock conditions, combined with mass spectrometry. We found a rapid exchange of around 220 of the 294 amide hydrogens at 37 degrees C, indicating that sigma 32 adopts a highly flexible structure. At 42 degrees C we observed a slow correlated exchange of 30 additional amide hydrogens and localized it to a helix-loop-helix motif within domain sigma 2 that is responsible for the recognition of the -10 region in heat shock …
Commutators method for boson mapping in the seniority scheme
International Nuclear Information System (INIS)
Bonatsos, D.; Klein, A.; Ching-Teh Li
1984-01-01
A new approximate method for carrying out the boson mapping in the seniority scheme is described, in which the boson expansions of the pair and multipole operators are determined by satisfying the commutation relations for the associated Lie algebra. The method is illustrated for the single-j shell-model algebra SO(2(2j + 1)). The calculation is successively carried out to lowest and to next-higher order, the latter exhibiting the necessity of including g-bosons in the calculation in order to reach algebraic consistency. Agreement with the exact result of Ginocchio for j = 3/2 is established to the order considered. (orig.)
A non conforming finite element method for computing eigenmodes of resonant cavities
International Nuclear Information System (INIS)
Touze, F.; Le Meur, G.
1990-06-01
We present here a non-conforming finite element in R³. This finite element, built on tetrahedrons, is particularly suited for computing eigenmodes. The main advantage of this element is that it preserves some structural properties of the space in which the solutions of Maxwell's equations are to be found. Numerical results are presented for both two-dimensional and three-dimensional cases.
Global Seabed Materials and Habitats Mapped: The Computational Methods
Jenkins, C. J.
2016-02-01
What the seabed is made of has proven difficult to map on the scale of whole ocean basins. Direct sampling and observation can be augmented with proxy-parameter methods such as acoustics. Both avenues are essential to obtain enough detail and coverage, and also to validate the mapping methods. We focus on the direct observations such as samplings, photo and video, probes, diver and sub reports, and surveyed features. These are often in word-descriptive form: over 85% of the records for site materials are in this form, whether as sample/view descriptions or classifications, or described parameters such as consolidation, color, odor, structures and components. Descriptions are absolutely necessary for unusual materials and for processes - in other words, for research. The dbSEABED project not only holds the largest collection of seafloor materials data worldwide, but also applies advanced computational methods to obtain the best possible coverage and detail. These techniques include linguistic text analysis (e.g., Natural Language Processing, NLP), fuzzy set theory (FST), and machine learning (ML, e.g., Random Forest). They allow efficient and accurate import of huge datasets, thereby making the best use of the data that exist. They merge quantitative and qualitative types of data into rich parameter sets, and extrapolate where the data are sparse for best map production. The dbSEABED data resources are now very widely used worldwide in oceanographic research, environmental management, the geosciences, engineering and surveying.
Contribution mapping: a method for mapping the contribution of research to enhance its impact
2012-01-01
Background At a time of growing emphasis on both the use of research and accountability, it is important for research funders, researchers and other stakeholders to monitor and evaluate the extent to which research contributes to better action for health, and find ways to enhance the likelihood that beneficial contributions are realized. Past attempts to assess research 'impact' struggle with operationalizing 'impact', identifying the users of research and attributing impact to research projects as the source. In this article we describe Contribution Mapping, a novel approach to research monitoring and evaluation that aims to assess contributions instead of impacts. The approach focuses on processes and actors and systematically assesses anticipatory efforts that aim to enhance contributions, so-called alignment efforts. The approach is designed to be useful both for accountability purposes and for assisting in better employing research to contribute to better action for health. Methods Contribution Mapping is inspired by a perspective from social studies of science on how research and knowledge utilization processes evolve. For each research project that is assessed, a three-phase process map is developed that includes the main actors, activities and alignment efforts during research formulation, production and knowledge extension (e.g. dissemination and utilization). The approach focuses on the actors involved in, or interacting with, a research project (the linked actors) and the most likely influential users, who are referred to as potential key users. In the first stage, the investigators of the assessed project are interviewed to develop a preliminary version of the process map and a first estimation of research-related contributions. In the second stage, potential key users and other informants are interviewed to trace, explore and triangulate possible contributions. In the third stage, the presence and role of alignment efforts is analyzed and the preliminary…
International Nuclear Information System (INIS)
Lee, Dong Soo; Lee, Jae Sung; Kim, Kyeong Min; Chung, June Key; Lee, Myung Chul
1998-01-01
We investigated the statistical methods used to compose the functional brain map of human working memory and the principal factors that affect the localization. Repeated PET scans with four successive tasks, consisting of one control and three different activation tasks, were performed on six right-handed normal volunteers for 2 minutes after bolus injections of 925 MBq H₂¹⁵O at intervals of 30 minutes. Image data were analyzed using SPM96 (Statistical Parametric Mapping) implemented in Matlab (Mathworks Inc., U.S.A.). Images from the same subject were spatially registered and normalized using linear and nonlinear transformation methods. Significant differences between the control and each activation state were estimated at every voxel based on the general linear model. Differences in global counts were removed using analysis of covariance (ANCOVA) with global activity as covariate. Using the mean and variance for each condition, adjusted by ANCOVA, a t-statistic was computed at every voxel. To ease interpretation, t-values were transformed to the standard Gaussian distribution (Z-score). All the subjects carried out the activation and control tests successfully, with an average rate of correct answers of 95%. The numbers of activated blobs were 4 for verbal memory I, 9 for verbal memory II, 9 for visual memory, and 6 for the conjunctive activation of the three tasks. Verbal working memory activates predominantly left-sided structures, whereas visual memory activates the right hemisphere. We conclude that rCBF PET imaging and the statistical parametric mapping method are useful in localizing the brain regions for verbal and visual working memory.
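The voxelwise pipeline just described (ANCOVA-adjusted condition means, a t-statistic per voxel, then a transform to Z-scores) can be sketched in a few lines. This is a minimal illustration with made-up per-voxel counts for six subjects, not the SPM96 implementation, and the t-to-Z step uses a classic normal approximation rather than the exact probability integral transform:

```python
import math
from statistics import mean, stdev

def paired_t(control, active):
    """Paired t-statistic across subjects for a single voxel."""
    diffs = [a - c for a, c in zip(active, control)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / math.sqrt(n))

def t_to_z(t, df):
    """Approximate Z-score with roughly the same tail area as a t value.

    Classic normal approximation z = t*(1 - 1/(4*df)) / sqrt(1 + t^2/(2*df));
    adequate for moderate df, unlike the exact t-CDF transform used by SPM.
    """
    return t * (1 - 1.0 / (4 * df)) / math.sqrt(1 + t * t / (2 * df))

# hypothetical ANCOVA-adjusted counts for one voxel, six subjects
control = [50.0, 52.0, 49.0, 51.0, 50.0, 53.0]
active = [55.0, 54.0, 53.0, 56.0, 52.0, 57.0]
t = paired_t(control, active)
z = t_to_z(t, df=len(control) - 1)
```

In practice this computation is repeated independently at every voxel, and the resulting Z map is thresholded to find the activated blobs.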
Directory of Open Access Journals (Sweden)
Natalia N. Gorinchoy
2012-06-01
The electron-conformational (EC) method is employed for toxicophore (Tph) identification and quantitative prediction of toxicity using a training set of 24 compounds that are considered fragrance allergens. The values of LD50 in oral exposure of rats were chosen as the measure of toxicity. EC parameters are evaluated on the basis of conformational analysis and ab initio electronic structure calculations (including solvent influence). The Tph consists of four sites which in this series of compounds are represented by three carbon atoms and one oxygen atom, but may be any other atoms that have the same electronic and geometric features within the tolerance limits. The regression model, taking into consideration the Tph flexibility, anti-Tph shielding, and the influence of out-of-Tph functional groups, predicts the experimental values of toxicity well (R² = 0.93) with reasonable leave-one-out cross-validation.
Medina-Cucurella, Angélica V; Zhu, Yaqi; Bowen, Scott J; Bergeron, Lisa M; Whitehead, Timothy A
2018-04-12
Nerve growth factor (NGF) plays a central role in multiple chronic pain conditions. As such, anti-NGF monoclonal antibodies (mAbs) that function by antagonizing NGF downstream signaling are leading drug candidates for non-opioid pain relief. To evaluate anti-canine NGF (cNGF) mAbs we sought a yeast surface display platform for cNGF. Both mature cNGF and pro-cNGF displayed on the yeast surface but bound conformationally sensitive mAbs at most 2.5-fold in mean fluorescence intensity above background, suggesting that cNGF was mostly misfolded. To improve the amount of folded, displayed cNGF, we used comprehensive mutagenesis, FACS, and deep sequencing to identify point mutants in the pro-region of canine NGF that enhance the amount of properly folded protein displayed on the yeast surface. Out of 1,737 tested single point mutants in the pro-region, 49 increased the amount of NGF recognized by conformationally sensitive mAbs. These gain-of-function mutations cluster between residues A-61 and P-26. Gain-of-function mutants were additive, and a construct containing three mutations increased the amount of folded cNGF to 23-fold above background. Using this new cNGF construct, fine conformational epitopes for tanezumab and three anti-cNGF mAbs were evaluated. The epitope revealed by the yeast experiments largely overlapped with the tanezumab epitope previously determined by X-ray crystallography. The other mAbs showed site-specific differences from tanezumab. As the number of binding epitopes for functionally neutralizing anti-NGF mAbs on NGF is limited, subtle differences in the individual interacting residues on NGF that bind each mAb contribute to the understanding of each antibody and variations in its neutralizing activity. These results demonstrate the potential of deep sequencing-guided protein engineering to improve the production of folded surface-displayed protein, and the resulting cNGF construct provides a platform to map conformational epitopes for other anti-neurotrophin mAbs.
A method for statistically comparing spatial distribution maps
Directory of Open Access Journals (Sweden)
Reynolds Mary G
2009-01-01
Background Ecological niche modeling is a method for estimating species distributions based on certain ecological parameters. Thus far, empirical determination of significant differences between independently generated distribution maps for a single species (maps created through equivalent processes but with different ecological input parameters) has been challenging. Results We describe a method for comparing model outcomes which allows a statistical evaluation of whether the strength of prediction and breadth of predicted areas are measurably different between projected distributions. To create ecological niche models for statistical comparison, we utilized GARP (Genetic Algorithm for Rule-Set Production) software to generate ecological niche models of human monkeypox in Africa. We created several models, keeping the case location input records constant for each model but varying the ecological input data. In order to assess the relative importance of each ecological parameter included in the development of the individual predicted distributions, we performed pixel-to-pixel comparisons between model outcomes and calculated the mean difference in pixel scores. We used a two-sample Student's t-test (with the null hypothesis that both maps were identical regardless of which input parameters were used) to examine whether the mean difference in corresponding pixel scores from one map to another was greater than would be expected by chance alone. We also utilized weighted kappa statistics, frequency distributions, and percent difference to look at the disparities in pixel scores. Multiple independent statistical tests indicated precipitation as the single most important independent ecological parameter in the niche model for human monkeypox disease. Conclusion In addition to improving our understanding of the natural factors influencing the distribution of human monkeypox disease, such pixel-to-pixel comparison…
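The pixel-to-pixel comparison described above can be sketched in miniature. The pixel scores below are invented for illustration, and Welch's unequal-variance form of the two-sample t-statistic is used as a stand-in for the pooled Student's version mentioned in the abstract:

```python
import math
from statistics import mean, stdev

def mean_pixel_difference(map_a, map_b):
    """Mean difference in corresponding pixel scores between two maps."""
    return mean(a - b for a, b in zip(map_a, map_b))

def two_sample_t(x, y):
    """Welch's two-sample t-statistic on the pixel scores of two maps."""
    vx, vy = stdev(x) ** 2, stdev(y) ** 2
    return (mean(x) - mean(y)) / math.sqrt(vx / len(x) + vy / len(y))

# hypothetical prediction-strength scores at five corresponding pixels
map_a = [0.8, 0.6, 0.9, 0.7, 0.5]
map_b = [0.4, 0.3, 0.5, 0.2, 0.4]
d = mean_pixel_difference(map_a, map_b)
t = two_sample_t(map_a, map_b)
```

A large |t| relative to the null suggests that changing the ecological input parameter genuinely shifted the projected distribution, which is how the importance of parameters such as precipitation was ranked.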
Validity of the CT to attenuation coefficient map conversion methods
International Nuclear Information System (INIS)
Faghihi, R.; Ahangari Shahdehi, R.; Fazilat Moadeli, M.
2004-01-01
The most important commercialized methods of attenuation correction in SPECT are based on an attenuation coefficient map obtained from a transmission imaging method. The transmission imaging system can be a linear radionuclide source or an X-ray CT system. The image from the transmission imaging system is not useful unless the CT number is replaced with the attenuation coefficient at the SPECT energy. In this paper we attempt to evaluate the validity and estimate the error of the most commonly used methods for this transformation. The final results show that the methods which use a linear or multi-linear curve accept an error in their estimation. The tube current (mA) is not important, but the patient thickness is very important and can introduce an error of more than 10 percent in the final result.
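A common concrete instance of the linear/multi-linear conversion evaluated here is the bilinear model, with one slope from air to water (HU ≤ 0) and a steeper one for bone-like CT numbers. The coefficients below are illustrative placeholders only, since the correct values depend on the SPECT photon energy:

```python
def hu_to_mu(hu, mu_water=0.153, mu_bone_1000=0.236):
    """Bilinear CT-number-to-attenuation-coefficient conversion (cm^-1).

    Illustrative coefficients; mu_water and the bone-segment endpoint
    must be chosen for the actual SPECT photon energy.
    """
    if hu <= 0:
        # air (-1000 HU) to water (0 HU): scale linearly down to zero
        return mu_water * (1.0 + hu / 1000.0)
    # above water: steeper slope toward dense bone at ~1000 HU
    return mu_water + hu * (mu_bone_1000 - mu_water) / 1000.0
```

Applying such a function voxel-by-voxel to a CT volume yields the attenuation map used in the SPECT reconstruction; the paper's point is that the piecewise-linear fit itself carries an error that grows with patient thickness.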
Liu, Fan; Abrol, Ravinder; Goddard, William, III; Dougherty, Dennis
2014-03-01
The entropic effect in GPCR activation is poorly understood. Based on recently solved structures, researchers in the GPCR structural biology field have proposed several "local activating switches" consisting of a small number of conserved residues, but have long ignored the collective dynamical effect (conformational entropy) of a domain comprising an ensemble of residues. A new paradigm has been proposed recently in which a GPCR is viewed as a composition of several functional coupling domains, each of which undergoes order-to-disorder or disorder-to-order transitions upon activation. Here we identified and studied these functional coupling domains by comparing the local entropy changes of each residue between the inactive and active states of the β2 adrenergic receptor in computational simulations. We found that agonist and G-protein binding increases the heterogeneity of the entropy distribution in the receptor. This new activation paradigm and computational entropy analysis scheme provide novel ways to design functionally modified mutants and identify new allosteric sites for GPCRs. The authors thank NIH and Sanofi for funding this project.
Banner, David W; Gsell, Bernard; Benz, Jörg; Bertschinger, Julian; Burger, Dominique; Brack, Simon; Cuppuleri, Simon; Debulpaep, Maja; Gast, Alain; Grabulovski, Dragan; Hennig, Michael; Hilpert, Hans; Huber, Walter; Kuglstatter, Andreas; Kusznir, Eric; Laeremans, Toon; Matile, Hugues; Miscenic, Christian; Rufer, Arne C; Schlatter, Daniel; Steyaert, Jan; Stihle, Martine; Thoma, Ralf; Weber, Martin; Ruf, Armin
2013-06-01
The aspartic protease BACE2 is responsible for the shedding of the transmembrane protein Tmem27 from the surface of pancreatic β-cells, which leads to inactivation of the β-cell proliferating activity of Tmem27. This role of BACE2 in the control of β-cell maintenance suggests BACE2 as a drug target for diabetes. Inhibition of BACE2 has recently been shown to lead to improved control of glucose homeostasis and to increased insulin levels in insulin-resistant mice. BACE2 has 52% sequence identity to the well studied Alzheimer's disease target enzyme β-secretase (BACE1). High-resolution BACE2 structures would contribute significantly to the investigation of this enzyme as either a drug target or anti-target. Surface mutagenesis, BACE2-binding antibody Fab fragments, single-domain camelid antibody VHH fragments (Xaperones) and Fyn-kinase-derived SH3 domains (Fynomers) were used as crystallization helpers to obtain the first high-resolution structures of BACE2. Eight crystal structures in six different packing environments define an ensemble of low-energy conformations available to the enzyme. Here, the different strategies used for raising and selecting BACE2 binders for cocrystallization are described and the crystallization success, crystal quality and the time and resources needed to obtain suitable crystals are compared.
Kishimoto, Naoki; Waizumi, Hiroki
2017-10-01
Stable conformers of L-cysteine and L,L-cystine were explored using an automated and efficient conformational searching method. The Gibbs energies of the stable conformers of L-cysteine and L,L-cystine were calculated with G4 and MP2 methods, respectively, at 450, 298.15, and 150 K. By assuming thermodynamic equilibrium and the barrier energies for the conformational isomerization pathways, the estimated ratios of the stable conformers of L-cysteine were compared with those determined by microwave spectroscopy in a previous study. Equilibrium structures of 1:1 and 2:1 cystine-Fe complexes were also calculated, and the energy of insertion of Fe into the disulfide bond was obtained.
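The thermodynamic-equilibrium step above (estimating conformer ratios from Gibbs energies) amounts to a Boltzmann weighting. This sketch uses hypothetical relative Gibbs energies, not the G4/MP2 values from the study:

```python
import math

R = 8.314462618e-3  # gas constant in kJ/(mol*K)

def conformer_ratios(gibbs_kj_mol, temperature_k):
    """Boltzmann populations of conformers from relative Gibbs energies.

    Energies are taken relative to the most stable conformer, so the
    exponentials stay well-conditioned.
    """
    g_min = min(gibbs_kj_mol)
    weights = [math.exp(-(g - g_min) / (R * temperature_k)) for g in gibbs_kj_mol]
    total = sum(weights)
    return [w / total for w in weights]

# hypothetical relative Gibbs energies (kJ/mol) for three conformers at 298.15 K
ratios = conformer_ratios([0.0, 1.2, 3.5], 298.15)
```

Repeating the calculation at 450, 298.15, and 150 K shows how the population shifts toward the lowest-energy conformer as the temperature drops, which is the comparison made against the microwave spectroscopy ratios.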
Mapping enzymatic catalysis using the effective fragment molecular orbital method
DEFF Research Database (Denmark)
Svendsen, Casper Steinmann; Fedorov, Dmitri G.; Jensen, Jan Halborg
2013-01-01
We extend the Effective Fragment Molecular Orbital (EFMO) method to the frozen domain approach, where only the geometry of an active part is optimized while the many-body polarization effects are considered for the whole system. The new approach efficiently mapped out the entire reaction path of chorismate mutase in less than four days using 80 cores on 20 nodes, where the whole system containing 2398 atoms is treated in the ab initio fashion without using any force fields. The reaction path is constructed automatically with the only assumption of defining the reaction coordinate a priori. We…
Adopting of Agile methods in Software Development Organizations: Systematic Mapping
Directory of Open Access Journals (Sweden)
Samia Abdalhamid
2017-11-01
Adoption of agile methods in software development organizations is considered a powerful solution for dealing with a quickly changing and constantly evolving business environment and well-informed customers with constantly rising expectations, such as shorter delivery times and an extraordinary level of response and service. This study investigates the adoption of agile approaches in software development organizations using systematic mapping. Six research questions are identified, and to answer them a number of research papers were reviewed in electronic databases. Finally, 25 research papers were examined and answers to all research questions are provided.
Tuzun, Burak; Yavuz, Sevtap Caglar; Sabanci, Nazmiye; Saripinar, Emin
2018-05-13
In the present work, pharmacophore identification and biological activity prediction for 86 pyrazole pyridine carboxylic acid derivatives were made using the electron-conformational genetic algorithm approach, which we introduced in recent years as a 4D-QSAR analysis. In the light of data obtained from quantum chemical calculations at the HF/6-311G** level, the electron-conformational matrices of congruity (ECMC) were constructed with the EMRE software. By comparing the matrices, the electron-conformational submatrix of activity (ECSA, Pha) common to these compounds within a minimum tolerance was revealed. A parameter pool was generated considering the obtained pharmacophore. To determine the theoretical biological activity of the molecules and identify the best subset of variables affecting bioactivities, we used the nonlinear least squares regression method and a genetic algorithm. The results obtained in this study are in good agreement with the experimental data presented in the literature. The model for the training and test sets attained with the optimum 12 parameters gave highly satisfactory results: R²(training) = 0.889, q² = 0.839, SE(training) = 0.066, q²(ext1) = 0.770, q²(ext2) = 0.750, q²(ext3) = 0.824, CCC(tr) = 0.941, CCC(test) = 0.869 and CCC(all) = 0.927. Copyright © Bentham Science Publishers.
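The fit statistics quoted above follow standard QSAR definitions; as a minimal sketch (toy numbers, not the paper's data), R² for fitted values and the leave-one-out cross-validated q² are both of the form 1 minus a residual sum of squares over the total sum of squares:

```python
def r_squared(y_obs, y_fit):
    """Coefficient of determination for fitted activity values."""
    y_mean = sum(y_obs) / len(y_obs)
    ss_res = sum((o - f) ** 2 for o, f in zip(y_obs, y_fit))
    ss_tot = sum((o - y_mean) ** 2 for o in y_obs)
    return 1.0 - ss_res / ss_tot

def q2_loo(y_obs, y_pred_loo):
    """Leave-one-out cross-validated q2 = 1 - PRESS/TSS.

    y_pred_loo[i] is the prediction for sample i from a model fitted
    with sample i left out.
    """
    y_mean = sum(y_obs) / len(y_obs)
    press = sum((o - p) ** 2 for o, p in zip(y_obs, y_pred_loo))
    tss = sum((o - y_mean) ** 2 for o in y_obs)
    return 1.0 - press / tss
```

The external q²(ext) variants reported in the abstract differ only in which observations and which reference mean enter the two sums.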
Method of solving conformal models in D-dimensional space I
International Nuclear Information System (INIS)
Fradkin, E.S.; Palchik, M.Y.
1996-01-01
We study the Hilbert space of conformal field theory in D-dimensional space. The latter is shown to have model-independent structure. The states of matter fields and gauge fields form orthogonal subspaces. The dynamical principle fixing the choice of model may be formulated either in each of these subspaces or in their direct sum. In the latter case, gauge interactions are necessarily present in the model. We formulate the conditions specifying the class of models where gauge interactions are being neglected. The anomalous Ward identities are derived. Different values of anomalous parameters (D-dimensional analogs of a central charge, including operator ones) correspond to different models. The structure of these models is analogous to that of 2-dimensional conformal theories. Each model is specified by a D-dimensional analog of a null vector. The exact solutions of the simplest models of this type are examined. It is shown that these models are equivalent to Lagrangian models of scalar fields with a triple interaction. The values of dimensions of such fields are calculated, and the closed sets of differential equations for higher Green functions are derived. Copyright 1996 Academic Press, Inc.
Rosa-Garrido, Manuel; Chapski, Douglas J; Schmitt, Anthony D; Kimball, Todd H; Karbassi, Elaheh; Monte, Emma; Balderas, Enrique; Pellegrini, Matteo; Shih, Tsai-Ting; Soehalim, Elizabeth; Liem, David; Ping, Peipei; Galjart, Niels J; Ren, Shuxun; Wang, Yibin; Ren, Bing; Vondriska, Thomas M
2017-10-24
Cardiovascular disease is associated with epigenomic changes in the heart; however, the endogenous structure of cardiac myocyte chromatin has never been determined. To investigate the mechanisms of epigenomic function in the heart, genome-wide chromatin conformation capture (Hi-C) and DNA sequencing were performed in adult cardiac myocytes following development of pressure overload-induced hypertrophy. Mice with cardiac-specific deletion of CTCF (a ubiquitous chromatin structural protein) were generated to explore the role of this protein in chromatin structure and cardiac phenotype. Transcriptome analyses by RNA-seq were conducted as a functional readout of the epigenomic structural changes. Depletion of CTCF was sufficient to induce heart failure in mice, and human patients with heart failure receiving mechanical unloading via left ventricular assist devices show increased CTCF abundance. Chromatin structural analyses revealed interactions within the cardiac myocyte genome at 5-kb resolution, enabling examination of intra- and interchromosomal events, and providing a resource for future cardiac epigenomic investigations. Pressure overload or CTCF depletion selectively altered boundary strength between topologically associating domains and A/B compartmentalization, measurements of genome accessibility. Heart failure involved decreased stability of chromatin interactions around disease-causing genes. In addition, pressure overload or CTCF depletion remodeled long-range interactions of cardiac enhancers, resulting in a significant decrease in local chromatin interactions around these functional elements. These findings provide a high-resolution chromatin architecture resource for cardiac epigenomic investigations and demonstrate that global structural remodeling of chromatin underpins heart failure. The newly identified principles of endogenous chromatin structure have key implications for epigenetic therapy. © 2017 The Authors.
Miake-Lye, Isomi M; Hempel, Susanne; Shanman, Roberta; Shekelle, Paul G
2016-02-10
The need for systematic methods for reviewing evidence is continuously increasing. Evidence mapping is one emerging method. There are no authoritative recommendations for what constitutes an evidence map or what methods should be used, and anecdotal evidence suggests heterogeneity in both. Our objectives are to identify published evidence maps and to compare and contrast the presented definitions of evidence mapping, the domains used to classify data in evidence maps, and the form the evidence map takes. We conducted a systematic review of publications that presented results with a process termed "evidence mapping" or included a figure called an "evidence map." We identified publications from searches of ten databases through 8/21/2015, reference mining, and consulting topic experts. We abstracted the research question, the unit of analysis, the search methods and search period covered, and the country of origin. Data were narratively synthesized. Thirty-nine publications met inclusion criteria. Published evidence maps varied in their definition and the form of the evidence map. Of the 31 definitions provided, 67 % described the purpose as identification of gaps and 58 % referenced a stakeholder engagement process or user-friendly product. All evidence maps explicitly used a systematic approach to evidence synthesis. Twenty-six publications referred to a figure or table explicitly called an "evidence map," eight referred to an online database as the evidence map, and five stated they used a mapping methodology but did not present a visual depiction of the evidence. The principal conclusion of our evaluation of studies that call themselves "evidence maps" is that the implied definition of what constitutes an evidence map is a systematic search of a broad field to identify gaps in knowledge and/or future research needs that presents results in a user-friendly format, often a visual figure or graph, or a searchable database. Foundational work is needed to better
DEFF Research Database (Denmark)
Ryttov, Thomas Aaby; Sannino, Francesco
2010-01-01
…fixed point. As a consistency check we recover the previously investigated bounds of the conformal windows when restricting to a single matter representation. The earlier conformal windows can be imagined to be part now of the new conformal house. We predict the nonperturbative anomalous dimensions at the infrared fixed points. We further investigate the effects of adding mass terms to the condensates on the conformal house chiral dynamics and construct the simplest instanton-induced effective Lagrangian terms…
Why Map Issues? On Controversy Analysis as a Digital Method.
Marres, Noortje
2015-09-01
This article takes stock of recent efforts to implement controversy analysis as a digital method in the study of science, technology, and society (STS) and beyond and outlines a distinctive approach to address the problem of digital bias. Digital media technologies exert significant influence on the enactment of controversy in online settings, and this risks undermining the substantive focus of controversy analysis conducted by digital means. To address this problem, I propose a shift in thematic focus from controversy analysis to issue mapping. The article begins by distinguishing between three broad frameworks that currently guide the development of controversy analysis as a digital method, namely, demarcationist, discursive, and empiricist. Each has been adopted in STS, but only the last one offers a digital "move beyond impartiality." I demonstrate this approach by analyzing issues of Internet governance with the aid of the social media platform Twitter.
Al-Shawba, Altaf Abdulkarem; Gepreel, K. A.; Abdullah, F. A.; Azmi, A.
2018-06-01
In the current study, we use the (G′/G)-expansion method to construct closed-form solutions of the seventh-order time-fractional Sawada-Kotera-Ito (TFSKI) equation based on the conformable fractional derivative. As a result, trigonometric, hyperbolic and rational function solutions with arbitrary constants are obtained. When the arbitrary constants take special values, periodic and soliton solutions are obtained from the travelling wave solutions. The obtained solutions are new and not found elsewhere. The effect of the fractional order on some of these solutions is represented graphically to illustrate the behavior of the exact solutions as the parameters take particular values.
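The conformable fractional derivative underlying the TFSKI equation has, for differentiable f and t > 0, the simple form T_α f(t) = t^(1-α) f'(t). The sketch below checks the textbook property T_α(t^p) = p·t^(p-α) numerically; the values of p, α and t are illustrative and unrelated to the TFSKI solutions themselves:

```python
def conformable_derivative(f, t, alpha, h=1e-6):
    """Conformable fractional derivative T_alpha f(t) = t**(1-alpha) * f'(t).

    Valid for differentiable f and t > 0; f'(t) is approximated by a
    central difference.
    """
    fprime = (f(t + h) - f(t - h)) / (2 * h)
    return t ** (1 - alpha) * fprime

# property check: T_alpha(t**p) should equal p * t**(p - alpha)
p, alpha, t = 3.0, 0.5, 2.0
numeric = conformable_derivative(lambda x: x ** p, t, alpha)
exact = p * t ** (p - alpha)
```

This reduction to an ordinary first derivative (times a power of t) is what lets travelling-wave substitutions turn the fractional PDE into an ODE amenable to the (G′/G)-expansion.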
Seasonal comparison of two spatially distributed evapotranspiration mapping methods
Kisfaludi, Balázs; Csáki, Péter; Péterfalvi, József; Primusz, Péter
2017-04-01
More rainfall is returned through evapotranspiration (ET) on a global scale than through runoff and storage combined. In Hungary, about 90% of the precipitation evapotranspires from the land and only 10% goes to surface runoff and groundwater recharge. Evapotranspiration is therefore a very important element of the water balance and a suitable parameter for the calibration of hydrological models. Monthly ET values of two MODIS-based ET products were compared for the area of Hungary over the vegetation period of 2008. The differences were assessed by land cover type and by elevation zone. One ET map was MOD16, aiming at global coverage and provided by the MODIS Global Evaporation Project. The other method, called CREMAP, was developed at the Budapest University of Technology and Economics for regional-scale ET mapping. CREMAP was validated for the area of Hungary with good results, but ET maps were produced only for the period 2000-2008. The aim of this research was to evaluate the performance of the MOD16 product against the CREMAP method. The average difference between the two products was highest during summer, with CREMAP estimating higher ET values by about 25 mm/month. In spring and autumn, MOD16 ET values were higher by an average of 6 mm/month. The differences by land cover type showed a seasonal pattern similar to the average differences and correlated strongly with each other. Practically the same difference values were obtained for arable lands and forests, which together cover nearly 75% of the area of the country. It can therefore be said that seasonal changes had the same effect on the two methods' ET estimates in each land cover type. The analysis by elevation zones showed that at elevations lower than 200 m AMSL the trends of the difference values were similar to the average differences. The correlation between the values of these elevation zones was also strong. However, weaker…
Conformation analysis of trehalose. Molecular dynamics simulation and molecular mechanics
International Nuclear Information System (INIS)
Donnamaira, M.C.; Howard, E.I.; Grigera, J.R.
1992-09-01
Conformational analysis of the disaccharide trehalose is performed by molecular dynamics and molecular mechanics. In spite of the different force fields used in each case, comparison between the molecular dynamics trajectories of the torsional angles of the glycosidic linkage and the conformational energy map shows good agreement between the two methods. Molecular dynamics reveals a moderate mobility of the glycosidic linkage. The demand for computer time is comparable in both cases. (author). 6 refs, 4 figs
A novel method of S-box design based on chaotic map and composition method
International Nuclear Information System (INIS)
Lambić, Dragan
2014-01-01
Highlights: • A novel chaotic S-box generation method is presented. • The presented S-box has better cryptographic properties than other examples of chaotic S-boxes. • The advantages of the proposed method are low complexity and a large key space. -- Abstract: An efficient algorithm for obtaining random bijective S-boxes based on chaotic maps and the composition method is presented. The proposed method is based on compositions of S-boxes from a fixed starting set. The sequence of indices of the starting S-boxes used is obtained with chaotic maps. Performance tests show that the S-box presented in this paper has good cryptographic properties. The advantages of the proposed method are its low complexity and the possibility of achieving a large key space.
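The composition scheme can be illustrated in miniature with 4-bit permutations. The starting S-boxes, the logistic-map parameters, and the key values below are hypothetical stand-ins for the paper's 8-bit construction; the point is only that a chaotic index sequence selects which fixed bijections to compose, and that composing bijections yields a bijection:

```python
def logistic_indices(x0, r, count, n_boxes):
    """Index sequence derived from the logistic map x -> r*x*(1-x)."""
    idx, x = [], x0
    for _ in range(count):
        x = r * x * (1 - x)
        idx.append(int(x * n_boxes) % n_boxes)
    return idx

def compose(p, q):
    """Composition p∘q of two bijective S-boxes given as permutation lists."""
    return [p[q[i]] for i in range(len(q))]

# toy fixed starting set of 4-bit bijective S-boxes (hypothetical permutations)
start = [
    [3, 0, 1, 2, 7, 4, 5, 6, 11, 8, 9, 10, 15, 12, 13, 14],
    [1, 3, 0, 2, 5, 7, 4, 6, 9, 11, 8, 10, 13, 15, 12, 14],
    [2, 1, 3, 0, 6, 5, 7, 4, 10, 9, 11, 8, 14, 13, 15, 12],
]

sbox = list(range(16))  # start from the identity permutation
for i in logistic_indices(x0=0.61, r=3.99, count=8, n_boxes=len(start)):
    sbox = compose(start[i], sbox)
```

The key space comes from the chaotic-map seed and parameter (x0, r here), since different seeds select different composition sequences.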
Killing tensors and conformal Killing tensors from conformal Killing vectors
International Nuclear Information System (INIS)
Rani, Raffaele; Edgar, S Brian; Barnes, Alan
2003-01-01
Koutras has proposed some methods to construct reducible proper conformal Killing tensors and Killing tensors (which are, in general, irreducible) when a pair of orthogonal conformal Killing vectors exist in a given space. We give the completely general result demonstrating that this severe restriction of orthogonality is unnecessary. In addition, we correct and extend some results concerning Killing tensors constructed from a single conformal Killing vector. A number of examples demonstrate that it is possible to construct a much larger class of reducible proper conformal Killing tensors and Killing tensors than permitted by the Koutras algorithms. In particular, by showing that all conformal Killing tensors are reducible in conformally flat spaces, we have a method of constructing all conformal Killing tensors, and hence all the Killing tensors (which will in general be irreducible) of conformally flat spaces using their conformal Killing vectors
A taxonomy of behaviour change methods: an Intervention Mapping approach.
Kok, Gerjo; Gottlieb, Nell H; Peters, Gjalt-Jorn Y; Mullen, Patricia Dolan; Parcel, Guy S; Ruiter, Robert A C; Fernández, María E; Markham, Christine; Bartholomew, L Kay
2016-09-01
In this paper, we introduce the Intervention Mapping (IM) taxonomy of behaviour change methods and its potential to be developed into a coding taxonomy. That is, although IM and its taxonomy of behaviour change methods are not in fact new, because IM was originally developed as a tool for intervention development, this potential was not immediately apparent. Second, in explaining the IM taxonomy and defining the relevant constructs, we call attention to the existence of parameters for effectiveness of methods, and explicate the related distinction between theory-based methods and practical applications and the probability that poor translation of methods may lead to erroneous conclusions as to method-effectiveness. Third, we recommend a minimal set of intervention characteristics that may be reported when intervention descriptions and evaluations are published. Specifying these characteristics can greatly enhance the quality of our meta-analyses and other literature syntheses. In conclusion, the dynamics of behaviour change are such that any taxonomy of methods of behaviour change needs to acknowledge the importance of, and provide instruments for dealing with, three conditions for effectiveness for behaviour change methods. For a behaviour change method to be effective: (1) it must target a determinant that predicts behaviour; (2) it must be able to change that determinant; (3) it must be translated into a practical application in a way that preserves the parameters for effectiveness and fits with the target population, culture, and context. Thus, taxonomies of methods of behaviour change must distinguish the specific determinants that are targeted, practical, specific applications, and the theory-based methods they embody. In addition, taxonomies should acknowledge that the lists of behaviour change methods will be used by, and should be used by, intervention developers. Ideally, the taxonomy should be readily usable for this goal; but alternatively, it should be
Flood maps in Europe - methods, availability and use
de Moel, H.; van Alphen, J.; Aerts, J. C. J. H.
2009-03-01
To support the transition from traditional flood defence strategies to a flood risk management approach at the basin scale in Europe, the EU has adopted a new Directive (2007/60/EC) at the end of 2007. One of the major tasks which member states must carry out in order to comply with this Directive is to map flood hazards and risks in their territory, which will form the basis of future flood risk management plans. This paper gives an overview of existing flood mapping practices in 29 countries in Europe and shows what maps are already available and how such maps are used. Roughly half of the countries considered have maps covering as good as their entire territory, and another third have maps covering significant parts of their territory. Only five countries have very limited or no flood maps available yet. Of the different flood maps distinguished, it appears that flood extent maps are the most commonly produced flood maps (in 23 countries), but flood depth maps are also regularly created (in seven countries). Very few countries have developed flood risk maps that include information on the consequences of flooding. The available flood maps are mostly developed by governmental organizations and primarily used for emergency planning, spatial planning, and awareness raising. In spatial planning, flood zones delimited on flood maps mainly serve as guidelines and are not binding. Even in the few countries (e.g. France, Poland) where there is a legal basis to regulate floodplain developments using flood zones, practical problems are often faced which reduce the mitigating effect of such binding legislation. Flood maps, also mainly extent maps, are also created by the insurance industry in Europe and used to determine insurability, differentiate premiums, or to assess long-term financial solvency. Finally, flood maps are also produced by international river commissions. With respect to the EU Flood Directive, many countries already have a good starting point to map
Flood maps in Europe – methods, availability and use
Directory of Open Access Journals (Sweden)
J. C. J. H. Aerts
2009-03-01
Full Text Available To support the transition from traditional flood defence strategies to a flood risk management approach at the basin scale in Europe, the EU has adopted a new Directive (2007/60/EC) at the end of 2007. One of the major tasks which member states must carry out in order to comply with this Directive is to map flood hazards and risks in their territory, which will form the basis of future flood risk management plans. This paper gives an overview of existing flood mapping practices in 29 countries in Europe and shows what maps are already available and how such maps are used. Roughly half of the countries considered have maps covering as good as their entire territory, and another third have maps covering significant parts of their territory. Only five countries have very limited or no flood maps available yet. Of the different flood maps distinguished, it appears that flood extent maps are the most commonly produced flood maps (in 23 countries), but flood depth maps are also regularly created (in seven countries). Very few countries have developed flood risk maps that include information on the consequences of flooding. The available flood maps are mostly developed by governmental organizations and primarily used for emergency planning, spatial planning, and awareness raising. In spatial planning, flood zones delimited on flood maps mainly serve as guidelines and are not binding. Even in the few countries (e.g. France, Poland) where there is a legal basis to regulate floodplain developments using flood zones, practical problems are often faced which reduce the mitigating effect of such binding legislation. Flood maps, also mainly extent maps, are also created by the insurance industry in Europe and used to determine insurability, differentiate premiums, or to assess long-term financial solvency. Finally, flood maps are also produced by international river commissions. With respect to the EU Flood Directive, many countries already have a good starting
Directory of Open Access Journals (Sweden)
Mohammad Abuei Ardakan
2010-04-01
Full Text Available The present paper offers a basic introduction to data clustering and demonstrates the application of clustering methods in drawing maps of science. Approaches to the classification and clustering of information are briefly discussed, and their application to the visualization of conceptual information and the drawing of science maps is illustrated by reviewing similar research in this field. By implementing an agglomerative hierarchical clustering algorithm based on the complete-link method, the map of urban management science as an emerging, interdisciplinary scientific field is analyzed and reviewed.
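The complete-link agglomerative clustering mentioned above can be shown in a minimal, self-contained sketch; the 2-D points below are synthetic stand-ins for document feature vectors, not the paper's science-mapping data:

```python
# Complete-link agglomerative clustering: repeatedly merge the two clusters
# whose *farthest* pair of members is closest, until k clusters remain.

def dist(p, q):
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

def complete_link(points, k):
    clusters = [[i] for i in range(len(points))]
    while len(clusters) > k:
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                # complete link: the maximum pairwise distance defines
                # the distance between two clusters
                d = max(dist(points[i], points[j])
                        for i in clusters[a] for j in clusters[b])
                if best is None or d < best[0]:
                    best = (d, a, b)
        _, a, b = best
        clusters[a] += clusters[b]
        del clusters[b]
    return clusters

pts = [(0, 0), (0.2, 0.1), (0.1, 0.3),      # tight group A
       (5, 5), (5.1, 4.8), (4.9, 5.2)]      # tight group B
groups = complete_link(pts, 2)
assert sorted(sorted(g) for g in groups) == [[0, 1, 2], [3, 4, 5]]
```

Complete link tends to produce compact, similar-sized clusters, which is one reason it is a common choice for grouping documents in science mapping.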
Mapping urban environmental noise: a land use regression method.
Xie, Dan; Liu, Yi; Chen, Jining
2011-09-01
Forecasting and preventing urban noise pollution are major challenges in urban environmental management. Most existing efforts, including experiment-based models, statistical models, and noise mapping, however, have limited capacity to explain the association between urban growth and corresponding noise change. Therefore, these conventional methods can hardly forecast urban noise at a given outlook of development layout. This paper, for the first time, introduces a land use regression method, which has been applied to simulating urban air quality for a decade, to construct an urban noise model (LUNOS) in Dalian Municipality, Northeast China. The LUNOS model describes noise as a dependent variable of various surrounding land-use areas via a regressive function. The results suggest that a linear model performs better in fitting monitoring data, and there is no significant difference in the LUNOS's outputs when applied to different spatial scales. As the LUNOS facilitates a better understanding of the association between land use and urban environmental noise in comparison to conventional methods, it can be regarded as a promising tool for noise prediction for planning purposes and an aid to smart decision-making.
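The land use regression idea behind a model like LUNOS can be sketched with a single predictor; all numbers are synthetic illustrations, and the closed-form one-variable fit stands in for the full multivariate model:

```python
import random

# Land use regression (LUR) sketch: noise measured at monitoring sites is
# regressed on a surrounding land-use variable (here, road area in a buffer).

rng = random.Random(1)
road_area = [rng.uniform(0, 1) for _ in range(50)]         # predictor per site
noise_db = [55 + 12 * a + rng.gauss(0, 0.5) for a in road_area]  # "measured"

# Ordinary least squares for noise = b0 + b1 * road_area.
n = len(road_area)
mx = sum(road_area) / n
my = sum(noise_db) / n
b1 = (sum((x - mx) * (y - my) for x, y in zip(road_area, noise_db))
      / sum((x - mx) ** 2 for x in road_area))
b0 = my - b1 * mx

# The fit should recover the generating coefficients closely.
assert abs(b1 - 12) < 1 and abs(b0 - 55) < 1
```

A real LUR noise model would regress on several land-use classes at once (roads, green space, commercial area, etc.), but the fitting principle is the same.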
A scalable hybrid multi-robot SLAM method for highly detailed maps
Pfingsthorn, M.; Slamet, B.; Visser, A.
2008-01-01
Recent successful SLAM methods employ hybrid map representations combining the strengths of topological maps and occupancy grids. Such representations often facilitate multi-agent mapping. In this paper, a successful SLAM method is presented, which is inspired by the manifold data structure by
This section provides information on: current laws, regulations and guidance, policy and technical guidance, project-level conformity, general information, contacts and training, adequacy review of SIP submissions
Electromagnetic methods for mapping freshwater lenses on Micronesian atoll islands
Anthony, S.S.
1992-01-01
The overall shape of freshwater lenses can be determined by applying electromagnetic methods and inverse layered-earth modeling to the mapping of atoll island freshwater lenses. Conductivity profiles were run across the width of the inhabited islands at Mwoakilloa, Pingelap, and Sapwuahfik atolls of the Pohnpei State, Federated States of Micronesia using a dual-loop, frequency-domain, electromagnetic profiling system. Six values of apparent conductivity were recorded at each sounding station and were used to interpret layer conductivities and/or thicknesses. A three-layer model that includes the unsaturated, freshwater, and saltwater zones was used to simulate apparent-conductivity data measured in the field. Interpreted results were compared with chloride-concentration data from monitoring wells and indicate that the interface between freshwater and saltwater layers, defined from electromagnetic data, is located in the upper part of the transition zone, where the chloride-concentration profile shows a rapid increase with depth. The electromagnetic method can be used to interpret the thickness of the freshwater between monitoring wells, but can not be used to interpret the thickness of freshwater from monitoring wells to the margin of an island. ?? 1992.
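The interpretation step described above can be caricatured as fitting a layered model to multi-depth readings. The layer conductivities, depths, and the simple thickness-weighted response below are illustrative assumptions, not the dual-loop instrument's true response function:

```python
# Toy layered-earth interpretation: apparent conductivity at several depths of
# investigation is modelled as a thickness-weighted average over three layers
# (unsaturated / freshwater / saltwater), then the freshwater thickness is
# recovered by grid search against the "field" readings.

LAYER_COND = {"unsat": 1.0, "fresh": 5.0, "salt": 400.0}  # mS/m (assumed)
UNSAT_THICK = 2.0                                         # m (assumed known)

def apparent_conductivity(fresh_thick, depth):
    """Thickness-weighted mean conductivity down to `depth` metres."""
    t_unsat = min(depth, UNSAT_THICK)
    t_fresh = min(max(depth - UNSAT_THICK, 0.0), fresh_thick)
    t_salt = max(depth - UNSAT_THICK - fresh_thick, 0.0)
    return (LAYER_COND["unsat"] * t_unsat
            + LAYER_COND["fresh"] * t_fresh
            + LAYER_COND["salt"] * t_salt) / depth

# Six synthetic "field" readings generated from a true thickness of 8 m.
depths = [2.5, 5.0, 10.0, 15.0, 20.0, 30.0]
field = [apparent_conductivity(8.0, d) for d in depths]

# Inverse step: grid-search the freshwater thickness minimising the misfit.
candidates = [t / 2 for t in range(1, 41)]                # 0.5 .. 20.0 m
best = min(candidates, key=lambda t: sum(
    (apparent_conductivity(t, d) - f) ** 2 for d, f in zip(depths, field)))
assert best == 8.0
```

The real inversion uses the instrument's cumulative-response functions rather than a plain thickness average, but the structure — forward model plus misfit minimisation over layer thickness — is the same.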
Methods in Mapping Usability of Malaysia’s Shopping Centre
Directory of Open Access Journals (Sweden)
Abdul Ghani Aida Affina
2016-01-01
Full Text Available With more than 200 shopping centres in the Klang Valley alone, consumers have a vast range of choices. Yet despite the variety of merchandise, from lower-class products to luxury goods, each of these shopping centres ultimately offers much the same products as the others. The shopping centres compete with one another and strive to attract more consumers to visit and spend. For the visitor, the typical products and dull ambience seem similar in all malls, and visitors are looking for something beyond the standard: a quality embedded in the shopping centre that evokes varied emotions in users along their journey through the mall. This quality is known as usability. Usability, as generally defined, is the user's overall experiential response to a product, environment, service, or facility. It is an assessment for extracting the qualities of shopping centre design, and several synthesizing methods exist for mapping it. This paper therefore reviews the methods that have been used in usability research on Malaysia's shopping centres, with reference to previous usability assessments by earlier scholars. With emphasis on the three elements that anchor usability, effectiveness, efficiency, and satisfaction, it is hoped that this overview can guide other researchers in portraying usability's relationship with the quality and 'user-friendly' design of shopping centres.
Method for Stereo Mapping Based on Objectarx and Pipeline Technology
Liu, F.; Chen, T.; Lin, Z.; Yang, Y.
2012-07-01
Stereo mapping is an important way to acquire 4D products. Based on the development of stereo mapping and the characteristics of ObjectARX and pipeline technology, a new stereo mapping scheme which can realize interaction between AutoCAD and a digital photogrammetry system is offered by means of ObjectARX and pipeline technology. An experiment with the software MAP-AT (Modern Aerial Photogrammetry Automatic Triangulation) was made in order to verify feasibility; the experimental results show that this scheme is feasible and that it is very significant for realizing the integration of data acquisition and editing.
Ren, Jiyun; Menon, Geetha; Sloboda, Ron
2013-04-01
Although the Manchester system is still extensively used to prescribe dose in brachytherapy (BT) for locally advanced cervix cancer, many radiation oncology centers are transitioning to 3D image-guided BT, owing to the excellent anatomy definition offered by modern imaging modalities. As automatic dose optimization is highly desirable for 3D image-based BT, this study comparatively evaluates the performance of two optimization methods used in BT treatment planning—Nelder-Mead simplex (NMS) and simulated annealing (SA)—for a cervix BT computer simulation model incorporating a Manchester-style applicator. Eight model cases were constructed based on anatomical structure data (for high risk-clinical target volume (HR-CTV), bladder, rectum and sigmoid) obtained from measurements on fused MR-CT images for BT patients. D90 and V100 for HR-CTV, D2cc for organs at risk (OARs), dose to point A, conformation index and the sum of dwell times within the tandem and ovoids were calculated for optimized treatment plans designed to treat the HR-CTV in a highly conformal manner. Compared to the NMS algorithm, SA was found to be superior as it could perform optimization starting from a range of initial dwell times, while the performance of NMS was strongly dependent on their initial choice. SA-optimized plans also exhibited lower D2cc to OARs, especially the bladder and sigmoid, and reduced tandem dwell times. For cases with smaller HR-CTV having good separation from adjoining OARs, multiple SA-optimized solutions were found which differed markedly from each other and were associated with different choices for initial dwell times. Finally and importantly, the SA method yielded plans with lower dwell time variability compared with the NMS method.
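The abstract's central finding — that simulated annealing is less sensitive to the initial dwell times than the local Nelder-Mead simplex — can be illustrated with a deliberately tiny toy; the 1-D multimodal objective below merely stands in for a dose objective over dwell times, and the greedy descent is a stand-in for any purely local optimizer:

```python
import math, random

def objective(t):
    # global minimum near t = 2 (value ~ -1), shallow local minimum near t = -2
    return -math.exp(-(t - 2) ** 2) - 0.5 * math.exp(-(t + 2) ** 2)

def local_descent(t, step=0.01, iters=2000):
    """Greedy local search: only ever accepts downhill moves."""
    for _ in range(iters):
        if objective(t - step) < objective(t):
            t -= step
        elif objective(t + step) < objective(t):
            t += step
        else:
            break
    return t

def simulated_annealing(t, iters=20000, temp0=2.0, seed=0):
    """Metropolis acceptance with a linearly cooling temperature."""
    rng = random.Random(seed)
    best = t
    for k in range(iters):
        temp = temp0 * (1 - k / iters) + 1e-9
        cand = t + rng.gauss(0, 0.5)
        d = objective(cand) - objective(t)
        if d < 0 or rng.random() < math.exp(-d / temp):
            t = cand
            if objective(t) < objective(best):
                best = t
    return best

start = -2.0                      # deliberately bad initial guess
t_local = local_descent(start)    # stays trapped in the shallow minimum
t_sa = simulated_annealing(start) # uphill moves let it reach the deep minimum
assert abs(t_local + 2) < 0.5 and abs(t_sa - 2) < 0.5
```

The uphill acceptance early in the schedule is exactly what lets SA start "from a range of initial dwell times", while the local method's answer depends entirely on its starting basin.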
International Nuclear Information System (INIS)
Frelin, A-M.; Fontbonne, J-M.; Ban, G.; Colin, J.; Labalme, M.; Batalla, A.; Vela, A.; Boher, P.; Braud, M.; Leroux, T.
2008-01-01
New radiation therapy techniques such as IMRT present significant efficiency due to their highly conformal dose distributions. A consequence of the complexity of their dose distributions (high gradients, small irradiation fields, low dose distribution, ...) is the requirement for better precision quality assurance than in classical radiotherapy in order to compare the conformation of the delivered dose with the planned dose distribution and to guarantee the quality of the treatment. Currently this control is mostly performed by matrices of ionization chambers, diode detectors, dosimetric films, portal imaging, or dosimetric gels. Another approach is scintillation dosimetry, which has been developed in the last 15 years mainly through scintillating fiber devices. Despite having many advantages over other methods it is still at an experimental level for routine dosimetry because the Cerenkov radiation produced under irradiation represents an important stem effect. A new 2D water equivalent scintillating dosimeter, the DosiMap, and two different Cerenkov discrimination methods were developed with the collaboration of the Laboratoire de Physique Corpusculaire of Caen, the Comprehensive Cancer Center François Baclesse, and the ELDIM Co., in the frame of the MAESTRO European project. The DosiMap consists of a plastic scintillating sheet placed inside a transparent polystyrene phantom. The light distribution produced under irradiation is recorded by a CCD camera. Our first Cerenkov discrimination technique is subtractive. It uses a chessboard pattern placed in front of the scintillator, which provides a background signal containing only Cerenkov light. Our second discrimination technique is colorimetric. It performs a spectral analysis of the light signal, which allows the unfolding of the Cerenkov radiation and the scintillation. Tests were carried out with our DosiMap prototype and the performances of the two discrimination methods were assessed. The comparison of the
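The colorimetric unfolding idea mentioned above amounts to solving a small linear system per pixel: if scintillation and Cerenkov light contribute to two colour channels with known (calibrated) weights, the two channel readings determine both components. The mixing coefficients below are hypothetical, not DosiMap's calibration:

```python
# Per-pixel 2x2 unfolding of scintillation vs Cerenkov light from two colour
# channels, solved explicitly by Cramer's rule.

M = [[0.9, 0.3],   # channel-1 response to (scintillation, Cerenkov)
     [0.2, 0.8]]   # channel-2 response

def mix(scint, cer):
    """What the CCD records in the two channels for one pixel."""
    return (M[0][0] * scint + M[0][1] * cer,
            M[1][0] * scint + M[1][1] * cer)

def unfold(c1, c2):
    """Invert the 2x2 mixing system by Cramer's rule."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    scint = (c1 * M[1][1] - c2 * M[0][1]) / det
    cer = (M[0][0] * c2 - M[1][0] * c1) / det
    return scint, cer

c1, c2 = mix(10.0, 2.5)          # a pixel with both light components
scint, cer = unfold(c1, c2)
assert abs(scint - 10.0) < 1e-9 and abs(cer - 2.5) < 1e-9
```

The subtractive chessboard method solves the same separation problem differently: opaque squares sample the Cerenkov-only background spatially, which is then interpolated and subtracted from the total signal.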
Frelin, A M; Fontbonne, J M; Ban, G; Colin, J; Labalme, M; Batalla, A; Vela, A; Boher, P; Braud, M; Leroux, T
2008-05-01
New radiation therapy techniques such as IMRT present significant efficiency due to their highly conformal dose distributions. A consequence of the complexity of their dose distributions (high gradients, small irradiation fields, low dose distribution, ...) is the requirement for better precision quality assurance than in classical radiotherapy in order to compare the conformation of the delivered dose with the planned dose distribution and to guarantee the quality of the treatment. Currently this control is mostly performed by matrices of ionization chambers, diode detectors, dosimetric films, portal imaging, or dosimetric gels. Another approach is scintillation dosimetry, which has been developed in the last 15 years mainly through scintillating fiber devices. Despite having many advantages over other methods it is still at an experimental level for routine dosimetry because the Cerenkov radiation produced under irradiation represents an important stem effect. A new 2D water equivalent scintillating dosimeter, the DosiMap, and two different Cerenkov discrimination methods were developed with the collaboration of the Laboratoire de Physique Corpusculaire of Caen, the Comprehensive Cancer Center François Baclesse, and the ELDIM Co., in the frame of the MAESTRO European project. The DosiMap consists of a plastic scintillating sheet placed inside a transparent polystyrene phantom. The light distribution produced under irradiation is recorded by a CCD camera. Our first Cerenkov discrimination technique is subtractive. It uses a chessboard pattern placed in front of the scintillator, which provides a background signal containing only Cerenkov light. Our second discrimination technique is colorimetric. It performs a spectral analysis of the light signal, which allows the unfolding of the Cerenkov radiation and the scintillation. Tests were carried out with our DosiMap prototype and the performances of the two discrimination methods were assessed. The comparison of the
Directory of Open Access Journals (Sweden)
Nikolay Ivantchev
2013-10-01
Full Text Available Conformism was studied among 46 workers with different kinds of occupations by means of two modified scales measuring conformity by Santor, Messervey, and Kusumakar (2000): a scale for perceived peer pressure and a scale for conformism in antisocial situations. The hypothesis of the study, that workers' conformism is expressed to a medium degree, was partly confirmed. More than half of the workers conform to a medium degree regarding risk taking, the use of alcohol and drugs, and sexual relationships. More than half of the respondents conform to a small degree regarding antisocial activities (such as theft). The workers were most inclined to conform regarding risk taking (10.9%), then the use of alcohol and drugs and sexual relationships (8.7%), and least of all regarding antisocial activities (6.5%). The workers who were inclined to use alcohol and drugs also tended to conform regarding antisocial activities.
Conformal geometry and invariants of 3-strand Brownian braids
International Nuclear Information System (INIS)
Nechaev, Sergei; Voituriez, Raphael
2005-01-01
We propose a simple geometrical construction of topological invariants of 3-strand Brownian braids viewed as world lines of 3 particles performing independent Brownian motions in the complex plane z. Our construction is based on the properties of conformal maps of the doubly punctured plane z to the universal covering surface. Special attention is paid to the case of indistinguishable particles. Our method of conformal maps allows us to investigate the statistical properties of the topological complexity of a bunch of 3-strand Brownian braids and to compute the expectation value of the irreducible braid length in the non-Abelian case
Microwave Wire Interrogation Method Mapping Pressure under High Temperatures
Directory of Open Access Journals (Sweden)
Xiaoyong Chen
2017-12-01
Full Text Available It is widely accepted that wireless reading is the most feasible method for in-situ mapping of pressure in high-temperature environments, because it is not subject to the frequent heterogeneous jointing failures and the deterioration, or even disappearance, of electrical conduction under heat load. However, in this article, we successfully demonstrate an in-situ pressure sensor with wire interrogation for high-temperature applications. In this proof-of-concept study of the pressure sensor, we used a microwave resonator as the pressure-sensing component and a microwave transmission line as the pressure characteristic interrogation tunnel. In the sensor, the line and resonator are processed into a monolith, avoiding heterogeneous jointing failures; further, microwave signal transmission does not depend on electrical conduction, and consequently, the sensor does not suffer from the heat load. We achieve pressure monitoring at 400 °C when employing the sensor. Our sensor avoids restrictions that exist in wireless pressure interrogation, such as environmental noise and interference, signal leakage and security, low transfer efficiency, and so on.
Jaiyong, Panichakorn; Bryce, Richard A
2017-06-14
Noncovalent functionalization of graphene by carbohydrates such as β-cyclodextrin (βCD) has the potential to improve graphene dispersibility and its use in biomedical applications. Here we explore the ability of approximate quantum chemical methods to accurately model βCD conformation and its interaction with graphene. We find that DFTB3, SCC-DFTB and PM3CARB-1 methods provide the best agreement with density functional theory (DFT) in calculation of relative energetics of gas-phase βCD conformers; however, the remaining NDDO-based approaches we considered underestimate the stability of the trans,gauche vicinal diol conformation. This diol orientation, corresponding to a clockwise hydrogen bonding arrangement in the glucosyl residue of βCD, is present in the lowest energy βCD conformer. Consequently, for adsorption on graphene of clockwise or counterclockwise hydrogen bonded forms of βCD, calculated with respect to this unbound conformer, the DFTB3 method provides closer agreement with DFT values than PM7 and PM6-DH2 approaches. These findings suggest approximate quantum chemical methods as potentially useful tools to guide the design of carbohydrate-graphene interactions, but also highlights the specific challenge to NDDO-based methods in capturing the relative energetics of carbohydrate hydrogen bond networks.
Deng, Nanjie; Zhang, Bin W; Levy, Ronald M
2015-06-09
The ability to accurately model solvent effects on free energy surfaces is important for understanding many biophysical processes including protein folding and misfolding, allosteric transitions, and protein–ligand binding. Although all-atom simulations in explicit solvent can provide an accurate model for biomolecules in solution, explicit solvent simulations are hampered by the slow equilibration on rugged landscapes containing multiple basins separated by barriers. In many cases, implicit solvent models can be used to significantly speed up the conformational sampling; however, implicit solvent simulations do not fully capture the effects of a molecular solvent, and this can lead to loss of accuracy in the estimated free energies. Here we introduce a new approach to compute free energy changes in which the molecular details of explicit solvent simulations are retained while also taking advantage of the speed of the implicit solvent simulations. In this approach, the slow equilibration in explicit solvent, due to the long waiting times before barrier crossing, is avoided by using a thermodynamic cycle which connects the free energy basins in implicit solvent and explicit solvent using a localized decoupling scheme. We test this method by computing conformational free energy differences and solvation free energies of the model system alanine dipeptide in water. The free energy changes between basins in explicit solvent calculated using fully explicit solvent paths agree with the corresponding free energy differences obtained using the implicit/explicit thermodynamic cycle to within 0.3 kcal/mol out of ∼3 kcal/mol at only ∼8% of the computational cost. We note that WHAM methods can be used to further improve the efficiency and accuracy of the implicit/explicit thermodynamic cycle.
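The thermodynamic cycle described above can be written schematically; the basin labels A, B and the subscripts im/ex for the implicit and explicit solvent models are notational assumptions made here for illustration:

```latex
% Barrier crossing A -> B is performed only in implicit solvent; the two
% "vertical" legs of the cycle switch solvent model within a single basin
% (the localized decoupling), where no barrier needs to be crossed:
\Delta G_{\mathrm{ex}}(A \to B)
  = \Delta G_{\mathrm{im}}(A \to B)
  + \Delta G_{\mathrm{im}\to\mathrm{ex}}(B)
  - \Delta G_{\mathrm{im}\to\mathrm{ex}}(A) .
```

Because each leg of the cycle is either fast (implicit solvent) or local (no barrier crossing), the slow explicit-solvent equilibration is never needed, which is the source of the reported ~8% cost.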
Energy Technology Data Exchange (ETDEWEB)
Teo, P; Guo, K; Alayoubi, N; Kehler, K; Pistorius, S [CancerCare Manitoba, Winnipeg, MB (Canada)
2015-06-15
Purpose: Accounting for tumor motion during radiation therapy is important to ensure that the tumor receives the prescribed dose. Increasing the field size to account for this motion exposes the surrounding healthy tissues to unnecessary radiation. In contrast to using motion-encompassing techniques to treat moving tumors, conformal radiation therapy (RT) uses a smaller field to track the tumor and adapts the beam aperture according to the motion detected. This work investigates and compares the performance of three markerless, EPID based, optical flow methods to track tumor motion with conformal RT. Methods: Three techniques were used to track the motions of a 3D printed lung tumor programmed to move according to the tumor of seven lung cancer patients. These techniques utilized a multi-resolution optical flow algorithm as the core computation for image registration. The first method (DIR) registers the incoming images with an initial reference frame, while the second method (RFSF) uses an adaptive reference frame and the third method (CU) uses preceding image frames for registration. The patient traces and errors were evaluated for the seven patients. Results: The average position errors for all patient traces were 0.12 ± 0.33 mm, −0.05 ± 0.04 mm and −0.28 ± 0.44 mm for the CU, DIR and RFSF methods, respectively. The position errors within one standard deviation are 0.74 mm, 0.37 mm and 0.96 mm, respectively. The CU and RFSF algorithms are sensitive to the characteristics of the patient trace and produce a wider distribution of errors amongst patients. Although the mean error for the DIR method is negatively biased (−0.05 mm) for all patients, it has the narrowest distribution of position error, which can be corrected using an offset calibration. Conclusion: Three techniques of image registration and position update were studied. Using direct comparison with an initial frame yields the best performance. The authors would like to thank Dr. YeLin Suh for
International Nuclear Information System (INIS)
Teo, P; Guo, K; Alayoubi, N; Kehler, K; Pistorius, S
2015-01-01
Purpose: Accounting for tumor motion during radiation therapy is important to ensure that the tumor receives the prescribed dose. Increasing the field size to account for this motion exposes the surrounding healthy tissues to unnecessary radiation. In contrast to using motion-encompassing techniques to treat moving tumors, conformal radiation therapy (RT) uses a smaller field to track the tumor and adapts the beam aperture according to the motion detected. This work investigates and compares the performance of three markerless, EPID based, optical flow methods to track tumor motion with conformal RT. Methods: Three techniques were used to track the motions of a 3D printed lung tumor programmed to move according to the tumor of seven lung cancer patients. These techniques utilized a multi-resolution optical flow algorithm as the core computation for image registration. The first method (DIR) registers the incoming images with an initial reference frame, while the second method (RFSF) uses an adaptive reference frame and the third method (CU) uses preceding image frames for registration. The patient traces and errors were evaluated for the seven patients. Results: The average position errors for all patient traces were 0.12 ± 0.33 mm, −0.05 ± 0.04 mm and −0.28 ± 0.44 mm for the CU, DIR and RFSF methods, respectively. The position errors within one standard deviation are 0.74 mm, 0.37 mm and 0.96 mm, respectively. The CU and RFSF algorithms are sensitive to the characteristics of the patient trace and produce a wider distribution of errors amongst patients. Although the mean error for the DIR method is negatively biased (−0.05 mm) for all patients, it has the narrowest distribution of position error, which can be corrected using an offset calibration. Conclusion: Three techniques of image registration and position update were studied. Using direct comparison with an initial frame yields the best performance. The authors would like to thank Dr. YeLin Suh for
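The registration principle shared by the three tracking methods can be illustrated with a deliberately tiny 1-D example. Real multi-resolution optical flow estimates dense sub-pixel motion; this sketch only finds the integer shift that best matches an intensity profile to the initial reference frame (the "DIR" idea), with made-up intensity values:

```python
# Estimate the shift of a 1-D intensity profile relative to a reference frame
# by maximising the cross-correlation over candidate integer shifts.

def best_shift(reference, frame, max_shift=5):
    """Return the integer shift of `frame` that best matches `reference`."""
    def score(s):
        pairs = [(reference[i], frame[i + s])
                 for i in range(len(reference))
                 if 0 <= i + s < len(frame)]
        return sum(r * f for r, f in pairs)
    return max(range(-max_shift, max_shift + 1), key=score)

ref = [0, 0, 1, 3, 1, 0, 0, 0, 0]     # "tumour" intensity bump at index 3
frm = [0, 0, 0, 0, 1, 3, 1, 0, 0]     # the same bump moved to index 5
assert best_shift(ref, frm) == 2      # recovered displacement: 2 pixels
```

Registering every frame against the same fixed reference, as here, avoids the drift that accumulates when each frame is registered to its predecessor, which is one plausible reading of why the DIR variant showed the narrowest error distribution.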
Concept mapping as a promising method to bring practice into science
van Bon, M.J.H.; van de Goor, L.A.M.; Holsappel, J.C.; Kuunders, T.J.M.; Jacobs-van der Bruggen, M.A.M.; te Brake, J.H.M.; van Oers, J.A.M.
2014-01-01
Objective Concept mapping is a method for developing a conceptual framework of a complex topic for use as a guide to evaluation or planning. In concept mapping, thoughts and ideas are represented in the form of a picture or map, the content of which is determined by a group of stakeholders. This
Conformal mapping on Riemann surfaces
Cohn, Harvey
2010-01-01
The subject matter loosely called "Riemann surface theory" has been the starting point for the development of topology, functional analysis, modern algebra, and any one of a dozen recent branches of mathematics; it is one of the most valuable bodies of knowledge within mathematics for a student to learn. Professor Cohn's lucid and insightful book presents an ideal coverage of the subject in five parts. Part I is a review of complex analysis: analytic behavior, the Riemann sphere, geometric constructions, and presents (as a review) a microcosm of the course. The Riemann manifold is introduced in
Monte Carlo Methods Development and Applications in Conformational Sampling of Proteins
DEFF Research Database (Denmark)
Tian, Pengfei
quantitative insights into their thermodynamic and mechanistic properties that are difficult to probe in laboratory experiments. However, despite the rapid progress in the development of molecular simulation, there are still two limiting factors, (1), the current molecular mechanics force fields alone...... sampling methods to address these two problems. First of all, a novel technique has been developed for reliably estimating diffusion coefficients for use in the enhanced sampling of molecular simulations. A broad applicability of this method is illustrated by studying various simulation problems...
Conformal invariance in supergravity
International Nuclear Information System (INIS)
Bergshoeff, E.A.
1983-01-01
In this thesis the author explains the role of conformal invariance in supergravity. He presents the complete structure of extended conformal supergravity for N <= 4. The outline of this work is as follows. In chapter 2 he briefly summarizes the essential properties of supersymmetry and supergravity and indicates the use of conformal invariance in supergravity. The idea that the introduction of additional symmetry transformations can make clear the structure of a field theory is not reserved to supergravity only. By means of some simple examples it is shown in chapter 3 how one can always introduce additional gauge transformations in a theory of massive vector fields. Moreover it is shown how the gauge invariant formulation sometimes explains the quantum mechanical properties of the theory. In chapter 4 the author defines the conformal transformations and summarizes their main properties. He explains how these conformal transformations can be used to analyse the structure of gravity. The supersymmetric extension of these results is discussed in chapter 5. Here he describes as an example how N=1 supergravity can be reformulated in a conformally-invariant way. He also shows that beyond N=1 the gauge fields of the superconformal symmetries do not constitute an off-shell field representation of extended conformal supergravity. Therefore, in chapter 6, a systematic method to construct the off-shell formulation of all extended conformal supergravity theories with N <= 4 is developed. As an example he uses this method to construct N=1 conformal supergravity. Finally, in chapter 7 N=4 conformal supergravity is discussed. (Auth.)
Comparison of model reference and map based control method for vehicle stability enhancement
Baek, S.; Son, M.; Song, J.; Boo, K.; Kim, H.
2012-01-01
A map-based control method to improve vehicle lateral stability is proposed in this study and compared with the conventional method, a model-reference controller. The model-reference controller determines the compensated yaw moment using the sliding mode method, whereas the proposed map based
A Body of Work Standard-Setting Method with Construct Maps
Wyse, Adam E.; Bunch, Michael B.; Deville, Craig; Viger, Steven G.
2014-01-01
This article describes a novel variation of the Body of Work method that uses construct maps to overcome the problems of transparency, rater inconsistency, and score gaps that commonly occur with the Body of Work method. The Body of Work method with construct maps was implemented to set cut-scores for two separate K-12 assessment programs in a large…
Computational methods for constructing protein structure models from 3D electron microscopy maps.
Esquivel-Rodríguez, Juan; Kihara, Daisuke
2013-10-01
Protein structure determination by cryo-electron microscopy (EM) has made significant progress in the past decades. Resolutions of EM maps have been improving as evidenced by recently reported structures that are solved at high resolutions close to 3Å. Computational methods play a key role in interpreting EM data. Among many computational procedures applied to an EM map to obtain protein structure information, in this article we focus on reviewing computational methods that model protein three-dimensional (3D) structures from a 3D EM density map that is constructed from two-dimensional (2D) maps. The computational methods we discuss range from de novo methods, which identify structural elements in an EM map, to structure fitting methods, where known high resolution structures are fit into a low-resolution EM map. A list of available computational tools is also provided. Copyright © 2013 Elsevier Inc. All rights reserved.
Directory of Open Access Journals (Sweden)
P. Li
2015-01-01
Full Text Available Composite material is widely used in the conformal load-bearing antenna structure (CLAS), and manufacturing flaws in the packaging process of the CLAS degrade its wave-transparent property. To address this problem, a novel method for inverting the flaw’s dimensions from the antenna-radome system’s far-field data is proposed. The inversion involves two steps: the first is the inversion from the far-field data to the transmission coefficient of the CLAS’s radome; the second is the inversion from the transmission coefficient to the flaw’s dimensions. The inversion also shows good potential for separable multilayer composite radomes. A 12.5 GHz CLAS with a microstrip antenna array is used in the simulation, which confirms the effectiveness of the proposed inversion method. Finally, an error analysis of the inversion method by numerical simulation shows that the inversion error is less than 10% if the measurement error of the far-field data is less than 0.45 dB in amplitude and ±5° in phase.
The General Conformity requirements ensure that the actions taken by federal agencies in nonattainment and maintenance areas do not interfere with a state’s plans to meet national standards for air quality.
Frauendiener, Jörg
2000-01-01
The notion of conformal infinity has a long history within the research in Einstein's theory of gravity. Today, 'conformal infinity' is related to almost all other branches of research in general relativity, from quantisation procedures to abstract mathematical issues to numerical applications. This review article attempts to show how this concept gradually and inevitably evolved from physical issues, namely the need to understand gravitational radiation and isolated systems within the theory...
Rauscher, Sarah; Neale, Chris; Pomès, Régis
2009-10-13
Generalized-ensemble algorithms in temperature space have become popular tools to enhance conformational sampling in biomolecular simulations. A random walk in temperature leads to a corresponding random walk in potential energy, which can be used to cross over energetic barriers and overcome the problem of quasi-nonergodicity. In this paper, we introduce two novel methods: simulated tempering distributed replica sampling (STDR) and virtual replica exchange (VREX). These methods are designed to address the practical issues inherent in the replica exchange (RE), simulated tempering (ST), and serial replica exchange (SREM) algorithms. RE requires a large, dedicated, and homogeneous cluster of CPUs to function efficiently when applied to complex systems. ST and SREM both have the drawback of requiring extensive initial simulations, possibly adaptive, for the calculation of weight factors or potential energy distribution functions. STDR and VREX alleviate the need for lengthy initial simulations, and for synchronization and extensive communication between replicas. Both methods are therefore suitable for distributed or heterogeneous computing platforms. We perform an objective comparison of all five algorithms in terms of both implementation issues and sampling efficiency. We use disordered peptides in explicit water as test systems, for a total simulation time of over 42 μs. Efficiency is defined in terms of both structural convergence and temperature diffusion, and we show that these definitions of efficiency are in fact correlated. Importantly, we find that ST-based methods exhibit faster temperature diffusion and correspondingly faster convergence of structural properties compared to RE-based methods. Within the RE-based methods, VREX is superior to both SREM and RE. On the basis of our observations, we conclude that ST is ideal for simple systems, while STDR is well-suited for complex systems.
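All five temperature-space algorithms compared above (RE, ST, SREM, STDR, VREX) rest on the same Metropolis criterion for moving a configuration between two temperatures. A minimal sketch of that swap test follows, in reduced units with an assumed Boltzmann constant k_B = 1; this is the generic criterion, not the paper's STDR/VREX implementation.

```python
import math
import random

def swap_probability(E_i, E_j, T_i, T_j, k_B=1.0):
    """Metropolis acceptance probability for exchanging the configurations
    of two replicas with potential energies E_i, E_j held at temperatures
    T_i, T_j: min(1, exp[(beta_i - beta_j) * (E_i - E_j)])."""
    delta = (1.0 / (k_B * T_i) - 1.0 / (k_B * T_j)) * (E_i - E_j)
    if delta >= 0.0:
        return 1.0          # guard against overflow in exp for large delta
    return math.exp(delta)

def attempt_swap(E_i, E_j, T_i, T_j):
    """Return True if the proposed temperature swap is accepted."""
    return random.random() < swap_probability(E_i, E_j, T_i, T_j)
```

Simulated-tempering variants apply the same exponential weight to a single replica changing temperature, plus the per-temperature weight factors whose lengthy initial estimation STDR is designed to avoid.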
Energy Technology Data Exchange (ETDEWEB)
Sastre-Padro, Maria; Heide, Uulke A van der; Welleweerd, Hans [Department of Radiotherapy, University Medical Center Utrecht, Heidelberglaan 100, 3584 CX Utrecht (Netherlands)
2004-06-21
Because for IMRT treatments the required accuracy on leaf positioning is high, conventional calibration methods may not be appropriate. The aim of this study was to develop the tools for an accurate MLC calibration valid for conventional and IMRT treatments and to investigate the stability of the MLC. A strip test consisting of nine adjacent segments 2 cm wide, separated by 1 mm and exposed on Kodak X-Omat V films at D{sub max} depth, was used for detecting leaf-positioning errors. Dose profiles along the leaf-axis were taken for each leaf-pair. We measured the dose variation on each abutment to quantify the relative positioning error (RPE) and the absolute position of the abutment to quantify the absolute positioning error (APE). The accuracy of determining the APE and RPE was 0.15 and 0.04 mm, respectively. Using the RPE and the APE the MLC calibration parameters were calculated in order to obtain a flat profile on the abutment at the correct position. A conventionally calibrated Elekta MLC was re-calibrated using the strip test. The stability of the MLC and leaf-positioning reproducibility was investigated exposing films with 25 adjacent segments 1 cm wide during three months and measuring the standard deviation of the RPE values. A maximum shift over the three months of 0.27 mm was observed and the standard deviation of the RPE values was 0.11 mm.
Conformally connected universes
International Nuclear Information System (INIS)
Cantor, M.; Piran, T.
1983-01-01
A well-known difficulty associated with the conformal method for the solution of the general relativistic Hamiltonian constraint is the appearance of an unphysical 'bag of gold' singularity at the nodal surface of the conformal factor. This happens whenever the background Ricci scalar is too large. Using a simple model, it is demonstrated that some of these singular solutions do have a physical meaning, and that they can be considered as initial data for a universe containing black holes that are connected with each other in a conformally nonsingular way. The relation between the ADM mass and the horizon area in this solution supports the cosmic censorship conjecture. (author)
Optical spectroscopic methods for probing the conformational stability of immobilised enzymes.
Ganesan, Ashok; Moore, Barry D; Kelly, Sharon M; Price, Nicholas C; Rolinski, Olaf J; Birch, David J S; Dunkin, Ian R; Halling, Peter J
2009-07-13
We report the development of biophysical techniques based on circular dichroism (CD), diffuse reflectance infrared Fourier transform (DRIFT) and tryptophan (Trp) fluorescence to investigate in situ the structure of enzymes immobilised on solid particles. Their applicability is demonstrated using subtilisin Carlsberg (SC) immobilised on silica gel and Candida antarctica lipase B immobilised on Lewatit VP.OC 1600 (Novozyme 435). SC shows nearly identical secondary structure in solution and in the immobilised state, as evident from far-UV CD spectra and amide I vibration bands. Increased near-UV CD intensity and reduced Trp fluorescence suggest a more rigid tertiary structure on the silica surface. After immobilised SC is inactivated, these techniques reveal: a) almost complete loss of the near-UV CD signal, suggesting loss of tertiary structure; b) a shift in the amide I vibrational band from 1658 cm(-1) to 1632 cm(-1), indicating a shift from alpha-helical structure to beta-sheet; c) a substantial blue shift and reduced dichroism in the far-UV CD, supporting a shift to beta-sheet structure; d) a strong increase in Trp fluorescence intensity, which reflects reduced intramolecular quenching with loss of tertiary structure; and e) a major change in the fluorescence lifetime distribution, confirming a substantial change in the Trp environment. DRIFT measurements suggest that pressing KBr discs may perturb protein structure. With the enzyme on the organic polymer it was possible to obtain near-UV CD spectra free of interference from the carrier material. However, far-UV CD, DRIFT and fluorescence measurements showed strong signals from the organic support. In conclusion, the spectroscopic methods described here provide structural information hitherto inaccessible, with their applicability limited by interference from, rather than the particulate nature of, the support material.
Lin, Da; Hong, Ping; Zhang, Siheng; Xu, Weize; Jamal, Muhammad; Yan, Keji; Lei, Yingying; Li, Liang; Ruan, Yijun; Fu, Zhen F; Li, Guoliang; Cao, Gang
2018-05-01
Chromosome conformation capture (3C) technologies can be used to investigate 3D genomic structures. However, high background noise, high costs, and a lack of straightforward noise evaluation in current methods impede the advancement of 3D genomic research. Here we developed a simple digestion-ligation-only Hi-C (DLO Hi-C) technology to explore the 3D landscape of the genome. This method requires only two rounds of digestion and ligation, without the need for biotin labeling and pulldown. Non-ligated DNA was efficiently removed in a cost-effective step by purifying specific linker-ligated DNA fragments. Notably, random ligation could be quickly evaluated in an early quality-control step before sequencing. Moreover, an in situ version of DLO Hi-C using a four-cutter restriction enzyme has been developed. We applied DLO Hi-C to delineate the genomic architecture of THP-1 and K562 cells and uncovered chromosomal translocations. This technology may facilitate investigation of genomic organization, gene regulation, and (meta)genome assembly.
Directory of Open Access Journals (Sweden)
Klin-eam Chakkrid
2009-01-01
Full Text Available Abstract A new approximation method for solving variational inequalities and finding fixed points of nonexpansive mappings is introduced and studied. We prove a strong convergence theorem showing that the new iterative scheme converges to a common element of the set of fixed points of a nonexpansive mapping and the set of solutions of the variational inequality for an inverse-strongly monotone mapping. Moreover, we apply our main result to obtain strong convergence to a common fixed point of a nonexpansive mapping and a strictly pseudocontractive mapping in a Hilbert space.
Conformation radiotherapy and conformal radiotherapy
International Nuclear Information System (INIS)
Morita, Kozo
1999-01-01
In order to make the high-dose region coincide with the target volume, the 'Conformation Radiotherapy Technique', using a multileaf collimator and a device for the 'hollow-out technique', was developed by Prof. S. Takahashi in 1960. This technique can be classified as a type of 2D-dynamic conformal RT technique. Its clinical application reduced the late complications of the lens, the intestine and the urinary bladder after radiotherapy for maxillary cancer and cervical cancer. Since the 1980s, the exact position and shape of the tumor and the surrounding normal tissues have become easy to obtain thanks to the tremendous development of CT/MRI imaging techniques. As a result, various new conformal techniques, such as 3D-CRT, dose intensity modulation and tomotherapy, have been developed since the beginning of the 1990s. Several dose escalation studies with 2D-/3D-conformal RT are now under way to improve treatment results. (author)
A New General Iterative Method for a Finite Family of Nonexpansive Mappings in Hilbert Spaces
Directory of Open Access Journals (Sweden)
Singthong Urailuk
2010-01-01
Full Text Available We introduce a new general iterative method using the -mapping for finding a common fixed point of a finite family of nonexpansive mappings in the framework of Hilbert spaces. A strong convergence theorem for the proposed iterative method is established under certain control conditions. Our results improve and extend results announced by many others.
Method to map individual electromagnetic field components inside a photonic crystal
Denis, T.; Reijnders, B.; Lee, J.H.H.; van der Slot, Petrus J.M.; Vos, Willem L.; Boller, Klaus J.
2012-01-01
We present a method to map the absolute electromagnetic field strength inside photonic crystals. We apply the method to map the dominant electric field component Ez of a two-dimensional photonic crystal slab at microwave frequencies. The slab is placed between two mirrors to select Bloch standing
Conformal group actions and Segal's cosmology
International Nuclear Information System (INIS)
Werth, J.-E.
1984-01-01
A mathematical description of Segal's cosmological model in the framework of conformal group actions is presented. The relation between conformal and causal group actions on time-orientable Lorentzian manifolds is analysed and several examples are discussed. A criterion for the conformality of a map between Lorentzian manifolds is given. The results are applied to Segal's 'conformal compactification' of Minkowski space. Furthermore, the 'unitary formulation' of Segal's cosmology is considered. (Author) [pt
Concept mapping as an empowering method to promote learning, thinking, teaching and research
Directory of Open Access Journals (Sweden)
Mauri Kalervo Åhlberg
2013-01-01
Full Text Available The results and underpinnings of an over twenty-year research and development program on concept mapping are presented. Different graphical knowledge-presentation tools, especially concept mapping and mind mapping, are compared. Two main dimensions differentiate graphical knowledge-presentation methods: the first is conceptual explicitness, from mere concepts to flexibly named links and clear propositions in concept maps; the second, in the classification system I am suggesting, is whether or not pictures are included. Åhlberg's and his research group's applications and developments of Novakian concept maps are compared to traditional Novakian concept maps. The main innovations include always using arrowheads to show the direction of reading the concept map. The centrality of each concept is estimated from the number of links to other concepts. In our empirical research over two decades, the number of relevant concepts and the number of relevant propositions in students' concept maps have been found to be the best indicators and predictors of meaningful learning. This is used in the assessment of learning. Improved concept mapping is presented as a tool to analyze texts. The main innovation is numbering the links to show the order of reading the concept map and to make it possible to transform the concept map back into the original prose text as closely as possible. In the research of Åhlberg and his group, concept mapping has been tested in all main phases of research, teaching and learning.
Conformal expansions and renormalons
Energy Technology Data Exchange (ETDEWEB)
Rathsman, J.
2000-02-07
The coefficients in perturbative expansions in gauge theories are factorially increasing, predominantly due to renormalons. This type of factorial increase is not expected in conformal theories. In QCD conformal relations between observables can be defined in the presence of a perturbative infrared fixed-point. Using the Banks-Zaks expansion the authors study the effect of the large-order behavior of the perturbative series on the conformal coefficients. The authors find that in general these coefficients become factorially increasing. However, when the factorial behavior genuinely originates in a renormalon integral, as implied by a postulated skeleton expansion, it does not affect the conformal coefficients. As a consequence, the conformal coefficients will indeed be free of renormalon divergence, in accordance with previous observations concerning the smallness of these coefficients for specific observables. The authors further show that the correspondence of the BLM method with the skeleton expansion implies a unique scale-setting procedure. The BLM coefficients can be interpreted as the conformal coefficients in the series relating the fixed-point value of the observable with that of the skeleton effective charge. Through the skeleton expansion the relevance of renormalon-free conformal coefficients extends to real-world QCD.
International Nuclear Information System (INIS)
Hooft, G.
2012-01-01
The dynamical degree of freedom for the gravitational force is the metric tensor, having 10 locally independent degrees of freedom (of which 4 can be used to fix the coordinate choice). In conformal gravity, we split this field into an overall scalar factor and a nine-component remainder. All unrenormalizable infinities are in this remainder, while the scalar component can be handled like any other scalar field such as the Higgs field. In this formalism, conformal symmetry is spontaneously broken. An imperative demand on any healthy quantum gravity theory is that black holes should be described as quantum systems with micro-states as dictated by the Hawking-Bekenstein theory. This requires conformal symmetry that may be broken spontaneously but not explicitly, and this means that all conformal anomalies must cancel out. Cancellation of conformal anomalies yields constraints on the matter sector as described by some universal field theory. Thus black hole physics may eventually be of help in the construction of unified field theories. (author)
International Nuclear Information System (INIS)
Ma Songhua; Fang Jianping; Zheng Chunlong
2009-01-01
By means of an extended mapping method and a variable separation method, a series of solitary wave solutions, periodic wave solutions and variable separation solutions to the (2 + 1)-dimensional breaking soliton system is derived.
THE METHOD OF MULTIPLE SPATIAL PLANNING BASIC MAP
Directory of Open Access Journals (Sweden)
C. Zhang
2018-04-01
Full Text Available The “Provincial Space Plan Pilot Program” issued in December 2016 called for integrating the existing space management and control information platforms of the various departments into a spatial planning information management platform that unifies basic data, target indicators, spatial coordinates and technical specifications. The platform is to provide decision support for plan preparation, digital monitoring and evaluation of plan implementation, parallel review and approval of the various investment and military construction projects by the departments involved in space management and control, and improved efficiency of administrative approval. The space planning system should delimit the control limits for the development of production, living and ecological space, and use control should be implemented. On the one hand, the functional orientation of the various planning spaces must be clarified; on the other hand, “multi-compliance” among the various spatial plans must be achieved. Integrating multiple spatial plans requires a unified, standard basic map (geographic database and technical specifications) to divide space into the three types, urban, agricultural and ecological, and to provide technical support for refining the space-control zoning of the relevant plans. This article analyses the main technical problems: the spatial datum, the land-use classification standards, the planning base map and the basic planning platform. Based on the results of the geographic conditions census, spatial planning maps were prepared and applied in pilots in Heilongjiang and Hainan.
Empirical evaluation of a practical indoor mobile robot navigation method using hybrid maps
DEFF Research Database (Denmark)
Özkil, Ali Gürcan; Fan, Zhun; Xiao, Jizhong
2010-01-01
This video presents a practical navigation scheme for indoor mobile robots using hybrid maps. The method makes use of metric maps for local navigation and a topological map for global path planning. Metric maps are generated as occupancy grids by a laser range finder to represent local information...... about partial areas. The global topological map is used to indicate the connectivity of the ‘places-of-interests’ in the environment and the interconnectivity of the local maps. Visual tags on the ceiling to be detected by the robot provide valuable information and contribute to reliable localization...... that the method is implemented successfully on physical robot in a hospital environment, which provides a practical solution for indoor navigation....
King, Nathan D.; Ruuth, Steven J.
2017-05-01
Maps from a source manifold M to a target manifold N appear in liquid crystals, color image enhancement, texture mapping, brain mapping, and many other areas. A numerical framework to solve variational problems and partial differential equations (PDEs) that map between manifolds is introduced within this paper. Our approach, the closest point method for manifold mapping, reduces the problem of solving a constrained PDE between manifolds M and N to the simpler problems of solving a PDE on M and projecting to the closest points on N. In our approach, an embedding PDE is formulated in the embedding space using closest point representations of M and N. This enables the use of standard Cartesian numerics for general manifolds that are open or closed, with or without orientation, and of any codimension. An algorithm is presented for the important example of harmonic maps and generalized to a broader class of PDEs, which includes p-harmonic maps. Improved efficiency and robustness are observed in convergence studies relative to the level set embedding methods. Harmonic and p-harmonic maps are computed for a variety of numerical examples. In these examples, we denoise texture maps, diffuse random maps between general manifolds, and enhance color images.
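The reduction described above can be illustrated for the harmonic map heat flow from the circle into the unit circle. The sketch below is a deliberate simplification of the general framework: the source manifold is handled by periodic finite differences rather than by an embedding-grid closest point representation, and u/|u| plays the role of the closest point projection onto the target N = S^1. It is an assumed toy setting, not the paper's implementation.

```python
import numpy as np

def harmonic_map_flow_circle(u0, dt=1e-3, steps=2000):
    """Harmonic map heat flow from S^1 into the unit circle S^1 in R^2:
    take an unconstrained heat step componentwise in the embedding space,
    then project the values back onto the target manifold with its
    closest point function u -> u/|u|.

    u0: (n, 2) array of map values at n equispaced nodes on the source circle.
    """
    u = u0.copy()
    n = u.shape[0]
    dtheta = 2.0 * np.pi / n
    for _ in range(steps):
        lap = (np.roll(u, -1, axis=0) - 2.0 * u + np.roll(u, 1, axis=0)) / dtheta**2
        u = u + dt * lap                                   # PDE step in R^2
        u = u / np.linalg.norm(u, axis=1, keepdims=True)   # closest point on N
    return u
```

A winding-number-zero initial map flows to a constant map; the explicit heat step is stable for dt up to roughly dtheta^2 / 2.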
Comparative analysis of various methods for modelling permanent magnet machines
Ramakrishnan, K.; Curti, M.; Zarko, D.; Mastinu, G.; Paulides, J.J.H.; Lomonova, E.A.
2017-01-01
In this paper, six different modelling methods for permanent magnet (PM) electric machines are compared in terms of their computational complexity and accuracy. The methods are based primarily on conformal mapping, mode matching, and harmonic modelling. In the case of conformal mapping, slotted air
Conformational analysis by intersection: CONAN.
Smellie, Andrew; Stanton, Robert; Henne, Randy; Teig, Steve
2003-01-15
As high throughput techniques in chemical synthesis and screening improve, more demands are placed on computer assisted design and virtual screening. Many of these computational methods require one or more three-dimensional conformations for molecules, creating a demand for a conformational analysis tool that can rapidly and robustly cover the low-energy conformational spaces of small molecules. A new algorithm of intersection is presented here, which quickly generates (on average heuristics are applied after intersection to generate a small representative collection of conformations that span the conformational space. In a study of approximately 97,000 randomly selected molecules from the MDDR, results are presented that explore these conformations and their ability to cover low-energy conformational space. Copyright 2002 Wiley Periodicals, Inc. J Comput Chem 24: 10-20, 2003
Indoor Map Acquisition System Using Global Scan Matching Method and Laser Range Scan Data
Hisanaga, Satoshi; Kase, Takaaki
Simultaneous localization and mapping (SLAM) is the latest technique for constructing indoor maps. In indoor environments, localization methods that use wall features as landmarks have been studied in the past. Such methods have a drawback: they cannot localize in spaces surrounded by featureless walls or walls on which similar features are repeated. To overcome this drawback, we developed an accurate localization method that ignores wall features. We exploit the fact that the walls in a building are aligned along only two orthogonal directions. By taking a specific wall as a reference wall, the location of the robot is expressed using the distance between the robot and the reference wall. We built a robot to evaluate the mapping accuracy of our method and carried out an experiment to map a 40 m long corridor containing featureless sections. The map obtained had a margin of error of less than 2%.
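The reference-wall idea can be sketched as a line fit: the robot's coordinate along the wall-normal direction is recovered from the perpendicular distance to the fitted wall line, independently of any wall features. This is an illustrative reconstruction, not the authors' implementation, and it assumes the laser points belonging to one wall have already been segmented out in the robot frame.

```python
import numpy as np

def distance_to_wall(scan_xy):
    """Perpendicular distance from the robot (at the origin of the scan
    frame) to a wall observed as an (m, 2) array of laser points.  The
    wall line is fitted by total least squares (PCA); no features needed."""
    mean = scan_xy.mean(axis=0)
    centered = scan_xy - mean
    # The right singular vector with the smallest singular value is the
    # direction of least spread, i.e. the wall normal.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]
    # Project the wall's mean point onto the normal direction.
    return abs(float(np.dot(normal, mean)))
```

Because the fit uses only the bulk geometry of the points, it works equally well on a perfectly featureless wall.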
Steepest descent method for set-valued locally accretive mappings
International Nuclear Information System (INIS)
Chidume, C.E.
1993-05-01
Let E be a real q-uniformly smooth Banach space. Suppose T is a set-valued locally strongly accretive map with open domain D(T) in E and that 0 ∈ Tx has a solution x* in D(T). Then there exist a neighbourhood B ⊆ D(T) of x* and a real number r_1 > 0 such that for any r > r_1 and some real sequence {c_n}, any initial guess x_1 ∈ B and any single-valued selection T_0 of T, the sequence {x_n} generated from x_1 by x_{n+1} = x_n - c_n T_0 x_n, n ≥ 1, remains in D(T) and converges strongly to x* with ||x_n - x*|| = O(n^(-(q-1)/q)). A related result deals with iterative approximation of a solution of the equation f ∈ x + Ax when A is a locally accretive map. Our theorems generalize important known results and resolve a problem of interest. (author). 39 refs
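On the real line (a Hilbert space, so q = 2) the iteration of the theorem is easy to demonstrate. The operator T_0(x) = 2x + sin x used below is a hypothetical example of a strongly monotone (hence strongly accretive) single-valued selection with zero x* = 0, and c_n = c/n is one admissible step-size sequence; both choices are illustrative assumptions, not taken from the paper.

```python
import math

def steepest_descent(T0, x1, c=0.5, n_steps=200):
    """Steepest-descent iteration x_{n+1} = x_n - c_n * T0(x_n) with
    step sizes c_n = c/n, for a single-valued selection T0 of an
    accretive operator; the iterates converge to the zero x* of T0."""
    x = x1
    for n in range(1, n_steps + 1):
        x = x - (c / n) * T0(x)
    return x

# T0(x) = 2x + sin(x): derivative 2 + cos(x) lies in [1, 3], so T0 is
# strongly monotone on R and its unique zero is x* = 0.
approx_zero = steepest_descent(lambda x: 2.0 * x + math.sin(x), x1=1.0)
```

With the diminishing steps c_n = c/n the theorem's rate O(n^(-1/2)) applies for q = 2; in practice this simple example converges considerably faster.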
Interpolation methods for creating a scatter radiation exposure map
Energy Technology Data Exchange (ETDEWEB)
Gonçalves, Elicardo A. de S., E-mail: elicardo.goncalves@ifrj.edu.br [Instituto Federal do Rio de Janeiro (IFRJ), Paracambi, RJ (Brazil); Gomes, Celio S.; Lopes, Ricardo T. [Coordenacao de Pos-Graduacao e Pesquisa de Engenharia (PEN/COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Programa de Engenharia Nuclear; Oliveira, Luis F. de; Anjos, Marcelino J. dos; Oliveira, Davi F. [Universidade do Estado do Rio de Janeiro (UFRJ), RJ (Brazil). Instituto de Física
2017-07-01
A well-known way to better understand radiation scattering during radiography is to map the exposure over the space around the source and sample. Such a map is made by measuring the exposure at regularly spaced points, i.e., at locations generated by taking regular steps from a starting point along the x, y and z axes, or along radial and angular coordinates. However, it is not always possible to maintain regular steps throughout the entire space, and there may be regions of difficult access where the regularity of the steps is impaired. This work used interpolation techniques that work with irregular steps and compared their results and their limits. They were first applied in angular coordinates and tested with some points missing. The Delaunay tessellation interpolation was then performed on the same data for comparison. Computational and graphic treatment was done with the GNU Octave software and its image-processing package. Real data were acquired from a bunker where a 6 MeV betatron produces radiation scattering. (author)
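The Delaunay-based interpolation used for the comparison can be sketched in Python with SciPy, whose `griddata` routine performs piecewise-linear interpolation over a Delaunay triangulation of the scattered measurement points. The paper itself used GNU Octave, and the coordinates and dose values below are invented purely for illustration.

```python
import numpy as np
from scipy.interpolate import griddata

# Hypothetical scattered exposure readings around the source: (x, y)
# positions in metres and a dose-rate reading at each point.  The points
# are irregularly spaced, as happens in hard-to-access regions.
points = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0],
                   [2.0, 2.0], [1.0, 0.5], [0.5, 1.5]])
doses = np.array([120.0, 35.0, 40.0, 15.0, 80.0, 55.0])

# Regular grid on which the exposure map is drawn.
xi, yi = np.meshgrid(np.linspace(0.0, 2.0, 41), np.linspace(0.0, 2.0, 41))

# method='linear' interpolates linearly on the Delaunay triangulation of
# the measurement points; grid nodes outside the convex hull become NaN.
exposure_map = griddata(points, doses, (xi, yi), method='linear')
```

Within the convex hull of the measurements no step regularity is required, which is exactly the property the study exploits.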
Interpolation methods for creating a scatter radiation exposure map
International Nuclear Information System (INIS)
Gonçalves, Elicardo A. de S.; Gomes, Celio S.; Lopes, Ricardo T.; Oliveira, Luis F. de; Anjos, Marcelino J. dos; Oliveira, Davi F.
2017-01-01
A well-known way to better understand radiation scattering during radiography is to map the exposure over the space around the source and sample. Such a map is built by measuring exposure at regularly spaced points, i.e., at locations obtained by advancing in regular steps from a starting point along the x, y and z axes, or along radial and angular coordinates. However, it is not always possible to maintain the regularity of the steps throughout the entire space, and there may be regions of difficult access where the regularity of the steps is impaired. This work applies interpolation techniques that accommodate irregular steps and compares their results and limitations. Interpolation was first performed in angular coordinates and tested with some points missing; the same data were then interpolated by Delaunay tessellation for comparison. Computational and graphic treatment was done with the GNU Octave software and its image-processing package. Real data were acquired in a bunker where a 6 MeV betatron can be used to produce radiation scattering. (author)
Two New Iterative Methods for a Countable Family of Nonexpansive Mappings in Hilbert Spaces
Directory of Open Access Journals (Sweden)
Hu Changsong
2010-01-01
Full Text Available We consider two new iterative methods for a countable family of nonexpansive mappings in Hilbert spaces. We prove that the proposed algorithms converge strongly to a common fixed point of a countable family of nonexpansive mappings which solves the corresponding variational inequality. Our results improve and extend the corresponding ones announced by many others.
Methodical Aspects of Applying Strategy Map in an Organization
Piotr Markiewicz
2013-01-01
One of the important aspects of strategic management is the instrumental aspect, embodied in a rich set of methods and techniques used at particular stages of the strategic management process. The object of interest in this study is the development of views on, and the implementation of, strategy as an element of strategic management, together with its instruments in the form of methods and techniques. A commonly used method for strategy implementation and for measuring progress is the Balanced Scorecard (BSC). The method was c...
Method for Pre-Conditioning a Measured Surface Height Map for Model Validation
Sidick, Erkin
2012-01-01
This software allows one to up-sample or down-sample a measured surface map for model validation, not only without introducing any re-sampling errors, but also while eliminating the existing measurement noise and measurement errors. Because the re-sampling of a surface map is accomplished based on the analytical expressions of Zernike polynomials and a power spectral density model, such re-sampling does not introduce any aliasing or interpolation errors, as the conventional interpolation and FFT-based (fast-Fourier-transform-based) spatial-filtering methods do. Also, this new method automatically eliminates the measurement noise and other measurement errors such as artificial discontinuity. The developmental cycle of an optical system, such as a space telescope, includes, but is not limited to, the following two steps: (1) deriving requirements or specs on the optical quality of individual optics before they are fabricated through optical modeling and simulations, and (2) validating the optical model using the measured surface height maps after all optics are fabricated. There are a number of computational issues related to model validation, one of which is the "pre-conditioning" or pre-processing of the measured surface maps before using them in a model validation software tool. This software addresses the following issues: (1) up- or down-sampling a measured surface map to match it with the gridded data format of a model validation tool, and (2) eliminating the surface measurement noise or measurement errors such that the resulting surface height map is continuous or smoothly varying. So far, the preferred method used for re-sampling a surface map is two-dimensional interpolation. The main problem of this method is that the same pixel can take different values depending on the interpolation method chosen, e.g. the "nearest," "linear," "cubic," and "spline" fitting in Matlab. The conventional, FFT-based spatial filtering method used to
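The interpolation-dependence problem described above, that the same pixel can take different values under different interpolation kinds, is easy to reproduce. A 1-D SciPy sketch (the sample profile y = x² is an arbitrary stand-in for a measured surface):

```python
import numpy as np
from scipy.interpolate import interp1d

x = np.arange(5, dtype=float)
y = x ** 2                      # coarsely sampled smooth profile
xq = 2.5                        # query point between samples

# The same query point yields three different values
vals = {kind: float(interp1d(x, y, kind=kind)(xq))
        for kind in ("nearest", "linear", "cubic")}
```

Here `nearest` snaps to a neighboring sample, `linear` gives (4 + 9)/2 = 6.5, and the cubic spline reproduces the quadratic exactly (6.25), so the three methods disagree at the same point.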
International Nuclear Information System (INIS)
Minkin, K.; Tanova, R.; Busarski, A.; Penkov, M.; Penev, L.; Hadjidekov, V.
2009-01-01
Modern neurosurgery requires accurate preoperative and intraoperative localization not only of brain pathologies but also of brain functions. Individual variation in healthy subjects and the shifting of brain functions in brain disease have motivated the introduction of various methods for brain mapping. The aim of this paper was to analyze the most widespread methods for brain mapping: the Wada test, functional magnetic resonance imaging (fMRI) and intraoperative direct electrical stimulation (DES). This study included 4 patients with preoperative brain mapping using the Wada test and fMRI. Intraoperative mapping with DES during awake craniotomy was performed in one case. The histopathological diagnosis was low-grade glioma in 2 cases, cortical dysplasia (1 patient) and arteriovenous malformation (1 patient). Brain mapping permitted total lesion resection in three of the four patients. There was no new postoperative deficit despite surgery near or within functional brain areas. Brain plasticity causing a shift of eloquent areas from their usual locations was observed in two cases. Brain mapping methods allow surgery in eloquent brain areas recognized in the past as 'forbidden areas'. Each method has advantages and disadvantages. Precise localization of brain functions and pathologies frequently requires a combination of different brain mapping methods. (authors)
Matching methods to produce maps for pest risk analysis to resources
Directory of Open Access Journals (Sweden)
Richard Baker
2013-09-01
Full Text Available Decision support systems (DSSs) for pest risk mapping are invaluable for guiding pest risk analysts seeking to add maps to pest risk analyses (PRAs). Maps can help identify the area of potential establishment, the area at highest risk and the endangered area for alien plant pests. However, the production of detailed pest risk maps may require considerable time and resources and it is important to match the methods employed to the priority, time and detail required. In this paper, we apply PRATIQUE DSSs to Phytophthora austrocedrae, a pathogen of the Cupressaceae, Thaumetopoea pityocampa, the pine processionary moth, Drosophila suzukii, spotted wing Drosophila, and Thaumatotibia leucotreta, the false codling moth. We demonstrate that complex pest risk maps are not always a high priority and suggest that simple methods may be used to determine the geographic variation in relative risks posed by invasive alien species within an area of concern.
Mapping Norway - a Method to Register and Survey the Status of Accessibility
Michaelis, Sven; Bögelsack, Kathrin
2018-05-01
The Norwegian mapping authority has developed a standard method for mapping accessibility, primarily for people with limited or no walking ability, in urban and recreational areas. We chose an object-oriented approach in which points, lines and polygons represent objects in the environment. All data are stored in a geospatial database, so they can be presented as a web map and analyzed using GIS software. By the end of 2016, more than 160 municipalities had been mapped using this method. The aim of this project is to establish a national standard for mapping and to provide a geodatabase that shows the status of accessibility throughout Norway. The data provide a useful tool for national statistics, local planning authorities and private users. First results show that accessibility is low and that Norway still faces many challenges in meeting the government's goals for Universal Design.
DEFF Research Database (Denmark)
Heegaard, Niels H H; Rovatti, Luca; Nissen, Mogens H
2003-01-01
The small (Mr = 11729) serum protein beta2-microglobulin is prone to precipitate as amyloid in a protein conformational disorder (PCD) that occurs in a significant number of patients on chronic hemodialysis. Analyses by capillary electrophoresis (CE) were undertaken to study beta2-microglobulin...
International Nuclear Information System (INIS)
Kaplan, David B.; Lee, Jong-Wan; Son, Dam T.; Stephanov, Mikhail A.
2009-01-01
We consider zero-temperature transitions from conformal to nonconformal phases in quantum theories. We argue that there are three generic mechanisms for the loss of conformality in any number of dimensions: (i) fixed point goes to zero coupling, (ii) fixed point runs off to infinite coupling, or (iii) an IR fixed point annihilates with a UV fixed point and they both disappear into the complex plane. We give both relativistic and nonrelativistic examples of the last case in various dimensions and show that the critical behavior of the mass gap behaves similarly to the correlation length in the finite temperature Berezinskii-Kosterlitz-Thouless (BKT) phase transition in two dimensions, ξ ∼ exp(c/|T − T_c|^(1/2)). We speculate that the chiral phase transition in QCD at large number of fermion flavors belongs to this universality class, and attempt to identify the UV fixed point that annihilates with the Banks-Zaks fixed point at the lower end of the conformal window.
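The essential (BKT-type) singularity quoted above, ξ ∼ exp(c/|T − T_c|^(1/2)), diverges faster than any power law as T → T_c. A short numerical check (the values c = 1 and T_c = 1 are arbitrary choices for illustration):

```python
import math

def xi(T, Tc=1.0, c=1.0):
    """BKT-type scaling of the correlation length near Tc."""
    return math.exp(c / abs(T - Tc) ** 0.5)

# xi grows so fast near Tc that it beats any fixed power of |T - Tc|:
# here exp(1/sqrt(eps)) * eps**10 is still large for eps = 1e-4.
eps = 1e-4
beats_power_law = xi(1.0 + eps) * eps ** 10 > 1.0
```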
A facile method to compare EFTEM maps obtained from materials changing composition over time
Casu, Alberto
2015-10-31
Energy Filtered Transmission Electron Microscopy (EFTEM) is an analytical tool that has been successfully and widely employed in the last two decades for obtaining fast elemental maps in TEM mode. Several studies and efforts have been addressed to investigate the limitations and advantages of the technique, as well as to improve the spatial resolution of compositional maps. Usually, EFTEM maps undergo post-acquisition treatments by changing brightness and contrast levels, either via dedicated software or via human elaboration, in order to maximize their signal-to-noise ratio and render them as visible as possible. However, the elemental maps forming a single set of EFTEM images are usually subjected to independent, map-by-map image treatment. This post-acquisition step becomes crucial when analyzing materials that change composition over time as a consequence of an external stimulus, because the map-by-map approach doesn't take into account how the chemical features of the imaged materials actually progress, in particular when the investigated elements exhibit very low signals. In this article, we present a facile procedure applicable to whole sets of EFTEM maps acquired on a sample that is evolving over time. The main aim is to find a common method to treat the images' features, in order to make them as comparable as possible without affecting the information they contain. Microsc. Res. Tech. 78:1090–1097, 2015. © 2015 Wiley Periodicals, Inc.
Hybrid Map-Based Navigation Method for Unmanned Ground Vehicle in Urban Scenario
Directory of Open Access Journals (Sweden)
Huiyan Chen
2013-07-01
Full Text Available To reduce the data size of the metric map and the computational cost of map matching in unmanned ground vehicle self-driving navigation in urban scenarios, a metric-topological hybrid map navigation system is proposed in this paper. According to the different positioning accuracy requirements, urban areas are divided into strong constraint (SC) areas, such as roads with lanes, and loose constraint (LC) areas, such as intersections and open areas. As the direction of the self-driving vehicle is provided by traffic lanes and global waypoints in the road network, a simple topological map suffices for navigation in the SC areas, while in the LC areas the navigation of the self-driving vehicle relies mainly on positioning information. Simultaneous localization and mapping technology is used to provide a detailed metric map in the LC areas, and a window constraint Markov localization algorithm is introduced to achieve accurate positioning using a laser scanner. Furthermore, the real-time performance of the Markov algorithm is enhanced by using a constraint window to restrict the size of the state space. By registering the metric maps into the road network, a hybrid map of the urban scenario can be constructed. Real unmanned vehicle mapping and navigation tests demonstrated the capabilities of the proposed method.
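The window-constrained Markov localization step can be sketched on a 1-D grid. The function below is a simplified illustration of restricting the predict/correct update to a window around the predicted position; the real system works on a 2-D metric map with laser-scan likelihoods, and the window size, motion model and likelihood here are assumptions for illustration only.

```python
import numpy as np

def markov_step(belief, motion, likelihood, window=10):
    """One grid-based Markov localization update, restricted to a
    constraint window around the predicted position to bound the cost."""
    pred = np.roll(belief, motion)                  # predict: shift by odometry
    center = int(np.argmax(pred))
    lo = max(0, center - window)
    hi = min(belief.size, center + window + 1)
    post = np.zeros_like(belief)
    post[lo:hi] = pred[lo:hi] * likelihood[lo:hi]   # correct inside window only
    s = post.sum()
    return post / s if s > 0 else pred

# Vehicle believed at cell 5, moves 3 cells, sensor likelihood peaks at cell 8
belief = np.zeros(50)
belief[5] = 1.0
cells = np.arange(50)
likelihood = np.exp(-0.5 * ((cells - 8) / 2.0) ** 2)
post = markov_step(belief, motion=3, likelihood=likelihood)
```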
A facile method to compare EFTEM maps obtained from materials changing composition over time
Casu, Alberto; Genovese, Alessandro; Di Benedetto, Cristiano; Lentijo Mozo, Sergio; Sogne, Elisa; Zuddas, Efisio; Falqui, Andrea
2015-01-01
Energy Filtered Transmission Electron Microscopy (EFTEM) is an analytical tool that has been successfully and widely employed in the last two decades for obtaining fast elemental maps in TEM mode. Several studies and efforts have been addressed to investigate the limitations and advantages of the technique, as well as to improve the spatial resolution of compositional maps. Usually, EFTEM maps undergo post-acquisition treatments by changing brightness and contrast levels, either via dedicated software or via human elaboration, in order to maximize their signal-to-noise ratio and render them as visible as possible. However, the elemental maps forming a single set of EFTEM images are usually subjected to independent, map-by-map image treatment. This post-acquisition step becomes crucial when analyzing materials that change composition over time as a consequence of an external stimulus, because the map-by-map approach doesn't take into account how the chemical features of the imaged materials actually progress, in particular when the investigated elements exhibit very low signals. In this article, we present a facile procedure applicable to whole sets of EFTEM maps acquired on a sample that is evolving over time. The main aim is to find a common method to treat the images' features, in order to make them as comparable as possible without affecting the information they contain. Microsc. Res. Tech. 78:1090–1097, 2015. © 2015 Wiley Periodicals, Inc.
A Method for Mapping Future Urbanization in the United States
Directory of Open Access Journals (Sweden)
Lahouari Bounoua
2018-04-01
Full Text Available Cities are poised to absorb additional people. Their sustainability, or ability to accommodate a population increase without depleting resources or compromising future growth, depends on whether they harness the efficiency gains from urban land management. Population is often projected as a bulk national number without details about spatial distribution. We use Landsat and population data in a methodology to project and map U.S. urbanization for the year 2020 and document its spatial pattern. This methodology is important to spatially disaggregate projected population and assist land managers to monitor land use, assess infrastructure and distribute resources. We found the U.S. west coast urban areas to have the fastest population growth with relatively small land consumption resulting in future decrease in per capita land use. Except for Miami (FL), most other U.S. large urban areas, especially in the Midwest, are growing spatially faster than their population and inadvertently consuming land needed for ecosystem services. In large cities, such as New York, Chicago, Houston and Miami, land development is expected more in suburban zones than urban cores. In contrast, in Los Angeles land development within the city core is greater than in its suburbs.
Conformational analysis of lignin models
International Nuclear Information System (INIS)
Santos, Helio F. dos
2001-01-01
The conformational equilibria of two 5,5'-biphenyl lignin models have been analyzed using a quantum mechanical semiempirical method. The gas-phase and solution structures are discussed on the basis of NMR and X-ray experimental data. The results show that the observed conformations are solvent-dependent, with the geometries and thermodynamic properties correlating with the experimental information. This study shows how a systematic theoretical conformational analysis can help to understand chemical processes at a molecular level. (author)
Directory of Open Access Journals (Sweden)
Suhua Zhou
2016-04-01
Full Text Available The development of landslide susceptibility maps is of great importance due to rapid urbanization. The purpose of this study is to present a method to integrate the subjective weight with objective weight for regional landslide susceptibility mapping on the geographical information system (GIS) platform. The analytical hierarchy process (AHP), which is subjective, was employed to weight predictive factors' contribution to landslide occurrence. The frequency ratio (FR) method, which is objective, was used to derive subclasses' frequency ratio with respect to landslides that indicate the relative importance of a subclass within each predictive factor. A case study was carried out at Tsushima Island, Japan, using a historical inventory of 534 landslides and seven predictive factors: elevation, slope, aspect, terrain roughness index (TRI), lithology, land cover and mean annual precipitation (MAP). The landslide susceptibility index (LSI) was calculated using the weighted linear combination of factors' weights and subclasses' weights. The study area was classified into five susceptibility zones according to the LSI. In addition, the produced susceptibility map was compared with maps generated using the conventional FR and AHP method and validated using the relative landslide index (RLI). The validation result showed that the proposed method performed better than the conventional application of the FR method and AHP method. The obtained landslide susceptibility maps could serve as a scientific basis for urban planning and landslide hazard management.
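The weighted linear combination that yields the LSI can be written in a few lines. The factor weights and frequency ratios below are invented placeholders, not the values derived for Tsushima Island:

```python
# AHP factor weights (subjective importance; chosen to sum to 1)
ahp_w = {"slope": 0.35, "lithology": 0.25, "land_cover": 0.20, "MAP": 0.20}

# FR value of the subclass observed at one map cell (objective;
# >1 means the subclass is over-represented among past landslides)
fr = {"slope": 1.8, "lithology": 0.9, "land_cover": 1.2, "MAP": 1.1}

# Landslide susceptibility index for this cell:
# weighted linear combination of factor weights and subclass FR values
lsi = sum(ahp_w[f] * fr[f] for f in ahp_w)   # 0.63 + 0.225 + 0.24 + 0.22
```

Computing this for every cell and binning the LSI values gives susceptibility zones of the kind described in the abstract.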
Comparison of association mapping methods in a complex pedigreed population
DEFF Research Database (Denmark)
Sahana, Goutam; Guldbrandtsen, Bernt; Janss, Luc
2010-01-01
to collect SNP signals in intervals, to avoid the scattering of a QTL signal over multiple neighboring SNPs. Methods not accounting for genetic background (full pedigree information) performed worse, and methods using haplotypes were considerably worse, with a high false-positive rate, probably due to the presence of low-frequency haplotypes. It was necessary to account for full relationships among individuals to avoid excess false discovery. Although the methods were tested on a cattle pedigree, the results are applicable to any population with a complex pedigree structure...
A Fast and Specific Alignment Method for Minisatellite Maps
Directory of Open Access Journals (Sweden)
Eric Rivals
2006-01-01
Full Text Available Background: Variable minisatellites count among the most polymorphic markers of eukaryotic and prokaryotic genomes. This variability can affect gene coding regions, as in the prion protein gene, or gene regulation regions, as for the cystatin B gene, and be associated or implicated in diseases: Creutzfeldt-Jakob disease and myoclonus epilepsy type 1, for our examples. When it affects neutrally evolving regions, the polymorphism in length (i.e., in number of copies) of minisatellites has proved useful in population genetics. Motivation: In these tandem repeat sequences, different mutational mechanisms let the number of copies, as well as the copies themselves, vary. In particular, the interspersion of events of tandem duplication/contraction and of point mutation makes the succession of variant repeats much more informative than the allele length alone. Exploiting this information requires the ability to align minisatellite alleles by accounting for both point mutations and tandem duplications. Results: We propose a minisatellite map alignment program that improves on previous solutions. Our new program is faster, simpler, considers an extended evolutionary model, and is available to the community. We test it on the data set of 609 alleles of the MSY1 (DYF155S1) human minisatellite and confirm its ability to recover known evolutionary signals. Our experiments highlight that the informativeness of minisatellites resides in their length and composition polymorphisms. Exploiting both simultaneously is critical to unravel the implications of variable minisatellites in the control of gene expression and diseases. Availability: Software is available at http://atgc.lirmm.fr/ms_align/
Methods of Measuring and Mapping of Landslide Areas
Skrzypczak, Izabela; Kokoszka, Wanda; Kogut, Janusz; Oleniacz, Grzegorz
2017-12-01
The demand for new investment areas and the limitations of current zoning help explain why building on landslide areas cannot be completely ruled out. Monitoring areas at risk of landslides therefore becomes an important issue. Only appropriate monitoring, and the proper processing of the measurements into maps of areas at risk of landslides, enables us to estimate the risk and perform the relevant economic calculation for a planned investment in such areas. The results of surface and in-depth monitoring of the landslides are supplemented with constant observation of precipitation. Previous analyses and monitoring of landslides show that some of them are continuously active. GPS measurements, and especially laser scanning, provide unique activity data acquired on the surface of each individual landslide. The development of high-resolution numerical terrain models, and the creation of differential models based on successive measurements, informs us about the size of deformation, both in units of distance (displacements) and volume. Combining these data with information from in-depth monitoring allows the generation of a very reliable in-depth model of the landslide and, as a result, a proper calculation of the volume of colluvium. The programs presented in the article are a very effective tool for generating an in-depth model of a landslide. In Poland, the steps taken under the SOPO project, i.e. the monitoring and description of landslides, are absolutely necessary for social and economic reasons, and they may have a significant impact on the economy and finances of individual municipalities as well as of the whole country.
A system and method for online high-resolution mapping of gastric slow-wave activity.
Bull, Simon H; O'Grady, Gregory; Du, Peng; Cheng, Leo K
2014-11-01
High-resolution (HR) mapping employs multielectrode arrays to achieve spatially detailed analyses of propagating bioelectrical events. A major current limitation is that spatial analyses must be performed "off-line" (after experiments), compromising timely recording feedback and restricting experimental interventions. These problems motivated the development of a system and method for "online" HR mapping. HR gastric recordings were acquired and streamed to a novel software client. Algorithms were devised to filter data, identify slow-wave events, eliminate corrupt channels, and cluster activation events. A graphical user interface animated the data and plotted electrograms and maps. Results were compared against off-line methods. The online system analyzed 256-channel serosal recordings with no unexpected system terminations, with a mean delay of 18 s. Activation-time marking sensitivity was 0.92; the positive predictive value was 0.93. Abnormal slow-wave patterns including conduction blocks, ectopic pacemaking, and colliding wave fronts were reliably identified. Compared to traditional analysis methods, online mapping gave comparable results, with equivalent coverage of 90% of electrodes, average RMS errors of less than 1 s, and a correlation coefficient of 0.99 between activation maps. Accurate slow-wave mapping was achieved in near real-time, enabling monitoring of recording quality and experimental interventions targeted to dysrhythmic onset. This work also advances the translation of HR mapping toward real-time clinical application.
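A toy version of the event-marking step described above: detecting slow-wave activation as the steepest downstroke of each electrogram. This is a simplified stand-in for the paper's algorithm, using a plain derivative threshold instead of its filtering and clustering stages; the sampling rate, threshold and synthetic trace are assumptions.

```python
import numpy as np

def activation_times(signal, fs, thresh):
    """Return the times (s) where the first derivative drops below
    -thresh, keeping one mark per contiguous downstroke."""
    d = np.diff(signal) * fs                      # first-derivative estimate
    idx = np.where(d < -thresh)[0]
    if idx.size == 0:
        return idx.astype(float)
    first_of_run = np.insert(np.diff(idx) > 1, 0, True)
    return idx[first_of_run] / fs

# Synthetic trace with two abrupt downstrokes
# (between samples 19-20 and 59-60)
s = np.zeros(100)
s[20:] -= 1.0
s[60:] -= 1.0
marks = activation_times(s, fs=1.0, thresh=0.5)
```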
Energy Technology Data Exchange (ETDEWEB)
Lee, Dong Soo; Lee, Jae Sung; Kim, Kyeong Min; Chung, June Key; Lee, Myung Chul [College of Medicine, Seoul National Univ., Seoul (Korea, Republic of)
1998-08-01
We investigated the statistical methods used to compose the functional brain map of human working memory and the principal factors that affect localization. Repeated PET scans with four successive tasks, consisting of one control task and three different activation tasks, were performed on six right-handed normal volunteers for 2 minutes after bolus injections of 925 MBq H₂¹⁵O at intervals of 30 minutes. Image data were analyzed using SPM96 (Statistical Parametric Mapping) implemented with Matlab (Mathworks Inc., U.S.A.). Images from the same subject were spatially registered and normalized using linear and nonlinear transformation methods. The significance of the difference between the control and each activation state was estimated at every voxel based on the general linear model. Differences in global counts were removed using analysis of covariance (ANCOVA) with global activity as a covariate. Using the mean and variance for each condition, adjusted by ANCOVA, a t-statistic was computed at every voxel. To ease interpretation, t-values were transformed to the standard Gaussian distribution (Z-score). All the subjects carried out the activation and control tasks successfully, with an average rate of correct answers of 95%. The numbers of activated blobs were 4 for verbal memory I, 9 for verbal memory II, 9 for visual memory, and 6 for the conjunctive activation of these three tasks. Verbal working memory predominantly activates left-sided structures, and visual memory activates the right hemisphere. We conclude that rCBF PET imaging and the statistical parametric mapping method are useful for localizing the brain regions involved in verbal and visual working memory.
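The final t-to-Z conversion mentioned above maps each voxel's t-value to the standard normal deviate with the same tail probability. A sketch using SciPy (the degrees of freedom and the example t-values are illustrative, not taken from the study):

```python
import numpy as np
from scipy import stats

def t_to_z(t_map, df):
    """Convert a voxelwise t-statistic map to Z-scores by matching
    one-sided tail probabilities."""
    p = stats.t.sf(t_map, df)     # upper-tail p-value per voxel
    return stats.norm.isf(p)      # Z with the same tail probability

z = t_to_z(np.array([0.0, 2.0, 4.0]), df=20)
```

For large df the Z-scores approach the t-values; for small df they are pulled toward zero because the t-distribution has heavier tails than the normal.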
A simple and inexpensive method for genomic restriction mapping analysis
International Nuclear Information System (INIS)
Huang, C.H.; Lam, V.M.S.; Tam, J.W.O.
1988-01-01
The Southern blotting procedure for the transfer of DNA fragments from agarose gels to nitrocellulose membranes has revolutionized nucleic acid detection methods, and it forms the cornerstone of research in molecular biology. Basically, the method involves the denaturation of DNA fragments that have been separated on an agarose gel, the immobilization of the fragments by transfer to a nitrocellulose membrane, and the identification of the fragments of interest through hybridization to ³²P-labeled probes and autoradiography. While the method is sensitive and applicable to both genomic and cloned DNA, it suffers from the disadvantages of being time consuming and expensive, and fragments of greater than 15 kb are difficult to transfer. Moreover, although theoretically the nitrocellulose membrane can be washed and hybridized repeatedly using different probes, in practice, the membrane becomes brittle and difficult to handle after a few cycles. A direct hybridization method for pure DNA clones was developed in 1975 but has not been widely exploited. The authors report here a modification of their procedure as applied to genomic DNA. The method is simple, rapid, and inexpensive, and it does not involve transfer to nitrocellulose membranes
A Double Perturbation Method for Reducing Dynamical Degradation of the Digital Baker Map
Liu, Lingfeng; Lin, Jun; Miao, Suoxia; Liu, Bocheng
2017-06-01
The digital Baker map is widely used in different kinds of cryptosystems, especially for image encryption. However, any chaotic map which is realized on the finite precision device (e.g. computer) will suffer from dynamical degradation, which refers to short cycle lengths, low complexity and strong correlations. In this paper, a novel double perturbation method is proposed for reducing the dynamical degradation of the digital Baker map. Both state variables and system parameters are perturbed by the digital logistic map. Numerical experiments show that the perturbed Baker map can achieve good statistical and cryptographic properties. Furthermore, a new image encryption algorithm is provided as a simple application. With a rather simple algorithm, the encrypted image can achieve high security, which is competitive to the recently proposed image encryption algorithms.
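The perturbation idea above can be sketched in a few functions. This is a simplified floating-point illustration: the paper works with finite-precision fixed-point states and perturbs both the state variables and a system parameter, whereas here only the state is perturbed, and the logistic parameter r = 3.99 and the strength eps are assumptions.

```python
def logistic(z, r=3.99):
    """Digital logistic map used as the perturbation source."""
    return r * z * (1.0 - z)

def baker(x, y):
    """Classical Baker map on the unit square."""
    if x < 0.5:
        return 2.0 * x, y / 2.0
    return 2.0 * x - 1.0, (y + 1.0) / 2.0

def perturbed_baker(x, y, z, eps=1e-6):
    """One step of the Baker map with a small logistic-driven state
    perturbation, intended to break the short cycles of the digital
    realization."""
    z = logistic(z)                   # advance the perturbation source
    x, y = baker(x, y)                # advance the Baker map
    x = (x + eps * z) % 1.0           # perturb the state variables
    y = (y + eps * (1.0 - z)) % 1.0
    return x, y, z

# Iterate the perturbed map from an arbitrary interior point
x, y, z = 0.123, 0.456, 0.789
for _ in range(1000):
    x, y, z = perturbed_baker(x, y, z)
```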
Frahm, Jan-Michael; Pollefeys, Marc Andre Leon; Gallup, David Robert
2015-12-08
Methods of generating a three-dimensional representation of an object in a reference plane from a depth map comprising distances from a reference point to pixels in an image of the object taken from that point. Weights are assigned to respective voxels in a three-dimensional grid along rays extending from the reference point through the pixels in the image, based on the distances in the depth map from the reference point to the respective pixels, and a height map comprising an array of height values in the reference plane is formed based on the assigned weights. An n-layer height map may be constructed by generating a probabilistic occupancy grid for the voxels and forming an n-dimensional height map comprising an array of layer height values in the reference plane based on the probabilistic occupancy grid.
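The final collapse from a probabilistic occupancy grid to a height map can be sketched directly. The grid, resolution and threshold below are illustrative assumptions, and the ray-weighting step that would build the occupancy grid from the depth map is omitted:

```python
import numpy as np

def height_map(occupancy, z_res=0.1, p_thresh=0.5):
    """Collapse a probabilistic voxel occupancy grid indexed (x, y, z)
    into a height map: per column, the top of the highest voxel whose
    occupancy probability exceeds the threshold (0.0 if none does)."""
    occ = occupancy > p_thresh
    any_occ = occ.any(axis=2)
    # index of the top-most occupied voxel in each (x, y) column
    top = occ.shape[2] - 1 - np.argmax(occ[:, :, ::-1], axis=2)
    return np.where(any_occ, (top + 1) * z_res, 0.0)

# 2x2 columns, 5 voxel layers; one column occupied up to layer index 2
grid = np.zeros((2, 2, 5))
grid[0, 0, :3] = 0.9
hmap = height_map(grid)
```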
Mapping research questions about translation to methods, measures, and models
Berninger, V.; Rijlaarsdam, G.; Fayol, M.L.; Fayol, M.; Alamargot, D.; Berninger, V.W.
2012-01-01
About the book: Translation of cognitive representations into written language is one of the most important processes in writing. This volume provides a long-awaited updated overview of the field. The contributors discuss each of the commonly used research methods for studying translation; theorize
Directory of Open Access Journals (Sweden)
Ciptianingsari Ayu Vitantri
2017-11-01
Full Text Available This study aimed to describe the implementation of the Concise Learning Method (CLM) integrated with mind mapping, the students' conceptual understanding, and the students' responses to the approach in the Elementary Linear Algebra I course. It was a qualitative descriptive study; the subjects were mathematics and mathematics education students of the odd semester of the 2016/2017 academic year taking Elementary Linear Algebra I. The main instrument was the researcher, supported by observation sheets, a concept-understanding test, a response questionnaire, and an interview guide. The results show that: (1) the steps of CLM integrated with mind mapping comprise preview, participate, process (organizing the information into a mind map), practice, and produce; (2) the students' understanding of the mathematical concepts improved after the lessons; and (3) the students responded positively to CLM integrated with mind mapping. Keywords: Concise Learning Method; Mind Mapping; Conceptual Understanding; Response; Elementary Linear Algebra.
Apparatus and method for nuclear magnetic resonance scanning and mapping
International Nuclear Information System (INIS)
Damadian, R.V.
1983-01-01
An improved apparatus and method are disclosed for analyzing the chemical and structural composition of a specimen, including whole-body specimens such as living mammals, utilizing nuclear magnetic resonance (NMR) techniques. A magnetic field space necessary to obtain an NMR signal characteristic of the chemical structure of the specimen is focused to provide a resonance domain of selectable size, which may then be moved in a pattern with respect to the specimen to scan the specimen
Energy Technology Data Exchange (ETDEWEB)
Jeon, Seong Woo; Cho, Jeong Keon; Jeong, Hwi Chol [Korea Environment Institute, Seoul (Korea)
2000-12-01
The ecology/nature map defined in the amended Natural Environment Conservation Act is essential data for executing Korea's environmental policy, drawn up by assessing the national land in terms of ecological factors. Such an important map should be continuously revised, and its reliability improved by adding new factors; from this point of view, the present study presents a scheme for improving the ecology/nature map. 'A Study on Remote Probing Method for Drawing Ecology/Nature Map and the Application', performed over the 3 years since 1998, investigated methods for drawing thematic maps that can be built in a short time - a land-cover classification map, a vegetation classification map, and a riverine wetland classification map - together with the principles for promoting them hereafter. The study also presented the possibilities and limits of classification from several satellite image data sets, so it should be of considerable help in building these thematic maps at the government level. The land-cover classification map, the result of the first year, is already being built by the Ministry of Environment as a national project, and the improvement scheme for the vegetation map presented in the second year has been used in building the basic ecology/nature map. We hope that the results of this study will serve as basic data for drawing an ecology/nature map and contribute to a wider understanding of the usefulness of the various ecosystem analysis methods that apply an ecology/nature map and remote sensing. 55 refs., 38 figs., 24 tabs.
Basin boundaries and focal points in a map coming from Bairstow's method.
Gardini, Laura; Bischi, Gian-Italo; Fournier-Prunaret, Daniele
1999-06-01
This paper is devoted to the study of the global dynamical properties of a two-dimensional noninvertible map, with a denominator which can vanish, obtained by applying Bairstow's method to a cubic polynomial. It is shown that the complicated structure of the basins of attraction of the fixed points is due to the existence of singularities such as sets of nondefinition, focal points, and prefocal curves, which are specific to maps with a vanishing denominator, and have been recently introduced in the literature. Some global bifurcations that change the qualitative structure of the basin boundaries, are explained in terms of contacts among these singularities. The techniques used in this paper put in evidence some new dynamic behaviors and bifurcations, which are peculiar of maps with denominator; hence they can be applied to the analysis of other classes of maps coming from iterative algorithms (based on Newton's method, or others). (c) 1999 American Institute of Physics.
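The two-dimensional map studied here arises from iterating one Newton step of Bairstow's method, which seeks a quadratic factor x^2 + ux + v of the cubic. A minimal sketch (the example cubic, starting point and step count are illustrative; the determinant of the Jacobian vanishing is exactly where such maps have a vanishing denominator):

```python
import numpy as np

def bairstow_step(u, v, a, b, c):
    """One iteration of the Bairstow map for p(x) = x^3 + a x^2 + b x + c,
    seeking a quadratic factor x^2 + u x + v.  Dividing p by the factor
    gives quotient x + q and remainder r1*x + r0; a Newton step drives
    (r1, r0) to zero.  Where det(J) = 0 the map is undefined."""
    q = a - u
    r1 = b - v - u * q            # linear remainder coefficient
    r0 = c - v * q                # constant remainder coefficient
    J = np.array([[2 * u - a, -1.0],   # Jacobian d(r1, r0)/d(u, v)
                  [v,          u - a]])
    du, dv = np.linalg.solve(J, [r1, r0])
    return u - du, v - dv

# p(x) = x^3 - 6x^2 + 11x - 6 = (x-1)(x-2)(x-3); the factor (x-1)(x-2)
# corresponds to the fixed point (u, v) = (-3, 2) of the map
u, v = -2.5, 1.5
for _ in range(20):
    u, v = bairstow_step(u, v, -6.0, 11.0, -6.0)
```

Iterated from different starting points, the attained fixed point changes, which is what produces the complicated basin boundaries discussed in the abstract.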
Keyframes Global Map Establishing Method for Robot Localization through Content-Based Image Matching
Directory of Open Access Journals (Sweden)
Tianyang Cao
2017-01-01
Full Text Available. Self-localization and mapping are important for indoor mobile robots. We report a robust algorithm for map building and subsequent localization especially suited for indoor floor-cleaning robots. Common methods such as SLAM are easily 'kidnapped' by collisions or disturbed by similar objects; a keyframes global map establishing method is therefore needed for robot localization in multiple rooms and corridors. Content-based image matching is the core of this method. It is designed for this situation by establishing keyframes containing both floor and distorted wall images. Image distortion caused by the robot's view angle and movement is analyzed and derived, and an image matching solution is presented, consisting of the extraction of overlap regions between keyframes and the rebuilding of overlap regions through subblock matching. To improve accuracy, ceiling-point detection and mismatched-subblock checking are incorporated. This matching method can process environment video effectively. In experiments, less than 5% of frames are extracted as keyframes to build the global map; these keyframes are widely spaced yet overlap each other. With this method, the robot can localize itself by matching its real-time vision frames against the keyframes map. Even with many similar objects or backgrounds in the environment, or after the robot is kidnapped, localization is achieved with a position RMSE < 0.5 m.
A Method of Generating Indoor Map Spatial Data Automatically from Architectural Plans
Directory of Open Access Journals (Sweden)
SUN Weixin
2016-06-01
Full Text Available. Taking architectural plans as the data source, we propose a method that automatically generates indoor map spatial data. Firstly, referring to the spatial data demands of indoor maps, we analyze the basic characteristics of architectural plans and introduce the concepts of wall segment, adjoining node and adjoining wall segment, on which the basic flow of automatic indoor map spatial data generation is established. Then, according to the adjoining relations between wall lines at intersections with columns, we construct a method for repairing wall connectivity with respect to columns. Using gradual expansion and graphic reasoning to judge the local wall-symbol feature type on both sides of a door or window, and by updating the enclosing rectangle of the door or window, we develop a method for repairing wall connectivity with respect to doors and windows, together with a method for transforming doors and windows into indoor map point features. Finally, on the basis of the geometric relations between the median lines of adjoining wall segments, a wall centre-line extraction algorithm is presented. Taking one exhibition hall's architectural plan as an example, we performed experiments; the results show that the proposed methods cope well with various complex situations and effectively realize automatic extraction of indoor map spatial data.
Comparison of saturated areas mapping methods in the Jizera Mountains, Czech Republic
Czech Academy of Sciences Publication Activity Database
Kulasová, A.; Beven, K. J.; Blažková, Š. D.; Řezáčová, Daniela; Cajthaml, J.
2014-01-01
Roč. 62, č. 2 (2014), s. 160-168 ISSN 0042-790X R&D Projects: GA ČR(CZ) GAP209/11/2045 Institutional support: RVO:68378289 Keywords: mapping variable source areas * boot method * piezometers * vegetation mapping Subject RIV: DG - Atmosphere Sciences, Meteorology Impact factor: 1.486, year: 2014 http://147.213.145.2/vc_articles/2014_62_2_Kulasova_160.pdf
Directory of Open Access Journals (Sweden)
Chakkrid Klin-eam
2009-01-01
Full Text Available We prove strong convergence theorems for finding a common element of the zero point set of a maximal monotone operator and the fixed point set of a hemirelatively nonexpansive mapping in a Banach space by using monotone hybrid iteration method. By using these results, we obtain new convergence results for resolvents of maximal monotone operators and hemirelatively nonexpansive mappings in a Banach space.
Energy Technology Data Exchange (ETDEWEB)
Jun, Sung Woo; Chung, Sung Moon [Korea Environment Institute, Seoul (Korea)
1998-12-01
This study promoted the drawing up of an ecological and natural map, for which remote sensing methods are highly efficient. As the first step, the study works on drawing up a land-cover map to be used as a base map. Through detailed and thorough consideration of the GAP analysis of the USA, the CORINE project of the EU, and examples in Korea, it studied and proposed a land-cover classification system and method suitable for Korea. By providing strategies and principles for land-cover classification, it will be helpful in drawing up the ecological and natural map. 26 refs., 33 figs., 9 tabs.
Directory of Open Access Journals (Sweden)
Aschengrau Ann
2006-06-01
Full Text Available. Background: Mapping spatial distributions of disease occurrence and risk can serve as a useful tool for identifying exposures of public health concern. Disease registry data are often mapped by town or county of diagnosis and contain limited data on covariates. These maps often possess poor spatial resolution, the potential for spatial confounding, and the inability to consider latency. Population-based case-control studies can provide detailed information on residential history and covariates. Results: Generalized additive models (GAMs) provide a useful framework for mapping point-based epidemiologic data. Smoothing on location while controlling for covariates produces adjusted maps. We generate maps of odds ratios using the entire study area as a reference. We smooth using a locally weighted regression smoother (loess), a method that combines the advantages of nearest neighbor and kernel methods. We choose an optimal degree of smoothing by minimizing Akaike's Information Criterion. We use a deviance-based test to assess the overall importance of location in the model and pointwise permutation tests to locate regions of significantly increased or decreased risk. The method is illustrated with synthetic data and data from a population-based case-control study, using S-Plus and ArcView software. Conclusion: Our goal is to develop practical methods for mapping population-based case-control and cohort studies. The method described here performs well for our synthetic data, reproducing important features of the data and adequately controlling the covariate. When applied to the population-based case-control data set, the method suggests spatial confounding and identifies statistically significant areas of increased and decreased odds ratios.
A simple method for combining genetic mapping data from multiple crosses and experimental designs.
Directory of Open Access Journals (Sweden)
Jeremy L Peirce
Full Text Available BACKGROUND: Over the past decade many linkage studies have defined chromosomal intervals containing polymorphisms that modulate a variety of traits. Many phenotypes are now associated with enough mapping data that meta-analysis could help refine locations of known QTLs and detect many novel QTLs. METHODOLOGY/PRINCIPAL FINDINGS: We describe a simple approach to combining QTL mapping results for multiple studies and demonstrate its utility using two hippocampus weight loci. Using data taken from two populations, a recombinant inbred strain set and an advanced intercross population we demonstrate considerable improvements in significance and resolution for both loci. 1-LOD support intervals were improved 51% for Hipp1a and 37% for Hipp9a. We first generate locus-wise permuted P-values for association with the phenotype from multiple maps, which can be done using a permutation method appropriate to each population. These results are then assigned to defined physical positions by interpolation between markers with known physical and genetic positions. We then use Fisher's combination test to combine position-by-position probabilities among experiments. Finally, we calculate genome-wide combined P-values by generating locus-specific P-values for each permuted map for each experiment. These permuted maps are then sampled with replacement and combined. The distribution of best locus-specific P-values for each combined map is the null distribution of genome-wide adjusted P-values. CONCLUSIONS/SIGNIFICANCE: Our approach is applicable to a wide variety of segregating and non-segregating mapping populations, facilitates rapid refinement of physical QTL position, is complementary to other QTL fine mapping methods, and provides an appropriate genome-wide criterion of significance for combined mapping results.
Modelling Multi Hazard Mapping in Semarang City Using GIS-Fuzzy Method
Nugraha, A. L.; Awaluddin, M.; Sasmito, B.
2018-02-01
One important aspect of disaster mitigation planning is hazard mapping, which can provide spatial information on the distribution of locations threatened by disaster. Semarang City, the capital of Central Java Province, is one of the cities with high natural disaster intensity; its frequent natural disasters are tidal floods, river floods, landslides, and droughts. Semarang City therefore needs spatial information from multi-hazard mapping to support its disaster mitigation planning. The multi-hazard map model can be derived from parameters such as slope, rainfall, land use, and soil type maps. The modelling is done with a GIS method using scoring and overlay techniques; however, the accuracy of the modelling is better when the GIS method is combined with fuzzy logic techniques, which provide a good classification for determining disaster threats. The GIS-Fuzzy method builds a multi-hazard map of Semarang City that delivers results with good accuracy and an appropriate spread of threat classes, thereby providing disaster information for the city's mitigation planning. From the GIS-Fuzzy modelling, the membership type with the best accuracy was found to be the Gaussian membership, with the smallest RMSE (0.404) and the largest VAF (72.909%) among the membership types tested.
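The Gaussian membership favoured by the comparison can be sketched as a simple function of a hazard parameter, with RMSE as the accuracy measure (the centre, spread and example values below are illustrative assumptions, not the paper's calibrated numbers):

```python
import numpy as np

def gauss_membership(x, c, sigma):
    """Gaussian fuzzy membership degree of parameter value x for a
    hazard class centred at c with spread sigma (illustrative values)."""
    return np.exp(-((x - c) ** 2) / (2.0 * sigma ** 2))

def rmse(predicted, observed):
    """Root-mean-square error between modelled and reference threat levels."""
    p, o = np.asarray(predicted, float), np.asarray(observed, float)
    return float(np.sqrt(np.mean((p - o) ** 2)))

# e.g. membership of a 30-degree slope in a "steep" class centred at 40
steep = gauss_membership(30.0, 40.0, 10.0)
```

Unlike crisp scoring intervals, the Gaussian membership degrades gradually away from the class centre, which is what yields the smoother threat-class boundaries described above.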
Inverse problems for ODEs using contraction maps and suboptimality of the 'collage method'
Kunze, H. E.; Hicken, J. E.; Vrscay, E. R.
2004-06-01
Broad classes of inverse problems in differential and integral equations can be cast in the following framework: the optimal approximation of a target x in a suitable metric space X by the fixed point x̄ of a contraction map T on X. The 'collage method' attempts to solve such inverse problems by finding an operator T_c that maps the target x as close as possible to itself. In the case of ODEs, the appropriate contraction maps are integral Picard operators. In practice, the target solutions may arise from an interpolation of experimental data points. In this paper, we investigate the suboptimality of the collage method. A simple inequality that provides upper bounds on the improvement over collage coding is presented and some examples are studied. We conclude that, at worst, the collage method provides an excellent starting point for further optimization, in contrast to more traditional searching methods that must first select a good starting point.
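The collage idea can be made concrete for the scalar ODE x' = λx with target x(t) = e^t: instead of solving the full inverse problem, one minimizes the collage distance ||x − T_λ x|| over λ, where T_λ is the Picard operator. A minimal sketch (the grid, trapezoidal quadrature and λ search range are illustrative assumptions):

```python
import numpy as np

t = np.linspace(0.0, 1.0, 201)
target = np.exp(t)                     # target trajectory of x' = x, x(0) = 1

def picard(lam, x):
    """Picard integral operator (T_lam x)(t) = x(0) + lam * int_0^t x(s) ds,
    evaluated on the grid with trapezoidal quadrature."""
    h = t[1] - t[0]
    integral = np.concatenate(([0.0], np.cumsum((x[1:] + x[:-1]) * h / 2.0)))
    return x[0] + lam * integral

# collage coding: choose lam minimising the collage distance ||x - T_lam x||
lams = np.linspace(0.5, 1.5, 101)
collage_dist = [np.linalg.norm(target - picard(lam, target)) for lam in lams]
best_lam = float(lams[int(np.argmin(collage_dist))])
```

Here the collage minimiser recovers λ = 1, the parameter that generated the target; the paper's point is that for noisier or interpolated targets this λ is only near-optimal, but an excellent starting point.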
Combining Semantic and Lexical Methods for Mapping MedDRA to VCM Icons.
Lamy, Jean-Baptiste; Tsopra, Rosy
2018-01-01
VCM (Visualization of Concept in Medicine) is an iconic language that represents medical concepts, such as disorders, by icons. VCM has a formal semantics described by an ontology. The icons can be used in medical software to provide a visual summary or to enrich texts. However, using VCM icons in user interfaces requires mapping standard medical terminologies to VCM. Here, we present a method combining semantic and lexical approaches for mapping MedDRA to VCM. The method takes advantage of the hierarchical relations in MedDRA. It also analyzes the groups of lemmas in the terms' labels and relies on a manual mapping of these groups to the concepts in the VCM ontology. We evaluate the method on 50 terms. Finally, we discuss the method and suggest perspectives.
Surface Design Based on Discrete Conformal Transformations
Duque, Carlos; Santangelo, Christian; Vouga, Etienne
Conformal transformations are angle-preserving maps from one domain to another. Although angles are preserved, the lengths between arbitrary points are not generally conserved; as a consequence, some amount of distortion is associated with any conformal map. Such transformations find uses in various fields; we have used them to program non-uniformly swellable gel sheets to buckle into prescribed three-dimensional shapes. In this work we apply circle packings, a kind of discrete conformal map, to find conformal maps from the sphere to the plane that can be used as nearly uniform swelling patterns to program non-Euclidean sheets to buckle into spheres. We explore the possibility of tuning the area distortion to fit the experimental range of minimum and maximum swelling by modifying the boundary of the planar domain through different cutting schemes.
Directory of Open Access Journals (Sweden)
Guizhou Wang
2013-01-01
Full Text Available This paper presents a new classification method for high-spatial-resolution remote sensing images based on a strategic mechanism of spatial mapping and reclassification. The proposed method includes four steps. First, the multispectral image is classified by a traditional pixel-based classification method (support vector machine. Second, the panchromatic image is subdivided by watershed segmentation. Third, the pixel-based multispectral image classification result is mapped to the panchromatic segmentation result based on a spatial mapping mechanism and the area dominant principle. During the mapping process, an area proportion threshold is set, and the regional property is defined as unclassified if the maximum area proportion does not surpass the threshold. Finally, unclassified regions are reclassified based on spectral information using the minimum distance to mean algorithm. Experimental results show that the classification method for high-spatial-resolution remote sensing images based on the spatial mapping mechanism and reclassification strategy can make use of both panchromatic and multispectral information, integrate the pixel- and object-based classification methods, and improve classification accuracy.
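The area-dominant mapping and minimum-distance reclassification steps can be sketched on flattened label arrays (a toy illustration: the function name, the 0.6 area-proportion threshold and the 1-D features are assumptions; the paper uses SVM pixel labels and watershed segments):

```python
import numpy as np

def map_and_reclassify(labels, regions, features, threshold=0.6):
    """Assign each segmentation region the area-dominant pixel class; if the
    largest class's area proportion is below `threshold`, treat the region
    as unclassified and reclassify it by minimum distance to class means."""
    class_means = {c: features[labels == c].mean(axis=0)
                   for c in np.unique(labels)}
    result = {}
    for r in np.unique(regions):
        inside = regions == r
        classes, counts = np.unique(labels[inside], return_counts=True)
        if counts.max() / counts.sum() >= threshold:
            result[r] = int(classes[np.argmax(counts)])      # area dominant
        else:                                                # reclassify
            m = features[inside].mean(axis=0)
            result[r] = int(min(class_means,
                                key=lambda c: np.linalg.norm(m - class_means[c])))
    return result

# region 0 is mixed (no class reaches 60% of its area), region 1 is pure
labels   = np.array([0, 0, 1, 2,  1, 1, 1, 1])
regions  = np.array([0, 0, 0, 0,  1, 1, 1, 1])
features = labels.reshape(-1, 1).astype(float)   # toy 1-D spectra
region_class = map_and_reclassify(labels, regions, features)
```

The mixed region falls back to the minimum-distance-to-mean rule, mirroring how unclassified regions are handled in the abstract.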
International Nuclear Information System (INIS)
Fradkin, E.S.; Palchik, M.Ya.
1996-02-01
We study a family of exactly solvable models of conformally invariant quantum field theory in D-dimensional space. We demonstrate the existence of D-dimensional analogs of primary and secondary fields. Under the action of the energy-momentum tensor and conserved currents, each primary field creates an infinite set of (tensor) secondary fields of different generations. The commutators of secondary fields with the zero components of the current and the energy-momentum tensor include anomalous operator terms. We show that the Hilbert space of conformal theory has a special sector whose structure is defined solely by the Ward identities, independently of the choice of dynamical model. The states of this sector are constructed from secondary fields, and definite self-consistency conditions on these states fix the choice of the field model uniquely; in particular, Lagrangian models belong to this class. The self-consistency conditions are formulated as follows. Special superpositions Q_s, s = 1, 2, ..., of secondary fields are constructed. Each superposition is determined by the requirement that the form of its commutators with the energy-momentum tensor and current (i.e., its transformation properties) be identical to that of a primary field. Each equation Q_s(x) = 0 is consistent and defines an exactly solvable model for D ≥ 3. The structure of these models is analogous to that of the well-known two-dimensional conformal models, and the states Q_s(x)|0> are analogous to the null vectors of the two-dimensional theory. In each of these models one can obtain a closed set of differential equations for all the higher Green functions, as well as algebraic equations relating the scale dimension of the fundamental field to the D-dimensional analog of a central charge. As an example, we present a detailed discussion of a pair of exactly solvable models in even-dimensional space D ≥ 4. (author). 28 refs
Yaşar, Elif; Yıldırım, Yakup; Yaşar, Emrullah
2018-06-01
This paper is devoted to the conformable fractional space-time perturbed Gerdjikov-Ivanov (GI) equation, which appears in nonlinear fiber optics and photonic crystal fibers (PCF). We consider the model with full nonlinearity in order to give it a generalized flavor. The sine-Gordon equation approach is applied to the model equation to retrieve dark, bright, dark-bright, singular and combined singular optical solitons. The constraint conditions guaranteeing the existence of these solitons are also reported. We also present graphical simulations of the solutions for a better understanding of the physical phenomena behind the considered model.
Concept Map Technique as a New Method for Whole Text Translation
Krishan, Tamara Mohd Altabieri
2017-01-01
This study discusses the use of the concept map tool as a new method for teaching translation (from English to Arabic). The study comprised 80 students divided into two groups. The first group was taught the new vocabulary using the concept map method, whereas the second group was taught the new vocabulary by the traditional…
Smith, Richard; Miller, Kirstin
2013-01-01
Assessing neighborhood vitality is important to understanding how to improve quality of life and health outcomes. The ecocity model recognizes that cities are part of natural systems and favors walkable neighborhoods. This article introduces ecocity mapping, an innovative planning method, to the public health literature on community engagement by describing a pilot project with a new affordable housing development in Oakland, California between 2007 and 2009. Although ecocity mapping began as a paper technology, advances in geographic information systems (GIS) moved it forward. This article describes how Ecocity Builders used GIS to conduct ecocity mapping to (1) assess vitality of neighborhoods and urban centers to prioritize community health intervention pilot sites and (2) create scenario maps for use in community health planning. From fall 2007 to summer 2008, Ecocity Builders assessed neighborhood vitality using walking distance from parks, schools, rapid transit stops, grocery stores, and retail outlets. In 2008, ecocity maps were shared with residents to create a neighborhood health and sustainability plan. In 2009, Ecocity Builders developed scenario maps to show how changes to the built environment would improve air quality by reducing greenhouse gas emissions from vehicles, while increasing access to basic services and natural amenities. Community organizing with GIS was more useful than GIS alone for final site selection. GIS was useful in mapping scenarios after residents shared local neighborhood knowledge and ideas for change. Residents were interested in long-term environmental planning, provided they could meet immediate needs.
A perturbation method to the tent map based on Lyapunov exponent and its application
Cao, Lv-Chen; Luo, Yu-Ling; Qiu, Sen-Hui; Liu, Jun-Xiu
2015-10-01
Perturbation imposed on a chaos system is an effective way to maintain its chaotic features. A novel parameter perturbation method for the tent map based on the Lyapunov exponent is proposed in this paper. The pseudo-random sequence generated by the tent map is sent to another chaos function — the Chebyshev map for the post processing. If the output value of the Chebyshev map falls into a certain range, it will be sent back to replace the parameter of the tent map. As a result, the parameter of the tent map keeps changing dynamically. The statistical analysis and experimental results prove that the disturbed tent map has a highly random distribution and achieves good cryptographic properties of a pseudo-random sequence. As a result, it weakens the phenomenon of strong correlation caused by the finite precision and effectively compensates for the digital chaos system dynamics degradation. Project supported by the Guangxi Provincial Natural Science Foundation, China (Grant No. 2014GXNSFBA118271), the Research Project of Guangxi University, China (Grant No. ZD2014022), the Fund from Guangxi Provincial Key Laboratory of Multi-source Information Mining & Security, China (Grant No. MIMS14-04), the Fund from the Guangxi Provincial Key Laboratory of Wireless Wideband Communication & Signal Processing, China (Grant No. GXKL0614205), the Education Development Foundation and the Doctoral Research Foundation of Guangxi Normal University, the State Scholarship Fund of China Scholarship Council (Grant No. [2014]3012), and the Innovation Project of Guangxi Graduate Education, China (Grant No. YCSZ2015102).
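The perturbation scheme can be sketched in a few lines (a minimal sketch: the perturbation range [0.40, 0.45], the Chebyshev order 4 and the rescaling to [0, 1] are assumptions for illustration; the paper's exact replacement rule and finite-precision handling are not reproduced):

```python
import math

def perturbed_tent(x0, mu0, n, lo=0.40, hi=0.45):
    """Tent-map sequence whose parameter mu is dynamically perturbed:
    each output drives a Chebyshev map, and whenever the (rescaled)
    Chebyshev value lands in [lo, hi] it replaces mu."""
    x, mu, seq = x0, mu0, []
    for _ in range(n):
        x = x / mu if x < mu else (1.0 - x) / (1.0 - mu)   # tent map
        y = math.cos(4.0 * math.acos(2.0 * x - 1.0))        # Chebyshev T4 on [-1, 1]
        y = (y + 1.0) / 2.0                                 # rescale to [0, 1]
        if lo <= y <= hi:
            mu = y                                          # parameter perturbation
        seq.append(x)
    return seq

seq = perturbed_tent(0.37, 0.499, 1000)
```

Because mu keeps changing, the iteration avoids the short periodic cycles that a fixed-parameter tent map tends to fall into under finite precision.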
2015-01-01
Single molecule fluorescence spectroscopy holds the promise of providing direct measurements of protein folding free energy landscapes and conformational motions. However, fulfilling this promise has been prevented by technical limitations, most notably, the difficulty in analyzing the small packets of photons per millisecond that are typically recorded from individual biomolecules. Such limitation impairs the ability to accurately determine conformational distributions and resolve sub-millisecond processes. Here we develop an analytical procedure for extracting the conformational distribution and dynamics of fast-folding proteins directly from time-stamped photon arrival trajectories produced by single molecule FRET experiments. Our procedure combines the maximum likelihood analysis originally developed by Gopich and Szabo with a statistical mechanical model that describes protein folding as diffusion on a one-dimensional free energy surface. Using stochastic kinetic simulations, we thoroughly tested the performance of the method in identifying diverse fast-folding scenarios, ranging from two-state to one-state downhill folding, as a function of relevant experimental variables such as photon count rate, amount of input data, and background noise. The tests demonstrate that the analysis can accurately retrieve the original one-dimensional free energy surface and microsecond folding dynamics in spite of the sub-megahertz photon count rates and significant background noise levels of current single molecule fluorescence experiments. Therefore, our approach provides a powerful tool for the quantitative analysis of single molecule FRET experiments of fast protein folding that is also potentially extensible to the analysis of any other biomolecular process governed by sub-millisecond conformational dynamics. PMID:25988351
Canonical integration and analysis of periodic maps using non-standard analysis and Lie methods
Energy Technology Data Exchange (ETDEWEB)
Forest, E.; Berz, M.
1988-06-01
We describe a method and a way of thinking which is ideally suited for the study of systems represented by canonical integrators. Starting with the continuous description provided by the Hamiltonians, we replace it by a succession of preferably canonical maps. The power series representation of these maps can be extracted with a computer implementation of the tools of Non-Standard Analysis and analyzed by the same tools. For a nearly integrable system, we can define a Floquet ring in a way consistent with our needs. Using the finite time maps, the Floquet ring is defined only at the locations s_i where one perturbs or observes the phase space. At most the total number of locations is equal to the total number of steps of our integrator. We can also produce pseudo-Hamiltonians which describe the motion induced by these maps. 15 refs., 1 fig.
Digital geoecological maps and several methods of constructing them in GIS ArcGIS
Directory of Open Access Journals (Sweden)
S. V. Lebedev
2015-01-01
Full Text Available. This article focuses on methods of constructing digital geoecological maps using GIS ArcGIS technologies. The technique of GIS ArcGIS mapping is illustrated by examples of GIS maps of radioactive and chemical pollution of the snow cover in the territory of St. Petersburg (Russia). Geostatistical and deterministic approaches were applied to interpolate the input data, which were given as the coordinates of points located on the territory according to the measurement scheme. The optimal number of classification intervals for describing continuously distributed natural processes and phenomena on geoecological GIS maps is 3-5 intervals of the parameter under consideration. The boundaries of the class intervals are set depending on existing pollution standards for the different components of the environment and on the empirical character of the studied parameter's distribution over the territory under consideration.
A General Method for QTL Mapping in Multiple Related Populations Derived from Multiple Parents
Directory of Open Access Journals (Sweden)
Yan AO
2009-03-01
Full Text Available It's well known that incorporating some existing populations derived from multiple parents may improve QTL mapping and QTL-based breeding programs. However, no general maximum likelihood method has been available for this strategy. Based on the QTL mapping in multiple related populations derived from two parents, a maximum likelihood estimation method was proposed, which can incorporate several populations derived from three or more parents and also can be used to handle different mating designs. Taking a circle design as an example, we conducted simulation studies to study the effect of QTL heritability and sample size upon the proposed method. The results showed that under the same heritability, enhanced power of QTL detection and more precise and accurate estimation of parameters could be obtained when three F2 populations were jointly analyzed, compared with the joint analysis of any two F2 populations. Higher heritability, especially with larger sample sizes, would increase the ability of QTL detection and improve the estimation of parameters. Potential advantages of the method are as follows: firstly, the existing results of QTL mapping in single population can be compared and integrated with each other with the proposed method, therefore the ability of QTL detection and precision of QTL mapping can be improved. Secondly, owing to multiple alleles in multiple parents, the method can exploit gene resource more adequately, which will lay an important genetic groundwork for plant improvement.
Directory of Open Access Journals (Sweden)
Garza-Gisholt, Eduardo; Hemmi, Jan M; Hart, Nathan S; Collin, Shaun P
2014-01-01
Topographic maps that illustrate variations in the density of different neuronal sub-types across the retina are valuable tools for understanding the adaptive significance of retinal specialisations in different species of vertebrates. To date, such maps have been created from raw count data that have been subjected to only limited analysis (linear interpolation) and, in many cases, have been presented as iso-density contour maps with contour lines that have been smoothed 'by eye'. With the use of a stereological approach to counting neuronal distributions, a more rigorous approach to analysing the count data is warranted and potentially provides a more accurate representation of the neuron distribution pattern. Moreover, a formal spatial analysis of retinal topography permits a more robust comparison of topographic maps within and between species. In this paper, we present a new R-script for analysing the topography of retinal neurons and compare methods of interpolating and smoothing count data for the construction of topographic maps. We compare four methods for spatial analysis of cell count data: Akima interpolation, thin plate spline interpolation, thin plate spline smoothing and Gaussian kernel smoothing. The use of interpolation 'respects' the observed data and simply calculates the intermediate values required to create iso-density contour maps. Interpolation preserves more of the data but, consequently, includes outliers, sampling errors and/or other experimental artefacts. In contrast, smoothing the data reduces the 'noise' caused by artefacts and permits a clearer representation of the dominant, 'real' distribution. This is particularly useful where cell density gradients are shallow and small variations in local density may dramatically influence the perceived spatial pattern of neuronal topography. The thin plate spline and the Gaussian kernel methods both produce similar retinal topography maps but the smoothing parameters used may affect the outcome.
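The contrast drawn above between interpolation (which preserves artefacts) and kernel smoothing (which damps them) can be illustrated with a short sketch. The grid, the density peak and the outlier below are invented for illustration, and SciPy's generic `gaussian_filter` stands in for the authors' R-based workflow:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Hypothetical retinal cell-count grid (cells per sampling site) with one
# noisy outlier that interpolation would preserve but smoothing damps.
rng = np.random.default_rng(0)
y, x = np.mgrid[0:50, 0:50]
counts = 100 + 80 * np.exp(-((x - 25) ** 2 + (y - 25) ** 2) / 200)  # density peak
counts += rng.normal(0, 5, counts.shape)
counts[10, 10] += 300  # artefact / sampling error

# Gaussian kernel smoothing: the bandwidth (sigma) controls how strongly
# local noise is suppressed -- as noted above, this choice can change the map.
smooth = gaussian_filter(counts, sigma=3)

print(round(counts[10, 10]), round(smooth[10, 10]))  # outlier is largely removed
```

A larger `sigma` flattens shallow gradients as well as noise, which is exactly the trade-off the abstract warns about.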
Assessment of Three Flood Hazard Mapping Methods: A Case Study of Perlis
Azizat, Nazirah; Omar, Wan Mohd Sabki Wan
2018-03-01
Flood is a common natural disaster in Malaysia, affecting all states. According to the Drainage and Irrigation Department (DID) in 2007, about 29,270 km2, or 9 percent of the country's area, is prone to flooding. Floods can be devastating catastrophes, affecting people, the economy and the environment. Flood hazard mapping is an important part of flood assessment, defining the high-risk areas prone to flooding. The purposes of this study are to prepare a flood hazard map of Perlis and to evaluate flood hazard using the frequency ratio, statistical index and Poisson methods. Six factors affecting the occurrence of floods (elevation, distance from the drainage network, rainfall, soil texture, geology and erosion) were processed using ArcGIS 10.1 software. The flood location map in this study was generated from areas flooded in 2010, based on DID records. These parameters and the flood location map were analysed to prepare flood hazard maps representing the probability of flooding. The results of the analysis were verified using flood location data from 2013, 2014 and 2015. The comparison showed that the statistical index method predicts flood areas better than the frequency ratio and Poisson methods.
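As a sketch of the frequency ratio method named above: the FR of a factor class is the share of flood cells falling in that class divided by the share of all cells in that class, so values above 1 mark flood-prone classes. The tiny rasters here are invented; the study itself worked with ArcGIS layers:

```python
import numpy as np

# Hypothetical rasters: an elevation-class map (values 0..2) and a binary
# flood-inventory map from a past event (1 = flooded).
elev_class = np.array([[0, 0, 1, 2],
                       [0, 1, 1, 2],
                       [0, 1, 2, 2]])
flooded = np.array([[1, 1, 0, 0],
                    [1, 0, 0, 0],
                    [1, 0, 0, 0]])

def frequency_ratio(factor, flood):
    """FR per class = (% of flood cells in class) / (% of all cells in class)."""
    fr = {}
    for c in np.unique(factor):
        in_class = factor == c
        pct_flood = flood[in_class].sum() / flood.sum()
        pct_area = in_class.sum() / factor.size
        fr[int(c)] = pct_flood / pct_area
    return fr

fr = frequency_ratio(elev_class, flooded)
# The low-elevation class 0 flooded in every one of its cells, so its FR exceeds 1.
print(fr)
```

Summing the per-factor FR values at each cell then yields a relative flood hazard index for the whole raster.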
Yavuz, Sevtap Caglar; Sabanci, Nazmiye; Saripinar, Emin
2018-01-01
The EC-GA method was employed in this study as a 4D-QSAR method for the identification of the pharmacophore (Pha) of ruthenium(II) arene complex derivatives and quantitative prediction of activity. The arrangement of the computed geometric and electronic parameters for the atoms and bonds of each compound in a matrix is known as the electron-conformational matrix of congruity (ECMC); it contains data from HF/3-21G level calculations. Each compound was represented by a group of conformers rather than a single conformation; this conformational ensemble is the fourth dimension used to generate the model. ECMCs were compared within a certain range of tolerance values using the EMRE program, and the pharmacophore group responsible for the activity of ruthenium(II) arene complex derivatives was found. For selecting the sub-parameters with the greatest effect on activity in the series and for calculating theoretical activity values, the non-linear least squares method and the genetic algorithm included in the EMRE program were used. In addition, the compounds were divided into training and test sets, and the accuracy of the models was tested statistically by cross-validation. The model attained with the optimum 10 parameters gave highly satisfactory results for the training and test sets, with R2(training) = 0.817, q2 = 0.718, SE(training) = 0.066, q2(ext1) = 0.867, q2(ext2) = 0.849, q2(ext3) = 0.895, ccc(tr) = 0.895, ccc(test) = 0.930 and ccc(all) = 0.905. Since there is no 4D-QSAR research on metal-based organic complexes in the literature, this study is original and provides a powerful tool for the design of novel and selective ruthenium(II) arene complexes. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
International Nuclear Information System (INIS)
Sulimova, G.E.; Kompanijtsev, A.A.; Mojsyak, E.V.; Rakhmanaliev, Eh.R.; Klimov, E.A.; Udina, I.G.; Zakharov, I.A.
2000-01-01
Radiation hybrid mapping (RH mapping) is considered one of the main methods for constructing physical maps of mammalian genomes. In the introduction, the theoretical prerequisites for the development of RH mapping and the statistical methods of data analysis are discussed, and comparative characteristics of universal commercial panels of radiation hybrid somatic cells (RH panels) are presented. In the experimental part of the work, RH mapping is used to localize nucleotide sequences adjacent to Not I sites of human chromosome 3, with the aim of integrating the contig map of Not I clones into comprehensive maps of the human genome. Five nucleotide sequences, adjacent to the sites of integration of papilloma virus in the human genome and expressed in cervical cancer cells, were localized. It is demonstrated that the region 13q14.3-q21.1 is enriched with nucleotide sequences involved in the processes of carcinogenesis. RH mapping can be considered one of the most promising applications of modern radiation biology in the field of molecular genetics, namely, in constructing high-resolution physical maps of mammalian genomes.
An information hiding method based on LSB and tent chaotic map
Song, Jianhua; Ding, Qun
2011-06-01
In order to protect information security more effectively, a novel information hiding method based on LSB steganography and the Tent chaotic map is proposed: first, the secret message is encrypted with the Tent chaotic map, and then LSB steganography embeds the encrypted message in the cover image. Compared to traditional image information hiding methods, simulation results indicate that the method greatly improves imperceptibility and security, and achieves good results.
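The abstract gives no implementation details, so the sketch below only illustrates the two ingredients it names: a Tent-map keystream used as a stream cipher, followed by LSB embedding. The map parameters `x0` and `mu` and the byte-quantization step are arbitrary choices, not the authors':

```python
def tent_keystream(x0, mu, n):
    """Generate n pseudo-random bytes by iterating the tent map x -> mu*min(x, 1-x)."""
    x, out = x0, []
    for _ in range(n):
        x = mu * min(x, 1 - x)
        out.append(int(x * 256) & 0xFF)
    return bytes(out)

def embed_lsb(cover, bits):
    """Write one message bit into the least significant bit of each cover byte."""
    stego = bytearray(cover)
    for i, b in enumerate(bits):
        stego[i] = (stego[i] & 0xFE) | b
    return bytes(stego)

msg = b"hi"
key = tent_keystream(x0=0.37, mu=1.99, n=len(msg))
cipher = bytes(m ^ k for m, k in zip(msg, key))          # chaotic encryption (XOR)
bits = [(byte >> (7 - j)) & 1 for byte in cipher for j in range(8)]

cover = bytes(range(100, 100 + len(bits)))               # stand-in for image pixel bytes
stego = embed_lsb(cover, bits)

# Recovery: read the LSBs, reassemble bytes, XOR with the same keystream.
rec_bits = [b & 1 for b in stego[:len(bits)]]
rec = bytes(sum(bit << (7 - j) for j, bit in enumerate(rec_bits[i:i + 8]))
            for i in range(0, len(rec_bits), 8))
plain = bytes(c ^ k for c, k in zip(rec, key))
print(plain)  # b'hi'
```

Because the keystream depends sensitively on `x0` and `mu`, an attacker who extracts the LSB plane still recovers only ciphertext, which is the security gain the abstract claims over plain LSB hiding.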
Tomaschitz, R
2000-01-01
We study tachyons conformally coupled to the background geometry of a Milne universe. The causality of superluminal signal transfer is scrutinized in this context. The cosmic time of the comoving frame determines a distinguished time order for events connected by superluminal signals. An observer can relate his rest frame to the galaxy frame, and so compare the time order of events in his proper time to the cosmic time order. All observers can in this way arrive at identical conclusions on the causality of events connected by superluminal signals. An unambiguous energy concept for tachyonic rays is defined by means of the cosmic time of the comoving reference frame, without resorting to an antiparticle interpretation. On that basis we give an explicit proof that no signals can be sent into the past of observers. Causality-violating signals are energetically forbidden, as they would have negative energy in the rest frame of the emitting observer. If an observer emits a superluminal signal, the tachyonic respon...
International Nuclear Information System (INIS)
Rinkel, L.J.; Altona, C.
1987-01-01
A graphical method is presented for the conformational analysis of the sugar ring in DNA fragments by means of proton-proton couplings. The coupling data required for this analysis consist of sums of couplings, which are referred to as sigma 1' (= J1'2' + J1'2''), sigma 2' (= J1'2' + J2'3' + J2'2''), sigma 2'' (= J1'2'' + J2''3' + J2'2'') and sigma 3' (= J2'3' + J2''3' + J3'4'). These sums of couplings correspond to the distance between the outer peaks of the H1', H2', H2'' and H3' [31P] resonances, respectively (except for sigma 2' and sigma 2'' in the case of a small chemical shift difference between the H2' and H2'' resonances), and can often be obtained from 1H-NMR spectra via first-order measurement, obviating the necessity of a computer-assisted simulation of the fine structure of these resonances. Two different types of graphs for the interpretation of the coupling data are discussed: the first type of graph serves to probe whether the sugar ring occurs as a single conformer and, if so, to analyze the coupling data in terms of the geometry of this sugar ring. In cases where the sugar ring does not occur as a single conformer, but as a blend of N- and S-type sugar puckers, the second type of graph is used to analyze the coupling data in terms of the geometry and population of the most abundant form. It is shown that the latter type of analysis can be carried out on the basis of experimental values for merely sigma 1', sigma 2' and sigma 2'', without any assumptions or restrictions concerning a relation between the geometry of the N- and S-type conformer. In addition, it is discussed how insight can be gained into the conformational purity of the sugar ring from the observed fine structure of the H1' resonance.
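The four sums of couplings defined above are simple linear combinations of first-order peak separations. The J values below are invented illustrative numbers for a deoxyribose ring (the geminal coupling J2'2'' is entered as a magnitude), not data from the paper:

```python
# Hypothetical proton-proton couplings (Hz); J2'2'' is the geminal coupling magnitude.
J = {"1'2'": 9.0, "1'2''": 5.7, "2'3'": 5.9, "2''3'": 2.2, "2'2''": 14.0, "3'4'": 3.6}

# Sums of couplings as defined in the abstract; each corresponds to the
# outer-peak separation of the named resonance.
sigma_1p = J["1'2'"] + J["1'2''"]                  # H1' multiplet width
sigma_2p = J["1'2'"] + J["2'3'"] + J["2'2''"]      # H2' multiplet width
sigma_2pp = J["1'2''"] + J["2''3'"] + J["2'2''"]   # H2'' multiplet width
sigma_3p = J["2'3'"] + J["2''3'"] + J["3'4'"]      # H3' multiplet width

print(sigma_1p, sigma_2p, sigma_2pp, sigma_3p)
```

These four numbers are the inputs to the graphical analysis; no spectral simulation is needed when the multiplets are first-order.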
International Nuclear Information System (INIS)
Hautot, F.; Dubart, P.; Chagneau, B.; Bacri, C.O.; Abou-Khalil, R.
2017-01-01
New developments in the fields of robotics and computer vision make it possible to fuse sensors for fast real-time localization of radiological measurements in space, together with near real-time identification and characterization of radioactive sources. These capabilities make nuclear investigations more efficient for operator dosimetry evaluation, intervention scenarios, and risk mitigation and simulation, for example in accidents in unknown, potentially contaminated areas or during dismantling operations. This paper presents new progress in merging RGB-D camera-based SLAM (Simultaneous Localization and Mapping) systems with in-motion nuclear measurement methods in order to detect, locate, and evaluate the activity of radioactive sources in three dimensions.
Superintegrability of d-dimensional conformal blocks
International Nuclear Information System (INIS)
Isachenkov, Mikhail
2016-02-01
We observe that conformal blocks of scalar 4-point functions in a d-dimensional conformal field theory can be mapped to eigenfunctions of a 2-particle hyperbolic Calogero-Sutherland Hamiltonian. The latter describes two coupled Poeschl-Teller particles. Their interaction, whose strength depends smoothly on the dimension d, is known to be superintegrable. Our observation enables us to exploit the rich mathematical literature on Calogero-Sutherland models in deriving various results for conformal field theory. These include an explicit construction of conformal blocks in terms of Heckman-Opdam hypergeometric functions and a remarkable duality that relates the blocks of theories in different dimensions.
Superintegrability of d-dimensional conformal blocks
Energy Technology Data Exchange (ETDEWEB)
Isachenkov, Mikhail [Weizmann Institute of Science, Rehovot (Israel). Dept. of Particle Physics and Astronomy; Schomerus, Volker [DESY Theory Group, Hamburg (Germany)
2016-02-15
We observe that conformal blocks of scalar 4-point functions in a d-dimensional conformal field theory can be mapped to eigenfunctions of a 2-particle hyperbolic Calogero-Sutherland Hamiltonian. The latter describes two coupled Poeschl-Teller particles. Their interaction, whose strength depends smoothly on the dimension d, is known to be superintegrable. Our observation enables us to exploit the rich mathematical literature on Calogero-Sutherland models in deriving various results for conformal field theory. These include an explicit construction of conformal blocks in terms of Heckman-Opdam hypergeometric functions and a remarkable duality that relates the blocks of theories in different dimensions.
Method of Automatic Ontology Mapping through Machine Learning and Logic Mining
Institute of Scientific and Technical Information of China (English)
王英林
2004-01-01
Ontology mapping is the bottleneck in handling conflicts among heterogeneous ontologies and in implementing reconfiguration or interoperability of legacy systems. We propose an ontology mapping method using machine learning, type constraints and logic mining techniques. This method is able to find concept correspondences through instances, with the result optimized by an error function; it is able to find attribute correspondences between two equivalent concepts, with the mapping accuracy enhanced by combining instance learning, type constraints and the logic relations embedded in instances; moreover, it solves the most common kind of categorization conflict. We then propose a merging algorithm to generate the shared ontology and a reconfigurable architecture for interoperation based on multiple agents, in which the legacy systems are encapsulated as information agents participating in the integration system. Finally we give a simplified case study.
Real-time method for establishing a detection map for a network of sensors
Nguyen, Hung D; Koch, Mark W; Giron, Casey; Rondeau, Daniel M; Russell, John L
2012-09-11
A method for establishing a detection map of a dynamically configurable sensor network. This method determines an appropriate set of locations for a plurality of sensor units of a sensor network and establishes a detection map for the network of sensors while the network is being set up; the detection map includes the effects of the local terrain and individual sensor performance. Sensor performance is characterized during the placement of the sensor units, which enables dynamic adjustment or reconfiguration of the placement of individual elements of the sensor network during network set-up to accommodate variations in local terrain and individual sensor performance. The reconfiguration of the network during initial set-up to accommodate deviations from idealized individual sensor detection zones improves the effectiveness of the sensor network in detecting activities at a detection perimeter and can provide the desired sensor coverage of an area while minimizing unintentional gaps in coverage.
A possible method of carbon deposit mapping on plasma facing components using infrared thermography
International Nuclear Information System (INIS)
Mitteau, R.; Spruytte, J.; Vallet, S.; Travere, J.M.; Guilhem, D.; Brosset, C.
2007-01-01
The material eroded from the surface of plasma facing components is partly redeposited close to high heat flux areas. At these locations, the deposit is heated by the plasma and the deposition pattern evolves depending on the operation parameters. The mapping of the deposit is still a matter of intense scientific activity, especially during the course of experimental campaigns. A method is proposed based on the comparison of surface temperature maps obtained in situ by infrared cameras with those obtained by theoretical modelling. The difference between the two is attributed to the thermal resistance added by the deposited material, and is expressed as a deposit thickness. The method benefits from elaborate imaging techniques such as possibility theory and fuzzy logic. The results are consistent with deposit maps obtained by visual inspection during shutdowns.
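A minimal sketch of the thickness conversion implied above, assuming steady one-dimensional conduction: the added thermal resistance is R = dT/q, so a temperature excess dT over the clean-surface model, under heat flux q, maps to an equivalent deposit thickness d = k*dT/q. All numbers here are assumptions for illustration, not measured tokamak values:

```python
# Hedged 1D-conduction sketch: measured-minus-modelled surface temperature
# excess is attributed to a deposit of conductivity k under heat flux q.
k = 1.0     # W/(m K), assumed effective conductivity of the carbon deposit
q = 5.0e5   # W/m^2, assumed local heat flux
dT = 40.0   # K, infrared-measured temperature minus modelled clean-surface value

d = k * dT / q  # equivalent deposit thickness, metres
print(f"equivalent deposit thickness: {d * 1e6:.0f} micrometres")
```

In practice the uncertainty in k (deposits are porous) is large, which is one reason the authors lean on possibility theory and fuzzy logic rather than a single deterministic inversion.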
Cerezo, Javier; Aranda, Daniel; Avila Ferrer, Francisco J; Prampolini, Giacomo; Mazzeo, Giuseppe; Longhi, Giovanna; Abbate, Sergio; Santoro, Fabrizio
2018-06-01
We extend a recently proposed mixed quantum/classical method for computing the vibronic electronic circular dichroism (ECD) spectrum of molecules with different conformers, to cases where more than one hindered rotation is present. The method generalizes the standard procedure, based on the simple Boltzmann average of the vibronic spectra of the stable conformers, and includes the contribution of structures that sample all the accessible conformational space. It is applied to the simulation of the ECD spectrum of (S)-2,2,2-trifluoroanthrylethanol, a molecule with easily interconvertible conformers, whose spectrum exhibits a pattern of alternating positive and negative vibronic peaks. Results are in very good agreement with experiment and show that spectra averaged over all the sampled conformational space can deviate significantly from the simple average of the contributions of the stable conformers. The present mixed quantum/classical method is able to capture the effect of the nonlinear dependence of the rotatory strength on the molecular structure and of the anharmonic couplings among the modes responsible for molecular flexibility. Despite its computational cost, the procedure is still affordable and promises to be useful in all cases where the ECD shape arises from a subtle balance between vibronic effects and conformational variety. © 2018 Wiley Periodicals, Inc.
Yates, Katherine L; Schoeman, David S
2013-01-01
Spatial management tools, such as marine spatial planning and marine protected areas, are playing an increasingly important role in attempts to improve marine management and accommodate conflicting needs. Robust data are needed to inform decisions among different planning options, and early inclusion of stakeholder involvement is widely regarded as vital for success. One of the biggest stakeholder groups, and the most likely to be adversely impacted by spatial restrictions, is the fishing community. In order to take their priorities into account, planners need to understand spatial variation in their perceived value of the sea. Here a readily accessible, novel method for quantitatively mapping fishers' spatial access priorities is presented. Spatial access priority mapping, or SAPM, uses only basic functions of standard spreadsheet and GIS software. Unlike the use of remote-sensing data, SAPM actively engages fishers in participatory mapping, documenting rather than inferring their priorities. By so doing, SAPM also facilitates the gathering of other useful data, such as local ecological knowledge. The method was tested and validated in Northern Ireland, where over 100 fishers participated in a semi-structured questionnaire and mapping exercise. The response rate was excellent, 97%, demonstrating fishers' willingness to be involved. The resultant maps are easily accessible and instantly informative, providing a very clear visual indication of which areas are most important for the fishers. The maps also provide quantitative data, which can be used to analyse the relative impact of different management options on the fishing industry and can be incorporated into planning software, such as MARXAN, to ensure that conservation goals can be met at minimum negative impact to the industry. This research shows how spatial access priority mapping can facilitate the early engagement of fishers and the ready incorporation of their priorities into the decision-making process
A Method for Simultaneous Aerial and Terrestrial Geodata Acquisition for Corridor Mapping
Molina, P.; Blázquez, M.; Sastre, J.; Colomina, I.
2015-08-01
In this paper, we present mapKITE, a new mobile, simultaneous terrestrial and aerial, geodata collection and post-processing method. On one side, the method combines a terrestrial mobile mapping system (TMMS) with an unmanned aerial mapping one, both equipped with remote sensing payloads (at least, a nadir-looking visible-band camera in the UA) by means of which aerial and terrestrial geodata are acquired simultaneously. This tandem geodata acquisition system is based on a terrestrial vehicle (TV) and on an unmanned aircraft (UA) linked by a 'virtual tether', that is, a mechanism based on the real-time supply of UA waypoints by the TV. By means of the TV-to-UA tether, the UA follows the TV keeping a specific relative TV-to-UA spatial configuration enabling the simultaneous operation of both systems to obtain highly redundant and complementary geodata. On the other side, mapKITE presents a novel concept for geodata post-processing favoured by the rich geometrical aspects derived from the mapKITE tandem simultaneous operation. The approach followed for sensor orientation and calibration of the aerial images captured by the UA inherits the principles of Integrated Sensor Orientation (ISO) and adds the pointing-and-scaling photogrammetric measurement of a distinctive element observed in every UA image, which is a coded target mounted on the roof of the TV. By means of the TV navigation system, the orientation of the TV coded target is performed and used in the post-processing UA image orientation approach as a Kinematic Ground Control Point (KGCP). The geometric strength of a mapKITE ISO network is therefore high as it counts with the traditional tie point image measurements, static ground control points, kinematic aerial control and the new point-and-scale measurements of the KGCPs. With such a geometry, reliable system and sensor orientation and calibration and eventual further reduction of the number of traditional ground control points is feasible. The different
Germaine, Stephen S.; O'Donnell, Michael S.; Aldridge, Cameron L.; Baer, Lori; Fancher, Tammy; McBeth, Jamie; McDougal, Robert R.; Waltermire, Robert; Bowen, Zachary H.; Diffendorfer, James; Garman, Steven; Hanson, Leanne
2012-01-01
We evaluated how well three leading information-extraction software programs (eCognition, Feature Analyst, Feature Extraction) and manual hand digitization interpreted information from remotely sensed imagery of a visually complex gas field in Wyoming. Specifically, we compared how each mapped the area of and classified the disturbance features present on each of three remotely sensed images, including 30-meter-resolution Landsat, 10-meter-resolution SPOT (Satellite Pour l'Observation de la Terre), and 0.6-meter resolution pan-sharpened QuickBird scenes. Feature Extraction mapped the spatial area of disturbance features most accurately on the Landsat and QuickBird imagery, while hand digitization was most accurate on the SPOT imagery. Footprint non-overlap error was smallest on the Feature Analyst map of the Landsat imagery, the hand digitization map of the SPOT imagery, and the Feature Extraction map of the QuickBird imagery. When evaluating feature classification success against a set of ground-truthed control points, Feature Analyst, Feature Extraction, and hand digitization classified features with similar success on the QuickBird and SPOT imagery, while eCognition classified features poorly relative to the other methods. All maps derived from Landsat imagery classified disturbance features poorly. Using the hand digitized QuickBird data as a reference and making pixel-by-pixel comparisons, Feature Extraction classified features best overall on the QuickBird imagery, and Feature Analyst classified features best overall on the SPOT and Landsat imagery. Based on the entire suite of tasks we evaluated, Feature Extraction performed best overall on the Landsat and QuickBird imagery, while hand digitization performed best overall on the SPOT imagery, and eCognition performed worst overall on all three images. Error rates for both area measurements and feature classification were prohibitively high on Landsat imagery, while QuickBird was time and cost prohibitive for
Optimal design method to minimize users' thinking mapping load in human-machine interactions.
Huang, Yanqun; Li, Xu; Zhang, Jie
2015-01-01
The discrepancy between human cognition and machine requirements/behaviors usually results in a serious mental load for thinking mapping, or even disasters, when operating a product. It is important to help people avoid human-machine interaction confusion and difficulty in today's mentally demanding society. The goal is to improve the usability of a product and minimize the user's thinking mapping and interpreting load in human-machine interactions. An optimal human-machine interface design method is introduced, based on minimizing the mental load of the thinking mapping process between users' intentions and the affordances of product interface states. By analyzing the users' thinking mapping problem, an operating action model is constructed. According to human natural instincts and acquired knowledge, an expected ideal design with minimal thinking load is first uniquely determined. Then, creative alternatives, in terms of the way humans obtain operational information, are provided as datasets of digital interface states. Finally, using cluster analysis, an optimum solution is selected from the alternatives by calculating the distances between the two datasets. Considering multiple factors to minimize users' thinking mapping loads, a solution nearest to the ideal value is found in a human-car interaction design case. The clustering results show the method's effectiveness in finding an optimum solution to mental load minimization problems in human-machine interaction design.
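The final selection step (picking the alternative nearest to the ideal design) can be sketched with plain Euclidean distances. The feature vectors, their meanings, and the design names below are all invented for illustration:

```python
import numpy as np

# Score each candidate interface design by its distance to a hypothetical
# "ideal" feature vector representing minimal thinking mapping load.
ideal = np.array([1.0, 0.0, 0.2])          # e.g. visibility, step count, ambiguity
alternatives = {
    "design_A": np.array([0.9, 0.2, 0.3]),
    "design_B": np.array([0.5, 0.1, 0.1]),
    "design_C": np.array([1.0, 0.6, 0.8]),
}

# The optimum is the alternative with the smallest distance to the ideal.
best = min(alternatives, key=lambda k: np.linalg.norm(alternatives[k] - ideal))
print(best)  # design_A
```

The paper's method clusters the candidate dataset against the ideal rather than ranking raw distances, but the underlying comparison is of this distance-to-ideal kind.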
Conformal field theory in conformal space
International Nuclear Information System (INIS)
Preitschopf, C.R.; Vasiliev, M.A.
1999-01-01
We present a new framework for a Lagrangian description of conformal field theories in various dimensions based on a local version of d + 2-dimensional conformal space. The results include a true gauge theory of conformal gravity in d = (1, 3) and any standard matter coupled to it. An important feature is the automatic derivation of the conformal gravity constraints, which are necessary for the analysis of the matter systems
Interactive overlays: a new method for generating global journal maps from Web-of-Science data
Leydesdorff, L.; Rafols, I.
2012-01-01
Recent advances in methods and techniques enable us to develop interactive overlays to a global map of science based on aggregated citation relations among the 9162 journals contained in the Science Citation Index and Social Science Citation Index 2009. We first discuss the pros and cons of the
Soliton-like solutions to the GKdV equation by extended mapping method
International Nuclear Information System (INIS)
Wu Ranchao; Sun Jianhua
2007-01-01
In this note, many new exact solutions of the generalized KdV equation, such as rational solutions, periodic solutions in terms of Jacobian elliptic and triangular functions, and soliton-like solutions, are constructed by symbolic computation and the extended mapping method, with the auxiliary ordinary differential equation replaced by a more general one.
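In the mapping method's standard form (a common version from the literature, not necessarily the authors' exact ansatz), one seeks solutions as a finite power series in an auxiliary function:

\[
u(\xi) = \sum_{i=0}^{n} a_i\,\phi^i(\xi), \qquad \left(\frac{d\phi}{d\xi}\right)^{2} = r + p\,\phi^2(\xi) + q\,\phi^4(\xi),
\]

where the truncation order n is fixed by balancing the highest derivative against the strongest nonlinearity. Depending on the signs of r, p and q, the auxiliary equation is solved by sech, tanh or Jacobian elliptic functions, which is what produces the solitary-wave, triangular and elliptic families of solutions mentioned above; the "extended" method replaces this auxiliary equation by a more general one to enlarge the solution set.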
Spectral analysis of charcoal on soils: Implications for wildland fire severity mapping methods
Alistair M. S. Smith; Jan U. H. Eitel; Andrew T. Hudak
2010-01-01
Recent studies in the Western United States have supported climate scenarios that predict a higher occurrence of large and severe wildfires. Knowledge of the severity is important to infer long-term biogeochemical, ecological, and societal impacts, but understanding the sensitivity of any severity mapping method to variations in soil type and increasing charcoal (char...
DEFF Research Database (Denmark)
Svendsen, Casper Steinmann; Jensen, Jan; Fedorov, Dmitri
2013-01-01
We extend the Effective Fragment Molecular Orbital (EFMO) method to the frozen domain approach where only the geometry of an active part is optimized, while the many-body polarization effects are considered for the whole system. The new approach efficiently mapped out the entire reaction path of ...
Directory of Open Access Journals (Sweden)
Shunji Natsuka
Glycan Atlas is a set of glycan maps covering the whole body of an organism. A glycan map, which includes data on glycan structure and quantity, displays the micro-heterogeneity of the glycans in a tissue, an organ, or cells. Two-dimensional glycan mapping is widely used for structure analysis of N-linked oligosaccharides on glycoproteins. In this study we developed a comprehensive method for mapping both N- and O-glycans, with and without sialic acid. Mapping data for 150 standard pyridylaminated glycans were collected. The empirical additivity rule proposed in earlier reports could be adapted to this extended glycan map. The adapted rule is that the elution time of pyridylamino glycans on high performance liquid chromatography (HPLC) is expected to be the simple sum of the partial elution times assigned to each monosaccharide residue. The comprehensive mapping method developed in this study is a powerful tool for describing the micro-heterogeneity of glycans. Furthermore, we prepared 42 pyridylamino (PA) glycans from human serum and were able to draw a map of human serum N- and O-glycans as an initial step in editing the Glycan Atlas.
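The additivity rule stated above reduces to a weighted sum over residue composition. The unit contributions below are made-up illustrative numbers, not the calibrated values from the 150-glycan standard set:

```python
# Hedged sketch of the additivity rule: predicted HPLC elution time of a
# PA-glycan = sum of partial elution times assigned to each monosaccharide residue.
unit_contribution = {"GlcNAc": 1.8, "Man": 1.2, "Gal": 1.5, "Fuc": 2.1, "NeuAc": 0.9}

def predicted_elution(composition):
    """Sum the per-residue partial elution times for a residue-count dict."""
    return sum(unit_contribution[res] * n for res, n in composition.items())

core = {"GlcNAc": 2, "Man": 3}  # N-glycan trimannosyl core
t = predicted_elution(core)
print(round(t, 2))  # 7.2
```

In practice the partial times would be fitted by regression against the 150 standards, and deviations from additivity flag unusual linkages or branching.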
Optical method for mapping the transverse phase space of a charged particle beam
International Nuclear Information System (INIS)
Fiorito, R.B.; Shkvarunets, A.G.; O'Shea, P.G.
2002-01-01
We are developing an all-optical method to map the transverse phase space of a charged particle beam. Our technique employs OTR interferometry (OTRI) in combination with a scanning pinhole to make local orthogonal (x, y) divergence and trajectory angle measurements as a function of position within the transverse profile of the beam. The localized data allow a reconstruction of the horizontal and vertical phase spaces of the beam. We have also demonstrated how single and multiple pinholes can, in principle, be used to make such measurements simultaneously.
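Once the pinhole scan provides paired position/angle samples, a standard rms-emittance estimate (a textbook second-moment formula, not necessarily the authors' reconstruction) summarizes the measured phase space. The synthetic beam below is invented:

```python
import numpy as np

# Sketch: paired samples of position x and trajectory angle x' give the
# rms emittance eps = sqrt(<x^2><x'^2> - <x x'>^2) after mean subtraction.
rng = np.random.default_rng(2)
x = rng.normal(0.0, 1.0e-3, 10000)              # position [m]
xp = 0.1 * x + rng.normal(0.0, 0.5e-3, 10000)   # angle [rad], correlated with x

cov = np.cov(x, xp)                              # 2x2 second-moment matrix
eps = np.sqrt(cov[0, 0] * cov[1, 1] - cov[0, 1] ** 2)
print(f"rms emittance ~ {eps:.2e} m rad")
```

The cross term `cov[0, 1]` is what the localized divergence measurements supply; a profile monitor alone would miss it.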
Sych, Robert; Nakariakov, Valery; Anfinogentov, Sergey
Wavelet analysis is suitable for investigating waves and oscillations in the solar atmosphere that are localized in both time and frequency. We have developed an algorithm to detect such waves using Pixelized Wavelet Filtration (the PWF method). This method yields information about the presence of propagating and non-propagating waves in the observational data (cubes of images) and localizes them precisely in time as well as in space. We tested the algorithm and found that the results of coronal wave detection are consistent with those obtained by visual inspection. For fast exploration of a data cube, we additionally applied the previously developed PeriodMap analysis. This method is based on the Fast Fourier Transform and makes it possible, at an initial stage, to quickly find 'hot' regions with peak harmonic oscillations and to determine their spatial distribution at the significant harmonics. We propose splitting the detection of coronal waves into two parts: first, apply the PeriodMap analysis (fast preparation), and second, use the information about the spatial distribution of oscillation sources to apply the PWF method (slow preparation). The data can be processed by two possible algorithms, in automatic or in hands-on operation mode. In the first case we use multiple PWF analyses to prepare narrowband maps in frequency subbands and/or harmonic PWF analyses for separate harmonics in a spectrum; in the second case we manually select the necessary spectral subband and temporal interval and then construct narrowband maps. For practical implementation of the proposed methods, we have developed a remote data processing system at the Institute of Solar-Terrestrial Physics, Irkutsk. The system is based on the data processing server http://pwf.iszf.irk.ru. The main aim of this resource is the calculation, via remote access through a local and/or global network (Internet), of narrowband maps of wave sources, both in the whole spectral band and at significant harmonics. In addition
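The PeriodMap stage described above reduces, in essence, to an FFT along the time axis of the image cube and a per-pixel record of the dominant harmonic. The synthetic cube below (noise plus one oscillating patch) is invented for illustration:

```python
import numpy as np

# Build a toy data cube: 128 frames of 16x16 pixels of noise, with a patch
# oscillating at a 16-frame period.
t = np.arange(128)
cube = np.random.default_rng(1).normal(0, 0.1, (128, 16, 16))
cube[:, 4:8, 4:8] += np.sin(2 * np.pi * t / 16)[:, None, None]

# FFT each pixel's time series and record the frequency of its strongest
# harmonic, highlighting "hot" oscillating regions.
spec = np.abs(np.fft.rfft(cube, axis=0))
spec[0] = 0                                    # drop the DC component
freqs = np.fft.rfftfreq(cube.shape[0], d=1.0)  # cycles per frame
peak_freq = freqs[np.argmax(spec, axis=0)]     # dominant frequency per pixel

print(peak_freq[5, 5])  # 0.0625, i.e. the 16-frame period of the patch
```

A map of `peak_freq` (or of the peak amplitude) is the quick first-pass product; the slower PWF filtering is then applied only where such hot regions appear.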
Serag, Maged F.
2014-10-06
Single-molecule localization and tracking has been used to translate spatiotemporal information of individual molecules into maps of their diffusion behaviours. However, accurate analysis of diffusion behaviours, as well as of other parameters such as the conformation and size of molecules, remains a limitation of the method. Here, we report a method that addresses the limitations of existing single-molecule localization methods. The method is based on temporal tracking of the cumulative area occupied by molecules. These temporal fluctuations are tied to molecular size, rates of diffusion and conformational changes. By analysing fluorescent nanospheres and double-stranded DNA molecules of different lengths and topological forms, we demonstrate that our cumulative-area method surpasses the conventional single-molecule localization method in terms of the accuracy of the determined diffusion coefficients. Furthermore, the cumulative-area method provides conformational relaxation times of structurally flexible chains along with diffusion coefficients, which together are relevant to work in a wide spectrum of scientific fields.
Serag, Maged F.; Abadi, Maram; Habuchi, Satoshi
2014-01-01
Single-molecule localization and tracking has been used to translate spatiotemporal information of individual molecules into maps of their diffusion behaviours. However, accurate analysis of diffusion behaviours, as well as of other parameters such as the conformation and size of molecules, remains a limitation of the method. Here, we report a method that addresses the limitations of existing single-molecule localization methods. The method is based on temporal tracking of the cumulative area occupied by molecules. These temporal fluctuations are tied to molecular size, rates of diffusion and conformational changes. By analysing fluorescent nanospheres and double-stranded DNA molecules of different lengths and topological forms, we demonstrate that our cumulative-area method surpasses the conventional single-molecule localization method in terms of the accuracy of the determined diffusion coefficients. Furthermore, the cumulative-area method provides conformational relaxation times of structurally flexible chains along with diffusion coefficients, which together are relevant to work in a wide spectrum of scientific fields.
A Fast and Flexible Method for Meta-Map Building for ICP Based SLAM
Kurian, A.; Morin, K. W.
2016-06-01
Recent developments in LiDAR sensors make mobile mapping fast and cost effective. These sensors generate a large amount of data, which in turn improves the coverage and detail of the map. Due to the limited range of the sensor, one has to collect a series of scans to build the entire map of the environment. With good GNSS coverage, building a map is a well-addressed problem. But in an indoor environment, GNSS reception is limited and an inertial solution, if available, can quickly diverge. In such situations, simultaneous localization and mapping (SLAM) is used to generate a navigation solution and map concurrently. SLAM using point clouds poses a number of computational challenges even with modern hardware due to the sheer amount of data. In this paper, we propose two strategies for minimizing the cost of computation and storage when a 3D point cloud is used for navigation and real-time map building. We have used the 3D point cloud generated by Leica Geosystems' Pegasus Backpack, which is equipped with Velodyne VLP-16 LiDAR scanners. To improve the speed of the conventional iterative closest point (ICP) algorithm, we propose a point cloud sub-sampling strategy which does not throw away any key features and yet significantly reduces the number of points that need to be processed and stored. In order to speed up the correspondence-finding step, a dual kd-tree and circular buffer architecture is proposed. We have shown that the proposed method can run in real time and has excellent navigation accuracy characteristics.
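The two cost-reduction ideas above, sub-sampling the cloud and accelerating correspondence search with a kd-tree, can be sketched minimally as follows. This is not the paper's dual-tree/circular-buffer design; it uses a generic voxel-grid sub-sampler and a single SciPy kd-tree, and all names are hypothetical:

```python
import numpy as np
from scipy.spatial import cKDTree

def voxel_subsample(points, voxel=0.1):
    """Keep one representative point (the centroid) per occupied voxel."""
    keys = np.floor(points / voxel).astype(np.int64)
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    n = inverse.max() + 1
    sums = np.zeros((n, 3))
    counts = np.zeros(n)
    np.add.at(sums, inverse, points)
    np.add.at(counts, inverse, 1.0)
    return sums / counts[:, None]

def nearest_correspondences(map_points, scan_points):
    """One ICP inner step: for each scan point, its closest map point."""
    tree = cKDTree(map_points)  # rebuilt only when the map changes
    dist, idx = tree.query(scan_points)
    return map_points[idx], dist
```

A full ICP loop would alternate this correspondence step with a rigid-transform estimate until the mean distance stops improving.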
A FAST AND FLEXIBLE METHOD FOR META-MAP BUILDING FOR ICP BASED SLAM
Directory of Open Access Journals (Sweden)
A. Kurian
2016-06-01
Full Text Available Recent developments in LiDAR sensors make mobile mapping fast and cost effective. These sensors generate a large amount of data, which in turn improves the coverage and detail of the map. Due to the limited range of the sensor, one has to collect a series of scans to build the entire map of the environment. With good GNSS coverage, building a map is a well-addressed problem. But in an indoor environment, GNSS reception is limited and an inertial solution, if available, can quickly diverge. In such situations, simultaneous localization and mapping (SLAM) is used to generate a navigation solution and map concurrently. SLAM using point clouds poses a number of computational challenges even with modern hardware due to the sheer amount of data. In this paper, we propose two strategies for minimizing the cost of computation and storage when a 3D point cloud is used for navigation and real-time map building. We have used the 3D point cloud generated by Leica Geosystems' Pegasus Backpack, which is equipped with Velodyne VLP-16 LiDAR scanners. To improve the speed of the conventional iterative closest point (ICP) algorithm, we propose a point cloud sub-sampling strategy which does not throw away any key features and yet significantly reduces the number of points that need to be processed and stored. In order to speed up the correspondence-finding step, a dual kd-tree and circular buffer architecture is proposed. We have shown that the proposed method can run in real time and has excellent navigation accuracy characteristics.
A recognition method research based on the heart sound texture map
Directory of Open Access Journals (Sweden)
Huizhong Cheng
2016-06-01
Full Text Available In order to improve the heart sound recognition rate and reduce the recognition time, this paper introduces a new method for heart sound pattern recognition using the Heart Sound Texture Map. Based on the heart sound model, we give the definitions of the heart sound time-frequency diagram and the Heart Sound Texture Map, study the principle and implementation of the Heart Sound Window Function, and then discuss how to use the Heart Sound Window Function and the short-time Fourier transform to obtain a two-dimensional heart sound time-frequency diagram. We propose a corner-correlation recognition algorithm based on the Heart Sound Texture Map according to the characteristics of heart sounds. The simulation results show that the Heart Sound Window Function, compared with traditional window functions, makes the textures of the first (S1) and second (S2) heart sounds clearer, and that the corner-correlation recognition algorithm based on the Heart Sound Texture Map can significantly improve the recognition rate and reduce the computational expense, making it an effective heart sound recognition method.
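The time-frequency diagram underlying the texture map is a short-time Fourier transform of the sound. The sketch below substitutes a generic Hann window for the paper's purpose-built Heart Sound Window Function, so it illustrates only the STFT step, not the authors' full method:

```python
import numpy as np
from scipy.signal import stft

def texture_map(signal, fs, window='hann', nperseg=256):
    """Time-frequency magnitude map of a 1-D sound signal.

    Note: the paper uses a dedicated Heart Sound Window Function;
    a Hann window is used here as a generic stand-in.
    """
    f, t, Z = stft(signal, fs=fs, window=window, nperseg=nperseg)
    return f, t, np.abs(Z)
```

Corner features extracted from the resulting magnitude image would then feed the correlation-based recognizer described in the abstract.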
How to Map Theory: Reliable Methods Are Fruitless Without Rigorous Theory.
Gray, Kurt
2017-09-01
Good science requires both reliable methods and rigorous theory. Theory allows us to build a unified structure of knowledge, to connect the dots of individual studies and reveal the bigger picture. Some have criticized the proliferation of pet "Theories," but generic "theory" is essential to healthy science, because questions of theory are ultimately those of validity. Although reliable methods and rigorous theory are synergistic, Action Identification suggests psychological tension between them: The more we focus on methodological details, the less we notice the broader connections. Therefore, psychology needs to supplement training in methods (how to design studies and analyze data) with training in theory (how to connect studies and synthesize ideas). This article provides a technique for visually outlining theory: theory mapping. Theory mapping contains five elements, which are illustrated with moral judgment and with cars. Also included are 15 additional theory maps provided by experts in emotion, culture, priming, power, stress, ideology, morality, marketing, decision-making, and more (see all at theorymaps.org ). Theory mapping provides both precision and synthesis, which helps to resolve arguments, prevent redundancies, assess the theoretical contribution of papers, and evaluate the likelihood of surprising effects.
Conformal dimension theory and application
Mackay, John M
2010-01-01
Conformal dimension measures the extent to which the Hausdorff dimension of a metric space can be lowered by quasisymmetric deformations. Introduced by Pansu in 1989, this concept has proved extremely fruitful in a diverse range of areas, including geometric function theory, conformal dynamics, and geometric group theory. This survey leads the reader from the definitions and basic theory through to active research applications in geometric function theory, Gromov hyperbolic geometry, and the dynamics of rational maps, amongst other areas. It reviews the theory of dimension in metric spaces and of deformations of metric spaces. It summarizes the basic tools for estimating conformal dimension and illustrates their application to concrete problems of independent interest. Numerous examples and proofs are provided. Working from basic definitions through to current research areas, this book can be used as a guide for graduate students interested in this field, or as a helpful survey for experts. Background needed ...
G-MAPSEQ – a new method for mapping reads to a reference genome
Directory of Open Access Journals (Sweden)
Wojciechowski Pawel
2016-06-01
Full Text Available The problem of mapping reads to a reference genome is one of the most essential problems in modern computational biology. The most popular algorithms used to solve this problem are based on the Burrows-Wheeler transform and the FM-index. However, these cause some issues with highly mutated sequences due to the limited number of mutations allowed. G-MAPSEQ is a novel, hybrid algorithm combining two interesting methods: alignment-free sequence comparison and ultra-fast sequence alignment. The former is a fast heuristic algorithm which uses k-mer characteristics of nucleotide sequences to find potential mapping places. The latter is a very fast GPU implementation of sequence alignment used to verify the correctness of these mapping positions. The source code of G-MAPSEQ, along with other bioinformatic software, is available at: http://gpualign.cs.put.poznan.pl.
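The alignment-free stage can be illustrated with a k-mer profile comparison. This is a generic cosine-similarity sketch under the stated idea, not the G-MAPSEQ heuristic itself; function names are hypothetical:

```python
from collections import Counter
import math

def kmer_profile(seq, k=4):
    """Count the k-mers of a nucleotide sequence."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def cosine_similarity(p, q):
    """Cosine similarity of two k-mer count profiles (alignment-free)."""
    shared = set(p) & set(q)
    dot = sum(p[m] * q[m] for m in shared)
    norm = (math.sqrt(sum(v * v for v in p.values()))
            * math.sqrt(sum(v * v for v in q.values())))
    return dot / norm if norm else 0.0
```

Regions of the reference whose profiles score highly against a read's profile would become candidate mapping positions, later verified by exact alignment.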
International Nuclear Information System (INIS)
Sun Li-Sha; Kang Xiao-Yun; Zhang Qiong; Lin Lan-Xin
2011-01-01
Based on symbolic dynamics, a novel computationally efficient algorithm is proposed to estimate the unknown initial vectors of globally coupled map lattices (CMLs). It is proved that not all inverse chaotic mapping functions satisfy the contraction mapping condition. It is found that the values in phase space do not always converge on their initial values under sufficient backward iteration of the symbolic vectors, in terms of global convergence or divergence (CD). Both the CD property and the coupling strength are directly related to the mapping function of the given CML. Furthermore, the CD properties of the Logistic, Bernoulli, and Tent chaotic mapping functions are investigated and compared. Various simulation results and the performance of the initial vector estimation at different signal-to-noise ratios (SNRs) are also provided to confirm the proposed algorithm. Finally, based on the spatiotemporal chaotic characteristics of the CML, the conditions for estimating the initial vectors using symbolic dynamics are discussed. The presented method provides both theoretical and experimental results for better understanding and characterizing the behaviours of spatiotemporal chaotic systems. (general)
Sun, Li-Sha; Kang, Xiao-Yun; Zhang, Qiong; Lin, Lan-Xin
2011-12-01
Based on symbolic dynamics, a novel computationally efficient algorithm is proposed to estimate the unknown initial vectors of globally coupled map lattices (CMLs). It is proved that not all inverse chaotic mapping functions satisfy the contraction mapping condition. It is found that the values in phase space do not always converge on their initial values under sufficient backward iteration of the symbolic vectors, in terms of global convergence or divergence (CD). Both the CD property and the coupling strength are directly related to the mapping function of the given CML. Furthermore, the CD properties of the Logistic, Bernoulli, and Tent chaotic mapping functions are investigated and compared. Various simulation results and the performance of the initial vector estimation at different signal-to-noise ratios (SNRs) are also provided to confirm the proposed algorithm. Finally, based on the spatiotemporal chaotic characteristics of the CML, the conditions for estimating the initial vectors using symbolic dynamics are discussed. The presented method provides both theoretical and experimental results for better understanding and characterizing the behaviours of spatiotemporal chaotic systems.
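For readers unfamiliar with the model, a globally coupled map lattice with the logistic map can be simulated in a few lines. This is the standard textbook form of a GCML, not the estimation algorithm of the abstract:

```python
import numpy as np

def gcml_step(x, eps, f=lambda v: 4.0 * v * (1.0 - v)):
    """One iteration of a globally coupled map lattice:

    x_i(t+1) = (1 - eps) * f(x_i(t)) + (eps / N) * sum_j f(x_j(t)),
    with the fully chaotic logistic map f(v) = 4v(1 - v) by default.
    """
    fx = f(np.asarray(x, dtype=float))
    return (1.0 - eps) * fx + eps * fx.mean()

def iterate(x0, eps, steps):
    """Iterate the lattice, returning the full trajectory."""
    x = np.asarray(x0, dtype=float)
    traj = [x.copy()]
    for _ in range(steps):
        x = gcml_step(x, eps)
        traj.append(x.copy())
    return np.array(traj)
```

Since each update is a convex combination of values of f in [0, 1], the state stays in the unit interval for any coupling strength eps in [0, 1].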
A novel multispectral glacier mapping method and its performance in Greenland
Citterio, M.; Fausto, R. S.; Ahlstrom, A. P.; Andersen, S. B.
2014-12-01
Multispectral land surface classification methods are widely used for mapping glacier outlines. Significant post-classification manual editing is typically required, and mapping glacier outlines over larger regions remains a rather labour-intensive task. In this contribution we introduce a novel method for mapping glacier outlines from multispectral satellite imagery, requiring only minor manual editing. Over the last decade GLIMS (Global Land Ice Measurements from Space) improved the availability of glacier outlines, and in 2012 the Randolph Glacier Inventory (RGI) attained global coverage by compiling existing and new data sources in the wake of the Fifth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC AR5). With the launch of Landsat 8 in 2013 and the upcoming ESA (European Space Agency) Sentinel 2 missions, the availability of multispectral imagery may grow faster than our ability to process it into timely and reliable glacier outline products. Improved automatic classification methods would enable a full exploitation of these new data sources. We outline the theoretical basis of the proposed classification algorithm, provide a step-by-step walk-through from raw imagery to finished ice cover grids and vector glacier outlines, and evaluate the performance of the new method in mapping the outlines of glaciers, ice caps and the Greenland Ice Sheet from Landsat 8 OLI imagery. The classification output is compared against manually digitized ice margin positions, the RGI vectors, and the PROMICE (Programme for Monitoring of the Greenland Ice Sheet) aerophotogrammetric map of Greenland ice masses over a sector of the Disko Island surge cluster in West Greenland, the Qassimiut ice sheet lobe in South Greenland, and the A.P. Olsen ice cap in NE Greenland.
Clustering Multiple Sclerosis Subgroups with Multifractal Methods and Self-Organizing Map Algorithm
Karaca, Yeliz; Cattani, Carlo
Magnetic resonance imaging (MRI) is the most sensitive method to detect chronic nervous system diseases such as multiple sclerosis (MS). In this paper, Brownian motion Hölder regularity functions (polynomial, periodic (sine), exponential) for 2D images, i.e. multifractal methods, were applied to MR brain images with the aim of easily identifying distressed regions in MS patients. Using these regions, we propose an MS classification based on the multifractal method using the Self-Organizing Map (SOM) algorithm. Thus, we obtained a cluster analysis by identifying pixels from distressed regions in MR images through multifractal methods and by diagnosing subgroups of MS patients through artificial neural networks.
The response analysis of fractional-order stochastic system via generalized cell mapping method.
Wang, Liang; Xue, Lili; Sun, Chunyan; Yue, Xiaole; Xu, Wei
2018-01-01
This paper is concerned with the response of a fractional-order stochastic system. The short memory principle is introduced to ensure that the response of the system is a Markov process. The generalized cell mapping method is applied to display the global dynamics of the noise-free system, such as attractors, basins of attraction, basin boundaries, saddles, and invariant manifolds. The stochastic generalized cell mapping method is employed to obtain the evolutionary process of the probability density functions of the response. The fractional-order ϕ6 oscillator and the fractional-order smooth and discontinuous oscillator are taken as examples to illustrate the implementation of our strategies. Studies have shown that the evolutionary direction of the probability density function of the fractional-order stochastic system is consistent with the unstable manifold. The effectiveness of the method is confirmed using Monte Carlo results.
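The core of generalized cell mapping is to discretize the state space into cells and estimate a cell-to-cell transition probability matrix by sampling the (noisy) map, then evolve a probability density by repeated multiplication. The sketch below shows this for a simple one-dimensional stochastic map, not the fractional-order oscillators of the paper; all names and parameters are illustrative:

```python
import numpy as np

def cell_mapping_matrix(f, n_cells=100, lo=-2.0, hi=2.0,
                        samples=20, noise=0.05, rng=None):
    """Transition matrix of a 1-D stochastic map via generalized cell mapping.

    Each cell is seeded with several sample points; their noisy images
    vote for destination cells, giving P[i, j] = Prob(cell i -> cell j).
    """
    rng = np.random.default_rng(rng)
    edges = np.linspace(lo, hi, n_cells + 1)
    P = np.zeros((n_cells, n_cells))
    for i in range(n_cells):
        x = rng.uniform(edges[i], edges[i + 1], samples)
        y = f(x) + rng.normal(0.0, noise, samples)
        j = np.clip(np.digitize(y, edges) - 1, 0, n_cells - 1)
        np.add.at(P[i], j, 1.0 / samples)
    return P

def evolve_density(P, p0, steps):
    """Evolve a cell probability density through the chain."""
    p = np.asarray(p0, dtype=float)
    for _ in range(steps):
        p = p @ P
    return p
```

For a contracting map such as f(x) = x/2, an initially uniform density drifts toward the fixed point at the origin, mimicking how the paper tracks densities along the system's manifolds.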
A Color-locus Method for Mapping R_V Using Ensembles of Stars
Lee, Albert; Green, Gregory M.; Schlafly, Edward F.; Finkbeiner, Douglas P.; Burgett, William; Chambers, Ken; Flewelling, Heather; Hodapp, Klaus; Kaiser, Nick; Kudritzki, Rolf-Peter; Magnier, Eugene; Metcalfe, Nigel; Wainscoat, Richard; Waters, Christopher
2018-02-01
We present a simple but effective technique for measuring angular variation in R_V across the sky. We divide stars from the Pan-STARRS1 catalog into Healpix pixels and determine the posterior distribution of reddening and R_V for each pixel using two independent Monte Carlo methods. We find the two methods to be self-consistent in the limits where they are expected to perform similarly. We also find some agreement with high-precision photometric studies of R_V in Perseus and Ophiuchus, as well as with a map of reddening near the Galactic plane based on stellar spectra from APOGEE. While current studies of R_V are mostly limited to isolated clouds, we have developed a systematic method for comparing R_V values for the majority of observable dust. This is a proof of concept for a more rigorous Galactic reddening map.
Directory of Open Access Journals (Sweden)
Masoumeh Delaram
2017-06-01
Full Text Available Background and Objective: The development of critical thinking and practical skills remains a serious and considerable challenge throughout the nursing education system in Iran. Conventional teaching methods such as lecturing, the dominant method in the higher education system, are passive approaches that neglect critical thinking. Therefore, the aim of this study was to compare the effect of instruction by concept mapping versus the conventional method on the critical thinking skills of nursing students. Materials and Methods: This quasi-experimental study was carried out on 70 nursing students of the Tehran Nursing and Midwifery School, who were selected through convenience sampling and then divided randomly into two equal experimental and control groups. Educational content was presented in the form of concept mapping in the experimental group and as lectures, demonstrations, and practical exercises in the control group. Data collection included a demographic information form and the California Critical Thinking Skills questionnaire (Form B), which was completed at the beginning and at the end of the fourth week of the instructional period. Data were analyzed using SPSS software (v. 21) with descriptive and analytical statistics at the significance level P<0.05. Results: Before the intervention, the mean critical thinking skill score was 9.71±2.66 in the concept-mapping group and 9.64±2.14 in the conventional group, and the difference was not significant (P=0.121); after the intervention, however, a significant difference was found between the intervention and conventional groups (15.20±2.71 vs 10.25±2.06, P=0.003). Conclusion: Using the concept-mapping strategy in the education of nursing students may help develop critical thinking skills, one of the important missions of higher education. It is therefore recommended to use this method in clinical nursing education.
Directory of Open Access Journals (Sweden)
Meuwissen Theo HE
2007-04-01
Full Text Available Abstract Two previously described QTL mapping methods, which combine linkage analysis (LA) and linkage disequilibrium analysis (LD), were compared for their ability to detect and map multiple QTL. The methods were tested on five different simulated data sets in which the exact QTL positions were known. Every simulated data set contained two QTL, but the distances between these QTL were varied from 15 to 150 cM. The results show that the single-QTL mapping method (LDLA) gave good results as long as the distance between the QTL was large (> 90 cM). When the distance between the QTL was reduced, the single-QTL method had problems positioning the two QTL and tended to position only one QTL, i.e. a "ghost" QTL, in between the two real QTL positions. The multi-QTL mapping method (MP-LDLA) gave good results for all evaluated distances between the QTL. For large distances between the QTL (> 90 cM) the single-QTL method more often positioned the QTL in the correct marker bracket, but considering the broader likelihood peaks of the single-point method it could be argued that the multi-QTL method was more precise. As the distance was reduced, the multi-QTL method was clearly more accurate than the single-QTL method. The two methods combine well, and together provide a good tool to position single or multiple QTL in practical situations, where the number of QTL and their positions are unknown.
International Nuclear Information System (INIS)
Peng, Xingjie; Wang, Kan; Li, Qing
2014-01-01
Highlights: • A new power mapping method based on Ordinary Kriging (OK) is proposed. • Measurements from DayaBay Unit 1 PWR are used to verify the OK method. • The OK method performs better than the CECOR method. • An optimal neutron detector location strategy based on ordinary kriging and simulated annealing is proposed. - Abstract: An Ordinary Kriging (OK) method is presented that is designed for the core power mapping calculation of pressurized water reactors (PWRs). Measurements from the DayaBay Unit 1 PWR are used to verify the accuracy of the OK method. The root mean square (RMS) reconstruction errors are kept at less than 0.35%, and the maximum reconstruction relative errors (RE) are kept at less than 1.02% for the entire operating cycle. The reconstructed assembly power distribution results show that the OK method is fit for core power distribution monitoring. The quality of the power distribution obtained by the OK method is partly determined by the neutron detector locations, and the OK method is also applied to solve the optimal neutron detector location problem. The spatially averaged ordinary kriging variance (AOKV) is minimized using simulated annealing, and the optimal in-core neutron detector locations are thereby obtained. The result shows that the current neutron detector location of the DayaBay Unit 1 reactor is near-optimal.
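Ordinary Kriging interpolates a field from scattered measurements by solving a small linear system whose unit-sum constraint makes the estimator unbiased. The sketch below is a generic OK implementation with a Gaussian covariance model, not the reactor-specific code of the paper; the covariance model and its parameters are assumptions:

```python
import numpy as np

def ordinary_kriging(xy, z, xy_new, length=1.0, sill=1.0, nugget=1e-10):
    """Ordinary Kriging with a Gaussian covariance model.

    Solves the augmented system [[C, 1], [1^T, 0]] [w; mu] = [c0; 1],
    which forces the weights to sum to one, then predicts
    z_hat = w . z at each query point.
    """
    def cov(d):
        return sill * np.exp(-(d / length) ** 2)

    n = len(xy)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    A = np.empty((n + 1, n + 1))
    A[:n, :n] = cov(d) + nugget * np.eye(n)  # small nugget for stability
    A[n, :n] = A[:n, n] = 1.0
    A[n, n] = 0.0
    preds = []
    for p in np.atleast_2d(xy_new):
        b = np.empty(n + 1)
        b[:n] = cov(np.linalg.norm(xy - p, axis=1))
        b[n] = 1.0
        w = np.linalg.solve(A, b)
        preds.append(w[:n] @ z)
    return np.array(preds)
```

With a near-zero nugget the predictor passes through the measurements exactly, which is why detector readings are reproduced while the field in between is smoothly reconstructed.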
A new method for automated discontinuity trace mapping on rock mass 3D surface model
Li, Xiaojun; Chen, Jianqin; Zhu, Hehua
2016-04-01
This paper presents an automated discontinuity trace mapping method on a 3D surface model of rock mass. Feature points of discontinuity traces are first detected using the Normal Tensor Voting Theory, which is robust to noisy point cloud data. Discontinuity traces are then extracted from feature points in four steps: (1) trace feature point grouping, (2) trace segment growth, (3) trace segment connection, and (4) redundant trace segment removal. A sensitivity analysis is conducted to identify optimal values for the parameters used in the proposed method. The optimal triangular mesh element size is between 5 cm and 6 cm; the angle threshold in the trace segment growth step is between 70° and 90°; the angle threshold in the trace segment connection step is between 50° and 70°, and the distance threshold should be at least 15 times the mean triangular mesh element size. The method is applied to the excavation face trace mapping of a drill-and-blast tunnel. The results show that the proposed discontinuity trace mapping method is fast and effective and could be used as a supplement to traditional direct measurement of discontinuity traces.
An AHP-derived method for mapping the physical vulnerability of coastal areas at regional scales
Directory of Open Access Journals (Sweden)
G. Le Cozannet
2013-05-01
Full Text Available Assessing coastal vulnerability to climate change at regional scales is now mandatory in France since the adoption of recent laws to support adaptation to climate change. However, there is presently no commonly recognised method to assess accurately how sea level rise will modify coastal processes in the coming decades. Therefore, many assessments of the physical component of coastal vulnerability are presently based on a combined use of data (e.g. digital elevation models, historical shoreline and coastal geomorphology datasets), simple models and expert opinion. In this study, we assess the applicability and usefulness of a multi-criteria decision-mapping method (the analytical hierarchy process, AHP) to map physical coastal vulnerability to erosion and flooding in a structured way. We apply the method in two regions of France: the coastal zones of Languedoc-Roussillon (north-western Mediterranean, France) and the island of La Réunion (south-western Indian Ocean), notably using the regional geological maps. As expected, the results show not only the greater vulnerability of sand spits, estuaries and low-lying areas near coastal lagoons in both regions, but also that of a thin strip of erodible cliffs exposed to waves in La Réunion. Despite gaps in knowledge and data, the method is found to provide a flexible and transportable framework to represent and aggregate existing knowledge and to support long-term coastal zone planning through the integration of such studies into existing adaptation schemes.
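The AHP step at the heart of such methods derives criteria weights from a pairwise comparison matrix via its principal eigenvector, and checks the comparisons for consistency. The sketch below is the standard Saaty procedure in generic form, not the coastal-specific implementation of the study:

```python
import numpy as np

def ahp_weights(M, tol=1e-10):
    """Criteria weights from a pairwise comparison matrix.

    M[i, j] states how much more important criterion i is than j
    (reciprocal matrix: M[j, i] = 1 / M[i, j]). The principal
    eigenvector is found by power iteration.
    """
    M = np.asarray(M, dtype=float)
    w = np.full(len(M), 1.0 / len(M))
    for _ in range(1000):
        w_new = M @ w
        w_new /= w_new.sum()
        if np.abs(w_new - w).max() < tol:
            break
        w = w_new
    return w_new

def consistency_ratio(M, w):
    """CR = CI / RI; values below ~0.1 indicate acceptable consistency."""
    n = len(M)
    lam = (M @ w / w).mean()          # estimate of the principal eigenvalue
    ci = (lam - n) / (n - 1)
    ri = {3: 0.58, 4: 0.90, 5: 1.12}.get(n, 1.0)  # Saaty's random indices
    return ci / ri
```

The resulting weights would then multiply the per-criterion vulnerability scores (e.g. elevation class, geomorphology) cell by cell to produce the final vulnerability map.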
A privacy-preserving sharing method of electricity usage using self-organizing map
Directory of Open Access Journals (Sweden)
Yuichi Nakamura
2018-03-01
Full Text Available Smart meters for measuring electricity usage are expected to play a central role in electricity usage management. Although the relevant power supplier stores the measured data, the data are worth sharing among power suppliers because the entire data of a city will be required to control regional grid stability or the demand–supply balance. Even though many techniques and methods of privacy-preserving data mining have been studied to share data while preserving data privacy, a study on sharing electricity usage data is still lacking. In this paper, we propose a method for sharing electricity usage while preserving data privacy using a self-organizing map. Keywords: Privacy preserving, Data sharing, Self-Organizing map
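A self-organizing map supports such sharing because the trained codebook vectors summarize usage patterns without exposing individual meter traces. The minimal SOM below is a generic textbook sketch, not the paper's scheme; grid size, decay schedules and all names are assumptions:

```python
import numpy as np

def train_som(data, grid=(5, 5), epochs=20, lr0=0.5, sigma0=2.0, rng=0):
    """Train a small 2-D self-organizing map on row-vector data."""
    rng = np.random.default_rng(rng)
    gy, gx = grid
    # Grid coordinates of each unit, and randomly initialized codebooks.
    coords = np.stack(np.meshgrid(np.arange(gy), np.arange(gx),
                                  indexing='ij'), axis=-1).reshape(-1, 2)
    W = rng.random((gy * gx, data.shape[1]))
    n_steps = epochs * len(data)
    step = 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            t = step / n_steps
            lr = lr0 * (1 - t)                    # decaying learning rate
            sigma = sigma0 * (1 - t) + 0.5        # shrinking neighborhood
            bmu = np.argmin(((W - x) ** 2).sum(axis=1))  # best matching unit
            d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
            h = np.exp(-d2 / (2 * sigma ** 2))    # neighborhood kernel
            W += lr * h[:, None] * (x - W)
            step += 1
    return W

def quantize(W, x):
    """Index of the map unit whose codebook is closest to x."""
    return int(np.argmin(((W - x) ** 2).sum(axis=1)))
```

Suppliers could exchange the codebook W (and per-unit counts) instead of raw load curves, which is the privacy-preserving intuition behind SOM-based sharing.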
Conformal Killing vectors in Robertson-Walker spacetimes
International Nuclear Information System (INIS)
Maartens, R.; Maharaj, S.d.
1986-01-01
It is well known that Robertson-Walker spacetimes admit a conformal Killing vector normal to the spacelike homogeneous hypersurfaces. Because these spacetimes are conformally flat, there are a further eight conformal Killing vectors, which are neither normal nor tangent to the homogeneous hypersurfaces. The authors find these further conformal Killing vectors and the Lie algebra of the full G_15 of conformal motions. Conditions on the metric scale factor are determined which reduce some of the conformal Killing vectors to homothetic Killing vectors or Killing vectors, allowing one to regain in a unified way the known special geometries. The non-normal conformal Killing vectors provide a counter-example to show that conformal motions do not, in general, map a fluid flow conformally. These non-normal vectors are also used to find the general solution of the null geodesic equation and the photon Liouville equation. (author)
An Integrated GNSS/INS/LiDAR-SLAM Positioning Method for Highly Accurate Forest Stem Mapping
Directory of Open Access Journals (Sweden)
Chuang Qian
2016-12-01
Full Text Available Forest mapping, one of the main components of performing a forest inventory, is an important driving force in the development of laser scanning. Mobile laser scanning (MLS), in which laser scanners are installed on moving platforms, has been studied as a convenient measurement method for forest mapping in the past several years. Positioning and attitude accuracies are important for forest mapping using MLS systems. Inertial Navigation Systems (INSs) and Global Navigation Satellite Systems (GNSSs) are typical and popular positioning and attitude sensors used in MLS systems. In forest environments, because of the loss of signal due to occlusion and severe multipath effects, the positioning accuracy of GNSS is severely degraded, and even that of GNSS/INS decreases considerably. Light Detection and Ranging (LiDAR)-based Simultaneous Localization and Mapping (SLAM) can achieve higher positioning accuracy in environments containing many features and is commonly implemented in GNSS-denied indoor environments. Forests differ from an indoor environment in that the GNSS signal is available to some extent in a forest. Although the positioning accuracy of GNSS/INS is reduced, estimates of heading angle and velocity can remain highly accurate even with fewer satellites. GNSS/INS and the LiDAR-based SLAM technique can be effectively integrated to form a sustainable, highly accurate positioning and mapping solution for use in forests without additional hardware costs. In this study, information such as heading angles and velocities extracted from a GNSS/INS is utilized to improve the positioning accuracy of the SLAM solution, and two information-aided SLAM methods are proposed. First, a heading-angle-aided SLAM (H-aided SLAM) method is proposed that supplies the heading angle from GNSS/INS to SLAM. Field test results show that the horizontal positioning accuracy of an entire trajectory of 800 m is 0.13 m and is significantly improved (by 70%) compared to that
Hepatic blood flow mapping by dynamic CT method in liver diseases
International Nuclear Information System (INIS)
Sugano, Shigeo; Mizuyosi, Hideo; Okajima, Tsugio; Ishii, Kouji; Abei, Tohru; Machida, Keiichi
1986-01-01
Two parameters of dynamic CT, peak time (PT) and first moment (M1), were compared among healthy controls, chronic hepatitis (CH) and liver cirrhosis (LC). The means of PT and M1 in each 9 (3 x 3) pixels on a slice of hepatic CT were computed and converted to gray spots on a gray scale, so that deep gray represented high values and light gray low values of these parameters. The distribution of these gray spots over the pixels was depicted on the slice as a blood flow mapping, and it was compared among the groups. In normal controls, dynamic CT showed the shortest PT, and deep gray spots were distributed diffusely in the slice. In CH, where PT was longer than in controls, lighter gray spots were seen diffusely. LC had the longest PT, and its mapping showed mottles of light gray and black, the latter indicating the presence of spots with scanty blood flow scattered throughout the slice. The mapping of M1 gave almost the same picture as that of PT for each group, revealing that the clearance time of the contrast medium in CH and LC was impaired in the same manner as PT. This method of hepatic blood flow mapping is thought to be useful for providing evidence toward the understanding of abnormal blood flow in liver diseases. (author)
Signal-to-noise ratio measurement in parallel MRI with subtraction mapping and consecutive methods
International Nuclear Information System (INIS)
Imai, Hiroshi; Miyati, Tosiaki; Ogura, Akio; Doi, Tsukasa; Tsuchihashi, Toshio; Machida, Yoshio; Kobayashi, Masato; Shimizu, Kouzou; Kitou, Yoshihiro
2008-01-01
When measuring the signal-to-noise ratio (SNR) of images acquired with parallel magnetic resonance imaging, it was confirmed that there is a problem in applying conventional SNR measurements. With the method of measuring the noise from the background signal, the SNR with parallel imaging was higher than that without parallel imaging. In the subtraction method (NEMA standard), which sets a wide region of interest, the white noise was not evaluated correctly although the SNR was close to the theoretical value. We propose two techniques, because the SNR in parallel imaging is not uniform owing to the inhomogeneity of the coil sensitivity distribution and the geometry factor. Using the first method (subtraction mapping), two images were scanned with identical parameters. The SNR in each pixel was obtained by dividing the running mean (7 by 7 pixels in the neighborhood) by the standard deviation/√2 in the same region of interest. Using the second (consecutive) method, more than fifty consecutive scans of a uniform phantom were obtained with identical scan parameters. Then the SNR was calculated from the ratio of the mean signal intensity to the standard deviation in each pixel over the series of images. Moreover, geometry factors were calculated from the SNRs with and without parallel imaging. The SNR and geometry factor obtained using parallel imaging with the subtraction mapping method agreed with those of the consecutive method. Both methods make it possible to obtain a more detailed determination of the SNR in parallel imaging and to calculate the geometry factor. (author)
Novel method for measuring a dense 3D strain map of robotic flapping wings
Li, Beiwen; Zhang, Song
2018-04-01
Measuring dense 3D strain maps of the inextensible membranous flapping wings of robots is of vital importance to the field of bio-inspired engineering. Conventional high-speed 3D videography methods typically reconstruct the wing geometries through measuring sparse points with fiducial markers, and thus cannot obtain the full-field mechanics of the wings in detail. In this research, we propose a novel system to measure a dense strain map of inextensible membranous flapping wings by developing a superfast 3D imaging system and a computational framework for strain analysis. Specifically, first we developed a 5000 Hz 3D imaging system based on the digital fringe projection technique using the defocused binary patterns to precisely measure the dynamic 3D geometries of rapidly flapping wings. Then, we developed a geometry-based algorithm to perform point tracking on the precisely measured 3D surface data. Finally, we developed a dense strain computational method using the Kirchhoff-Love shell theory. Experiments demonstrate that our method can effectively perform point tracking and measure a highly dense strain map of the wings without many fiducial markers.
Teamwork: improved eQTL mapping using combinations of machine learning methods.
Directory of Open Access Journals (Sweden)
Marit Ackermann
Full Text Available Expression quantitative trait loci (eQTL mapping is a widely used technique to uncover regulatory relationships between genes. A range of methodologies have been developed to map links between expression traits and genotypes. The DREAM (Dialogue on Reverse Engineering Assessments and Methods initiative is a community project to objectively assess the relative performance of different computational approaches for solving specific systems biology problems. The goal of one of the DREAM5 challenges was to reverse-engineer genetic interaction networks from synthetic genetic variation and gene expression data, which simulates the problem of eQTL mapping. In this framework, we proposed an approach whose originality resides in the use of a combination of existing machine learning algorithms (committee. Although it was not the best performer, this method was by far the most precise on average. After the competition, we continued in this direction by evaluating other committees using the DREAM5 data and developed a method that relies on Random Forests and LASSO. It achieved a much higher average precision than the DREAM best performer at the cost of slightly lower average sensitivity.
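The committee idea above, combining Random Forest and LASSO evidence for each candidate genotype-expression link, can be sketched roughly as below. This is an illustrative rank-averaging committee for one expression trait, not the authors' DREAM5 pipeline; the function name and the rank-averaging rule are assumptions:

```python
import numpy as np
from scipy.stats import rankdata
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LassoCV

def committee_scores(genotypes, expression, seed=0):
    """Rank candidate eQTL markers for one expression trait by combining
    Random Forest feature importances with LASSO coefficient magnitudes
    via rank averaging (lower score = stronger candidate link)."""
    rf = RandomForestRegressor(n_estimators=200, random_state=seed)
    rf.fit(genotypes, expression)
    lasso = LassoCV(cv=5, random_state=seed).fit(genotypes, expression)
    # rank each marker under both evidence sources, then average the ranks
    r_forest = rankdata(-rf.feature_importances_)
    r_lasso = rankdata(-np.abs(lasso.coef_))
    return (r_forest + r_lasso) / 2.0
```

Averaging ranks rather than raw scores sidesteps the different scales of importances and coefficients, which is one simple way such a committee can trade a little sensitivity for higher precision.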
DEFF Research Database (Denmark)
Comminal, Raphael Benjamin
materials, where viscoelastic effects cause dynamical instabilities, despite the very simple geometry. This thesis reviews the popular differential constitutive models derived from molecular theories of dilute polymer solutions, polymer networks, and entangled polymer melts, as well as the inelastic...... streamfunction formulation is formally more accurate than the velocity–pressure decoupled method, because it is immune to decoupling errors. Moreover, the absence of decoupling enhances the stability of the calculation. The governing equations (conservation laws and constitutive models) are discretized......–linear–interface–construction technique. In addition, a new Cellwise Conservative Unsplit (CCU) advection scheme is presented. The CCU scheme updates the liquid volume fractions based on cellwise backward‐tracking of the liquid volumes. The algorithm calculates non‐overlapping and conforming adjacent donating regions, which ensures......
Inverse bootstrapping conformal field theories
Li, Wenliang
2018-01-01
We propose a novel approach to study conformal field theories (CFTs) in general dimensions. In the conformal bootstrap program, one usually searches for consistent CFT data that satisfy crossing symmetry. In the new method, we reverse the logic and interpret manifestly crossing-symmetric functions as generating functions of conformal data. Physical CFTs can be obtained by scanning the space of crossing-symmetric functions. By truncating the fusion rules, we are able to concentrate on the low-lying operators and derive some approximate relations for their conformal data. It turns out that the free scalar theory, the 2d minimal model CFTs, the ϕ⁴ Wilson-Fisher CFT, the Lee-Yang CFTs and the Ising CFTs are consistent with the universal relations from the minimal fusion rule ϕ₁ × ϕ₁ = I + ϕ₂ + T, where ϕ₁, ϕ₂ are scalar operators, I is the identity operator and T is the stress tensor.
Conformal Killing horizons and their thermodynamics
Nielsen, Alex B.; Shoom, Andrey A.
2018-05-01
Certain dynamical black hole solutions can be mapped to static spacetimes by conformal metric transformations. This mapping provides a physical link between the conformal Killing horizon of the dynamical black hole and the Killing horizon of the static spacetime. Using the Vaidya spacetime as an example, we show how this conformal relation can be used to derive thermodynamic properties of such dynamical black holes. Although these horizons are defined quasi-locally and can be located by local experiments, they are distinct from other popular notions of quasi-local horizons such as apparent horizons. Thus in the dynamical Vaidya spacetime describing constant accretion of null dust, the conformal Killing horizon, which is null by construction, is the natural horizon to describe the black hole.
Friedrich, Lucas; Winters, Andrew R.; Ferná ndez, David C. Del Rey; Gassner, Gregor J.; Parsani, Matteo; Carpenter, Mark H.
2017-01-01
analysis are discretely mimicked. Special attention is given to the coupling between nonconforming elements as we demonstrate that the standard mortar approach for DG methods does not guarantee entropy stability for non-linear problems, which can lead
Calibration of groundwater vulnerability mapping using the generalized reduced gradient method.
Elçi, Alper
2017-12-01
Groundwater vulnerability assessment studies are essential in water resources management. Overlay-and-index methods such as DRASTIC are widely used for mapping groundwater vulnerability; however, these methods mainly suffer from a subjective selection of model parameters. The objective of this study is to introduce a calibration procedure that results in a more accurate assessment of groundwater vulnerability. The improvement of the assessment is formulated as a parameter optimization problem using an objective function based on the correlation between actual groundwater contamination and vulnerability index values. The non-linear optimization problem is solved with the generalized-reduced-gradient (GRG) method, a numerical gradient-based optimization algorithm. To demonstrate the applicability of the procedure, a vulnerability map for the Tahtali stream basin is calibrated using nitrate concentration data. The calibration procedure is easy to implement and aims to maximize the correlation between observed pollutant concentrations and groundwater vulnerability index values. The influence of each vulnerability parameter in the calculation of the vulnerability index is assessed by performing a single-parameter sensitivity analysis. Results of the sensitivity analysis show that all factors influence the final vulnerability index. Calibration of the vulnerability map improves the correlation between index values and measured nitrate concentrations by 19%; the regression coefficient increases from 0.280 to 0.485. It is evident that the spatial distribution and the proportions of vulnerability class areas are significantly altered by the calibration process. Although the calibration method is demonstrated on the DRASTIC model, the approach is not specific to a certain model and can easily be applied to other overlay-and-index methods. Copyright © 2017 Elsevier B.V. All rights reserved.
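The calibration loop the abstract describes (maximize the correlation between a weighted overlay index and observed nitrate concentrations) reduces to a small bound-constrained optimization. A minimal sketch follows, using SciPy's SLSQP solver as a stand-in for the GRG solver named in the paper; the weight bounds and data shapes are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize

def calibrate_weights(ratings, nitrate, w0):
    """Calibrate overlay-and-index weights (e.g. DRASTIC factor weights)
    by maximizing the correlation between the weighted index
    (index = ratings @ w) and observed nitrate concentrations."""
    def neg_corr(w):
        index = ratings @ w
        return -np.corrcoef(index, nitrate)[0, 1]
    # weights bounded to the conventional 1..5 range used by DRASTIC
    res = minimize(neg_corr, w0, method="SLSQP", bounds=[(1, 5)] * len(w0))
    return res.x, -res.fun
```

Because correlation is invariant to scaling of the index, the calibrated weights are identified only up to a positive scale factor, which is why the objective is reported as a correlation rather than a fit error.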
Tilch, Nils; Römer, Alexander; Jochum, Birgit; Schattauer, Ingrid
2014-05-01
In past years, large-scale disasters occurred several times in Austria, characterized not only by flooding but also by numerous shallow landslides and debris flows. Therefore, for the purpose of risk prevention, national and regional authorities require more objective and realistic maps with information about the spatially variable susceptibility of the geosphere to hazard-relevant gravitational mass movements. Many proven methods and models (e.g. neural networks, logistic regression, heuristic methods) are available to create such process-related susceptibility maps (e.g. for shallow gravitational mass movements in soil). But numerous national and international studies show that the suitability of a method depends on the quality of the process data and parameter maps (e.g. Tilch & Schwarz 2011, Schwarz & Tilch 2011). In this context, it is important that maps with detailed and process-oriented information on the process-relevant geosphere also be considered. One major disadvantage is that area-wide process-relevant information exists only occasionally. Similarly, in Austria soil maps are often available only for treeless areas. However, almost all previous studies used whatever geological and geotechnical maps happened to exist, often maps that had been specially adapted to other issues and objectives. This is one reason why conceptual soil maps must very often be derived from geological maps containing only hard-rock information, which are often of rather low quality. Based on these maps, for example, adjacent areas of different geological composition and process-relevant physical properties are delineated with razor-sharp boundaries, which rarely occur in nature. In order to obtain more realistic information about the spatial variability of the process-relevant geosphere (soil cover) and its physical properties, helicopter-borne aerogeophysical measurements (electromagnetic, radiometric) from different regions of Austria were interpreted
International Nuclear Information System (INIS)
Delrive, C.
1993-01-01
The evaluation of the safety of a deep geologic repository for dangerous materials requires knowledge of the interstitial system of the surrounding host rock. A method is proposed for the determination of geologic structures (in particular fractures) from magnetic susceptibility mapping of drilled cores. The feasibility of the method has been demonstrated using a SQUID magneto-gradient meter. A measurement tool using a new magnetic susceptibility sensor and a testing bench have been developed. This tool allows the measurement of rocks with a magnetic susceptibility greater than 10⁻⁵ SI units and can generate magnetic susceptibility maps with 4 × 4 mm² pixels. A magnetic visibility criterion has been defined which makes it possible to predict whether a structure is visible. According to the measurements performed, any centimeter-scale structure with a sufficient magnetic contrast (20%) with respect to the matrix is visible. The dip and the orientation of such a structure can therefore be determined with 3-degree and 5-degree precision, respectively. The position of the structure along the core axis is known with 4 mm precision. On the other hand, about half of the magnetic contrasts observed do not correspond to the visual analyses and can be explained by very small variations in the mineralogic composition. This last point offers some interesting directions for future research using magnetic susceptibility mapping. (J.S.). 31 refs., 90 figs., 18 tabs., 2 photos., 6 appends
Mapping US Urban Extents from MODIS Data Using One-Class Classification Method
Directory of Open Access Journals (Sweden)
Bo Wan
2015-08-01
Full Text Available Urban areas are one of the most important components of human society. Their extents have been continuously growing during the last few decades. Accurate and timely measurements of the extents of urban areas can help in analyzing population densities and urban sprawl and in studying environmental issues related to urbanization. Urban extents detected from remotely sensed data are usually a by-product of land use classification results, and their interpretation requires a full understanding of land cover types. In this study, for the first time, we mapped urban extents in the continental United States using a novel one-class classification method, i.e., positive and unlabeled learning (PUL), with multi-temporal Moderate Resolution Imaging Spectroradiometer (MODIS) data for the year 2010. The Defense Meteorological Satellite Program Operational Linescan System (DMSP-OLS) night stable light data were used to calibrate the urban extents obtained from the one-class classification scheme. Our results demonstrated the effectiveness of the PUL algorithm in mapping large-scale urban areas from coarse remote-sensing images. The total accuracy of the mapped urban areas was 92.9% and the kappa coefficient was 0.85. The use of DMSP-OLS night stable light data can significantly reduce false detection rates over bare land and cropland far from cities. Compared with traditional supervised classification methods, the one-class classification scheme can greatly reduce the effort involved in collecting training datasets, without losing predictive accuracy.
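Positive-and-unlabeled learning in its simplest form can be sketched with the Elkan-Noto correction: train a probabilistic classifier to separate labeled positives from unlabeled samples, then rescale its output by an estimate of the labeling frequency c. This is a generic PUL sketch, not the specific algorithm or MODIS features used in the study:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def pul_fit(X_pos, X_unlabeled, seed=0):
    """Positive-unlabeled learning via the Elkan-Noto correction.
    Returns a function giving the corrected probability of the
    positive class (here: 'urban') for new samples."""
    X = np.vstack([X_pos, X_unlabeled])
    s = np.r_[np.ones(len(X_pos)), np.zeros(len(X_unlabeled))]
    # classifier separating labeled positives (s=1) from unlabeled (s=0)
    clf = LogisticRegression(max_iter=1000, random_state=seed).fit(X, s)
    # c estimates P(labeled | positive), measured on the known positives
    c = clf.predict_proba(X_pos)[:, 1].mean()
    def predict_proba_pos(X_new):
        return np.clip(clf.predict_proba(X_new)[:, 1] / c, 0.0, 1.0)
    return predict_proba_pos
```

The appeal for mapping is exactly what the abstract notes: only positive (urban) training pixels need to be collected, while everything else can remain unlabeled.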
Gilmore, T. E.; Zlotnik, V. A.; Johnson, M.
2017-12-01
Groundwater table elevations are one of the most fundamental measurements used to characterize unconfined aquifers, groundwater flow patterns, and aquifer sustainability over time. In this study, we developed an analytical model that relies on analysis of groundwater elevation contour (equipotential) shape, aquifer transmissivity, and streambed gradient between two parallel, perennial streams. Using two existing regional water table maps, created at different times using different methods, our analysis of groundwater elevation contours, transmissivity, and streambed gradient produced groundwater recharge rates (42-218 mm yr-1) that were consistent with previous independent recharge estimates from different methods. The three regions we investigated overlie the High Plains Aquifer in Nebraska and include some areas where groundwater is used for irrigation. The regions ranged from 1,500 to 3,300 km2, with either Sand Hills surficial geology or Sand Hills transitioning to loess. Based on our results, the approach may be used to increase the value of existing water table maps, and may be useful as a diagnostic tool to evaluate the quality of groundwater table maps, identify areas in need of detailed aquifer characterization and expansion of groundwater monitoring networks, and/or serve as a first approximation before investing in more complex approaches to groundwater recharge estimation.
An efficient hole-filling method based on depth map in 3D view generation
Liang, Haitao; Su, Xiu; Liu, Yilin; Xu, Huaiyuan; Wang, Yi; Chen, Xiaodong
2018-01-01
A new virtual view is synthesized through depth-image-based rendering (DIBR) using a single color image and its associated depth map in 3D view generation. Holes are unavoidably generated in the 2D-to-3D conversion process. We propose a hole-filling method based on the depth map to address the problem. Firstly, we improve the DIBR process by proposing a one-to-four (OTF) algorithm, with the "z-buffer" algorithm used to solve the overlap problem. Then, based on the classical patch-based algorithm of Criminisi et al., we propose a hole-filling algorithm that uses the information in the depth map to handle the image after DIBR. In order to improve the accuracy of the virtual image, inpainting starts from the background side. In the calculation of the priority, in addition to the confidence term and the data term, we add a depth term. In the search for the most similar patch in the source region, we define a depth similarity to improve the accuracy of the search. Experimental results show that the proposed method can effectively improve the quality of the 3D virtual view both subjectively and objectively.
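The modified filling priority (confidence term times data term, extended by a depth term so background patches are filled first) can be sketched as below. The exact form and normalization of the depth term are illustrative assumptions; the paper's weighting may differ:

```python
import numpy as np

def patch_priority(confidence, data_term, depth, p, half=4):
    """Filling priority of a patch centred at pixel p = (row, col),
    extending the classical Criminisi priority C(p)*D(p) with a depth
    term Z(p) that favours background (far) patches."""
    y, x = p
    sl = np.s_[y - half:y + half + 1, x - half:x + half + 1]
    C = confidence[sl].mean()                  # confidence term over the patch
    D = data_term[y, x]                        # isophote strength at p
    # larger depth = farther from camera = background side, filled first
    Z = depth[sl].mean() / (depth.max() + 1e-12)
    return C * D * Z
```

At each inpainting step, the patch on the fill front with the highest priority would be completed first, with the depth term steering the fill toward background content as the abstract describes.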
Weights of Evidence Method for Landslide Susceptibility Mapping in Takengon, Central Aceh, Indonesia
Pamela; Sadisun, Imam A.; Arifianti, Yukni
2018-02-01
Takengon is an area prone to earthquake disasters and landslides. On July 2, 2013, the Central Aceh earthquake induced large numbers of landslides in the Takengon area, resulting in 39 casualties. This location was chosen to assess the landslide susceptibility of Takengon using a statistical method, the weight of evidence (WoE). The WoE model was applied to identify the main factors influencing landslide-susceptible areas and to derive a landslide susceptibility map of Takengon. The 251 landslides were randomly divided into a modeling/training data set (70%) and a validation/test data set (30%). Twelve thematic evidence maps (slope degree, slope aspect, lithology, land cover, elevation, rainfall, lineament, peak ground acceleration, curvature, flow direction, and distance to rivers and roads) were used as landslide causative factors. According to the AUC, the most significant factors controlling landslides are, in order, slope, slope aspect, peak ground acceleration, elevation, lithology, flow direction, lineament, and rainfall. Validation with the landslide test data gives an AUC prediction rate of 0.819, and the AUC success rate with all landslide data included is 0.879. These results show that the selected factors and the WoE method form a good model for assessing landslide susceptibility. The landslide susceptibility map of Takengon shows probabilities representing relative degrees of susceptibility to landslides in the Takengon area.
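The weight-of-evidence statistics behind such a susceptibility map are computed per evidence class from simple cell counts. A minimal sketch for one binary evidence layer follows, with a small-count correction added as an assumption to avoid log(0):

```python
import numpy as np

def weights_of_evidence(evidence, landslides):
    """W+ and W- for one binary evidence layer (e.g. one slope class)
    against a binary landslide inventory raster.
    W+ = ln[P(B|L) / P(B|~L)], W- = ln[P(~B|L) / P(~B|~L)];
    the contrast C = W+ - W- measures predictive strength."""
    B = evidence.astype(bool)
    L = landslides.astype(bool)
    eps = 0.5  # small-count correction (assumed), avoids log of zero
    w_plus = np.log(((B & L).sum() + eps) / (L.sum() + eps)) - \
             np.log(((B & ~L).sum() + eps) / ((~L).sum() + eps))
    w_minus = np.log(((~B & L).sum() + eps) / (L.sum() + eps)) - \
              np.log(((~B & ~L).sum() + eps) / ((~L).sum() + eps))
    return w_plus, w_minus, w_plus - w_minus
```

Summing the appropriate weight of every evidence layer at each cell yields the susceptibility index that, after ranking, produces maps and AUC validation curves like those reported.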
Land-Use and Land-Cover Mapping Using a Gradable Classification Method
Directory of Open Access Journals (Sweden)
Keigo Kitada
2012-05-01
Full Text Available Conventional spectral-based classification methods have significant limitations in the digital classification of urban land-use and land-cover classes from high-resolution remotely sensed data because of the lack of consideration given to the spatial properties of images. To recognize the complex distribution of urban features in high-resolution image data, texture information consisting of a group of pixels should be considered. Lacunarity is an index used to characterize different texture appearances. It is often reported that land-use and land-cover in urban areas can be effectively classified using the lacunarity index with high-resolution images. However, the applicability of the maximum-likelihood approach for hybrid analysis has not been reported. A more effective approach that employs both the original spectral data and the lacunarity index can be expected to improve the accuracy of the classification. A new classification procedure referred to as the “gradable classification method” is proposed in this study. This method improves the classification accuracy in incremental steps. The proposed classification approach integrates several classification maps created from the original images and lacunarity maps, which consist of lacunarity values, to create a new classification map. The results of this study confirm the suitability of the gradable classification approach, which produced a higher overall accuracy (68%) and kappa coefficient (0.64) than those (65% and 0.60, respectively) obtained with the maximum-likelihood approach.
Terrestrial gamma radiation baseline mapping using ultra low density sampling methods
International Nuclear Information System (INIS)
Kleinschmidt, R.; Watson, D.
2016-01-01
(overbank sediment).
• A generic terrestrial air kerma background value is proposed for Queensland, Australia.
• Validation of three catchments comparing in-situ measurements, and radiometric & non-radiometric laboratory methods, confirms suitability of the methodology.
• Large land areas may be mapped using the catchment sampling method at significantly lower cost than resource-intensive comprehensive sampling and measurement programs.
Mapping ice-bonded permafrost with electrical methods in Sisimiut, West Greenland
DEFF Research Database (Denmark)
Ingeman-Nielsen, Thomas
2006-01-01
Permafrost delineation and thickness determination are of great importance in engineering-related projects in arctic areas. In this paper, 2D geoelectrical measurements are applied and evaluated for permafrost mapping in an area in West Greenland. Multi-electrode resistivity profiles (MEP) have been...... collected and are compared with borehole information. It is shown that the permafrost thickness in this case is grossly overestimated by a factor of two to three. The difference between the inverted 2D resistivity sections and the borehole information is explained by macro-anisotropy due to the presence...... of horizontal ice lenses in the frozen clay deposits. It is concluded that while the resistivity method performs well for lateral permafrost mapping, great care should be taken in evaluating permafrost thickness based on 2D resistivity profiles alone. Additional information from boreholes or other geophysical......
Correlation of Geophysical and Geotechnical Methods for Sediment Mapping in Sungai Batu, Kedah
Zakaria, M. T.; Taib, A.; Saidin, M. M.; Saad, R.; Muztaza, N. M.; Masnan, S. S. K.
2018-04-01
Exploration geophysics is widely used to map the subsurface characteristics of a region, to understand the underlying rock structures and the spatial distribution of rock units. 2-D resistivity and seismic refraction surveys were conducted in the Sungai Batu locality with the objective of identifying and mapping the sediment deposits, correlated against borehole records. The 2-D resistivity data were acquired using an ABEM SAS4000 system with a pole-dipole array and 2.5 m minimum electrode spacing, while for seismic refraction an ABEM MK8 seismograph was used to record the data, with a 5 kg sledgehammer as the seismic source and a geophone interval of 5 m. The inversion model of the 2-D resistivity results shows resistivity values of 500 Ωm corresponding to the hard layer in the study area. The seismic results indicate velocity values of 3600 m/s, interpreted as the hard layer in this locality.
Frenklach, Michael; Wang, Hai; Rabinowitz, Martin J.
1992-01-01
A method of systematic optimization, solution mapping, as applied to a large-scale dynamic model is presented. The basis of the technique is parameterization of model responses in terms of model parameters by simple algebraic expressions. These expressions are obtained by computer experiments arranged in a factorial design. The developed parameterized responses are then used in a joint multiparameter multidata-set optimization. A brief review of the mathematical background of the technique is given. The concept of active parameters is discussed. The technique is applied to determine an optimum set of parameters for a methane combustion mechanism. Five independent responses - comprising ignition delay times, pre-ignition methyl radical concentration profiles, and laminar premixed flame velocities - were optimized with respect to thirteen reaction rate parameters. The numerical predictions of the optimized model are compared to those computed with several recent literature mechanisms. The utility of the solution mapping technique in situations where the optimum is not unique is also demonstrated.
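The core of solution mapping, parameterizing each model response as a simple algebraic expression in the rate parameters fitted from factorial-design computer experiments, can be sketched as a quadratic least-squares surrogate. The quadratic form and scaled variables are illustrative assumptions; the paper's parameterization may use a different basis:

```python
import numpy as np

def fit_response_surface(X, y):
    """Fit a quadratic surrogate  y ≈ a0 + Σ ai·xi + Σ aij·xi·xj  to
    responses y computed at the factorial-design points X (each row one
    computer experiment in scaled rate-parameter space)."""
    n = X.shape[1]
    cols = [np.ones(len(X))] + [X[:, i] for i in range(n)]
    cols += [X[:, i] * X[:, j] for i in range(n) for j in range(i, n)]
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    def surrogate(x):
        feats = [1.0] + list(x) + [x[i] * x[j] for i in range(n) for j in range(i, n)]
        return float(np.array(feats) @ coef)
    return surrogate
```

Once fitted for every response (ignition delays, radical profiles, flame speeds), the cheap surrogates replace the full kinetic model inside the joint multiparameter optimization, which is what makes the large-scale optimization tractable.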
A high-resolution computational localization method for transcranial magnetic stimulation mapping.
Aonuma, Shinta; Gomez-Tames, Jose; Laakso, Ilkka; Hirata, Akimasa; Takakura, Tomokazu; Tamura, Manabu; Muragaki, Yoshihiro
2018-05-15
Transcranial magnetic stimulation (TMS) is used for the mapping of brain motor functions. The complexity of the brain hinders determination of the exact localization of the stimulation site using simplified methods (e.g., the region below the center of the TMS coil) or conventional computational approaches. This study aimed to present a high-precision localization method for a specific motor area by synthesizing computed non-uniform current distributions in the brain for multiple sessions of TMS. Peritumoral mapping by TMS was conducted on patients who had intra-axial brain neoplasms located within or close to the motor speech area. The electric field induced by TMS was computed using realistic head models constructed from magnetic resonance images of patients. A post-processing method was implemented to determine a TMS hotspot by combining the computed electric fields for the coil orientations and positions that delivered high motor-evoked potentials during peritumoral mapping. The method was compared to the stimulation site localized via intraoperative direct brain stimulation and navigated TMS. Four main results were obtained: 1) the dependence of the computed hotspot area on the number of peritumoral measurements was evaluated; 2) the estimated localization of the hand motor area in eight non-affected hemispheres was in good agreement with the position of the so-called "hand-knob"; 3) the estimated hotspot areas were not sensitive to variations in tissue conductivity; and 4) the hand motor areas estimated by the proposed method and direct electric stimulation (DES) were in good agreement in the ipsilateral hemisphere of four glioma patients. The TMS localization method was validated by the well-known position of the "hand-knob" in the brain for the non-affected hemisphere, and by a hotspot localized via DES during awake craniotomy for the tumor-containing hemisphere. Copyright © 2018 Elsevier Inc. All rights reserved.
Directory of Open Access Journals (Sweden)
Lukas Kypus
2014-01-01
Full Text Available There is a never-ending race for competitive advantage that forces RFID technology service integrators to focus more on the qualitative aspects of the technology used and their impacts within the RFID ecosystem. This paper contributes to the problem of evaluating and assessing the qualitative parameters of UHF RFID readers. It presents and describes in detail an indirect method and procedure for sensitivity measurement created for UHF RFID readers. We applied this method to RFID readers in a prepared test environment and confirmed the long-term intention and recognized trend. Owing to regulatory limitations, it is not possible to increase output power beyond defined limits, but there are possibilities to influence reader sensitivity. Our proposal is to use a customized comparative measurement method with insertion-loss compensation for the return link. Besides achieving the main goal, the results also show the qualitative status of a development snapshot of the reader. The method and the subsequent experiment helped us gain an external view, current values of important parameters, and motivation to follow up on, as well as a comparison of the developed reader with its commercial competitors.
Wako, Hiromichi; Ishiuchi, Shun-Ichi; Kato, Daichi; Féraud, Géraldine; Dedonder-Lardeux, Claude; Jouvet, Christophe; Fujii, Masaaki
2017-05-03
The conformer-selected ultraviolet (UV) and infrared (IR) spectra of protonated noradrenaline were measured using an electrospray/cryogenic ion trap technique combined with photodissociation spectroscopy. By comparing the UV photodissociation (UVPD) spectra with the UV-UV hole burning (HB) spectra, it was found that five conformers coexist under ultra-cold conditions. Based on the spectral features of the IR dip spectra of each conformer, two different conformations of the amine side chain were identified. Three conformers (group I) were assigned to folded and the others (group II) to extended structures by comparing the observed IR spectra with calculated ones. Observation of the significantly less stable extended conformers strongly suggests that the extended structures are dominant in solution and are detected in the gas phase by kinetic trapping. The conformers in each group are assignable to rotamers of OH orientations in the catechol ring. By comparing the UV-UV HB spectra with the calculated Franck-Condon spectra obtained by harmonic vibrational analysis of the S₁ state, with the aid of the relative stabilization energies of each conformer in the S₀ state, the absolute orientations of the catechol OHs of the observed five conformers were successfully determined. It was found that the 0-0 transition of one folded conformer is red-shifted by about 1000 cm⁻¹ from the others. This significant red-shift is explained by a large contribution of the πσ* state to S₁ in the conformer in which an oxygen atom of the meta-OH group is close to the ammonium group.
Directory of Open Access Journals (Sweden)
Shanshan He
2015-10-01
Full Text Available Piecewise linear (G01)-based tool paths generated by CAM systems lack G1 and G2 continuity. The discontinuity causes vibration and unnecessary hesitation during machining. To ensure efficient high-speed machining, a method to improve the continuity of the tool paths is required, such as B-spline fitting that approximates G01 paths with B-spline curves. Conventional B-spline fitting approaches cannot be directly used for tool path B-spline fitting, because they have shortcomings such as numerical instability, lack of a chord error constraint, and lack of assurance of a usable result. Progressive and Iterative Approximation for Least Squares (LSPIA) is an efficient method for data fitting that solves the numerical instability problem. However, it does not consider chord errors and needs more work to ensure ironclad results for commercial applications. In this paper, we use the LSPIA method incorporating an energy term (ELSPIA) to avoid the numerical instability, and lower chord errors by using a stretching energy term. We implement several algorithmic improvements, including (1) an improved technique for initial control point determination over the Dominant Point Method, (2) an algorithm that updates foot point parameters as needed, (3) analysis of the degrees of freedom of control points to insert new control points only when needed, and (4) chord error refinement using a similar ELSPIA method with the above enhancements. The proposed approach can generate a shape-preserving B-spline curve. Experiments with data analysis and machining tests are presented for verification of quality and efficiency. Comparisons with other known solutions are included to evaluate the worthiness of the proposed solution.
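The basic LSPIA iteration underlying ELSPIA can be sketched as below: control points are repeatedly nudged by basis-weighted residuals until the B-spline converges to the least-squares fit. The energy term, chord-error refinement, and knot-insertion logic of the paper are omitted; the uniform parameterization and step size are illustrative choices:

```python
import numpy as np
from scipy.interpolate import BSpline

def lspia_fit(Q, n_ctrl=8, k=3, iters=500):
    """Fit 2D polyline points Q with a clamped cubic B-spline via LSPIA:
    P <- P + mu * B^T (Q - B P), where B is the collocation matrix."""
    m = len(Q)
    u = np.linspace(0.0, 1.0, m)  # uniform parameters (chord-length would be better)
    # clamped knot vector: (k+1)-fold end knots plus uniform interior knots
    t = np.r_[[0.0] * (k + 1), np.linspace(0, 1, n_ctrl - k + 1)[1:-1], [1.0] * (k + 1)]
    # collocation matrix of basis values, shape (m, n_ctrl)
    B = BSpline(t, np.eye(n_ctrl), k)(u)
    # initialize control points by subsampling the data
    P = Q[np.linspace(0, m - 1, n_ctrl).astype(int)].astype(float)
    mu = 1.0 / np.linalg.norm(B, 2) ** 2  # step size small enough to converge
    for _ in range(iters):
        P = P + mu * B.T @ (Q - B @ P)    # progressive-iterative update
    return t, P, k
```

Each iteration only requires matrix-vector products, which is why LSPIA scales well and stays numerically stable on long G01 paths; ELSPIA augments the same update with an energy penalty and chord-error checks.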
Directory of Open Access Journals (Sweden)
Bahram Nasr Isfahani
2006-09-01
Full Text Available Mutations in the rpoB locus confer conformational changes leading to defective binding of rifampin (RIF) to rpoB and consequently resistance in Mycobacterium tuberculosis. Polymerase chain reaction-single-strand conformation polymorphism (PCR-SSCP) was established as a rapid screening test for the detection of mutations in the rpoB gene, and direct sequencing has been applied to unambiguously characterize mutations. A total of 37 Iranian isolates of M. tuberculosis, 16 sensitive and 21 resistant to RIF, were used in this study. A 193-bp region of the rpoB gene was amplified, and PCR-SSCP patterns were determined by electrophoresis in 10% acrylamide gel and silver staining. Also, 21 samples of 193-bp rpoB amplicons with different PCR-SSCP patterns from RIF-resistant and 10 from RIF-sensitive isolates were sequenced. Seven distinguishable PCR-SSCP patterns were recognized among the 21 Iranian RIF-resistant strains, while 15 out of 16 RIF-sensitive isolates demonstrated PCR-SSCP banding patterns similar to that of the sensitive standard strain H37Rv; one of the sensitive isolates demonstrated a different pattern. Six different mutations were seen in the amplified region of the rpoB gene: codons 516 (GAC/GTC), 523 (GGG/GGT), 526 (CAC/TAC), 531 (TCG/TTG), 511 (CTG/TTG), and 512 (AGC/TCG). This study demonstrated the high specificity (93.8%) and sensitivity (95.2%) of the PCR-SSCP method for detection of mutations in the rpoB gene; 85.7% of RIF-resistant strains showed a single mutation and 14.3% had no mutations. Three strains showed mutations causing polymorphism. Our data support the common notion that rifampin resistance genotypes generally present mutations in codons 531 and 526, most frequently found in M. tuberculosis populations regardless of geographic origin.
Energy Technology Data Exchange (ETDEWEB)
Gauger, Thomas [Federal Agricultural Research Centre, Braunschweig (DE). Inst. of Agroecology (FAL-AOE); Stuttgart Univ. (Germany). Inst. of Navigation; Haenel, Hans-Dieter; Roesemann, Claus [Federal Agricultural Research Centre, Braunschweig (DE). Inst. of Agroecology (FAL-AOE)
2008-09-15
The report on the implementation of the UNECE convention on long-range transboundary air pollution, Pt. 1, deposition loads (methods, modeling and mapping results, trends), includes the following chapters: introduction; deposition of air pollutants used as input for critical load exceedance calculations; methods applied for mapping total deposition loads; mapping wet deposition; wet deposition mapping results; mapping dry deposition; dry deposition mapping results; cloud and fog mapping results; total deposition mapping results; modeling the air concentration of acidifying components and heavy metals; agricultural emissions of acidifying and eutrophying species.
Assessment of dynamic probabilistic methods for mapping snow cover in Québec Canada
De Seve, D.; Perreault, L.; Vachon, F.; Guay, F.; Choquette, Y.
2012-04-01
with an ensemble mapping approach. The ensemble was generated from a Monte Carlo method. The second one relies on a probabilistic clustering method based on Bayesian Gaussian mixture models. Mixtures of probability distributions become natural models to represent data sets where the observations may have arisen from several distinct statistical populations. Each method can provide a map of uncertainty for the ground and the snow classes, which is a huge benefit for forecasters. Initial results have shown the difficulty of mapping the border between the snow and the ground with traditional approaches. In addition, the application of the mixture models reveals the presence of a third class, which seems to characterize the transition zone between snow and soil.
Elumalai, Vetrimurugan; Brindha, K; Sithole, Bongani; Lakshmanan, Elango
2017-04-01
Mapping groundwater contaminants and identifying their sources are the initial steps in pollution control and mitigation. Given the availability of different mapping methods and the large number of emerging pollutants, these methods need to be used together in decision making. The present study aims to map the contaminated areas in Richards Bay, South Africa and to compare the results of ordinary kriging (OK) and inverse distance weighted (IDW) interpolation techniques. Statistical methods were also used to identify contamination sources. The Na-Cl groundwater type was dominant, followed by Ca-Mg-Cl. Data analysis indicates that silicate weathering, ion exchange and freshwater-seawater mixing are the major geochemical processes controlling the presence of major ions in groundwater. Factor analysis helped to confirm these results. Overlay analysis by OK and IDW gave different results: areas where groundwater was unsuitable as a drinking source were 419 and 116 km² for OK and IDW, respectively. Such divergent results make decision making difficult if only one method is used. Three highly contaminated zones within the study area were more accurately identified by OK. If large areas are identified as contaminated, as by IDW in this study, the mitigation measures will be expensive; if contaminated areas are underestimated, then even though management measures are taken, they will not remain effective for long. Using multiple techniques, as in this study, helps to avoid flawed decisions. Overall, the groundwater quality in this area was poor, and it is essential to identify an alternate drinking water source or to treat the groundwater before ingestion.
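Of the two interpolators compared above, IDW is the simpler to state: each unsampled location gets a distance-weighted mean of the sampled values. A minimal sketch (all coordinates and concentrations are hypothetical):

```python
import numpy as np

def idw(xy_known, values, xy_query, power=2.0):
    """Inverse distance weighted (IDW) interpolation on scattered 2-D points.

    Each query value is a weighted mean of the known values, with weights
    proportional to 1 / distance**power; queries that coincide with a sample
    return that sample's value exactly.
    """
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=-1)
    exact = d < 1e-12
    w = 1.0 / np.maximum(d, 1e-12) ** power
    est = (w * values).sum(axis=1) / w.sum(axis=1)
    hit = exact.any(axis=1)                       # snap coincident queries
    est[hit] = values[exact.argmax(axis=1)[hit]]
    return est

# usage: interpolate a chloride concentration (mg/L, hypothetical) at one point
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
cl = np.array([100.0, 300.0, 300.0, 100.0])
mid = idw(pts, cl, np.array([[0.5, 0.5]]))       # equidistant -> plain mean
```

Unlike OK, IDW uses no spatial covariance model, which is one reason the two methods can delineate contaminated areas so differently.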
Damage Detection Method of Wind Turbine Blade Using Acoustic Emission Signal Mapping
Energy Technology Data Exchange (ETDEWEB)
Han, Byeong Hee; Yoon, Dong Jin [Korea Research Institute of Standards and Science, Daejeon (Korea, Republic of)
2011-02-15
Acoustic emission (AE) has emerged as a powerful nondestructive tool to detect further growth or expansion of preexisting defects or to characterize failure mechanisms. Recently, this kind of technique, i.e. in-situ monitoring of damage inside materials or structures, has become increasingly popular for monitoring the integrity of large structures such as a huge wind turbine blade. It is therefore necessary to find symptoms of damage propagation before catastrophic failure through continuous monitoring. In this study, a new damage location method based on a signal mapping algorithm is proposed, and an experimental verification is conducted using a small wind turbine blade specimen: a part of a real 750 kW blade. The results show that the new signal mapping method offers advantages such as flexibility in sensor location, improved accuracy, and high detectability. The newly proposed method is compared with the traditional AE source location method based on arrival time differences.
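The traditional arrival-time-difference baseline the abstract compares against reduces, in its simplest one-dimensional form, to a single formula. A sketch (sensor gap and wave speed are assumed values, not from the paper):

```python
def locate_1d(sensor_gap_m, wave_speed_mps, dt_s):
    """Classical 1-D AE source location between two sensors.

    With sensors a distance L apart on a line and arrival-time difference
    dt = t2 - t1, the source lies at d1 = (L - v * dt) / 2 from sensor 1:
    d1 + d2 = L and (d2 - d1) / v = dt.
    """
    return (sensor_gap_m - wave_speed_mps * dt_s) / 2.0

# usage: 1 m gap, assumed 3000 m/s plate-wave speed, hit 0.1 ms earlier at sensor 1
d1 = locate_1d(1.0, 3000.0, 1e-4)
```

The formula's sensitivity to the assumed wave speed and sensor geometry is exactly the limitation that mapping-based methods aim to relax.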
Integration of singularity and zonality methods for prospectivity map of blind mineralization
Directory of Open Access Journals (Sweden)
Samaneh Safari
2016-12-01
Full Text Available Singularity analysis based on fractal and multifractal theory is a technique for detecting depletion and enrichment in geochemical exploration, while the index of vertical geochemical zonality (Vz), Pb.Zn/Cu.Ag, is a practical method for the exploration of blind porphyry copper mineralization. In this study, these methods are combined for the recognition and delineation of Vz enrichment in Jebal-Barez in the south of Iran. The studied area is located in the Shar-E-Babak–Bam ore field in the southern part of the Central Iranian volcano-plutonic magmatic arc. The region has a semiarid climate, mountainous topography, and poor vegetation cover. Seven hundred stream sediment samples were taken from the region; each geochemical data subset represents a total drainage basin area. Samples were analyzed for Cu, Zn, Ag, Pb, Au, W, As, Hg, Ba, and Bi by the atomic absorption method. A prospectivity map for blind mineralization in this area is presented. The results are in agreement with previous studies focused on this region. Kerver is detected as the main blind mineralization in Jebal-Barez, which had previously been intersected by boreholes drilled for exploration purposes. In this research, it is demonstrated that employing the singularity of geochemical zonality anomalies, as opposed to the singularity of individual elements, improves the mapping of mineral prospectivity.
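The Vz index itself is elementwise arithmetic on the assay values. A minimal sketch with hypothetical concentrations (ppm) for two sample sites:

```python
import numpy as np

# Vertical geochemical zonality index Vz = (Pb * Zn) / (Cu * Ag), used above to
# flag blind porphyry Cu mineralization. Values below are illustrative, not data
# from the study.
pb = np.array([40.0, 15.0])
zn = np.array([120.0, 60.0])
cu = np.array([30.0, 400.0])
ag = np.array([0.5, 2.0])

vz = (pb * zn) / (cu * ag)
# A high Vz at the surface indicates a supra-ore Pb-Zn halo, the signature of a
# Cu ore body still buried (blind) beneath; low Vz indicates the ore level itself.
```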
Comparing registration methods for mapping brain change using tensor-based morphometry.
Yanovsky, Igor; Leow, Alex D; Lee, Suh; Osher, Stanley J; Thompson, Paul M
2009-10-01
Measures of brain changes can be computed from sequential MRI scans, providing valuable information on disease progression for neuroscientific studies and clinical trials. Tensor-based morphometry (TBM) creates maps of these brain changes, visualizing the 3D profile and rates of tissue growth or atrophy. In this paper, we examine the power of different nonrigid registration models to detect changes in TBM, and their stability when no real changes are present. Specifically, we investigate an asymmetric version of a recently proposed Unbiased registration method, using mutual information as the matching criterion. We compare matching functionals (sum of squared differences and mutual information), as well as large-deformation registration schemes (viscous fluid and inverse-consistent linear elastic registration methods versus Symmetric and Asymmetric Unbiased registration) for detecting changes in serial MRI scans of 10 elderly normal subjects and 10 patients with Alzheimer's Disease scanned at 2-week and 1-year intervals. We also analyzed registration results when matching images corrupted with artificial noise. We demonstrated that the unbiased methods, both symmetric and asymmetric, have higher reproducibility. The unbiased methods were also less likely to detect changes in the absence of any real physiological change. Moreover, they measured biological deformations more accurately by penalizing bias in the corresponding statistical maps.
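The TBM maps described above are, at their core, voxelwise Jacobian determinants of the nonrigid deformation. A minimal 2-D sketch (the registration itself is assumed done; only the determinant computation is shown):

```python
import numpy as np

def jacobian_det_map(disp):
    """Voxelwise Jacobian determinant of a 2-D displacement field (TBM's map).

    disp has shape (2, H, W): displacement components u_y, u_x in voxel units.
    det J > 1 indicates local expansion (growth), det J < 1 local shrinkage
    (atrophy), relative to the baseline scan.
    """
    uy, ux = disp
    duy_dy, duy_dx = np.gradient(uy)
    dux_dy, dux_dx = np.gradient(ux)
    # deformation x -> x + u(x), so J = I + grad(u)
    return (1.0 + duy_dy) * (1.0 + dux_dx) - duy_dx * dux_dy

# usage: a uniform 10% expansion should give det J = 1.1 * 1.1 = 1.21 everywhere
H = W = 8
yy, xx = np.mgrid[0:H, 0:W].astype(float)
disp = np.stack([0.1 * yy, 0.1 * xx])
detj = jacobian_det_map(disp)
```

Bias in the statistical maps, which the Unbiased methods penalize, shows up as log-determinants that do not average to zero when no real change is present.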
Directory of Open Access Journals (Sweden)
V. Vakhshoori
2016-09-01
Full Text Available A regional-scale basin susceptible to landslides, located in the Qaemshahr area in northern Iran, was chosen for comparing the reliability of the weight of evidence (WofE), fuzzy logic, and frequency ratio (FR) methods for landslide susceptibility mapping. The locations of 157 landslides were identified using Google Earth® or extracted from archived data; of these, 22 rockslides were eliminated from the data-set due to their different conditions. The 135 remaining landslides were randomly divided into two groups of modelling (70%) and validation (30%) data-sets. Elevation, slope degree, slope aspect, lithology, land use/cover, normalized difference vegetation index, rainfall, and distance to drainage network, roads, and faults were considered as landslide causative factors. The landslide susceptibility maps were prepared using the three mentioned methods. The validation process was measured by the success and prediction rates calculated by the area under the receiver operating characteristic curve. The ‘OR’, ‘AND’, ‘SUM’, and ‘PRODUCT’ operators of the fuzzy logic method were unacceptable because they classify the target area into either very high or very low susceptibility zones, inconsistent with the physical conditions of the study area. The results of the fuzzy ‘GAMMA’ operators were relatively reliable, while the FR and WofE methods showed results that are more reliable.
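Of the three methods compared, the frequency ratio is the simplest to compute: for each class of a causative factor, the landslide density in that class relative to the overall density. A minimal sketch on a toy raster (values are illustrative):

```python
import numpy as np

def frequency_ratio(factor_classes, landslide_mask):
    """Frequency ratio (FR) per class of one causative factor.

    FR = (landslide pixels in class / all landslide pixels)
       / (pixels in class / all pixels); FR > 1 marks classes where landslides
    are over-represented, i.e. more susceptible terrain.
    """
    n_all = factor_classes.size
    n_ls = landslide_mask.sum()
    fr = {}
    for c in np.unique(factor_classes):
        in_c = factor_classes == c
        fr[c] = (landslide_mask[in_c].sum() / n_ls) / (in_c.sum() / n_all)
    return fr

# usage: a toy slope-class raster (0 = gentle, 1 = steep) and inventory mask
slope_cls = np.array([0, 0, 0, 0, 1, 1, 1, 1])
slides = np.array([0, 0, 0, 1, 1, 1, 1, 0], bool)
fr = frequency_ratio(slope_cls, slides)
```

Summing the FR values of each pixel's classes over all factors gives the susceptibility index that the ROC validation then scores.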
Methods for the correction of vascular artifacts in PET O-15 water brain-mapping studies
Chen, Kewei; Reiman, E. M.; Lawson, M.; Yun, Lang-sheng; Bandy, D.; Palant, A.
1996-12-01
While positron emission tomographic (PET) measurements of regional cerebral blood flow (rCBF) can be used to map brain regions that are involved in normal and pathological human behaviors, measurements in the anteromedial temporal lobe can be confounded by the combined effects of radiotracer activity in neighboring arteries and partial-volume averaging. The authors now describe two simple methods to address this vascular artifact. One method utilizes the early frames of a dynamic PET study, while the other method utilizes a coregistered magnetic resonance image (MRI) to characterize the vascular region of interest (VROI). Both methods subsequently assign a common value to each pixel in the VROI for the control (baseline) scan and the activation scan. To study the vascular artifact and to demonstrate the ability of the proposed methods to correct it, four dynamic PET scans were performed in a single subject during the same behavioral state. For each of the four scans, a vascular scan containing vascular activity was computed as the summation of the images acquired 0-60 s after radiotracer administration, and a control scan containing minimal vascular activity was computed as the summation of the images acquired 20-80 s after radiotracer administration. t-score maps calculated from the four pairs of vascular and control scans were used to characterize regional blood flow differences related to vascular activity before and after the application of each vascular artifact correction method. Both methods eliminated the observed differences in vascular activity, as well as the vascular artifact observed in the anteromedial temporal lobes. Using PET data from a study of normal human emotion, these methods permitted the authors to identify rCBF increases in the anteromedial temporal lobe free from the potentially confounding, combined effects of vascular activity and partial-volume averaging.
Methods for the correction of vascular artifacts in PET O-15 water brain-mapping studies
International Nuclear Information System (INIS)
Chen, K.; Reiman, E.M.; Good Samaritan Regional Medical Center, Phoenix, AZ; Lawson, M.; Yun, L.S.; Bandy, D.
1996-01-01
While positron emission tomographic (PET) measurements of regional cerebral blood flow (rCBF) can be used to map brain regions that are involved in normal and pathological human behaviors, measurements in the anteromedial temporal lobe can be confounded by the combined effects of radiotracer activity in neighboring arteries and partial-volume averaging. The authors now describe two simple methods to address this vascular artifact. One method utilizes the early frames of a dynamic PET study, while the other method utilizes a coregistered magnetic resonance image (MRI) to characterize the vascular region of interest (VROI). Both methods subsequently assign a common value to each pixel in the VROI for the control scan and the activation scan. To study the vascular artifact and to demonstrate the ability of the proposed methods to correct it, four dynamic PET scans were performed in a single subject during the same behavioral state. For each of the four scans, a vascular scan containing vascular activity was computed as the summation of the images acquired 0-60 s after radiotracer administration, and a control scan containing minimal vascular activity was computed as the summation of the images acquired 20-80 s after radiotracer administration. t-score maps calculated from the four pairs of vascular and control scans were used to characterize regional blood flow differences related to vascular activity before and after the application of each vascular artifact correction method. Both methods eliminated the observed differences in vascular activity, as well as the vascular artifact observed in the anteromedial temporal lobes. Using PET data from a study of normal human emotion, these methods permitted us to identify rCBF increases in the anteromedial temporal lobe free from the potentially confounding, combined effects of vascular activity and partial-volume averaging.
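The early-frame correction described in these records can be sketched concretely: threshold the summed early frames to define the VROI, then write one common value into both scans there. This is an illustrative reconstruction; the threshold rule and fill value are our assumptions, not the authors' exact procedure.

```python
import numpy as np

def mask_vroi(control, activation, frames_early, threshold_frac=0.5):
    """Sketch of an early-frame vascular-artifact correction.

    The summed early frames (dominated by arterial activity) define a vascular
    ROI by thresholding; both the control and activation scans then receive a
    common value (their pooled mean outside the VROI) at every VROI pixel, so
    vascular counts cannot masquerade as activation in the difference map.
    """
    vascular = frames_early.sum(axis=0)
    vroi = vascular > threshold_frac * vascular.max()   # assumed threshold rule
    fill = np.concatenate([control[~vroi], activation[~vroi]]).mean()
    control_c, activation_c = control.copy(), activation.copy()
    control_c[vroi] = fill
    activation_c[vroi] = fill
    return control_c, activation_c, vroi

# usage: one "artery" pixel carries spurious activation that the mask removes
early = np.zeros((3, 4, 4)); early[:, 1, 1] = 10.0
ctrl = np.full((4, 4), 50.0)
act = np.full((4, 4), 50.0); act[1, 1] = 90.0
c2, a2, vroi = mask_vroi(ctrl, act, early)
```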
He, Shanshan; Ou, Daojiang; Yan, Changya; Lee, Chen-Han
2015-01-01
Piecewise linear (G01-based) tool paths generated by CAM systems lack G1 and G2 continuity. The discontinuity causes vibration and unnecessary hesitation during machining. To ensure efficient high-speed machining, a method to improve the continuity of the tool paths is required, such as B-spline fitting that approximates G01 paths with B-spline curves. Conventional B-spline fitting approaches cannot be directly used for tool path B-spline fitting, because they have shortcomings such as numerical...
International Nuclear Information System (INIS)
Ragusa, J. C.
2004-01-01
In this paper, a method for performing spatially adaptive computations in the framework of multigroup diffusion on 2-D and 3-D Cartesian grids is investigated. The numerical error, intrinsic to any computer simulation of physical phenomena, is monitored through an a posteriori error estimator. In a posteriori analysis, the computed solution itself is used to assess the accuracy. By efficiently estimating the spatial error, the entire computational process is controlled through successively adapted grids. Our analysis is based on a finite element solution of the diffusion equation. Bilinear test functions are used. The derived a posteriori error estimator is therefore based on the Hessian of the numerical solution. (authors)
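Since the estimator described above is driven by the Hessian of the numerical solution, the core indicator can be sketched in one dimension: for (bi)linear elements the interpolation error scales with the second derivative, so |u''|·h² per cell flags where to refine. This is a generic illustration, not the paper's multigroup estimator.

```python
import numpy as np

def hessian_indicator(u, dx):
    """Hessian-based a posteriori refinement indicator for a 1-D nodal solution.

    For linear/bilinear elements the interpolation error behaves like
    |u''| * dx**2, so that quantity per node is a usable refinement indicator:
    nodes (or cells) with the largest values are flagged for grid refinement.
    """
    d2u = np.gradient(np.gradient(u, dx), dx)   # finite-difference Hessian of u
    return np.abs(d2u) * dx**2

# usage: a flux-like profile with a sharp local feature gets flagged there
x = np.linspace(0.0, 1.0, 101)
u = np.exp(-((x - 0.5) / 0.05) ** 2)            # peaked "flux" (toy)
eta = hessian_indicator(u, x[1] - x[0])
worst = x[np.argmax(eta)]                       # refinement concentrates near 0.5
```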
DEFF Research Database (Denmark)
Al Shakhshir, Saher; Zhou, Fan; Kær, Søren Knudsen
The degradation of the electrochemical reaction of the proton exchange membrane water electrolysis (PEMWE) can be characterized using in-situ current mapping measurements (CMM). CMM is significantly affected by the amount of clamping pressure and by the clamping method. In this work the current is mapped...
Defects in conformal field theory
International Nuclear Information System (INIS)
Billò, Marco; Gonçalves, Vasco; Lauria, Edoardo; Meineri, Marco
2016-01-01
We discuss consequences of the breaking of conformal symmetry by a flat or spherical extended operator. We adapt the embedding formalism to the study of correlation functions of symmetric traceless tensors in the presence of the defect. Two-point functions of a bulk and a defect primary are fixed by conformal invariance up to a set of OPE coefficients, and we identify the allowed tensor structures. A correlator of two bulk primaries depends on two cross-ratios, and we study its conformal block decomposition in the case of external scalars. The Casimir equation in the defect channel reduces to a hypergeometric equation, while the bulk channel blocks are recursively determined in the light-cone limit. In the special case of a defect of codimension two, we map the Casimir equation in the bulk channel to the one of a four-point function without defect. Finally, we analyze the contact terms of the stress-tensor with the extended operator, and we deduce constraints on the CFT data. In two dimensions, we relate the displacement operator, which appears among the contact terms, to the reflection coefficient of a conformal interface, and we find unitarity bounds for the latter.
Defects in conformal field theory
Energy Technology Data Exchange (ETDEWEB)
Billò, Marco [Dipartimento di Fisica, Università di Torino, and Istituto Nazionale di Fisica Nucleare - sezione di Torino,Via P. Giuria 1 I-10125 Torino (Italy); Gonçalves, Vasco [Centro de Física do Porto,Departamento de Física e Astronomia Faculdade de Ciências da Universidade do Porto, Rua do Campo Alegre 687, 4169-007 Porto (Portugal); ICTP South American Institute for Fundamental Research Instituto de Física Teórica,UNESP - University Estadual Paulista,Rua Dr. Bento T. Ferraz 271, 01140-070, São Paulo, SP (Brazil); Lauria, Edoardo [Institute for Theoretical Physics, KU Leuven, Celestijnenlaan 200D, B-3001 Leuven (Belgium); Meineri, Marco [Perimeter Institute for Theoretical Physics,Waterloo, Ontario, N2L 2Y5 (Canada); Scuola Normale Superiore, and Istituto Nazionale di Fisica Nucleare - sezione di Pisa,Piazza dei Cavalieri 7 I-56126 Pisa (Italy)
2016-04-15
We discuss consequences of the breaking of conformal symmetry by a flat or spherical extended operator. We adapt the embedding formalism to the study of correlation functions of symmetric traceless tensors in the presence of the defect. Two-point functions of a bulk and a defect primary are fixed by conformal invariance up to a set of OPE coefficients, and we identify the allowed tensor structures. A correlator of two bulk primaries depends on two cross-ratios, and we study its conformal block decomposition in the case of external scalars. The Casimir equation in the defect channel reduces to a hypergeometric equation, while the bulk channel blocks are recursively determined in the light-cone limit. In the special case of a defect of codimension two, we map the Casimir equation in the bulk channel to the one of a four-point function without defect. Finally, we analyze the contact terms of the stress-tensor with the extended operator, and we deduce constraints on the CFT data. In two dimensions, we relate the displacement operator, which appears among the contact terms, to the reflection coefficient of a conformal interface, and we find unitarity bounds for the latter.
Encoding methods for B1+ mapping in parallel transmit systems at ultra high field
Tse, Desmond H. Y.; Poole, Michael S.; Magill, Arthur W.; Felder, Jörg; Brenner, Daniel; Jon Shah, N.
2014-08-01
Parallel radiofrequency (RF) transmission, either in the form of RF shimming or pulse design, has been proposed as a solution to the B1+ inhomogeneity problem in ultra high field magnetic resonance imaging. As a prerequisite, accurate B1+ maps from each of the available transmit channels are required. In this work, four different encoding methods for B1+ mapping, namely 1-channel-on, all-channels-on-except-1, all-channels-on-1-inverted and Fourier phase encoding, were evaluated using dual refocusing acquisition mode (DREAM) at 9.4 T. Fourier phase encoding was demonstrated in both phantom and in vivo to be the least susceptible to artefacts caused by destructive RF interference at 9.4 T. Unlike the other two interferometric encoding schemes, Fourier phase encoding showed negligible dependency on the initial RF phase setting and therefore no prior B1+ knowledge is required. Fourier phase encoding also provides a flexible way to increase the number of measurements to increase SNR, and to allow further reduction of artefacts by weighted decoding. These advantages of Fourier phase encoding suggest that it is a good choice for B1+ mapping in parallel transmit systems at ultra high field.
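The Fourier phase encoding favored in this abstract has a compact linear-algebra form: in measurement m, channel n transmits with phase exp(2πi·mn/N), so the measured combined fields are a DFT of the per-channel B1+ maps, and decoding is the inverse DFT. A minimal single-voxel sketch with synthetic channel values:

```python
import numpy as np

rng = np.random.default_rng(0)
n_ch = 8
# toy complex per-channel B1+ values at one voxel (synthetic, for illustration)
b1 = rng.normal(size=n_ch) + 1j * rng.normal(size=n_ch)

# Fourier phase encoding: measurement m applies phase exp(+2j*pi*m*n/N) to
# channel n, so each measured combined field is one DFT coefficient of b1
enc = np.exp(2j * np.pi * np.outer(np.arange(n_ch), np.arange(n_ch)) / n_ch)
meas = enc @ b1

# decoding inverts the encoding matrix (equivalently an inverse DFT),
# recovering each channel's B1+ without any single-channel-only measurement
decoded = np.linalg.solve(enc, meas)
```

Because every measurement has all channels transmitting, no acquisition suffers the low signal of a single weak channel, which is why this scheme is least susceptible to destructive-interference artefacts; extra rows (more measurements than channels) turn the solve into a least-squares fit with higher SNR.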
Viscous conformal gauge theories
DEFF Research Database (Denmark)
Toniato, Arianna; Sannino, Francesco; Rischke, Dirk H.
2017-01-01
We present the conformal behavior of the shear viscosity-to-entropy density ratio and the fermion-number diffusion coefficient within the perturbative regime of the conformal window for gauge-fermion theories.
International Nuclear Information System (INIS)
Kozameh, C.N.; Newman, E.T.; Tod, K.P.
1985-01-01
Conformal transformations in four-dimensional spaces are studied. In particular, a new set of two necessary and sufficient conditions for a space to be conformal to an Einstein space is presented. The first condition defines the class of spaces conformal to C spaces, whereas the second (the vanishing of the Bach tensor) singles out the particular subclass of C spaces which are conformally related to Einstein spaces. (author)
International Nuclear Information System (INIS)
Yu, Dequan; Cong, Shu-Lin; Sun, Zhigang
2015-01-01
Highlights: • An optimised finite element discrete variable representation method is proposed. • The method is tested by solving one- and two-dimensional Schrödinger equations. • The method is quite efficient in solving the molecular Schrödinger equation. • It is very easy to generalise the method to multidimensional problems. - Abstract: The Lobatto discrete variable representation (LDVR) proposed by Manolopoulos and Wyatt (1988) has unique features but has not been generally applied in the field of chemical dynamics. Instead, it has found popular application in solving atomic physics problems in combination with the finite element method (FE-DVR), owing to its inherent ability to treat the Coulomb singularity in spherical coordinates. In this work, an efficient phase optimisation and variable mapping procedure is proposed to improve the grid efficiency of the LDVR/FE-DVR method, which not only makes it competitive with popular DVR methods, such as the Sinc-DVR, but also retains its advantages in treating the Coulomb singularity. The method is illustrated by calculations for the one-dimensional Coulomb potential, and the vibrational states of the one-dimensional Morse potential, the two-dimensional Morse potential and the two-dimensional Henon–Heiles potential, which prove the efficiency of the proposed scheme and promise more general applications of the LDVR/FE-DVR method
Energy Technology Data Exchange (ETDEWEB)
Yu, Dequan [School of Physics and Optoelectronic Technology, Dalian University of Technology, Dalian 116024 (China); State Key Laboratory of Molecular Reaction Dynamics and Center for Theoretical and Computational Chemistry, Dalian Institute of Chemical Physics, Chinese Academy of Science, Dalian 116023 (China); Cong, Shu-Lin, E-mail: shlcong@dlut.edu.cn [School of Physics and Optoelectronic Technology, Dalian University of Technology, Dalian 116024 (China); Sun, Zhigang, E-mail: zsun@dicp.ac.cn [State Key Laboratory of Molecular Reaction Dynamics and Center for Theoretical and Computational Chemistry, Dalian Institute of Chemical Physics, Chinese Academy of Science, Dalian 116023 (China); Center for Advanced Chemical Physics and 2011 Frontier Center for Quantum Science and Technology, University of Science and Technology of China, 96 Jinzhai Road, Hefei 230026 (China)
2015-09-08
Highlights: • An optimised finite element discrete variable representation method is proposed. • The method is tested by solving one- and two-dimensional Schrödinger equations. • The method is quite efficient in solving the molecular Schrödinger equation. • It is very easy to generalise the method to multidimensional problems. - Abstract: The Lobatto discrete variable representation (LDVR) proposed by Manolopoulos and Wyatt (1988) has unique features but has not been generally applied in the field of chemical dynamics. Instead, it has found popular application in solving atomic physics problems in combination with the finite element method (FE-DVR), owing to its inherent ability to treat the Coulomb singularity in spherical coordinates. In this work, an efficient phase optimisation and variable mapping procedure is proposed to improve the grid efficiency of the LDVR/FE-DVR method, which not only makes it competitive with popular DVR methods, such as the Sinc-DVR, but also retains its advantages in treating the Coulomb singularity. The method is illustrated by calculations for the one-dimensional Coulomb potential, and the vibrational states of the one-dimensional Morse potential, the two-dimensional Morse potential and the two-dimensional Henon–Heiles potential, which prove the efficiency of the proposed scheme and promise more general applications of the LDVR/FE-DVR method.
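The Sinc-DVR that these records use as the benchmark is itself easy to demonstrate: on a uniform grid the Colbert-Miller kinetic-energy matrix is analytic, and diagonalizing T + V gives bound states. A sketch (not the LDVR/FE-DVR of the paper, just the baseline method it is compared with), in units where hbar = m = 1:

```python
import numpy as np

def sinc_dvr_levels(v, x_min=-10.0, x_max=10.0, n=201, n_levels=3):
    """Lowest bound states of a 1-D Hamiltonian via uniform sinc-DVR.

    Uses the Colbert-Miller kinetic-energy matrix on an evenly spaced grid:
    T_ii = pi^2 / (6 dx^2), T_ij = (-1)^(i-j) / (dx^2 (i-j)^2) for i != j,
    then diagonalizes H = T + diag(V(x)).
    """
    x = np.linspace(x_min, x_max, n)
    dx = x[1] - x[0]
    diff = np.arange(n)[:, None] - np.arange(n)[None, :]
    with np.errstate(divide="ignore", invalid="ignore"):
        t = np.where(diff == 0, np.pi**2 / 3.0,
                     2.0 * (-1.0) ** diff / diff.astype(float) ** 2)
    t *= 1.0 / (2.0 * dx**2)
    h = t + np.diag(v(x))
    return np.linalg.eigvalsh(h)[:n_levels], x

# usage: harmonic oscillator V = x^2 / 2, exact levels 0.5, 1.5, 2.5
levels, x = sinc_dvr_levels(lambda x: 0.5 * x**2)
```

The uniform grid is exactly what makes Sinc-DVR inefficient for Coulomb-type problems, and what the paper's phase-optimised variable mapping addresses.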
Directory of Open Access Journals (Sweden)
Peter E Turkeltaub
2015-04-01
Full Text Available Voxel-based lesion-symptom mapping (VLSM) has provided valuable insights into the neural underpinnings of various language functions. Integrating lesion mapping methods with other neuroscience techniques may provide new opportunities to investigate questions related both to the neurobiology of language and to plasticity after brain injury. For example, recent diffusion tensor imaging studies have explored relationships between aphasia symptomology and damage in specific white matter tracts (Forkel et al., 2014) or disruption of the white matter connectome (Bonilha, Rorden, & Fridriksson, 2014). VLSM has also recently been used to assess correlations between lesion location and response to transcranial direct current stimulation aphasia treatment (Campana, Caltagirone, & Marangolo, 2015). We have recently undertaken studies integrating VLSM with other techniques, including voxel-based morphometry (VBM) and functional MRI, in order to investigate how parts of the brain spared by stroke contribute to recovery. VLSM can be used in this context to map lesions associated with particular patterns of plasticity in brain structure, function, or connectivity. We have also used VLSM to estimate the variance in behavior due to the stroke itself so that this lesion-symptom relationship can be controlled for when examining the contributions of the rest of the brain. Using this approach in combination with VBM, we have identified areas of the right temporoparietal cortex that appear to undergo hypertrophy after stroke and compensate for speech production deficits. In this talk, I will review recent advances in integrating lesion-symptom mapping with other imaging and brain stimulation techniques in order to better understand the brain basis of language and of aphasia recovery.
Conformal deformation of Riemann space and torsion
International Nuclear Information System (INIS)
Pyzh, V.M.
1981-01-01
A method for investigating conformal deformations of Riemann spaces using the torsion tensor, which permits reducing the second-order equations for Killing vectors to a system of first-order equations, is presented. The method is illustrated using conformal deformations of the dimer sphere as an example. The possibility of its use in studying more complex deformations is discussed [ru
Superspace conformal field theory
Energy Technology Data Exchange (ETDEWEB)
Quella, Thomas [Koeln Univ. (Germany). Inst. fuer Theoretische Physik; Schomerus, Volker [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)
2013-07-15
Conformal sigma models and WZW models on coset superspaces provide important examples of logarithmic conformal field theories. They possess many applications to problems in string and condensed matter theory. We review recent results and developments, including the general construction of WZW models on type I supergroups, the classification of conformal sigma models and their embedding into string theory.
Superspace conformal field theory
International Nuclear Information System (INIS)
Quella, Thomas
2013-07-01
Conformal sigma models and WZW models on coset superspaces provide important examples of logarithmic conformal field theories. They possess many applications to problems in string and condensed matter theory. We review recent results and developments, including the general construction of WZW models on type I supergroups, the classification of conformal sigma models and their embedding into string theory.
DEFF Research Database (Denmark)
Aage, Helle Karina; Korsbech, Uffe C C; Bargholz, Kim
1999-01-01
A new technique for processing airborne gamma-ray spectrometry data has been developed. It is based on the noise-adjusted singular value decomposition method introduced by Hovgaard in 1997. The new technique enables mapping of very low contamination levels. It was tested with data from Latvia...... where the remaining contamination from the 1986 Chernobyl accident, together with fallout from the atmospheric nuclear weapon tests, includes Cs-137 at levels often well below 1 kBq/m² equivalent surface contamination. The limiting factors for obtaining reliable results are radon in the air, spectrum...
Development of NDT simulator with method of mapping for detection of pipe wall thinning using EMAT
International Nuclear Information System (INIS)
Yamaguchi, Hiroshi; Kojima, Fumio; Kosaka, Daigo
2009-01-01
This paper is concerned with a simulator for nondestructive testing using an electromagnetic acoustic transducer (EMAT). The simulator developed here can be applied to pipe wall thinning of stainless steel used in nuclear power plants. First, mathematical models for the inspection are given by a transient eddy current equation and by a time-dependent elastic wave equation in two dimensions. Secondly, the shape of the pipe wall thinning is modeled by a B-spline function and is applied to the mathematical models using the method of mapping. Finally, the validity of the proposed simulator is shown through numerical experiments. (author)
A novel method for designing S-boxes based on chaotic maps
International Nuclear Information System (INIS)
Tang Guoping; Liao Xiaofeng; Chen Yong
2005-01-01
A method for obtaining cryptographically strong 8×8 S-boxes based on chaotic maps is presented, and cryptographic properties such as bijection, nonlinearity, the strict avalanche criterion, the output bits independence criterion and an equiprobable input/output XOR distribution of these S-boxes are analyzed in detail. The results of numerical analysis show that the proposed S-boxes possess the above properties and can resist differential attack. Furthermore, our approach is suitable for practical application in designing cryptosystems.
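The basic construction behind such schemes can be sketched in a few lines: iterate a chaotic map, discard transients, and rank 256 successive values to obtain a permutation of 0..255, which is by construction a bijective 8×8 S-box. This is a toy illustration (logistic map, arbitrary parameters), not the paper's exact scheme, and real designs additionally screen candidates for nonlinearity, SAC and the XOR profile.

```python
import numpy as np

def chaotic_sbox(x0=0.123456, r=3.99, burn_in=1000):
    """Toy 8x8 S-box from a logistic-map trajectory (parameters illustrative).

    Iterate x <- r * x * (1 - x) in the chaotic regime, discard the transient,
    then argsort 256 successive values: the rank ordering is a permutation of
    0..255, i.e. a bijective S-box keyed by (x0, r).
    """
    x = x0
    for _ in range(burn_in):            # discard the transient
        x = r * x * (1.0 - x)
    samples = np.empty(256)
    for i in range(256):
        x = r * x * (1.0 - x)
        samples[i] = x
    return np.argsort(samples)          # rank order -> permutation of 0..255

sbox = chaotic_sbox()
is_bijective = len(set(sbox.tolist())) == 256
```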
Monitoring of Landslide Areas with the Use of Contemporary Methods of Measuring and Mapping
Directory of Open Access Journals (Sweden)
Skrzypczak Izabela
2017-03-01
Full Text Available In recent years, an increase in landslide risk has been observed, associated with intensive anthropogenic activities and extreme weather conditions. Appropriate monitoring, and the proper development of measurements into maps of areas at risk of landslides, enables the risk to be estimated in social and economic terms. Landslide monitoring in the framework of the SOPO project is performed by several measurement methods: surface monitoring (GNSS measurement and laser scanning), in-depth monitoring (inclinometer measurements), and monitoring of hydrological changes and precipitation (measuring changes in the water table and rainfall).
Wu, Madeline; Davidson, Norman
1981-01-01
A transmission electron microscope method for gene mapping by in situ hybridization to Drosophila polytene chromosomes has been developed. As electron-opaque labels, we use colloidal gold spheres having a diameter of 25 nm. The spheres are coated with a layer of protein to which Escherichia coli single-stranded DNA is photochemically crosslinked. Poly(dT) tails are added to the 3' OH ends of these DNA strands, and poly(dA) tails are added to the 3' OH ends of a fragmented cloned Drosophila DN...
Fuzzy Shannon Entropy: A Hybrid GIS-Based Landslide Susceptibility Mapping Method
Directory of Open Access Journals (Sweden)
Majid Shadman Roodposhti
2016-09-01
Full Text Available Assessing Landslide Susceptibility Mapping (LSM) contributes to reducing the risk of living with landslides. Handling the vagueness associated with LSM is a challenging task. Here we show the application of a hybrid GIS-based LSM. The hybrid approach combines fuzzy membership functions (FMFs) with Shannon entropy, a well-known information-theoretic method. Nine landslide-related criteria, along with an inventory of 108 recent and historic landslide points, are used to prepare a susceptibility map. A random split into training (≈70%) and testing (≈30%) samples is used for training and validation of the LSM model. The study area, Izeh, is located in the Khuzestan province of Iran, a zone highly susceptible to landslides. The performance of the hybrid method is evaluated using receiver operating characteristic (ROC) curves together with the area under the curve (AUC). With an AUC of 0.934, the proposed hybrid method outperforms the subjective multi-criteria evaluation schemes considered in this research, including a previous study of the same dataset that used extended fuzzy multi-criteria evaluation built on decision makers' judgements (AUC of 0.894).
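The Shannon-entropy weighting at the core of such hybrid LSM schemes can be sketched as follows; this is the generic entropy-weight computation from multi-criteria decision making, not necessarily the authors' exact formulation, and the criteria in the toy example are hypothetical:

```python
import math

def entropy_weights(matrix):
    """Shannon-entropy criterion weights for an m x n decision matrix
    (rows = map cells or alternatives, columns = landslide-related criteria)."""
    m, n = len(matrix), len(matrix[0])
    k = 1.0 / math.log(m)
    diversity = []
    for j in range(n):
        col = [row[j] for row in matrix]
        total = sum(col)
        p = [v / total for v in col]                            # share p_ij
        e = -k * sum(pi * math.log(pi) for pi in p if pi > 0)   # entropy e_j
        diversity.append(1.0 - e)                               # diversity d_j
    s = sum(diversity)
    return [d / s for d in diversity]   # normalized weights w_j

# Toy example: 4 cells scored on 3 hypothetical criteria
# (e.g. slope, lithology, rainfall).
w = entropy_weights([[0.2, 0.9, 0.5],
                     [0.4, 0.8, 0.5],
                     [0.6, 0.1, 0.5],
                     [0.8, 0.2, 0.5]])
```

Note the third criterion, identical across all cells, carries no information and receives (near-)zero weight; the most discriminating criterion receives the largest weight.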
Operator algebras and conformal field theory
International Nuclear Information System (INIS)
Gabbiani, F.; Froehlich, J.
1993-01-01
We define and study two-dimensional, chiral conformal field theory by the methods of algebraic field theory. We start by characterizing the vacuum sectors of such theories and show that, under very general hypotheses, their algebras of local observables are isomorphic to the unique hyperfinite type III₁ factor. The conformal net determined by the algebras of local observables is proven to satisfy Haag duality. The representation of the Möbius group (and presumably of the entire Virasoro algebra) on the vacuum sector of a conformal field theory is uniquely determined by the Tomita-Takesaki modular operators associated with its vacuum state and its conformal net. We then develop the theory of Möbius covariant representations of a conformal net, using methods of Doplicher, Haag and Roberts. We apply our results to the representation theory of loop groups. Our analysis is motivated by the desire to find a 'background-independent' formulation of conformal field theories. (orig.)
A novel method to design S-box based on chaotic map and genetic algorithm
International Nuclear Information System (INIS)
Wang, Yong; Wong, Kwok-Wo; Li, Changbing; Li, Yang
2012-01-01
The substitution box (S-box) is an important component in block encryption algorithms. In this Letter, the problem of constructing an S-box is transformed into a Traveling Salesman Problem, and a method for designing S-boxes based on chaos and a genetic algorithm is proposed. Since the proposed method makes full use of the traits of the chaotic map and the evolution process, a stronger S-box is obtained. The results of performance tests show that the presented S-box has good cryptographic properties, which justifies that the proposed algorithm is effective in generating strong S-boxes. -- Highlights: ► The problem of constructing an S-box is transformed into a Traveling Salesman Problem. ► We present a new method for designing S-boxes based on chaos and a genetic algorithm. ► The proposed algorithm is effective in generating strong S-boxes.
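The permutation-preserving optimization idea can be illustrated with a much simpler swap-based local search standing in for the chaos-driven genetic algorithm of the Letter; the differential-uniformity cost below is one standard S-box strength measure, not necessarily the fitness function the authors used:

```python
import random

def diff_uniformity(s):
    """Largest entry of the XOR difference distribution table
    (lower means stronger against differential cryptanalysis)."""
    worst = 0
    for a in range(1, 256):
        counts = [0] * 256
        for x in range(256):
            counts[s[x] ^ s[x ^ a]] += 1
        worst = max(worst, max(counts))
    return worst

def improve_sbox(s, iters=30, seed=1):
    """Swap-based local search: swaps preserve the bijection, mirroring the
    TSP view in which a tour visits every byte value exactly once."""
    rng = random.Random(seed)
    s, best = list(s), diff_uniformity(s)
    for _ in range(iters):
        i, j = rng.randrange(256), rng.randrange(256)
        s[i], s[j] = s[j], s[i]
        cost = diff_uniformity(s)
        if cost <= best:
            best = cost                # keep an improving (or equal) swap
        else:
            s[i], s[j] = s[j], s[i]    # revert a worsening swap
    return s, best

s0 = list(range(256))                  # identity: the weakest possible S-box
s1, best = improve_sbox(s0)
```

A genetic algorithm replaces the single candidate with a population and the swap with crossover/mutation operators, but the invariant is the same: every candidate remains a permutation of the 256 byte values.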
Development and Application of Multidimensional HPLC Mapping Method for O-linked Oligosaccharides
Directory of Open Access Journals (Sweden)
Koichi Kato
2011-12-01
Full Text Available Glycosylation improves the solubility and stability of proteins, contributes to the structural integrity of protein functional sites, and mediates biomolecular recognition events involved in cell-cell communications and viral infections. The first step toward understanding the molecular mechanisms underlying these carbohydrate functionalities is a detailed characterization of glycan structures. Recently developed glycomic approaches have enabled comprehensive analyses of N-glycosylation profiles in a quantitative manner. However, there are only a few reports describing detailed O-glycosylation profiles, primarily because of the lack of a widespread standard method to identify O-glycan structures. Here, we developed an HPLC mapping method for the detailed identification of O-glycans, including neutral, sialylated, and sulfated oligosaccharides. Furthermore, using this method, we were able to quantitatively identify isomeric products from an in vitro reaction catalyzed by N-acetylglucosamine-6-O-sulfotransferases and obtain O-glycosylation profiles of serum IgA as a model glycoprotein.
International Nuclear Information System (INIS)
Pin, F.G.; Ketelle, R.H.
1983-11-01
Electromagnetic methods have been used to measure apparent terrain conductivity in the downstream portion of a watershed in which a waste disposal site is proposed. At that site, the pathways for waste migration in groundwater are controlled by subsurface channels. The identification and mapping of these subsurface channels constitutes an important contribution to the site characterization study. The channels are identified using isocurves of measured apparent conductivity. Two upstream channel branches are found to merge into a single downstream channel, which constitutes the main drainage path out of the watershed. Electromagnetic terrain conductivity measurement methods are found to be inexpensive, rapid and efficient tools for subsurface investigations. Their contribution to site characterization studies and pathway analyses is particularly significant in planning of the monitoring program, the hydrogeological testing, and the modeling study. The results reported so far are very promising for use of the methods in several other applications related to the subgrade disposal of waste. 7 references, 5 figures
Khalil, Mohammed S; Kurniawan, Fajri; Khan, Muhammad Khurram; Alginahi, Yasser M
2014-01-01
This paper presents a novel watermarking method to facilitate the authentication and detection of image forgery in Quran images. A two-layer embedding scheme, in the wavelet and spatial domains, is introduced to enhance the sensitivity of the fragile watermark and to withstand attacks. A discrete wavelet transform is applied to decompose the host image prior to embedding the first watermark in the wavelet domain. The watermarked wavelet coefficients are inverse-transformed back to the spatial domain, where the least significant bits are utilized to hide another watermark. A chaotic map is utilized to scramble the watermark, making it secure against local attacks. The proposed method allows high watermark payloads while preserving good image quality. Experimental results confirm that the proposed method is fragile and has superior tamper detection, even when the tampered area is very small.
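The spatial-domain layer, chaotic scrambling followed by least-significant-bit embedding, can be sketched as below; the logistic-map key parameters and the flat pixel list are illustrative assumptions, and the wavelet-domain layer is omitted:

```python
def chaotic_perm(n, x0=0.3, r=3.99):
    """Bit-position permutation derived from a logistic-map trajectory
    (the chaotic 'scrambling' key; x0 and r are illustrative)."""
    x, keyed = x0, []
    for i in range(n):
        x = r * x * (1.0 - x)
        keyed.append((x, i))
    return [i for _, i in sorted(keyed)]

def embed_lsb(pixels, bits, perm):
    """Hide chaotically scrambled watermark bits in the pixel LSBs."""
    out = list(pixels)
    for slot, src in enumerate(perm):
        out[slot] = (out[slot] & ~1) | bits[src]
    return out

def extract_lsb(pixels, perm):
    """Recover the watermark bits; the same permutation key is required."""
    bits = [0] * len(perm)
    for slot, src in enumerate(perm):
        bits[src] = pixels[slot] & 1
    return bits

bits = [1, 0, 1, 1, 0, 0, 1, 0]          # toy watermark
pixels = [37, 201, 18, 94, 120, 55, 240, 77]  # toy 8-bit pixel values
perm = chaotic_perm(len(bits))
marked = embed_lsb(pixels, bits, perm)
```

Only LSBs change, so each pixel moves by at most 1; without the chaotic key (`x0`, `r`) the extracted bit order is scrambled, which is the point of the blurring step.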
Auto-Mapping and Configuration Method of IEC 61850 Information Model Based on OPC UA
Directory of Open Access Journals (Sweden)
In-Jae Shin
2016-11-01
Full Text Available The open platform communications (OPC) unified architecture (UA) (IEC 62541) is introduced as a key technology for realizing a variety of smart grid (SG) use cases, enabling the relevant automation and control tasks. OPC UA can expand interoperability between power systems. The top-level SG management platform needs independent middleware to transparently manage the power information technology (IT) systems, including IEC 61850. To expand interoperability among power systems for a large number of stakeholders and various standards, this paper focuses on IEC 61850 for the digital substation. In this paper, we propose an interconnection method to integrate communication with OPC UA and to convert the OPC UA AddressSpace using the system configuration description language (SCL) of IEC 61850. We implemented the mapping process for the verification of the interconnection method. The interconnection method presented here can expand interoperability between power systems by supporting OPC UA integration of the various data structures in the smart grid.
Detecting chaos in particle accelerators through the frequency map analysis method.
Papaphilippou, Yannis
2014-06-01
The motion of beams in particle accelerators is dominated by a plethora of non-linear effects, which can enhance chaotic motion and limit their performance. The application of advanced non-linear dynamics methods for detecting and correcting these effects, and thereby increasing the region of beam stability, plays an essential role both during the accelerator design phase and during operation. After describing the nature of non-linear effects and their impact on the performance parameters of different categories of particle accelerator, the theory of non-linear particle motion is outlined. Recent developments in the methods employed for the analysis of chaotic beam motion are detailed. In particular, the ability of the frequency map analysis method to detect chaotic motion and to guide the correction of non-linear effects is demonstrated in particle tracking simulations as well as in experimental data.
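The core idea of frequency map analysis, comparing the dominant frequency (tune) measured over two successive time windows, can be sketched on synthetic turn-by-turn data; the plain DFT peak below is a crude stand-in for the refined NAFF frequency determination, and the two signals are illustrative, not tracking output:

```python
import cmath, math

def main_frequency(signal):
    """Dominant frequency (cycles per turn) via a plain DFT peak, a crude
    stand-in for NAFF-style refined frequency determination."""
    n = len(signal)
    best_k, best_amp = 1, -1.0
    for k in range(1, n // 2):
        s = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n))
        if abs(s) > best_amp:
            best_amp, best_k = abs(s), k
    return best_k / n

def tune_diffusion(signal):
    """Tune drift between the two halves of the data: ~0 for regular
    motion, visibly larger for chaotic motion."""
    half = len(signal) // 2
    return abs(main_frequency(signal[:half]) - main_frequency(signal[half:]))

# Regular stand-in: fixed tune 0.31; chaotic stand-in: a drifting tune.
regular = [math.cos(2 * math.pi * 0.31 * t) for t in range(128)]
drifting = [math.cos(2 * math.pi * (0.31 + 0.0008 * t) * t) for t in range(128)]
```

In an accelerator study the signal would be tracked particle coordinates, and the diffusion indicator would be plotted over the tune footprint to delineate chaotic regions.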
A novel method to design S-box based on chaotic map and genetic algorithm
Energy Technology Data Exchange (ETDEWEB)
Wang, Yong, E-mail: wangyong_cqupt@163.com [State Key Laboratory of Power Transmission Equipment and System Security and New Technology, Chongqing University, Chongqing 400044 (China); Key Laboratory of Electronic Commerce and Logistics, Chongqing University of Posts and Telecommunications, Chongqing 400065 (China); Wong, Kwok-Wo [Department of Electronic Engineering, City University of Hong Kong, 83 Tat Chee Avenue, Kowloon Tong (Hong Kong); Li, Changbing [Key Laboratory of Electronic Commerce and Logistics, Chongqing University of Posts and Telecommunications, Chongqing 400065 (China); Li, Yang [Department of Automatic Control and Systems Engineering, The University of Sheffield, Mapping Street, S1 3DJ (United Kingdom)
2012-01-30
The substitution box (S-box) is an important component in block encryption algorithms. In this Letter, the problem of constructing an S-box is transformed into a Traveling Salesman Problem, and a method for designing S-boxes based on chaos and a genetic algorithm is proposed. Since the proposed method makes full use of the traits of the chaotic map and the evolution process, a stronger S-box is obtained. The results of performance tests show that the presented S-box has good cryptographic properties, which justifies that the proposed algorithm is effective in generating strong S-boxes. -- Highlights: ► The problem of constructing an S-box is transformed into a Traveling Salesman Problem. ► We present a new method for designing S-boxes based on chaos and a genetic algorithm. ► The proposed algorithm is effective in generating strong S-boxes.
Relating c &lt; 0 and c &gt; 0 conformal field theories
International Nuclear Information System (INIS)
Guruswamy, S.; Ludwig, A.W.W.
1998-03-01
A 'canonical mapping' is established between the c = -1 system of bosonic ghosts and the c = 2 complex scalar theory and, similarly, between the c = -2 system of fermionic ghosts and the c = 1 Dirac theory. The existence of this mapping is suggested by the identity of the characters of the respective theories. The respective c &lt; 0 and c &gt; 0 theories share the same space of states, whereas the spaces of conformal fields are different. Through this mapping the (c &gt; 0) complex scalar and Dirac theories inherit hidden nonlocal sl(2) symmetries from their c &lt; 0 counterparts. (author)
Cathcart, Laura Anne
This dissertation consists of two studies: 1) development and characterization of the Salient Map Analysis for Research and Teaching (SMART) method as a formative assessment tool and 2) a case study exploring how a paramedic instructor's beliefs about learners affect her utilization of the SMART method and vice versa. The first study explored: How can a novel concept map analysis method be designed as an effective formative assessment tool? The SMART method improves upon existing concept map analysis methods because it does not require hierarchically structured concept maps and it preserves the rich content of the maps instead of reducing each map down to a numerical score. The SMART method is performed by comparing a set of students' maps to each other and to an instructor's map. The resulting composite map depicts, in percentages and highlighted colors, the similarities and differences between all of the maps. Some advantages of the SMART method as a formative assessment tool include its ability to highlight changes across time, problematic or alternative conceptions, and patterns of student responses at a glance. Study two explored: How do a paramedic instructor's beliefs about students and learning affect---and become affected by---her use of the SMART method as a formative assessment tool? This case study of Angel, an expert paramedic instructor, begins to address a gap in the emergency medical services (EMS) education literature, which contains almost no research on teachers or pedagogy. Angel and I worked together as participant co-researchers (Heron & Reason, 1997) exploring the affordances of the SMART method. This study, based on those interactions with Angel, involved using open coding to identify themes (Strauss & Corbin, 1998) from Angel's views of students and use of the SMART method. Angel views learning as a sense-making process. She has a multi-faceted view of her students as novices and invests substantial time trying to understand their concept
COMPARING IMAGE-BASED METHODS FOR ASSESSING VISUAL CLUTTER IN GENERALIZED MAPS
Directory of Open Access Journals (Sweden)
G. Touya
2015-08-01
Full Text Available Map generalization abstracts and simplifies geographic information to derive maps at smaller scales. The automation of map generalization requires techniques to evaluate the global quality of a generalized map. The quality and legibility of a generalized map are related to the complexity of the map, or the amount of clutter in it, i.e. an excessive amount of information and its disorganization. Computer vision research is highly interested in measuring clutter in images, and this paper compares some of the existing techniques from computer vision as applied to the evaluation of generalized maps. Four techniques from the literature are described and tested on a large set of maps generalized at different scales: edge density, subband entropy, quad-tree complexity, and segmentation clutter. The results are analyzed against several criteria related to generalized maps: the identification of cluttered areas, the preservation of the global amount of information, the handling of occlusions and overlaps, foreground versus background, and blank-space reduction.
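Of the four measures, edge density is the simplest to sketch: the fraction of pixels whose gradient magnitude exceeds a threshold. The forward-difference gradient and the threshold value below are illustrative choices, not those of the paper:

```python
def edge_density(img, threshold=30):
    """Edge-density clutter measure: fraction of pixels whose gradient
    magnitude (simple forward differences) exceeds a threshold."""
    h, w = len(img), len(img[0])
    edges = 0
    for y in range(h - 1):
        for x in range(w - 1):
            gx = img[y][x + 1] - img[y][x]   # horizontal difference
            gy = img[y + 1][x] - img[y][x]   # vertical difference
            if (gx * gx + gy * gy) ** 0.5 > threshold:
                edges += 1
    return edges / ((h - 1) * (w - 1))

# A busy checkerboard 'map' clutters maximally; a flat one not at all.
flat = [[100] * 8 for _ in range(8)]
busy = [[255 if (x + y) % 2 else 0 for x in range(8)] for y in range(8)]
```

A production implementation would use a Sobel or Canny operator on the rendered map raster, but the score has the same form: more edge pixels, more clutter.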
Conformational changes in glycine tri- and hexapeptide
DEFF Research Database (Denmark)
Yakubovich, Alexander V.; Solov'yov, Ilia; Solov'yov, Andrey V.
2006-01-01
conformations and calculated the energy barriers for transitions between them. Using a thermodynamic approach, we have estimated the times of the characteristic transitions between these conformations. The results of our calculations have been compared with those obtained by other theoretical methods...... also investigated the influence of the secondary structure of polypeptide chains on the formation of the potential energy landscape. This analysis has been performed for the sheet and the helix conformations of chains of six amino acids....
DEFF Research Database (Denmark)
Stærk, Dan; Norrby, Per-Ola; Jaroszewski, Jerzy W.
2001-01-01
H-1 NMR (400 MHz) spectra of the indole alkaloid dihydrocorynantheine recorded at room temperature show the presence of two conformers near coalescence. Low temperature H-1 NMR allowed characterization of the conformational equilibrium, which involves rotation of the 3-methoxypropenoate side chain...... bulk of the vinyl and the ethyl group. The conformational equilibria involving the side chain rotation as well as inversion of the bridgehead nitrogen in corynantheine and dihydrocorynantheine was studied by force-field (Amber(*) and MMFF) and ab initio (density-functional theory at the B3LYP/6-31G...
Complex analysis conformal inequalities and the Bieberbach conjecture
Kythe, Prem K
2015-01-01
Complex Analysis: Conformal Inequalities and the Bieberbach Conjecture discusses the mathematical analysis created around the Bieberbach conjecture, which is responsible for the development of many beautiful aspects of complex analysis, especially in the geometric-function theory of univalent functions. Assuming basic knowledge of complex analysis and differential equations, the book is suitable for graduate students engaged in analytical research on the topics and researchers working on related areas of complex analysis in one or more complex variables. The author first reviews the theory of analytic functions, univalent functions, and conformal mapping before covering various theorems related to the area principle and discussing Löwner theory. He then presents Schiffer’s variation method, the bounds for the fourth and higher-order coefficients, various subclasses of univalent functions, generalized convexity and the class of α-convex functions, and numerical estimates of the coefficient problem. The boo...
Digital Repository Service at National Institute of Oceanography (India)
Murty, T.V.R.; Rao, M.M.M.; Sadhuram, Y.
The data are revisited for objective mapping of the temperature fields using the Stochastic Inverse Method. Hourly reciprocal transmissions were carried out with a time lag of 30 minutes between each direction. From the multipath arrival patterns, significant peaks...
Innovative method for optimizing Side-Scan Sonar mapping: The blind band unveiled
Pergent, Gérard; Monnier, Briac; Clabaut, Philippe; Gascon, Gilles; Pergent-Martini, Christine; Valette-Sansevin, Audrey
2017-07-01
Over the past few years, the mapping of Mediterranean marine habitats has become a priority for scientists, environment managers and stakeholders, in particular in order to comply with European directives (the Water Framework Directive and the Marine Strategy Framework Directive) and to implement legislation to ensure their conservation. Side-scan sonar (SSS) is recognised as one of the most effective tools for underwater mapping. However, interpretation of acoustic data (sonograms) requires extensive field calibration, and the ground-truthing process remains essential. Several techniques are commonly used, with sampling methods involving grabs, scuba-diving observations or Remotely Operated Vehicle (ROV) underwater video recordings. All these techniques are time-consuming and expensive, and only provide sporadic information. In the present study, the possibility of coupling a camera with an SSS and acquiring underwater videos in a continuous way has been tested. During the 'PosidCorse' oceanographic survey carried out along the eastern coast of Corsica, optical and acoustic data were respectively obtained using a GoPro™ camera and a Klein 3000™ SSS. Five profiles were performed between 10 and 50 m depth, corresponding to more than 20 km of data acquisition. The vertical images recorded with the camera fixed under the SSS and positioned facing downwards provided photo mosaics of very good quality covering the entire blind band of the sonograms. From the photo mosaics, 94% of the different bottom types and main habitats were identified; specific structures linked to hydrodynamic conditions and to anthropic and biological activities were also observed, as well as the substrate on which the Posidonia oceanica meadow grows. The association of acoustic data and underwater videos has proved to be a non-destructive and cost-effective method for ground-truthing in marine habitat mapping. Nevertheless, in order to optimize the results over the next surveys
Weights of evidence method for landslide susceptibility mapping in Tangier, Morocco
Directory of Open Access Journals (Sweden)
Bousta Mahfoud
2018-01-01
Full Text Available The Tangier region is known for a high density of mass movements, which cause considerable human and economic losses. The goal of this paper is to assess the landslide susceptibility of Tangier using the Weights of Evidence (WofE) method. The method is founded on the principle that an event (a landslide) is more likely to occur depending on the relationship between the presence or absence of a predictive variable (a predisposing factor) and the occurrence of the event. An inventory, description and analysis of the mass movements were prepared. The main factors governing their occurrence (lithology, faults, slope, elevation, exposure, drainage and land use) were then mapped before applying WofE. Finally, ROC curves were established and the areas under the curves (AUC) were calculated to evaluate the goodness of fit of the model and to choose the best landslide susceptibility zonation. The prediction accuracy was found to be 70%. The obtained susceptibility map shows that 60% of the inventoried landslides lie in the high to very high susceptibility zones, which is very satisfactory for the validation of the adopted model and the obtained results. These zones are mainly located in the north-eastern and eastern parts of the Tangier region, in the soft and fragile facies of the marls and clays of the Tangier unit, where land use is characterized by a dominance of arable and agricultural land (lack of forest cover). From a purely spatial point of view, the location of these two susceptibility classes corresponds fully to the ground-truth data; that is to say, all the environmental and anthropogenic conditions are in place to make this area prone to landslide hazards. The obtained map is a decision-making tool for presenting, comparing and discussing development and urban scenarios in Tangier. These results fall within the context of sustainable development and will help to mitigate the socio-economic impacts usually observed when landslides are triggered.
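The WofE weights for a single predictor class can be computed directly from pixel counts; the formulation below is the standard one, and the counts in the example are invented for illustration:

```python
import math

def weights_of_evidence(n_class_slide, n_class, n_slide, n_total):
    """W+ and W- for one predictor class, from pixel counts:
    n_class_slide : landslide pixels inside the class
    n_class       : all pixels inside the class
    n_slide       : all landslide pixels in the study area
    n_total       : all pixels in the study area."""
    p_b_given_l = n_class_slide / n_slide
    p_b_given_not_l = (n_class - n_class_slide) / (n_total - n_slide)
    w_plus = math.log(p_b_given_l / p_b_given_not_l)
    w_minus = math.log((1 - p_b_given_l) / (1 - p_b_given_not_l))
    return w_plus, w_minus, w_plus - w_minus   # contrast C = W+ - W-

# Invented toy counts: a marl/clay class holds 60 of 100 landslide pixels
# but only 2,000 of the 10,000 pixels in the study area.
wp, wm, c = weights_of_evidence(60, 2000, 100, 10000)
```

A positive W+ with negative W- (large contrast C) marks the class as strongly associated with sliding; summing the weights of all factor classes at each cell gives the susceptibility score that the map classifies into the low-to-very-high zones.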
Dual conformal transformations of smooth holographic Wilson loops
Energy Technology Data Exchange (ETDEWEB)
Dekel, Amit [Nordita, KTH Royal Institute of Technology and Stockholm University, Roslagstullsbacken 23, SE-106 91 Stockholm (Sweden)
2017-01-19
We study dual conformal transformations of minimal area surfaces in AdS{sub 5}×S{sup 5} corresponding to holographic smooth Wilson loops and some other related observables. To act with dual conformal transformations we map the string solutions to the dual space by means of T-duality, then we apply a conformal transformation and finally T-dualize back to the original space. The transformation maps between string solutions with different boundary contours. The boundary contours of the minimal surfaces are not mapped back to the AdS boundary, and the regularized area of the surface changes.
Non-conformable, partial and conformable transposition
DEFF Research Database (Denmark)
König, Thomas; Mäder, Lars Kai
2013-01-01
and the Commission regarding a directive’s outcome, play a much more strategic role than has to date been acknowledged in the transposition literature. Whereas disagreement of a member state delays conformable transposition, it speeds up non-conformable transposition. Disagreement of the Commission only prolongs...... the transposition process. We therefore conclude that a stronger focus on an effective sanctioning mechanism is warranted for safeguarding compliance with directives....
Multi-methodical realisation of the new Austrian climate maps for 1971-2000
Auer, I.; Böhm, R.; Hiebl, J.; Reisenhofer, S.; Schöner, W.
2010-09-01
Constantly changing climate, the further development of geostatistical interpolation methods and the availability of a higher-resolution digital elevation model gave reason for updating the most frequently requested climate maps of the Austrian digital climate atlas (ÖKLIM) from 1961-1990 to 1971-2000. The resulting 19 grids concern 30-year means of air temperature (annual, January and July means) and derived indices (ice days, frost days, freeze-thaw days, summer days, hot days, heating degree days), precipitation (annual, winter half-year and summer half-year sums) and derived indices (days with precipitation, percentage of solid precipitation), snow (sum of fresh-fallen snow, snow cover duration, maximum snow depth) and sunshine (January and July absolute sunshine duration) parameters. For applications in all branches of geosciences (e.g. climate variability and modelling, hydrology, biogeography, natural hazards) as well as for planning in all kinds of contexts (e.g. agriculture, tourism, generation of renewable energy, climate change adaptation), such digital grids of standard climate information are in great demand and likely to gain even more importance in the near future. Data preparation was carried out with great effort. In order to avoid adverse border effects and to guarantee an equal standard of quality across all parts of the country, the study region was extended beyond the national borders and stations from all neighbouring countries were requested. The final data collection includes between 319 (percentage of solid precipitation) and 1,399 (annual precipitation sum) records from eleven national and foreign institutes. To achieve a station density as high as possible, data gaps of up to five or ten years were filled considering the same parameter at reference stations or a related parameter station-wise. Depending on the climate parameter, different geostatistical interpolation methods were applied. Multiple regressions against elevation, longitude, latitude and
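The regression step can be illustrated with a single-predictor sketch, temperature against elevation only; the station data are invented, and the real maps regress on several predictors (elevation, longitude, latitude, ...) at once:

```python
def fit_lapse_rate(stations):
    """Least-squares fit T = a + b*z from (elevation_m, temp_degC) station
    pairs: a single-predictor sketch of the multiple regressions used for
    the climate grids."""
    n = len(stations)
    mz = sum(z for z, _ in stations) / n
    mt = sum(t for _, t in stations) / n
    b = (sum((z - mz) * (t - mt) for z, t in stations)
         / sum((z - mz) ** 2 for z, _ in stations))
    return mt - b * mz, b   # intercept a, lapse rate b (degC per metre)

# Invented stations following roughly a -0.0065 degC/m lapse rate.
a, b = fit_lapse_rate([(200, 9.8), (600, 7.2), (1000, 4.6), (1800, -0.6)])
t_1500 = a + b * 1500   # predicted 30-year mean at a 1,500 m grid cell
```

In practice the regression residuals are then interpolated spatially and added back, so that local deviations from the regional elevation relationship survive in the gridded map.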
Mapping and predicting sinkholes by integration of remote sensing and spectroscopy methods
Goldshleger, N.; Basson, U.; Azaria, I.
2013-08-01
The Dead Sea coastal area is exposed to the destructive process of sinkhole collapse. The increase in sinkhole activity in the last two decades has been substantial, resulting from the continuous decrease in the Dead Sea's level, with more than 1,000 sinkholes developing as a result of upper-layer collapse. Large sinkholes can reach 25 m in diameter. They are concentrated mainly in clusters at several dozen sites with different characteristics. In this research, methods for mapping, monitoring and predicting sinkholes were developed using active and passive remote-sensing methods: a field spectrometer, geophysical ground-penetrating radar (GPR) and a frequency-domain electromagnetic instrument (FDEM). The research was conducted in three stages: 1) literature review and data collection; 2) mapping regions abundant with sinkholes in various stages, as well as regions vulnerable to sinkholes; 3) analyzing the data and translating it into cognitive and accessible scientific information. Field spectrometry enabled a comparison between the spectral signatures of soil samples collected near active or progressing sinkholes and those collected in regions with no visual sign of sinkhole occurrence. FDEM and GPR investigations showed that electrical conductivity and soil moisture are higher in regions affected by sinkholes. Measurements taken at different time points over several seasons allowed monitoring of the progress of an 'embryonic' sinkhole.
Han, Qun; Xu, Wei; Sun, Jian-Qiao
2016-09-01
The stochastic response of nonlinear oscillators under periodic and Gaussian white noise excitations is studied with the generalized cell mapping method based on short-time Gaussian approximation (GCM/STGA). The solutions of the transition probability density functions over a small fraction of the period are constructed by the STGA scheme in order to construct the GCM over one complete period. Both the transient and steady-state probability density functions (PDFs) of a smooth and discontinuous (SD) oscillator are computed to illustrate the application of the method. The accuracy of the results is verified by direct Monte Carlo simulations. The transient responses show the evolution of the PDFs from Gaussian to non-Gaussian. The effect of a chaotic saddle on the stochastic response is also studied. A stochastic P-bifurcation of the steady-state PDFs occurs as the smoothness parameter decreases, corresponding to the deterministic pitchfork bifurcation.
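A one-dimensional sketch of the GCM construction under a Gaussian transition kernel follows; this is a stand-in for the full STGA scheme, with an invented linear map and noise level in place of the SD oscillator:

```python
import math

def gcm_matrix(f, sigma, edges):
    """One-step transition matrix of a generalized cell mapping: from each
    cell centre x the image is approximated by a Gaussian with mean f(x)
    and standard deviation sigma, integrated over the destination cells."""
    def cdf(u):  # standard normal CDF
        return 0.5 * (1.0 + math.erf(u / math.sqrt(2.0)))
    centres = [(edges[i] + edges[i + 1]) / 2 for i in range(len(edges) - 1)]
    P = []
    for x in centres:
        mu = f(x)
        row = [cdf((edges[j + 1] - mu) / sigma) - cdf((edges[j] - mu) / sigma)
               for j in range(len(centres))]
        s = sum(row)                  # renormalize mass kept inside the domain
        P.append([r / s for r in row])
    return P

def evolve(P, p, steps):
    """Iterate the cell PDF: p[j] <- sum_i p[i] * P[i][j]."""
    for _ in range(steps):
        p = [sum(p[i] * P[i][j] for i in range(len(p))) for j in range(len(p))]
    return p

# Contracting noisy map x -> 0.5*x + noise: the PDF settles around x = 0.
edges = [-1.0 + 0.1 * i for i in range(21)]
P = gcm_matrix(lambda x: 0.5 * x, 0.05, edges)
pdf = evolve(P, [1.0 / 20] * 20, 50)
```

The transient `pdf` snapshots play the role of the evolving PDFs in the paper; for a periodically forced oscillator the STGA kernel would be rebuilt over each small fraction of the period and composed into the one-period map.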
Institute of Scientific and Technical Information of China (English)
YUAN Dongfeng; WANG Chengxiang; YAO Qi; CAO Zhigang
2001-01-01
Based on the "capacity rule", the performance of multilevel coding (MLC) schemes with different set partitioning strategies and decoding methods in AWGN and Rayleigh fading channels is investigated, in which BCH codes are chosen as component codes and 8ASK modulation is used. Numerical results indicate that the MLC scheme with the UP strategy can obtain optimal performance in AWGN channels, and that BP is the best mapping strategy for Rayleigh fading channels. The BP strategy is robust in both kinds of channels for realizing an optimum MLC system. Multistage decoding (MSD) is a sub-optimal decoding method of MLC for both channels. For the Ungerboeck partitioning (UP) and mixed partitioning (MP) strategies, MSD is strongly recommended for the MLC system, while for the BP strategy, PDL is suggested as a simple decoding method compared with MSD.
Spin dependence of rotational damping by the rotational plane mapping method
Energy Technology Data Exchange (ETDEWEB)
Leoni, S; Bracco, A; Million, B [Milan Univ. (Italy). Ist. di Fisica; Herskind, B; Dossing, T; Rasmussen, P [Niels Bohr Inst., Copenhagen (Denmark); Bergstrom, M; Brockstedt, A; Carlsson, H; Ekstrom, P; Nordlund, A; Ryde, H [Lund Univ. (Sweden). Dept. of Physics; Ingebretsen, F; Tjom, P O [Oslo Univ. (Norway); Lonnroth, T [Aabo Akademi, Turku (Finland). Dept. of Physics
1992-08-01
In the study of deformed nuclei by gamma spectroscopy, the large quadrupole transition strength known from rotational bands at high excitation energy may be distributed over all final states of a given parity within an interval defined as the rotational damping width {Gamma}{sub rot}. The method of rotational plane mapping extracts a value of {Gamma}{sub rot} from the width of valleys in certain planes of the grid plots of triple gamma coincidence data sets. The method was applied to a high-spin triple data set on {sup 162,163}Tm, taken with NORDBALL at the tandem accelerator of the Niels Bohr Institute and formed in the reaction {sup 37}Cl + {sup 130}Te. The value {Gamma}{sub rot} = 85 keV was obtained. Generally, experimental values seem to be lower than theoretical predictions, although the only calculation made was for {sup 168}Yb. 6 refs., 3 figs.
Global river flood hazard maps: hydraulic modelling methods and appropriate uses
Townend, Samuel; Smith, Helen; Molloy, James
2014-05-01
Flood hazard is not well understood or documented in many parts of the world. Consequently, the (re-)insurance sector now needs to better understand where the potential for considerable river flooding aligns with significant exposure. For example, international manufacturing companies are often attracted to countries with emerging economies, meaning that events such as the 2011 Thailand floods have resulted in many multinational businesses with assets in these regions incurring large, unexpected losses. This contribution addresses and critically evaluates the hydraulic methods employed to develop a consistent, global-scale set of river flood hazard maps, used to fill the knowledge gap outlined above. The basis of the modelling approach is an innovative, bespoke 1D/2D hydraulic model (RFlow) which has been used to model a global river network of over 5.3 million kilometres. Estimated flood peaks at each of these model nodes are determined using an empirically based rainfall-runoff approach linking design rainfall to design river flood magnitudes. The hydraulic model is used to determine the extents and depths of floodplain inundation following river bank overflow. From this, deterministic flood hazard maps are calculated for several design return periods between 20 and 1,500 years. Firstly, we will discuss the rationale behind the hydraulic modelling methods and inputs chosen to produce a consistent, globally scaled river flood hazard map. This will highlight how a model designed to work with global datasets can be more favourable for hydraulic modelling at the global scale, and why innovative techniques customised for broad-scale use are preferable to modifying existing hydraulic models. Similarly, the advantages and disadvantages of both 1D and 2D modelling will be explored and balanced against the time, computing and human resources available, particularly when using a Digital Surface Model at 30 m resolution. Finally, we will suggest some
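The flood-frequency step, turning peak-flow statistics into design magnitudes for chosen return periods, can be sketched with a method-of-moments Gumbel fit; this is a standard stand-in, not necessarily the empirical approach used for RFlow, and the annual maxima below are invented:

```python
import math

def gumbel_design_flows(annual_maxima, return_periods):
    """Method-of-moments Gumbel fit of annual flood peaks, then design
    magnitudes Q_T for the given return periods T (years)."""
    n = len(annual_maxima)
    mean = sum(annual_maxima) / n
    var = sum((q - mean) ** 2 for q in annual_maxima) / (n - 1)
    scale = math.sqrt(6.0 * var) / math.pi
    loc = mean - 0.5772 * scale   # Euler-Mascheroni shift of the mode
    return {T: loc - scale * math.log(-math.log(1.0 - 1.0 / T))
            for T in return_periods}

# Invented annual peak flows (m^3/s) at one model node, and the design
# return periods bracketing the 20- to 1,500-year range in the text.
flows = gumbel_design_flows([120, 95, 180, 140, 210, 160, 130, 175],
                            [20, 100, 1500])
```

Each design flow then drives the 1D/2D hydraulic model to produce the inundation extent and depth map for that return period.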
Logarithmic conformal field theory
Gainutdinov, Azat; Ridout, David; Runkel, Ingo
2013-12-01
product theory. Morin-Duchesne and Saint-Aubin have contributed a research article describing their recent characterisation of when the transfer matrix of a periodic loop model fails to be diagonalisable. This generalises their recent result for non-periodic loop models and provides rigorous methods to justify what has often been assumed in the lattice approach to logarithmic CFT. The philosophy here is one of analysing lattice models of finite size, aiming to demonstrate that non-diagonalisability survives the scaling limit. This is extremely difficult in general (see also the review by Gainutdinov et al), so it is remarkable that it is even possible to demonstrate this at any level of generality. Quella and Schomerus have prepared an extensive review covering their longstanding collaboration on the logarithmic nature of conformal sigma models on Lie supergroups and their cosets, with applications to string theory and AdS/CFT. Beginning with a very welcome overview of Lie superalgebras and their representations, harmonic analysis and cohomological reduction, they then apply these mathematical tools to WZW models on type I Lie supergroups and their homogeneous subspaces. Along the way, deformations are discussed and potential dualities in the corresponding string theories are described. Ruelle provides an exhaustive account of his substantial contributions to the study of the abelian sandpile model. This is a statistical model which has the surprising feature that many correlation functions can be computed exactly, in the bulk and on the boundary, even though the spectrum of conformal weights is largely unknown. Nevertheless, there is much evidence suggesting that its scaling limit is described by an as-yet-unknown c = -2 logarithmic CFT. Semikhatov and Tipunin present their very recent results regarding the construction of logarithmic chiral W-algebra extensions of a fractional level algebra. 
The idea is that these algebras are the centralisers of a rank-two Nichols
Energy Technology Data Exchange (ETDEWEB)
Gilani, Syed Irtiza Ali
2008-09-15
Recent technological developments in the field of magnetic resonance imaging have resulted in advanced techniques that can reduce the total time to acquire images. For applications such as relaxation time mapping, which enables improved visualisation of in vivo structures, rapid imaging techniques are highly desirable. TAPIR is a Look-Locker-based sequence for high-resolution, multislice T1 relaxation time mapping. Despite the high accuracy and precision of TAPIR, an improvement in the k-space sampling trajectory is desired to acquire data in clinically acceptable times. In this thesis, a new trajectory, termed line-sharing, is introduced for TAPIR that can potentially reduce the acquisition time by 40%. Additionally, the line-sharing method was compared with the GRAPPA parallel imaging method. These methods were employed to reconstruct time-point images from the data acquired on a 4T high-field MR research scanner. Multislice, multipoint in vivo results obtained using these methods are presented. Despite improvement in acquisition speed, through line-sharing, for example, motion remains a problem and artefact-free data cannot always be obtained. Therefore, in this thesis, a rapid technique is introduced to estimate in-plane motion. The presented technique is based on calculating the in-plane motion parameters, i.e., translation and rotation, by registering the low-resolution MR images. The rotation estimation method is based on the pseudo-polar FFT, where the Fourier domain is composed of frequencies that reside in an oversampled set of non-angularly, equispaced points. The essence of the method is that unlike other Fourier-based registration schemes, the employed approach does not require any interpolation to calculate the pseudo-polar FFT grid coordinates. Translation parameters are estimated by the phase correlation method. However, instead of two-dimensional analysis of the phase correlation matrix, a low complexity subspace identification of the phase
Open conformal systems and perturbations of transfer operators
Pollicott, Mark
2017-01-01
The focus of this book is on open conformal dynamical systems corresponding to the escape of a point through an open Euclidean ball. The ultimate goal is to understand the asymptotic behavior of the escape rate as the radius of the ball tends to zero. In the case of hyperbolic conformal systems this has been addressed by various authors. The conformal maps considered in this book are far more general, and the analysis correspondingly more involved. The asymptotic existence of escape rates is proved and they are calculated in the context of (finite or infinite) countable alphabets, uniformly contracting conformal graph-directed Markov systems, and in particular, conformal countable alphabet iterated function systems. These results have direct applications to interval maps, meromorphic maps and rational functions. Towards this goal the authors develop, on a purely symbolic level, a theory of singular perturbations of Perron--Frobenius (transfer) operators associated with countable alphabet subshifts of finite t...
Comparison of Data Fusion Methods Using Crowdsourced Data in Creating a Hybrid Forest Cover Map
Directory of Open Access Journals (Sweden)
Myroslava Lesiv
2016-03-01
Data fusion represents a powerful way of integrating individual sources of information to produce a better output than could be achieved by any of the individual sources on their own. This paper focuses on the fusion of different land cover products derived from remote sensing. In the past, many different methods have been applied, without regard to their relative merit. In this study, we compared some of the most commonly used methods to develop a hybrid forest cover map by combining available land cover/forest products and crowdsourced data on forest cover obtained through the Geo-Wiki project. The methods include: nearest neighbour, naive Bayes, logistic regression and geographically weighted logistic regression (GWR), as well as classification and regression trees (CART). We ran the comparison experiments using two data types: presence/absence of forest in a grid cell, and percentage of forest cover in a grid cell. In general, there was little difference between the methods. However, GWR was found to perform better than the other tested methods in areas with high disagreement between the inputs.
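Of the fusion methods compared above, naive Bayes is simple enough to sketch end-to-end. The following is a minimal, hypothetical illustration (invented toy data, not the Geo-Wiki inputs or the study's implementation) of fusing two binary land-cover products against crowdsourced forest reference points:

```python
# Hedged sketch: naive-Bayes fusion of two binary land-cover products,
# trained on crowdsourced forest presence/absence reference cells.
from collections import Counter

def fit_naive_bayes(products, truth):
    """products: one tuple of 0/1 product values per cell; truth: 0/1 labels."""
    n = len(truth)
    prior = {c: truth.count(c) / n for c in (0, 1)}
    # cond[c][k][v] counts how often product k reports v among class-c cells
    cond = {c: [Counter() for _ in products[0]] for c in (0, 1)}
    for vals, c in zip(products, truth):
        for k, v in enumerate(vals):
            cond[c][k][v] += 1

    def predict(vals):
        scores = {}
        for c in (0, 1):
            total = sum(cond[c][0].values()) or 1  # number of class-c cells
            p = prior[c]
            for k, v in enumerate(vals):
                p *= (cond[c][k][v] + 1) / (total + 2)  # Laplace smoothing
            scores[c] = p
        return max(scores, key=scores.get)
    return predict

# Toy example: two products, five crowdsourced reference cells
products = [(1, 1), (1, 0), (0, 0), (1, 1), (0, 0)]
truth    = [1, 1, 0, 1, 0]
predict = fit_naive_bayes(products, truth)
print(predict((1, 1)))  # both products report forest -> 1
```

The same train/predict interface could host the other compared classifiers, which is what makes a like-for-like comparison straightforward.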
A New Automatic Method of Urban Areas Mapping in East Asia from LANDSAT Data
XU, R.; Jia, G.
2012-12-01
Cities, as places where human activities are concentrated, account for a small percentage of global land cover but are frequently cited as the chief causes of, and solutions to, climate, biogeochemical, and hydrological processes at local, regional, and global scales. Accompanying uncontrolled economic growth, urban sprawl has been attributed to the accelerating integration of East Asia into the world economy and has involved dramatic changes in its urban form and land use. To understand the impact of urban extent on biogeophysical processes, reliable mapping of built-up areas is particularly essential in East Asian cities, which are characterised by smaller patches, greater fragmentation, and a lower fraction of natural land cover within the urban landscape than cities in the West. Segmentation of urban land from other land-cover types using remote sensing imagery can be done by standard classification processes as well as by a logic-rule calculation based on spectral indices and their derivations. Efforts to establish such a logic rule with no threshold for automatic mapping are highly worthwhile. Existing automatic methods are reviewed, and a proposed approach is then introduced, including the calculation of a new index and an improved logic rule. Following this, the existing automatic methods and the proposed approach are compared in a common context. Afterwards, the proposed approach is tested separately in cities of large, medium, and small scale in East Asia selected from different LANDSAT images. The results are promising, as the approach can efficiently segment urban areas, even in the presence of more complex East Asian cities. Key words: Urban extraction; Automatic Method; Logic Rule; LANDSAT images; East Asia
The Proposed Approach of Extraction of Urban Built-up Areas in Guangzhou, China
International Nuclear Information System (INIS)
Farace, Paolo; Antolini, Renzo; Pontalti, Rolando; Cristoforetti, Luca; Scarpa, Marina
1997-01-01
This paper presents an automatic method to obtain tissue complex permittivity values to be used as input data in the computer modelling for hyperthermia treatment planning. Magnetic resonance (MR) images were acquired and the tissue water content was calculated from the signal intensity of the image pixels. The tissue water content was converted into complex permittivity values by monotonic functions based on mixture theory. To obtain a water content map by MR imaging a gradient-echo pulse sequence was used and an experimental procedure was set up to correct for relaxation and radiofrequency field inhomogeneity effects on signal intensity. Two approaches were followed to assign the permittivity values to fat-rich tissues: (i) fat-rich tissue localization by a segmentation procedure followed by assignment of tabulated permittivity values; (ii) water content evaluation by chemical shift imaging followed by permittivity calculation. Tests were performed on phantoms of known water content to establish the reliability of the proposed method. MRI data were acquired and processed pixel-by-pixel according to the outlined procedure. The signal intensity in the phantom images correlated well with water content. Experiments were performed on volunteers' healthy tissue. In particular two anatomical structures were chosen to calculate permittivity maps: the head and the thigh. The water content and electric permittivity values were obtained from the MRI data and compared to others in the literature. A good agreement was found for muscle, cerebrospinal fluid (CSF) and white and grey matter. The advantages of the reported method are discussed in the light of possible application in hyperthermia treatment planning. (author)
Directory of Open Access Journals (Sweden)
Yi Li
2015-07-01
The efficiency of genome-wide association analysis (GWAS) depends on the power of detection of quantitative trait loci (QTL) and the precision of QTL mapping. In this study, three different strategies for GWAS were applied to detect QTL for carcass quality traits in the Korean cattle breed Hanwoo: a linkage disequilibrium single-locus regression method (LDRM), a combined linkage and linkage disequilibrium analysis (LDLA), and a BayesCπ approach. The phenotypes of 486 steers were collected for weaning weight (WWT), yearling weight (YWT), carcass weight (CWT), backfat thickness (BFT), longissimus dorsi muscle area, and marbling score (Marb). The genotype data for the steers and their sires were scored with the Illumina bovine 50K single nucleotide polymorphism (SNP) chips. For the two former GWAS methods, threshold values were set at a false discovery rate <0.01 on a chromosome-wide level, while in the latter model a cut-off was set such that the top five windows, each comprising 10 adjacent SNPs, were chosen as showing significant variation for the phenotype. Four major additive QTL from these three methods showed high concordance, located at 64.1 to 64.9 Mb on Bos taurus autosome (BTA) 7 for WWT, 24.3 to 25.4 Mb on BTA14 for CWT, 0.5 to 1.5 Mb on BTA6 for BFT, and 26.3 to 33.4 Mb on BTA29 for BFT. Several candidate genes (i.e., glutamate receptor, ionotropic, AMPA 1 [GRIA1]; family with sequence similarity 110, member B [FAM110B]; and thymocyte selection-associated high mobility group box [TOX]) may be identified close to these QTL. Our results suggest that the use of different linkage disequilibrium mapping approaches can provide more reliable chromosome regions within which to further pinpoint DNA markers or causative genes.
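The single-locus regression idea behind LDRM can be sketched as an ordinary least-squares scan over SNPs. The data, the dosage coding and the ranking by explained sum of squares below are illustrative assumptions, not the Hanwoo records or the study's actual test statistic:

```python
# Hedged sketch of a single-locus GWAS scan: regress the phenotype on each
# SNP's genotype dosage (0/1/2) separately and rank loci by the regression
# (explained) sum of squares. Toy data, invented for illustration.

def single_locus_scan(genotypes, phenotype):
    """genotypes: list of per-SNP dosage lists; returns (snp, beta, ess) sorted."""
    n = len(phenotype)
    ybar = sum(phenotype) / n
    results = []
    for k, g in enumerate(genotypes):
        gbar = sum(g) / n
        sxx = sum((x - gbar) ** 2 for x in g)
        sxy = sum((x - gbar) * (y - ybar) for x, y in zip(g, phenotype))
        beta = sxy / sxx if sxx else 0.0   # additive allele-substitution effect
        results.append((k, beta, beta * sxy))  # beta*sxy = explained sum of squares
    return sorted(results, key=lambda r: -r[2])

# SNP 0 tracks the phenotype closely; SNP 1 carries a weaker signal
geno  = [[0, 1, 2, 2, 0, 1], [1, 1, 0, 1, 2, 1]]
pheno = [10.0, 12.0, 14.0, 14.1, 9.9, 12.2]
top = single_locus_scan(geno, pheno)[0]
print(top[0])  # index of the strongest association
```

A real analysis would convert the explained sum of squares to an F or t statistic and apply the chromosome-wide false discovery rate threshold described above.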
An improved method for Multipath Hemispherical Map (MHM) based on Trend Surface Analysis
Wang, Zhiren; Chen, Wen; Dong, Danan; Yu, Chao
2017-04-01
Among the various approaches developed for detecting the multipath effect in high-accuracy GNSS positioning, only MHM (Multipath Hemispherical Map) and SF (Sidereal Filtering) can be implemented in real-time GNSS data processing. SF is based on the time repeatability of the satellites and is only suitable for static environments, while the spatiotemporal repeatability-based MHM is applicable not only to static environments but also to dynamic carriers in a static multipath environment, such as ships and airplanes, and uses a much smaller number of parameters than ASF (advanced Sidereal Filtering). However, the MHM method also has certain defects. Since MHM takes the mean of the residuals in each grid cell as the filter value, it is most suitable when the multipath regime is of medium to low frequency. Existing research indicates that, by contrast, the ASF method performs better than MHM at reducing high-frequency multipath. To address this problem and improve MHM's performance on high-frequency multipath, we combined a bivariate trend surface analysis method with the original MHM model to effectively analyze the particular spatial distribution and variation trends of the multipath effect. We computed trend surfaces of the residuals within each grid cell by least-squares procedures, and chose the best results through successive testing. The enhanced MHM grid was constructed from the set of coefficients of the fitted equation instead of the mean value. Analysis of actual observations shows that the improved MHM model has a positive effect on high-frequency multipath reduction and significantly reduces the root mean square (RMS) value of the carrier residuals. Keywords: Trend Surface Analysis; Multipath Hemispherical Map; high-frequency multipath effect
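The enhancement described above can be sketched as a least-squares fit of a first-order trend surface to the residuals in one grid cell; the design (a plane z = a + b*x + c*y), the toy samples and the solver below are illustrative assumptions, not the authors' implementation:

```python
# Hedged sketch: fitting a first-order trend surface z = a + b*x + c*y to
# carrier residuals in one MHM grid cell by least squares. The enhanced grid
# would then store the coefficients (a, b, c) instead of a single mean value.

def solve3(A, b):
    """Gauss-Jordan elimination for a 3x3 linear system."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(3):
        p = M[i][i]
        M[i] = [v / p for v in M[i]]
        for j in range(3):
            if j != i:
                f = M[j][i]
                M[j] = [vj - f * vi for vj, vi in zip(M[j], M[i])]
    return [M[k][3] for k in range(3)]

def fit_trend_surface(points):
    """points: (azimuth, elevation, residual) samples within one grid cell."""
    # Accumulate the normal equations (X^T X) beta = X^T z for design [1, x, y]
    S = [[0.0] * 3 for _ in range(3)]
    t = [0.0] * 3
    for x, y, z in points:
        row = (1.0, x, y)
        for i in range(3):
            t[i] += row[i] * z
            for j in range(3):
                S[i][j] += row[i] * row[j]
    return solve3(S, t)

# Residuals sampled from the plane z = 2 + 0.5x - 0.25y are recovered exactly
pts = [(x, y, 2 + 0.5 * x - 0.25 * y) for x in range(5) for y in range(5)]
a, b, c = fit_trend_surface(pts)
print(round(a, 6), round(b, 6), round(c, 6))  # 2.0 0.5 -0.25
```

A higher-order surface only changes the design row and the size of the normal-equation system; the storage saving over keeping raw residuals is the same idea.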
A method for mapping corn using the US Geological Survey 1992 National Land Cover Dataset
Maxwell, S.K.; Nuckols, J.R.; Ward, M.H.
2006-01-01
Long-term exposure to elevated nitrate levels in community drinking water supplies has been associated with an elevated risk of several cancers, including non-Hodgkin's lymphoma, colon cancer, and bladder cancer. To estimate human exposure to nitrate, specific crop type information is needed, as fertilizer application rates vary widely by crop type. Corn requires the highest application of nitrogen fertilizer among crops grown in the Midwest US. We developed a method to refine the US Geological Survey National Land Cover Dataset (NLCD) (including the map and original Landsat images) to distinguish corn from other crops. Overall average agreement between the resulting corn-and-other-row-crops class and ground reference data was a kappa coefficient of 0.79, with individual Landsat images ranging from 0.46 to 0.93. The highest accuracies occurred in regions where corn was the single dominant crop (greater than 80.0%) and the crop vegetation conditions at the time of image acquisition were optimal for separating corn from all other crops. Factors that resulted in lower accuracies included the accuracy of the NLCD map, the accuracy of corn areal estimates, crop mixture, crop condition at the time of Landsat overpass, and Landsat scene anomalies.
A Method to Analyze the Potential of Optical Remote Sensing for Benthic Habitat Mapping
Directory of Open Access Journals (Sweden)
Rodrigo A. Garcia
2015-10-01
Quantifying the number and type of benthic classes that can be spectrally identified in shallow water remote sensing is important for understanding its potential for habitat mapping. Factors that impact the effectiveness of shallow water habitat mapping include water column turbidity, depth, sensor and environmental noise, the spectral resolution of the sensor and the spectral variability of the benthic classes. In this paper, we present a simple hierarchical clustering method coupled with a shallow water forward model to generate water-column-specific spectral libraries. This technique requires no prior decision on the number of classes to output: the resultant classes are optically separable above the spectral noise introduced by the sensor, image-based radiometric corrections, the benthos' natural spectral variability and the attenuating properties of a variable water column at depth. The modeling reveals the effect that reducing the spectral resolution has on the number and type of classes that are optically distinct. We illustrate the potential of this clustering algorithm in an analysis of the conditions, including clustering accuracy, sensor spectral resolution, and water column optical properties and depth, that enabled the spectral distinction of the seagrass Amphibolis antarctica from benthic algae.
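The clustering idea can be sketched as agglomerative merging that stops once the closest pair of clusters is separated by more than the noise level, so the number of surviving classes falls out of the data rather than being chosen in advance. The spectra, the noise threshold and the max-band-difference metric below are invented for illustration and are not the paper's forward-modelled libraries:

```python
# Hedged sketch: agglomerative clustering of benthic reflectance spectra that
# stops merging when the nearest pair of cluster centroids differs by more
# than the spectral noise level, leaving only optically separable classes.

def cluster_spectra(spectra, noise):
    clusters = [[s] for s in spectra]

    def centroid(c):
        return [sum(band) / len(c) for band in zip(*c)]

    def dist(a, b):
        ca, cb = centroid(a), centroid(b)
        return max(abs(x - y) for x, y in zip(ca, cb))  # worst-band difference

    while len(clusters) > 1:
        pairs = [(dist(clusters[i], clusters[j]), i, j)
                 for i in range(len(clusters))
                 for j in range(i + 1, len(clusters))]
        d, i, j = min(pairs)
        if d > noise:  # remaining classes are distinguishable above the noise
            break
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return clusters

# Two spectrally distinct groups of two-band spectra, noise level 0.05
spectra = [(0.10, 0.40), (0.11, 0.41), (0.30, 0.20), (0.31, 0.21)]
print(len(cluster_spectra(spectra, 0.05)))  # 2 optically separable classes
```

Raising `noise` (e.g. to mimic a deeper or more turbid water column attenuating the signal) merges more classes, which is the qualitative behaviour the paper analyses.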
A new capture fraction method to map how pumpage affects surface water flow
Leake, S.A.; Reeves, H.W.; Dickinson, J.E.
2010-01-01
All groundwater pumped is balanced by removal of water somewhere, initially from storage in the aquifer and later from capture in the form of increase in recharge and decrease in discharge. Capture that results in a loss of water in streams, rivers, and wetlands now is a concern in many parts of the United States. Hydrologists commonly use analytical and numerical approaches to study temporal variations in sources of water to wells for select points of interest. Much can be learned about coupled surface/groundwater systems, however, by looking at the spatial distribution of theoretical capture for select times of interest. Development of maps of capture requires (1) a reasonably well-constructed transient or steady-state model of an aquifer with head-dependent flow boundaries representing surface water features or evapotranspiration and (2) an automated procedure to run the model repeatedly and extract results, each time with a well in a different location. This paper presents new methods for simulating and mapping capture using three-dimensional groundwater flow models and presents examples from Arizona, Oregon, and Michigan. Journal compilation © 2010 National Ground Water Association. No claim to original US government works.
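The automated mapping procedure can be sketched as a loop that places a hypothetical well in each cell, reruns the model, and records the fraction of pumpage captured from surface water. `stream_depletion` below is a toy stand-in for a real groundwater simulation (e.g. a MODFLOW run), invented purely so the loop is runnable:

```python
# Conceptual sketch of capture mapping: for every candidate well location,
# rerun a (stand-in) model and record capture fraction =
# (reduction in stream flow) / (pumping rate).

def stream_depletion(i, j, rate):
    """Toy surrogate model: capture decays with distance from a stream at row 0."""
    return rate * (0.9 ** i)

def capture_map(rows, cols, rate=1.0):
    grid = []
    for i in range(rows):
        row = []
        for j in range(cols):
            # one "model run" per cell, well placed at (i, j)
            row.append(stream_depletion(i, j, rate) / rate)
        grid.append(row)
    return grid

cmap = capture_map(3, 2)
print(cmap[0][0], round(cmap[2][0], 3))  # full capture near the stream, less farther away
```

In practice each iteration is a full transient or steady-state simulation, so the expensive part is the repeated model execution, not the bookkeeping shown here.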
The hydrocarbon accumulations mapping in crystalline rocks by mobile geophysical methods
Nesterenko, A.
2013-05-01
The sedimentary-migration theory of hydrocarbon origin dominates nowadays. However, a significant number of hydrocarbon deposits have been discovered in crystalline rocks, which corroborates the theory of non-organic origin of hydrocarbons. In solving problems of oil and gas exploration in crystalline rocks and massifs, so-called "direct" methods can be used. These methods include the geoelectric methods of forming short-pulsed electromagnetic field (FSPEF) and vertical electric-resonance sounding (VERS) (the FSPEF-VERS express-technology). The use of remote Earth sounding (RES) methods is also relevant. These mobile technologies are extensively used in the exploration of hydrocarbon accumulations in crystalline rocks, including those within the Ukrainian crystalline shield. Results of the explorations: four anomalous geoelectric zones of "gas condensate reservoir" type were quickly revealed by reconnaissance prospecting works (Fig. 1). DTA "Obukhovychi": the anomaly was traced over a distance of 4 km; its approximate area is 12.0 km2. DTA "Korolevskaya": the preliminarily established size of the anomalous zone is 10.0 km2; anomalous polarized layers of gas and gas-condensate type were determined. DTA "Olizarovskaya": the approximate size of the anomaly is about 56.0 km2; this anomaly is the largest and most intense. DTA "Druzhba": the preliminarily estimated size of the anomaly is 16.0 km2. Conclusions: long experience of successful application of non-classical geoelectric methods to a variety of practical tasks allows us to state their contribution to the development of a new paradigm of geophysical research. Simultaneous use of the remote sensing data processing and interpretation method and the FSPEF and VERS technologies can essentially optimize and speed up geophysical work. References: 1. S.P. Levashov. Detection and mapping of anomalies of "hydrocarbon deposit" type in the fault zones of crystalline arrays by geoelectric methods. / S.P. Levashov, N.A. Yakymchuk, I
Entanglement evolution across a conformal interface
Wen, Xueda; Wang, Yuxuan; Ryu, Shinsei
2018-05-01
For two-dimensional conformal field theories (CFTs) in the ground state, it is known that a conformal interface along the entanglement cut can suppress the entanglement entropy from (c/3) log L to (c_eff/3) log L, where L is the length of the subsystem A and c_eff is the effective central charge, which depends on the transmission property of the conformal interface. In this work, by making use of conformal mappings, we show that a conformal interface has the same effect on entanglement evolution in non-equilibrium cases, including global, local and certain inhomogeneous quantum quenches. That is, a conformal interface suppresses the time evolution of entanglement entropy by effectively replacing the central charge c with c_eff, where c_eff is exactly the same as that in the ground state case. We confirm this conclusion by a numerical study on a critical fermion chain. Furthermore, based on the quasi-particle picture, we conjecture that this conclusion holds for an arbitrary quantum quench in CFTs, as long as the initial state can be described by a regularized conformal boundary state.
Conformation Generation: The State of the Art.
Hawkins, Paul C D
2017-08-28
The generation of conformations for small molecules is a problem of continuing interest in cheminformatics and computational drug discovery. This review will present an overview of methods used to sample conformational space, focusing on those methods designed for organic molecules commonly of interest in drug discovery. Different approaches to both the sampling of conformational space and the scoring of conformational stability will be compared and contrasted, with an emphasis on those methods suitable for conformer sampling of large numbers of drug-like molecules. Particular attention will be devoted to the appropriate utilization of information from experimental solid-state structures in validating and evaluating the performance of these tools. The review will conclude with some areas worthy of further investigation.
The butane condensed matter conformational problem
Weber, A.C.J.; de Lange, C.A.; Meerts, W.L.; Burnell, E.E.
2010-01-01
From the dipolar couplings of orientationally ordered n-butane obtained by NMR spectroscopy we have calculated conformer probabilities using the modified Chord (Cd) and Size-and-Shape (CI) models to estimate the conformational dependence of the order matrix. All calculation methods make use of
Directory of Open Access Journals (Sweden)
Jiaye Li
2018-04-01
River discharge, which represents the accumulation of surface water flowing into rivers and ultimately into the ocean or other water bodies, may have great impacts on water quality and the living organisms in rivers. However, global knowledge of river discharge is still poor and worth exploring. This study proposes an efficient method for mapping high-resolution global river discharge based on the algorithms of drainage network extraction. Using an existing global runoff map and digital elevation model (DEM) data as inputs, the method consists of three steps. First, the pixels of the runoff map and the DEM data are resampled to the same resolution (i.e., 0.01 degree). Second, the flow direction of each pixel of the DEM data (identified by the optimal flow path method used in drainage network extraction) is determined and then applied to the corresponding pixel of the runoff map. Third, the river discharge of each pixel of the runoff map is calculated by summing the runoff of all pixels upstream of that pixel, similar to the upslope area accumulation step in drainage network extraction. Finally, a 0.01-degree global map of mean annual river discharge is obtained. Moreover, a 0.5-degree global map of mean annual river discharge is produced to display the results more intuitively. Compared against the existing global river discharge databases, the 0.01-degree map is of generally high accuracy for the selected river basins, especially for the Amazon River basin, with the lowest relative error (RE) of 0.3%, and the Yangtze River basin, within an RE range of ±6.0%. However, it is noted that the results for the Congo and Zambezi River basins are not satisfactory, with RE values over 90%, and it is inferred that there may be accuracy problems with the runoff map in these river basins.
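The accumulation step described above can be sketched as routing each pixel's runoff down its flow path and summing contributions. The D8-style `(di, dj)` direction encoding and the toy one-row river below are illustrative assumptions, not the study's code or data:

```python
# Hedged sketch of the discharge-accumulation step: each pixel's discharge is
# its own runoff plus all runoff routed to it along flow directions, mirroring
# upslope-area accumulation in drainage network extraction.

def accumulate(runoff, flowdir):
    """runoff: 2D grid of runoff values; flowdir: 2D grid of (di, dj) steps
    toward the downstream neighbour, or None at an outlet."""
    rows, cols = len(runoff), len(runoff[0])
    discharge = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            # route this pixel's runoff down its entire flow path
            ci, cj = i, j
            while True:
                discharge[ci][cj] += runoff[i][j]
                step = flowdir[ci][cj]
                if step is None:
                    break
                ci, cj = ci + step[0], cj + step[1]
    return discharge

# Toy 1x4 river: every pixel drains toward the rightmost (outlet) pixel
runoff  = [[1.0, 1.0, 1.0, 1.0]]
flowdir = [[(0, 1), (0, 1), (0, 1), None]]
print(accumulate(runoff, flowdir)[0])  # [1.0, 2.0, 3.0, 4.0]
```

At 0.01-degree resolution a production implementation would process cells in topological order (as drainage-network codes do) rather than re-walking every flow path, but the accumulated result is the same.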
Directory of Open Access Journals (Sweden)
Maryam Shirmohammad
2008-06-01
Full Text Available Introduction: The advent of dual-modality PET/CT scanners has revolutionized clinical oncology by improving lesion localization and facilitating treatment planning for radiotherapy. In addition, the use of CT images for CT-based attenuation correction (CTAC) decreases the overall scanning time and creates a noise-free attenuation map (μ-map). CTAC methods include scaling, segmentation, hybrid scaling/segmentation, bilinear, and dual-energy methods. All CTAC methods require the transformation of CT Hounsfield units (HU) to linear attenuation coefficients (LAC) at 511 keV. The aim of this study is to compare the results of implementing different energy mapping methods in PET/CT scanners. Materials and Methods: This study was conducted in two phases, the first on a phantom and the second on patient data. In the first phase, a cylindrical phantom with inserts containing different concentrations of K2HPO4 was CT scanned and the energy mapping methods were applied to it. In the second phase, the different energy mapping methods were applied to several clinical studies and compared to the transmission (TX) image derived using a Ga-68 radionuclide source acquired on the GE Discovery LS PET/CT scanner. Results: An ROI analysis was performed at different positions of the resulting μ-maps, and the average μ-value of each ROI was compared to the reference value. The μ-maps obtained for 511 keV, compared to the theoretical values, showed that in the phantom, for low concentrations of K2HPO4, all these methods produce 511 keV attenuation maps with a small relative difference from the gold standard. The relative difference for the scaling, segmentation, hybrid, bilinear and dual-energy methods was 4.92, 3.21, 4.43, 2.24 and 2.29%, respectively. Although for high concentration
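As an illustration of the HU-to-LAC transformation that all CTAC methods share, here is a minimal sketch of a bilinear energy mapping; the break point and both slopes are generic textbook-style values (assumptions), not the scanner-specific calibration used in the study:

```python
import numpy as np

# Bilinear energy mapping sketch: CT numbers (HU) are converted to
# linear attenuation coefficients (cm^-1) at 511 keV with one slope for
# water-like voxels (HU <= 0) and a shallower slope for bone-like
# voxels (HU > 0). Coefficients below are illustrative assumptions.
MU_WATER_511 = 0.096           # cm^-1, water at 511 keV (approximate)
SLOPE_BONE = 5.1e-5            # cm^-1 per HU above water (assumed)

def hu_to_mu511(hu):
    hu = np.asarray(hu, dtype=float)
    soft = MU_WATER_511 * (1.0 + hu / 1000.0)   # scaled water-air mixture
    bone = MU_WATER_511 + SLOPE_BONE * hu       # separate bone slope
    return np.where(hu <= 0, soft, bone)

# Air (-1000 HU) maps to ~0, water (0 HU) to mu_water.
mu = hu_to_mu511([-1000.0, 0.0, 1000.0])
```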
Livingston, Karina; Padilla, Mark; Scott, Derrick; Colón-Burgos, José Félix; Reyes, Armando Matiz; Varas-Díaz, Nelson
2016-01-01
This paper focuses on a mixed-method approach to quantifying qualitative data from the results of an ongoing NIDA-funded ethnographic study entitled “Migration, Tourism, and the HIV/Drug-Use Syndemic in the Dominican Republic”. This project represents the first large-scale mixed method study to identify social, structural, environmental, and demographic factors that may contribute to ecologies of health vulnerability within the Caribbean tourism zones. Our research has identified deportation history as a critical factor contributing to vulnerability to HIV, drugs, mental health problems, and other health conditions. Therefore, understanding the movements of our participants became a vital aspect of this research. This paper describes how we went about translating 37 interviews into visual geographic representations. These methods help develop possible strategies for confronting HIV/AIDS and problematic substance use by examining the ways that these epidemics are shaped by the realities of people’s labor migration and the spaces they inhabit. Our methods for mapping this qualitative data contribute to the ongoing, broadening capabilities of using GIS in social science research. A key contribution of this work is its integration of different methodologies from various disciplines to help better understand complex social problems. PMID:27656039
Methods of Reverberation Mapping. I. Time-lag Determination by Measures of Randomness
Energy Technology Data Exchange (ETDEWEB)
Chelouche, Doron; Pozo-Nuñez, Francisco [Department of Physics, Faculty of Natural Sciences, University of Haifa, Haifa 3498838 (Israel); Zucker, Shay, E-mail: doron@sci.haifa.ac.il, E-mail: francisco.pozon@gmail.com, E-mail: shayz@post.tau.ac.il [Department of Geosciences, Raymond and Beverly Sackler Faculty of Exact Sciences, Tel Aviv University, Tel Aviv 6997801 (Israel)
2017-08-01
A class of methods for measuring time delays between astronomical time series is introduced in the context of quasar reverberation mapping, which is based on measures of randomness or complexity of the data. Several distinct statistical estimators are considered that do not rely on polynomial interpolations of the light curves nor on their stochastic modeling, and do not require binning in correlation space. Methods based on von Neumann’s mean-square successive-difference estimator are found to be superior to those using other estimators. An optimized von Neumann scheme is formulated, which better handles sparsely sampled data and outperforms current implementations of discrete correlation function methods. This scheme is applied to existing reverberation data of varying quality, and consistency with previously reported time delays is found. In particular, the size–luminosity relation of the broad-line region in quasars is recovered with a scatter comparable to that obtained by other works, yet with fewer assumptions made concerning the process underlying the variability. The proposed method for time-lag determination is particularly relevant for irregularly sampled time series, and in cases where the process underlying the variability cannot be adequately modeled.
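A minimal sketch of a von Neumann lag estimator in the spirit described above: for each trial lag, the two light curves are merged into one time-ordered series and the mean-square successive difference is computed; the correct lag aligns the curves and minimizes the statistic. The toy light curves, sampling, and lag grid are assumptions for illustration:

```python
import numpy as np

def von_neumann(t1, f1, t2, f2, lag):
    """Von Neumann mean-square successive difference of the combined,
    time-ordered light curve after shifting the echo series by `lag`."""
    t = np.concatenate([t1, t2 - lag])
    f = np.concatenate([f1, f2])
    order = np.argsort(t)
    d = np.diff(f[order])
    return np.mean(d * d)

# Toy example: the "echo" is the same smooth signal delayed by 5 days.
rng = np.random.default_rng(0)
t1 = np.sort(rng.uniform(0, 100, 80))
t2 = np.sort(rng.uniform(0, 100, 80))
signal = lambda t: np.sin(t / 7.0)
f1, f2 = signal(t1), signal(t2 - 5.0)

lags = np.arange(-20, 21)
v = [von_neumann(t1, f1, t2, f2, g) for g in lags]
best = int(lags[int(np.argmin(v))])   # recovers a lag near 5
```

Real light curves are noisy and unevenly sampled, which is where the optimized scheme described in the abstract improves on this bare version.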
A matrix-inversion method for gamma-source mapping from gamma-count data - 59082
International Nuclear Information System (INIS)
Bull, Richard K.; Adsley, Ian; Burgess, Claire
2012-01-01
Gamma ray counting is often used to survey the distribution of active waste material in various locations. Ideally the output from such surveys would be a map of the activity of the waste. In this paper a simple matrix-inversion method is presented. This allows an array of gamma-count data to be converted to an array of source activities. For each survey area the response matrix is computed using the gamma-shielding code Microshield [1]. This matrix links the activity array to the count array. The activity array is then obtained via matrix inversion. The method was tested on artificially-created arrays of count-data onto which statistical noise had been added. The method was able to reproduce, quite faithfully, the original activity distribution used to generate the dataset. The method has been applied to a number of practical cases, including the distribution of activated objects in a hot cell and to activated Nimonic springs amongst fuel-element debris in vaults at a nuclear plant. (authors)
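The core of the method can be sketched in a few lines; the response matrix below is a made-up geometric fall-off standing in for the Microshield-computed matrix, and the 1-D four-cell geometry is an assumption:

```python
import numpy as np

# Response matrix R: counts per unit activity from each source cell at
# each detector position. In practice R comes from a shielding code such
# as Microshield; here it is an assumed inverse-square-like fall-off for
# a 1-D array of 4 source cells and 4 count positions.
positions = np.arange(4, dtype=float)
R = 1.0 / (1.0 + np.abs(positions[:, None] - positions[None, :]) ** 2)

true_activity = np.array([0.0, 5.0, 1.0, 0.0])
counts = R @ true_activity            # forward model: counts = R a

# Invert to recover the activity map from the measured counts.
activity = np.linalg.solve(R, counts)
```

With statistical noise on the counts, as in the paper's tests, one would use least squares or regularized inversion rather than an exact solve.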
Automatic and efficient methods applied to the binarization of a subway map
Durand, Philippe; Ghorbanzadeh, Dariush; Jaupi, Luan
2015-12-01
The purpose of this paper is the study of efficient methods for image binarization, applied to metro maps. The goal is to binarize the maps while preventing noise from disturbing the reading of the subway stations. Different methods have been tested; among them, Otsu's method gives particularly interesting results. The difficulty of binarization lies in the choice of the threshold, so as to reconstruct an image as close as possible to reality. Vectorization is a step subsequent to binarization: it retrieves the coordinates of the points containing information and stores them in two matrices X and Y. These matrices can then be exported to a 'CSV' (Comma-Separated Values) file, enabling them to be handled in a variety of software, including Excel. The algorithm requires considerable computation time in Matlab because it is composed of two nested "for" loops, which Matlab handles poorly, especially when nested; this penalizes the computation time, but it seems to be the only way to do this.
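Otsu's method referred to above chooses the threshold that maximizes the between-class variance of the foreground/background split. A minimal sketch (the toy bimodal image is an assumption; a real subway map would be loaded from file):

```python
import numpy as np

def otsu_threshold(image):
    """Otsu's method: choose the gray level that maximizes the
    between-class variance of the two classes it separates."""
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    p = hist / hist.sum()
    levels = np.arange(256)
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()    # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        m0 = (levels[:t] * p[:t]).sum() / w0  # class means
        m1 = (levels[t:] * p[t:]).sum() / w1
        var_between = w0 * w1 * (m0 - m1) ** 2
        if var_between > best_var:
            best_t, best_var = t, var_between
    return best_t

# Toy bimodal "map": dark background at ~40, bright lines at ~200.
img = np.full((50, 50), 40, dtype=np.uint8)
img[10:12, :] = 200
t = otsu_threshold(img)
binary = img >= t           # threshold falls between the two modes
```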
Sembiring, N.; Nasution, A. H.
2018-02-01
Corrective maintenance, i.e., replacing or repairing a machine component after breakdown, is routinely carried out in manufacturing companies. It forces the production process to stop: production time decreases while the maintenance team replaces or repairs the damaged component. This paper proposes a preventive maintenance schedule for a critical component of a critical machine in a crude palm oil and kernel company, in order to increase maintenance efficiency. Reliability Engineering & Maintenance Value Stream Mapping is used as a method and a tool to analyze the reliability of the component and to reduce waste in the process by segregating value-added and non-value-added activities.
A Novel Transfer Learning Method Based on Common Space Mapping and Weighted Domain Matching
Liang, Ru-Ze; Xie, Wei; Li, Weizhi; Wang, Hongqi; Wang, Jim Jing-Yan; Taylor, Lisa
2017-01-01
In this paper, we propose a novel learning framework for the problem of domain transfer learning. We map the data of the two domains to a single common space and learn a classifier in this common space. We then adapt the common classifier to the two domains by adding an adaptive function to it for each domain. In the common space, the source domain data points are weighted and matched to the target domain in terms of distributions. The weighting terms of the source domain data points and the target domain classification responses are also regularized by the local reconstruction coefficients. The novel transfer learning framework is evaluated over several benchmark cross-domain data sets, and it outperforms the existing state-of-the-art transfer learning methods.
Baleine, Erwan; Sheldon, Danny M
2014-06-10
Method and system for calibrating a thermal radiance map of a turbine component in a combustion environment. At least one spot (18) of material is disposed on a surface of the component. An infrared (IR) imager (14) is arranged so that the spot is within a field of view of the imager to acquire imaging data of the spot. A processor (30) is configured to process the imaging data to generate a sequence of images as a temperature of the combustion environment is increased. A monitor (42, 44) may be coupled to the processor to monitor the sequence of images to determine an occurrence of a physical change of the spot as the temperature is increased. A calibration module (46) may be configured to assign a first temperature value to the surface of the turbine component when the occurrence of the physical change of the spot is determined.
Energy Technology Data Exchange (ETDEWEB)
Han, Byeong Hee; Yoon, Dong Jin; Park, Chun Soo [Korea Research Institute of Standards and Science, Center for Safety Measurement, Daejeon (Korea, Republic of); Lee, Young Shin [Dept. of Mechanical Design Engineering, Chungnam National University, Daejeon (Korea, Republic of)
2016-10-15
Acoustic emission (AE) is one of the most powerful techniques for detecting damage and identifying damage locations during operation. However, conventional AE source location techniques have limitations because they depend strongly on the wave speed in the structure, which is problematic for heterogeneous composite materials. A compressed natural gas (CNG) pressure vessel is usually wrapped in carbon fiber composite for reinforcement. In this type of composite material, it is difficult to locate impact damage sources exactly using the conventional time-of-arrival method. To overcome this limitation, this study applied the previously developed contour D/B map technique to four types of CNG storage tanks to identify the location of damage caused by external shock. The source location results for the different tank types were compared.
Segmentation of Natural Gas Customers in Industrial Sector Using Self-Organizing Map (SOM) Method
Masbar Rus, A. M.; Pramudita, R.; Surjandari, I.
2018-03-01
The usage of natural gas, a non-renewable energy source, needs to be more efficient. Customer segmentation is therefore necessary for targeting marketing strategies and for determining appropriate fees. This research was conducted at PT PGN using a data mining method, the Self-Organizing Map (SOM). The clustering process is based on customer characteristics as a reference for creating the segmentation of natural gas customers. The input variables are area, customer type, industrial sector, average usage, standard deviation of usage, and total deviation. As a result, 37 clusters and 9 segments are formed from 838 customer records. These 9 segments are then employed to describe the general characteristics of PT PGN's natural gas customers.
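A minimal SOM sketch illustrating the clustering step; the 1-D grid size, learning-rate schedule, and the two-feature toy customers are assumptions, not PT PGN's data or settings:

```python
import numpy as np

# Toy customers described by two standardized usage features, forming
# two well-separated groups (assumed data for illustration).
rng = np.random.default_rng(1)
low  = rng.normal([0.0, 0.0], 0.1, size=(40, 2))   # low-usage customers
high = rng.normal([1.0, 1.0], 0.1, size=(40, 2))   # high-usage customers
data = np.vstack([low, high])

n_nodes = 4
weights = rng.uniform(0, 1, size=(n_nodes, 2))     # 1-D SOM grid

for epoch in range(50):
    lr = 0.5 * (1 - epoch / 50)                    # decaying learning rate
    radius = max(1.0 * (1 - epoch / 50), 0.5)      # decaying neighborhood
    for x in rng.permutation(data):
        bmu = np.argmin(((weights - x) ** 2).sum(axis=1))
        dist = np.abs(np.arange(n_nodes) - bmu)    # distance on the grid
        h = np.exp(-(dist ** 2) / (2 * radius ** 2))
        weights += lr * h[:, None] * (x - weights)

# Each customer is assigned to its best-matching node (its segment).
segments = np.argmin(
    ((data[:, None, :] - weights[None, :, :]) ** 2).sum(axis=2), axis=1)
```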
Weigel, A. M.; Griffin, R.; Gallagher, D.
2015-12-01
Storm surge has enough destructive power to damage buildings and infrastructure, erode beaches, and threaten human life across large geographic areas, hence posing the greatest threat of all the hurricane hazards. The United States Gulf of Mexico has proven vulnerable to hurricanes as it has been hit by some of the most destructive hurricanes on record. With projected rises in sea level and increases in hurricane activity, there is a need to better understand the associated risks for disaster mitigation, preparedness, and response. GIS has become a critical tool in enhancing disaster planning, risk assessment, and emergency response by communicating spatial information through a multi-layer approach. However, there is a need for a near real-time method of identifying areas with a high risk of being impacted by storm surge. Research was conducted alongside Baron, a private industry weather enterprise, to facilitate automated modeling and visualization of storm surge inundation and vulnerability on a near real-time basis. This research successfully automated current flood hazard mapping techniques using a GIS framework written in a Python programming environment, and displayed resulting data through an Application Program Interface (API). Data used for this methodology included high resolution topography, NOAA Probabilistic Surge model outputs parsed from Rich Site Summary (RSS) feeds, and the NOAA Census tract level Social Vulnerability Index (SoVI). The development process required extensive data processing and management to provide high resolution visualizations of potential flooding and population vulnerability in a timely manner. The accuracy of the developed methodology was assessed using Hurricane Isaac as a case study, which through a USGS and NOAA partnership, contained ample data for statistical analysis. This research successfully created a fully automated, near real-time method for mapping high resolution storm surge inundation and vulnerability for the
A Method of Mapping Burned Area Using Chinese FengYun-3 MERSI Satellite Data
Shan, T.
2017-12-01
Wildfire is a naturally recurring global phenomenon with environmental and ecological consequences such as effects on the global carbon budget, changes to the global carbon cycle, and disruption of ecosystem succession. Information on burned area is important for post-disaster assessment and for ecosystem protection and restoration. The Medium Resolution Spectral Imager (MERSI) onboard FengYun-3C (FY-3C) has shown good ability for fire detection and monitoring but lacks recognition among researchers. In this study, an automated burned area mapping algorithm was proposed based on FY-3C MERSI data. The algorithm has two phases: 1) selection of training pixels based on 1000-m resolution MERSI data, which offers more spectral information through the use of more vegetation indices; and 2) classification: first, the region growing method is applied to the 1000-m MERSI data to calculate the core burned area, and then the same classification method is applied to the 250-m MERSI data set, using the core burned area as a seed, to obtain results at a finer spatial resolution. The performance of the algorithm was evaluated at two study sites in America and Canada. The accuracy assessment and validation were made by comparing our results with reference results derived from Landsat OLI data. The results have high kappa coefficients and low commission errors, indicating that this algorithm can improve burned area mapping accuracy at the two study sites. It may then be possible to use MERSI and other data to fill gaps in the mapping of burned areas in the future.
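The seeded region-growing step of phase 2 can be sketched as follows; the scalar burn index, the threshold, and the toy scene are illustrative stand-ins for the MERSI vegetation-index criteria used in the paper:

```python
import numpy as np
from collections import deque

def region_grow(index_map, seeds, threshold):
    """Grow the burned area from seed pixels: a 4-connected neighbor
    joins the region if its burn-index value exceeds `threshold`."""
    burned = np.zeros(index_map.shape, dtype=bool)
    queue = deque(seeds)
    for s in seeds:
        burned[s] = True
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < index_map.shape[0] and 0 <= nc < index_map.shape[1]
                    and not burned[nr, nc] and index_map[nr, nc] > threshold):
                burned[nr, nc] = True
                queue.append((nr, nc))
    return burned

# Toy scene: a 3x3 burn scar (index 0.8) in unburned terrain (0.1).
scene = np.full((7, 7), 0.1)
scene[2:5, 2:5] = 0.8
mask = region_grow(scene, seeds=[(3, 3)], threshold=0.5)
```

In the paper's two-resolution scheme, the mask grown on the 1000-m grid provides the seeds for the same procedure on the 250-m grid.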
Automated artery-venous classification of retinal blood vessels based on structural mapping method
Joshi, Vinayak S.; Garvin, Mona K.; Reinhardt, Joseph M.; Abramoff, Michael D.
2012-03-01
Retinal blood vessels show morphologic modifications in response to various retinopathies. The specific responses exhibited by arteries and veins may provide more precise diagnostic information; e.g., diabetic retinopathy may be detected more accurately from venous dilatation than from average vessel dilatation. In order to analyze vessel-type-specific morphologic modifications, the classification of a vessel network into arteries and veins is required. We previously described a method for identification and separation of retinal vessel trees, i.e., structural mapping. Here we propose artery-venous classification based on structural mapping and on identification of color properties characteristic of the vessel types. The mean and standard deviation of the green channel intensity and of the hue channel intensity are analyzed in a region of interest around each centerline pixel of a vessel. Using the vector of color properties extracted from each centerline pixel, the pixel is assigned to one of two clusters (artery and vein) obtained by fuzzy C-means clustering. According to the proportion of clustered centerline pixels in a particular vessel, and utilizing the artery-venous crossing property of retinal vessels, each vessel is assigned the label of an artery or a vein. The classification results are compared with a manually annotated ground truth (gold standard). We applied the proposed method to a dataset of 15 retinal color fundus images, resulting in an accuracy of 88.28% correctly classified vessel pixels. The automated classification results match well with the gold standard, suggesting its potential for artery-venous classification and the respective morphology analysis.
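The fuzzy C-means clustering step can be sketched as follows; the two-dimensional toy color features are an assumed stand-in for the four green/hue statistics extracted per centerline pixel in the paper:

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=100, seed=0):
    """Bare-bones fuzzy C-means: alternate between weighted center
    updates and the standard membership update until `iters` passes."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))       # fuzzy memberships
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
        e = 2.0 / (m - 1.0)
        U = 1.0 / (d ** e * (1.0 / d ** e).sum(axis=1, keepdims=True))
    return centers, U

# Toy color features: brighter (artery-like) vs darker (vein-like) pixels.
rng = np.random.default_rng(1)
a = rng.normal([0.8, 0.7], 0.05, size=(30, 2))
v = rng.normal([0.3, 0.2], 0.05, size=(30, 2))
X = np.vstack([a, v])
centers, U = fuzzy_c_means(X)
labels = U.argmax(axis=1)   # hard assignment per centerline pixel
```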
Fabrication challenges associated with conformal optics
Schaefer, John; Eichholtz, Richard A.; Sulzbach, Frank C.
2001-09-01
A conformal optic is typically an optical window that conforms smoothly to the external shape of a system platform to improve aerodynamics. Conformal optics can be on-axis, such as an ogive missile dome, or off-axis, such as in a free-form airplane wing. A common example of conformal optics is the automotive headlight window, which conforms to the body of the car for aerodynamics and aesthetics. The unusual shape of conformal optics creates tremendous challenges for design, manufacturing, and testing. This paper will discuss fabrication methods that have been successfully demonstrated to produce conformal missile domes and associated wavefront corrector elements. It will identify challenges foreseen with more complex free-form configurations. Work presented in this paper was directed by the Precision Conformal Optics Consortium (PCOT). PCOT is comprised of both industrial and academic members who teamed to develop and demonstrate conformal optical systems suitable for insertion into future military programs. The consortium was funded under DARPA agreement number MDA972-96-9-08000.
Drury, H. A.; Van Essen, D. C.; Anderson, C. H.; Lee, C. W.; Coogan, T. A.; Lewis, J. W.
1996-01-01
We present a new method for generating two-dimensional maps of the cerebral cortex. Our computerized, two-stage flattening method takes as its input any well-defined representation of a surface within the three-dimensional cortex. The first stage rapidly converts this surface to a topologically correct two-dimensional map, without regard for the amount of distortion introduced. The second stage reduces distortions using a multiresolution strategy that makes gross shape changes on a coarsely sampled map and further shape refinements on progressively finer resolution maps. We demonstrate the utility of this approach by creating flat maps of the entire cerebral cortex in the macaque monkey and by displaying various types of experimental data on such maps. We also introduce a surface-based coordinate system that has advantages over conventional stereotaxic coordinates and is relevant to studies of cortical organization in humans as well as non-human primates. Together, these methods provide an improved basis for quantitative studies of individual variability in cortical organization.
KINETIC TOMOGRAPHY. I. A METHOD FOR MAPPING THE MILKY WAY’S INTERSTELLAR MEDIUM IN FOUR DIMENSIONS
Energy Technology Data Exchange (ETDEWEB)
Tchernyshyov, Kirill [The Johns Hopkins University (United States); Peek, J. E. G. [Space Telescope Science Institute (United States)
2017-01-01
We have developed a method for deriving the distribution of the Milky Way's interstellar medium as a function of longitude, latitude, distance, and line-of-sight velocity. This method takes as input maps of reddening as a function of longitude, latitude, and distance, and maps of line emission as a function of longitude, latitude, and line-of-sight velocity. We have applied this method to data sets covering much of the Galactic plane. The output of this method correctly reproduces the line-of-sight velocities of high-mass star-forming regions with known distances from Reid et al. and qualitatively agrees with results from the Milky Way kinematics literature. These maps will be useful for measuring flows of gas around the Milky Way's spiral arms and into and out of giant molecular clouds.
Application of geoelectric methods for man-caused gas deposit mapping and monitoring
Yakymchuk, M. A.; Levashov, S. P.; Korchagin, I. N.; Syniuk, B. B.
2009-04-01
The rather new application of the original geoelectric methods of forming of short-pulsed electromagnetic field (FSPEF) and vertical electric-resonance sounding (VERS) (the FSPEF-VERS technology; Levashov et al., 2003; 2004) is discussed. In 2008 the FSPEF-VERS methods were used to ascertain the causes of a serious man-caused accident at a gas field: an emission of water with gas had occurred near an operational well. It was hypothesized that some of the gas from the producing horizons had entered the upper horizons, into aquiferous layers, creating excess pressure in the aquiferous strata that led to the accident. In the first stage, operative geophysical investigations of the accident site were carried out with the FSPEF and VERS geoelectric methods on 07.10.08 and 13.10.08. The primary goals of this work were to detect and map the gas penetration zones in the aquiferous strata of the upper part of the cross-section, and to determine the bedding depths and the total area of gas distribution in the upper aquiferous strata. An anomalous zone, caused by increased migration of water into the upper horizons of the cross-section, was revealed and mapped by the FSPEF survey. The depths of the anomalous polarized layers (APL) of "gas" and "aquiferous stratum" type were determined by the VERS method. The VERS data are presented as sounding diagrams and columns, as vertical cross-sections along and across the gas penetration zones, and as a thickness map of the man-caused gas "deposit". Based on the first-day investigation data, perforation at depths of 450 and 310 m was carried out in a producing borehole, and gas discharges were obtained from both depths. Three degassing boreholes, about 340 m deep, were drilled on 08.11.08, and gas inflows were obtained from a depth of 330 m. Drilling of a fourth well was under way. The area of the anomalous zone has decreased by half in comparison with the two previous surveys. So, the
Ringard, Justine; Seyler, Frederique; Linguet, Laurent
2017-06-16
Satellite precipitation products (SPPs) provide alternative precipitation data for regions with sparse rain gauge measurements. However, SPPs are subject to different types of error that need correction. Most SPP bias correction methods use the statistical properties of rain gauge data to adjust the corresponding SPP data, but such statistical adjustment cannot correct SPP pixels for which there are no rain gauge data. The solution proposed in this article is to correct the daily SPP data for the Guiana Shield using a novel two-step approach that does not use the daily gauge data of the pixel being corrected, but rather the daily gauge data from surrounding pixels; this requires a spatial analysis. The first step defines hydroclimatic areas using a spatial classification that groups precipitation data with the same temporal distributions. The second step uses the quantile mapping bias correction method to correct the daily SPP data within each hydroclimatic area. We validate the results by comparing the corrected SPP data and daily rain gauge measurements using relative RMSE and relative bias statistical errors. The results show that varying the analysis scale reduces rBIAS and rRMSE significantly. The spatial classification avoids mixing rainfall data with different temporal characteristics within each hydroclimatic area, and the resulting bias correction parameters are more realistic and appropriate. This study demonstrates that hydroclimatic classification is relevant for implementing bias correction methods at the local scale.
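The quantile mapping correction of the second step can be sketched as follows; the gamma-distributed toy rainfall and the 20% multiplicative bias are assumptions for illustration, not the Guiana Shield data:

```python
import numpy as np

def quantile_map(sat, sat_ref, gauge_ref):
    """Empirical quantile mapping: each satellite value is replaced by
    the gauge value at the same empirical quantile, with both quantile
    functions estimated from reference (training) samples of the same
    hydroclimatic area."""
    sat_sorted = np.sort(sat_ref)
    gauge_sorted = np.sort(gauge_ref)
    # quantile of each new satellite value within the satellite reference
    q = np.searchsorted(sat_sorted, sat, side="right") / len(sat_sorted)
    q = np.clip(q, 0.0, 1.0)
    return np.quantile(gauge_sorted, q)

# Toy example: the satellite product overestimates rainfall by 20%.
rng = np.random.default_rng(0)
gauge_ref = rng.gamma(2.0, 5.0, size=2000)     # "true" daily rainfall
sat_ref = 1.2 * gauge_ref                      # biased satellite record
sat_new = 1.2 * rng.gamma(2.0, 5.0, size=500)
corrected = quantile_map(sat_new, sat_ref, gauge_ref)
# The multiplicative bias is largely removed after mapping.
```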
NEW METHOD FOR THE CALIBRATION OF MULTI-CAMERA MOBILE MAPPING SYSTEMS
Directory of Open Access Journals (Sweden)
A. P. Kersting
2012-07-01
Full Text Available Mobile Mapping Systems (MMS) allow for fast and cost-effective collection of geo-spatial information. Such systems integrate a set of imaging sensors and a position and orientation system (POS), which entails GPS and INS units. System calibration is a crucial process to ensure the attainment of the expected accuracy of such systems. It involves the calibration of the individual sensors as well as the calibration of the mounting parameters relating the system components. The mounting parameters of multi-camera MMS include two sets of relative orientation parameters (ROP): the lever-arm offsets and boresight angles relating the cameras to the IMU body frame, and the ROP among the cameras (in the absence of GPS/INS data). In this paper, a novel single-step calibration method, which has the ability to estimate both sets of ROP, is devised. Besides estimating the ROP among the cameras, the proposed method can use such parameters as prior information in the ISO procedure. The implemented procedure consists of an integrated sensor orientation (ISO) in which the GPS/INS-derived position and orientation and the system mounting parameters are directly incorporated into the collinearity equations. The concept of modified collinearity equations has been used by a few authors for single-camera systems. In this paper, a new modification of the collinearity equations for GPS/INS-assisted multi-camera systems is introduced. Experimental results using a real dataset demonstrate the feasibility of the proposed method.
The Use of Electrical Resistivity Method to Mapping The Migration of Heavy Metals by Electrokinetic
Azhar, A. T. S.; Ayuni, S. A.; Ezree, A. M.; Nizam, Z. M.; Aziman, M.; Hazreek, Z. A. M.; Norshuhaila, M. S.; Zaidi, E.
2017-08-01
The presence of heavy metal contamination in the soil environment calls for innovative remediation. Such contamination typically results from ex-mining sites, motor workshops, petrol stations, landfills, and industrial sites. Soil treatment is important because metal ions are non-biodegradable and may be harmful to the ecological system, the food chain, human health, and groundwater sources. Various techniques have been proposed to eliminate heavy metal contamination from soil, such as bioremediation, phytoremediation, electrokinetic remediation, and solidification and stabilization. The selected treatment needs to fulfill criteria such as cost-effectiveness, ease of application, a green approach, and high remediation efficiency. The electrokinetic remediation (EKR) technique offers these advantages in certain areas where other methods are impractical, while the electrical resistivity method (ERM) offers an alternative geophysical technique for soil subsurface profiling to map the migration of heavy metals under the influence of an electrical gradient. Consequently, this paper presents an overview of the use of EKR to treat contaminated soil, with the ERM used to verify its effectiveness in removing heavy metals.
Jasper, Cameron A.
Although aquifer recharge and recovery systems are a sustainable, decentralized, low cost, and low energy approach for the reclamation, treatment, and storage of post-treatment wastewater, they can suffer from poor infiltration rates and the development of a near-surface clogging layer within infiltration ponds. One such aquifer recharge and recovery system, the Aurora Water site in Colorado, U.S.A., functions at about 25% of its predicted capacity to recharge floodplain deposits by flooding infiltration ponds with post-treatment wastewater extracted from river bank aquifers along the South Platte River. The underwater self-potential method was developed to survey self-potential signals at the ground surface in a flooded infiltration pond for mapping infiltration pathways. A method for using heat as a groundwater tracer within the infiltration pond used an array of in situ high-resolution temperature sensing probes. Both relatively positive and negative underwater self-potential anomalies are consistent with observed recovery well pumping rates and specific discharge estimates from temperature data. Results from electrical resistivity tomography and electromagnetics surveys provide consistent electrical conductivity distributions associated with sediment textures. A lab method was developed for resistivity tests of near-surface sediment samples. Forward numerical modeling synthesizes the geophysical information to best match observed self-potential anomalies and provide permeability distributions, which is important for effective aquifer recharge and recovery system design, and optimization strategy development.
Directory of Open Access Journals (Sweden)
Ichiro IWASAKI
2010-06-01
Full Text Available Michael Porter’s concept of competitive advantages emphasizes the importance of regional cooperation among various actors in order to gain competitiveness on globalized markets. Foreign investors may play an important role in forming such cooperation networks. Their local suppliers tend to concentrate regionally and can form, together with local institutions of education, research, financial and other services, and development agencies, the nucleus of cooperative clusters. This paper deals with the relationship between supplier networks and clusters. Two main issues are discussed in more detail: the interest of multinational companies in entering regional clusters and the spillover effects that may stem from their participation. After discussing the theoretical background, the paper introduces a relatively new analytical method, “cluster mapping”: a method that can spot regional hot spots of specific economic activities with cluster-building potential. Experience with the method has been gathered in the US and in the European Union. After reviewing the existing empirical evidence, the authors introduce their own cluster mapping results, which they obtained by using a refined version of the original methodology.
MAPPING LOCAL CLIMATE ZONES WITH A VECTOR-BASED GIS METHOD
Directory of Open Access Journals (Sweden)
E. Lelovics
2013-03-01
Full Text Available In this study we determined Local Climate Zones in a South-Hungarian city, using vector-based and raster-based databases. We calculated seven of the ten originally proposed physical (geometric, surface cover, and radiative) properties for areas selected on the basis of the mobile temperature measurement campaigns carried out earlier in this city. As input data we applied a 3D building database (created earlier with photogrammetric methods), a 2D road database, a topographic map, aerial photographs, remotely sensed reflectance information from a RapidEye satellite image, and our local knowledge of the area. The values of the properties were calculated by GIS methods developed for this purpose. For the examined areas we derived, and applied for classification: sky view factor, mean building height, terrain roughness class, building surface fraction, pervious surface fraction, impervious surface fraction, and albedo. Six built and one land cover LCZ classes could be detected with this method in our study area. From each class one circular area was selected that is representative of that class. Their thermal reactions were examined using the mobile temperature measurement dataset. The comparison was made for cases when the weather was clear and calm and the surface was dry. We found that compact built types have a larger temperature surplus than open ones, and midrise types a larger surplus than lowrise ones. According to our primary results, these categories provide a useful basis for intra- and inter-urban comparisons.
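As an illustration of the surface-cover parameters listed above, the cover fractions can be computed directly from a classified raster. This is a minimal sketch, not the authors' GIS workflow; the class codes and the function name are assumptions for illustration.

```python
import numpy as np

def lcz_cover_fractions(landcover, area_mask):
    """Surface-cover fractions used in LCZ classification, computed from a
    classified raster. Assumed class codes: 1 = building, 2 = impervious
    ground, 3 = pervious ground."""
    cells = landcover[area_mask]
    n = cells.size
    return {
        "building_fraction": (cells == 1).sum() / n,
        "impervious_fraction": (cells == 2).sum() / n,
        "pervious_fraction": (cells == 3).sum() / n,
    }
```

When the raster contains only these three classes, the fractions sum to 1, which serves as a quick consistency check.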
Directory of Open Access Journals (Sweden)
Ran Xu
2015-06-01
Full Text Available With the increasing awareness of energy efficiency, many old buildings have to undergo massive facade energy retrofits. How to predict the visual impact of solar installations on the aesthetic and cultural value of these buildings has been a heated debate in Switzerland (and throughout the world). The usual evaluation method for describing the visual impact of BIPV is based on semantic and qualitative descriptors and depends strongly on personal preferences; the evaluation scale is therefore relative, flexible, and imprecise. This paper proposes a new method to accurately measure the visual impact that BIPV installations have on a historical building by using the saliency map method. By imitating the working principles of the human eye, the method measures how much the BIPV design proposals differ from the original building facade in terms of attracting human visual attention. The result is presented directly in a quantitative manner and can be used to compare the fitness of different BIPV design proposals. The measuring process is numeric, objective, and more precise.
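The abstract does not specify which saliency model is used, so as a hedged illustration, here is the classic spectral-residual saliency method (Hou and Zhang, 2007) applied to before/after facade images. The `visual_impact_score` aggregation (mean absolute saliency change) is an assumption for illustration, not the paper's metric.

```python
import numpy as np

def spectral_residual_saliency(img):
    """Saliency via the spectral residual method: subtract a smoothed
    log-amplitude spectrum from the original, keep the phase, and invert
    the FFT; high values mark regions that attract visual attention."""
    f = np.fft.fft2(img)
    log_amp = np.log(np.abs(f) + 1e-9)
    phase = np.angle(f)
    # 3x3 box smoothing of the log-amplitude spectrum (edge-padded)
    pad = np.pad(log_amp, 1, mode="edge")
    h, w = log_amp.shape
    smooth = sum(pad[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0
    residual = log_amp - smooth
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    return sal / sal.max()

def visual_impact_score(facade_before, facade_after):
    """Mean absolute change in normalized saliency: larger = the retrofit
    draws more visual attention away from the original facade."""
    return float(np.mean(np.abs(spectral_residual_saliency(facade_after)
                                - spectral_residual_saliency(facade_before))))
```

A score of exactly 0 means the two images produce identical saliency maps; competing BIPV proposals can be ranked by this score.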
A Method of Vector Map Multi-scale Representation Considering User Interest on Subdivision Grid
Directory of Open Access Journals (Sweden)
YU Tong
2016-12-01
Full Text Available Compared with traditional spatial data models and methods, global subdivision grids show a great advantage in the organization and expression of massive spatial data. In view of this, a method of vector map multi-scale representation considering user interest on a subdivision grid is proposed. First, a spatial interest field is built using a large number of POI data to describe the spatial distribution of user interest in geographic information. Second, spatial factors are classified and graded, and their representation scale ranges are determined. Finally, different levels of subdivision surfaces are divided based on GeoSOT subdivision theory, and the correspondence between subdivision level and scale is established. According to the user interest of each subdivision surface, spatial features can be expressed at different degrees of detail, realizing multi-scale representation of spatial data based on user interest. The experimental results show that this method not only satisfies users' general-to-detail and important-to-secondary spatial cognitive demands, but also achieves a better multi-scale representation effect.
New Method for the Calibration of Multi-Camera Mobile Mapping Systems
Kersting, A. P.; Habib, A.; Rau, J.
2012-07-01
Mobile Mapping Systems (MMS) allow for fast and cost-effective collection of geo-spatial information. Such systems integrate a set of imaging sensors and a position and orientation system (POS), which comprises GPS and INS units. System calibration is a crucial process to ensure the attainment of the expected accuracy of such systems. It involves the calibration of the individual sensors as well as the calibration of the mounting parameters relating the system components. The mounting parameters of a multi-camera MMS include two sets of relative orientation parameters (ROP): the lever arm offsets and boresight angles relating the cameras to the IMU body frame, and the ROP among the cameras (used in the absence of GPS/INS data). In this paper, a novel single-step calibration method with the ability to estimate these two sets of ROP is devised. Besides estimating the ROP among the cameras, the proposed method can use such parameters as prior information in the integrated sensor orientation (ISO) procedure. The implemented procedure consists of an ISO in which the GPS/INS-derived position and orientation and the system mounting parameters are directly incorporated in the collinearity equations. The concept of modified collinearity equations has been used by a few authors for single-camera systems. In this paper, a new modification of the collinearity equations for GPS/INS-assisted multi-camera systems is introduced. Experimental results using a real dataset demonstrate the feasibility of the proposed method.
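The mounting parameters enter the collinearity model through a simple pose composition: the camera position is the body position plus the rotated lever arm, and the camera attitude chains the body attitude with the boresight rotation. A minimal sketch of this composition (not the paper's full ISO adjustment; function names are illustrative):

```python
import numpy as np

def rot_z(a):
    """Rotation about the z-axis by angle a (radians); stands in for a
    full roll-pitch-yaw attitude matrix in this sketch."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def camera_pose(r_body_m, R_body_m, lever_arm_b, R_cam_b):
    """Compose the GPS/INS-derived body pose (mapping frame) with the
    mounting parameters: lever-arm offset (body frame) and boresight
    rotation (camera-to-body). Returns camera position and attitude."""
    r_cam_m = r_body_m + R_body_m @ lever_arm_b
    R_cam_m = R_body_m @ R_cam_b
    return r_cam_m, R_cam_m
```

In the ISO adjustment, these composed quantities replace the per-image exterior orientation unknowns in the collinearity equations, so the mounting parameters are estimated once for the whole block.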
Systematic methods for defining coarse-grained maps in large biomolecules.
Zhang, Zhiyong
2015-01-01
Large biomolecules are involved in many important biological processes. It would be difficult to use large-scale atomistic molecular dynamics (MD) simulations to study the functional motions of these systems because of the computational expense. Therefore various coarse-grained (CG) approaches have attracted rapidly growing interest, which enable simulations of large biomolecules over longer effective timescales than all-atom MD simulations. The first issue in CG modeling is to construct CG maps from atomic structures. In this chapter, we review the recent development of a novel and systematic method for constructing CG representations of arbitrarily complex biomolecules, in order to preserve large-scale and functionally relevant essential dynamics (ED) at the CG level. In this ED-CG scheme, the essential dynamics can be characterized by principal component analysis (PCA) on a structural ensemble, or elastic network model (ENM) of a single atomic structure. Validation and applications of the method cover various biological systems, such as multi-domain proteins, protein complexes, and even biomolecular machines. The results demonstrate that the ED-CG method may serve as a very useful tool for identifying functional dynamics of large biomolecules at the CG level.
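The PCA route to essential dynamics mentioned above can be sketched in a few lines; the contiguous-group, mean-position coarse-graining shown here is a naive stand-in for the actual ED-CG site optimization, and all function names are illustrative.

```python
import numpy as np

def essential_modes(ensemble, n_modes=2):
    """PCA of an (n_frames, n_atoms*3) coordinate ensemble: the top
    principal components are the essential-dynamics modes, and the
    singular values give their variances."""
    X = ensemble - ensemble.mean(axis=0)
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    var = S ** 2 / (len(ensemble) - 1)
    return Vt[:n_modes], var[:n_modes]

def cg_positions(frame, groups):
    """Map one atomic frame (n_atoms, 3) to CG sites as the mean position
    of each atom group (mass weighting omitted for simplicity)."""
    return np.array([frame[g].mean(axis=0) for g in groups])
```

The ED-CG scheme then chooses the groups so that the CG sites preserve as much of the top-mode fluctuation as possible; here the grouping is simply given.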
Influence of image reconstruction methods on statistical parametric mapping of brain PET images
International Nuclear Information System (INIS)
Yin Dayi; Chen Yingmao; Yao Shulin; Shao Mingzhe; Yin Ling; Tian Jiahe; Cui Hongyan
2007-01-01
Objective: Statistical parametric mapping (SPM) is widely recognized as a useful tool in brain function studies. The aim of this study was to investigate whether the image reconstruction algorithm of PET images could influence SPM of the brain. Methods: PET imaging of the whole brain was performed in six normal volunteers. Each volunteer had two scans, with true and sham acupuncture. The PET scans were reconstructed using ordered subsets expectation maximization (OSEM) and filtered back projection (FBP), each with 3 varied parameters. The images were realigned, normalized, and smoothed using the SPM program. The difference between true and sham acupuncture scans was tested using a matched-pair t test at every voxel. Results: (1) With SPM corrected multiple comparison, results were consistent across reconstruction methods; (2) with SPM uncorrected multiple comparison (P uncorrected < 0.001), the SPMs derived from images with different reconstruction methods differed. The largest difference, in number and position of the activated voxels, was noticed between the FBP and OSEM reconstruction algorithms. Conclusions: The method of PET image reconstruction could influence the results of SPM with uncorrected multiple comparison. Attention should be paid when conclusions are drawn using SPM uncorrected multiple comparison. (authors)
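The voxelwise matched-pair t test described in Methods can be sketched with SciPy; the function name and the fixed uncorrected threshold are illustrative assumptions, and no multiple-comparison correction is applied here.

```python
import numpy as np
from scipy import stats

def voxelwise_paired_t(scans_a, scans_b, p_thresh=0.001):
    """Matched-pair t test at every voxel between two conditions.
    scans_a, scans_b: (n_subjects, nx, ny, nz) arrays of realigned,
    normalized, smoothed images. Returns the t-map and an uncorrected
    significance mask (p < p_thresh, two-sided)."""
    t, p = stats.ttest_rel(scans_a, scans_b, axis=0)
    return t, p < p_thresh
```

Because the same test is repeated over thousands of voxels, the uncorrected mask overstates significance, which is exactly the caveat raised in the Conclusions.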
International Nuclear Information System (INIS)
Park, Ji Eun; Sung, Yu Sub; Han, Kyung Hwa
2017-01-01
To evaluate the frequency and adequacy of statistical analyses in a general radiology journal when reporting a reliability analysis for a diagnostic test. Sixty-three studies of diagnostic test accuracy (DTA) and 36 studies reporting reliability analyses published in the Korean Journal of Radiology between 2012 and 2016 were analyzed. Studies were judged using the methodological guidelines of the Radiological Society of North America-Quantitative Imaging Biomarkers Alliance (RSNA-QIBA), and COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) initiative. DTA studies were evaluated by nine editorial board members of the journal. Reliability studies were evaluated by study reviewers experienced with reliability analysis. Thirty-one (49.2%) of the 63 DTA studies did not include a reliability analysis when deemed necessary. Among the 36 reliability studies, proper statistical methods were used in all (5/5) studies dealing with dichotomous/nominal data, 46.7% (7/15) of studies dealing with ordinal data, and 95.2% (20/21) of studies dealing with continuous data. Statistical methods were described in sufficient detail regarding weighted kappa in 28.6% (2/7) of studies and regarding the model and assumptions of intraclass correlation coefficient in 35.3% (6/17) and 29.4% (5/17) of studies, respectively. Reliability parameters were used as if they were agreement parameters in 23.1% (3/13) of studies. Reproducibility and repeatability were used incorrectly in 20% (3/15) of studies. Greater attention to the importance of reporting reliability, thorough description of the related statistical methods, efforts not to neglect agreement parameters, and better use of relevant terminology are necessary.
Energy Technology Data Exchange (ETDEWEB)
Park, Ji Eun; Sung, Yu Sub [Dept. of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul (Korea, Republic of); Han, Kyung Hwa [Dept. of Radiology, Research Institute of Radiological Science, Yonsei University College of Medicine, Seoul (Korea, Republic of); and others
2017-11-15
To evaluate the frequency and adequacy of statistical analyses in a general radiology journal when reporting a reliability analysis for a diagnostic test. Sixty-three studies of diagnostic test accuracy (DTA) and 36 studies reporting reliability analyses published in the Korean Journal of Radiology between 2012 and 2016 were analyzed. Studies were judged using the methodological guidelines of the Radiological Society of North America-Quantitative Imaging Biomarkers Alliance (RSNA-QIBA), and COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) initiative. DTA studies were evaluated by nine editorial board members of the journal. Reliability studies were evaluated by study reviewers experienced with reliability analysis. Thirty-one (49.2%) of the 63 DTA studies did not include a reliability analysis when deemed necessary. Among the 36 reliability studies, proper statistical methods were used in all (5/5) studies dealing with dichotomous/nominal data, 46.7% (7/15) of studies dealing with ordinal data, and 95.2% (20/21) of studies dealing with continuous data. Statistical methods were described in sufficient detail regarding weighted kappa in 28.6% (2/7) of studies and regarding the model and assumptions of intraclass correlation coefficient in 35.3% (6/17) and 29.4% (5/17) of studies, respectively. Reliability parameters were used as if they were agreement parameters in 23.1% (3/13) of studies. Reproducibility and repeatability were used incorrectly in 20% (3/15) of studies. Greater attention to the importance of reporting reliability, thorough description of the related statistical methods, efforts not to neglect agreement parameters, and better use of relevant terminology are necessary.
Temel, Senar
2016-01-01
This study aims to analyse prospective chemistry teachers' cognitive structures related to the subject of oxidation and reduction through a flow map method. A purposeful sampling method was employed in this study, and 8 prospective chemistry teachers from a group of students who had taken general chemistry and analytical chemistry courses were…
Directory of Open Access Journals (Sweden)
Chen Cao
2016-09-01
Full Text Available This study focused on producing flash flood hazard susceptibility maps (FFHSM) using frequency ratio (FR) and statistical index (SI) models in the Xiqu Gully (XQG) of Beijing, China. First, a total of 85 flash flood hazard locations (n = 85) were surveyed in the field and plotted using geographic information system (GIS) software, and a flood hazard inventory map was built from them. Seventy percent (n = 60) of the flood hazard locations were randomly selected for building the models; the remaining 30% (n = 25) were used for validation. Because the XQG used to be a coal mining area, coal-mine caves, mining-induced subsidence, and many ground fissures exist in this catchment; this study therefore took the subsidence risk level into consideration for the FFHSM. The ten conditioning parameters were elevation, slope, curvature, land use, geology, soil texture, subsidence risk area, stream power index (SPI), topographic wetness index (TWI), and short-term heavy rain. This study also tested different classification schemes for the values of each conditioning parameter and checked their impact on the results. The accuracy of the FFHSM was validated using area under the curve (AUC) analysis. Classification accuracies were 86.61%, 83.35%, and 78.52% using FR-natural breaks, SI-natural breaks, and FR-manual classification schemes, respectively. Associated prediction accuracies were 83.69%, 81.22%, and 74.23%, respectively. It was found that FR modeling using a natural breaks classification method was most appropriate for generating the FFHSM for the Xiqu Gully.
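The frequency ratio model described above assigns each class of a conditioning parameter the ratio of its share of flood cells to its share of all cells; values above 1 indicate a positive association with flooding. A minimal sketch (array layout and helper name are assumptions):

```python
import numpy as np

def frequency_ratio(param_classes, flood_mask):
    """Frequency ratio per class of one conditioning parameter:
    FR = (% of flood cells falling in the class) / (% of all cells in
    the class). param_classes: integer class code per cell; flood_mask:
    boolean flag per cell from the inventory map."""
    fr = {}
    n_cells = param_classes.size
    n_floods = flood_mask.sum()
    for c in np.unique(param_classes):
        in_class = param_classes == c
        pct_floods = flood_mask[in_class].sum() / n_floods
        pct_cells = in_class.sum() / n_cells
        fr[c] = pct_floods / pct_cells
    return fr
```

The susceptibility index of a cell is then the sum of the FR values of its classes over all conditioning parameters, and the AUC is computed against the held-out validation locations.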
Induced quantum conformal gravity
International Nuclear Information System (INIS)
Novozhilov, Y.V.; Vassilevich, D.V.
1988-11-01
Quantum gravity is considered as induced by matter degrees of freedom and related to the symmetry breakdown in the low energy region of a non-Abelian gauge theory of fundamental fields. An effective action for quantum conformal gravity is derived where both the gravitational constant and conformal kinetic term are positive. Relation with induced classical gravity is established. (author). 15 refs
Thickenings and conformal gravity
Lebrun, Claude
1991-07-01
A twistor correspondence is given for complex conformal space-times with vanishing Bach and Eastwood-Dighton tensors; when the Weyl curvature is algebraically general, these equations are precisely the conformal version of Einstein's vacuum equations with cosmological constant. This gives a fully curved version of the linearized correspondence of Baston and Mason [B-M].
Thickenings and conformal gravity
Energy Technology Data Exchange (ETDEWEB)
LeBrun, C. (State Univ. of New York, Stony Brook, NY (USA). Dept. of Mathematics)
1991-07-01
A twistor correspondence is given for complex conformal space-times with vanishing Bach and Eastwood-Dighton tensors; when the Weyl curvature is algebraically general, these equations are precisely the conformal version of Einstein's vacuum equations with cosmological constant. This gives a fully curved version of the linearized correspondence of Baston and Mason (B-M). (orig.).
Thickenings and conformal gravity
International Nuclear Information System (INIS)
LeBrun, C.
1991-01-01
A twistor correspondence is given for complex conformal space-times with vanishing Bach and Eastwood-Dighton tensors; when the Weyl curvature is algebraically general, these equations are precisely the conformal version of Einstein's vacuum equations with cosmological constant. This gives a fully curved version of the linearized correspondence of Baston and Mason [B-M]. (orig.)
Conformal transformations in superspace
International Nuclear Information System (INIS)
Dao Vong Duc
1977-01-01
The spinor extension of the conformal algebra is investigated. The transformation law of superfields under the conformal coordinate inversion R defined in the superspace is derived. Using R-technique, the superconformally covariant two-point and three-point correlation functions are found
Conformational stability of calreticulin
DEFF Research Database (Denmark)
Jørgensen, C.S.; Trandum, C.; Larsen, N.
2005-01-01
The conformational stability of calreticulin was investigated. Apparent unfolding temperatures (Tm) increased from 31 degrees C at pH 5 to 51 degrees C at pH 9, but electrophoretic analysis revealed that calreticulin oligomerized instead of unfolding. Structural analyses showed that the single C-terminal alpha-helix was of major importance to the conformational stability of calreticulin.
van der Hilst, R. D.; de Hoop, M. V.; Shim, S. H.; Shang, X.; Wang, P.; Cao, Q.
2012-04-01
Over the past three decades, tremendous progress has been made with the mapping of mantle heterogeneity and with the understanding of these structures in terms of, for instance, the evolution of Earth's crust, continental lithosphere, and thermo-chemical mantle convection. Converted wave imaging (e.g., receiver functions) and reflection seismology (e.g., SS stacks) have helped constrain interfaces in crust and mantle; surface wave dispersion (from earthquake or ambient noise signals) characterizes wavespeed variations in continental and oceanic lithosphere, and body wave and multi-mode surface wave data have been used to map trajectories of mantle convection and delineate mantle regions of anomalous elastic properties. Collectively, these studies have revealed substantial ocean-continent differences and suggest that convective flow is strongly influenced by the upper mantle transition zone but permitted to cross it. Many questions have remained unanswered, however, and further advances in understanding require more accurate depictions of Earth's heterogeneity at a wider range of length scales. To meet this challenge we need new observations—more, better, and different types of data—and methods that help us extract and interpret more information from the rapidly growing volumes of broadband data. The huge data volumes and the desire to extract more signal from them mean that we have to go beyond 'business as usual' (that is, simplified theory, manual inspection of seismograms, …). Indeed, it inspires the development of automated full wave methods, both for tomographic delineation of smooth wavespeed variations and the imaging (for instance through inverse scattering) of medium contrasts. Adjoint tomography and reverse time migration, which are closely related wave equation methods, have begun to revolutionize seismic inversion of global and regional waveform data. In this presentation we will illustrate this development - and its promise - drawing from our work
A method to acquire CT organ dose map using OSL dosimeters and ATOM anthropomorphic phantoms
Energy Technology Data Exchange (ETDEWEB)
Zhang, Da; Li, Xinhua; Liu, Bob [Division of Diagnostic Imaging Physics and Webster Center for Advanced Research and Education in Radiation, Department of Radiology, Massachusetts General Hospital, Boston, Massachusetts 02114 (United States); Gao, Yiming; Xu, X. George [Nuclear Engineering Program, Rensselaer Polytechnic Institute, Troy, New York 12180 (United States)
2013-08-15
Purpose: To present the design and procedure of an experimental method for acquiring densely sampled organ dose map for CT applications, based on optically stimulated luminescence (OSL) dosimeters “nanoDots” and standard ATOM anthropomorphic phantoms; and to provide the results of applying the method—a dose data set with good statistics for the comparison with Monte Carlo simulation result in the future.Methods: A standard ATOM phantom has densely located holes (in 3 × 3 cm or 1.5 × 1.5 cm grids), which are too small (5 mm in diameter) to host many types of dosimeters, including the nanoDots. The authors modified the conventional way in which nanoDots are used, by removing the OSL disks from the holders before inserting them inside a standard ATOM phantom for dose measurements. The authors solved three technical difficulties introduced by this modification: (1) energy dependent dose calibration for raw OSL readings; (2) influence of the brief background exposure of OSL disks to dimmed room light; (3) correct pairing between the dose readings and measurement locations. The authors acquired 100 dose measurements at various positions in the phantom, which was scanned using a clinical chest protocol with both angular and z-axis tube current modulations.Results: Dose calibration was performed according to the beam qualities inside the phantom as determined from an established Monte Carlo model of the scanner. The influence of the brief exposure to dimmed room light was evaluated and deemed negligible. Pairing between the OSL readings and measurement locations was ensured by the experimental design. The organ doses measured for a routine adult chest scan protocol ranged from 9.4 to 18.8 mGy, depending on the composition, location, and surrounding anatomy of the organs. The dose distribution across different slices of the phantom strongly depended on the z-axis mA modulation. In the same slice, doses to the soft tissues other than the spinal cord demonstrated
A method to acquire CT organ dose map using OSL dosimeters and ATOM anthropomorphic phantoms
International Nuclear Information System (INIS)
Zhang, Da; Li, Xinhua; Liu, Bob; Gao, Yiming; Xu, X. George
2013-01-01
Purpose: To present the design and procedure of an experimental method for acquiring densely sampled organ dose map for CT applications, based on optically stimulated luminescence (OSL) dosimeters “nanoDots” and standard ATOM anthropomorphic phantoms; and to provide the results of applying the method—a dose data set with good statistics for the comparison with Monte Carlo simulation result in the future.Methods: A standard ATOM phantom has densely located holes (in 3 × 3 cm or 1.5 × 1.5 cm grids), which are too small (5 mm in diameter) to host many types of dosimeters, including the nanoDots. The authors modified the conventional way in which nanoDots are used, by removing the OSL disks from the holders before inserting them inside a standard ATOM phantom for dose measurements. The authors solved three technical difficulties introduced by this modification: (1) energy dependent dose calibration for raw OSL readings; (2) influence of the brief background exposure of OSL disks to dimmed room light; (3) correct pairing between the dose readings and measurement locations. The authors acquired 100 dose measurements at various positions in the phantom, which was scanned using a clinical chest protocol with both angular and z-axis tube current modulations.Results: Dose calibration was performed according to the beam qualities inside the phantom as determined from an established Monte Carlo model of the scanner. The influence of the brief exposure to dimmed room light was evaluated and deemed negligible. Pairing between the OSL readings and measurement locations was ensured by the experimental design. The organ doses measured for a routine adult chest scan protocol ranged from 9.4 to 18.8 mGy, depending on the composition, location, and surrounding anatomy of the organs. The dose distribution across different slices of the phantom strongly depended on the z-axis mA modulation. In the same slice, doses to the soft tissues other than the spinal cord demonstrated
International Nuclear Information System (INIS)
Feuvret, Loic; Noel, Georges; Mazeron, Jean-Jacques; Bey, Pierre
2006-01-01
We present a critical analysis of the conformity indices described in the literature and an evaluation of their field of application. Three-dimensional conformal radiotherapy, with or without intensity modulation, is based on medical imaging techniques, three-dimensional dosimetry software, compression accessories, and verification procedures. It consists of delineating target volumes and critical healthy tissues to select the best combination of beams. This approach allows better adaptation of the isodose to the tumor volume, while limiting irradiation of healthy tissues. Tools must be developed to evaluate the quality of proposed treatment plans. Dosimetry software provides the dose distribution in each CT section and dose-volume histograms without really indicating the degree of conformity. The conformity index is a complementary tool that attributes a score to a treatment plan or that can compare several treatment plans for the same patient. The future of the conformity index in everyday practice therefore remains unclear.
Seong, Han Yu; Cho, Ji Young; Choi, Byeong Sam; Min, Joong Kee; Kim, Yong Hwan; Roh, Sung Woo; Kim, Jeong Hoon; Jeon, Sang Ryong
2014-01-01
Intracortical microstimulation (ICMS) is a technique that was developed to derive the movement representation of the motor cortex. Although rats are now commonly used in motor mapping studies, the precise characteristics of the rat motor map, including symmetry and consistency across animals, and the possibility of repeated stimulation have not yet been established. We performed bilateral hindlimb mapping of the motor cortex in six Sprague-Dawley rats using ICMS. ICMS was applied to the left and the righ...
Conformal FDTD modeling wake fields
Energy Technology Data Exchange (ETDEWEB)
Jurgens, T.; Harfoush, F.
1991-05-01
Many computer codes have been written to model wake fields. Here we describe the use of the Conformal Finite Difference Time Domain (CFDTD) method to model the wake fields generated by a rigid beam traveling through various accelerating structures. The non-cylindrical symmetry of some of the problems considered here requires the use of a three dimensional code. In traditional FDTD codes, curved surfaces are approximated by rectangular steps. The errors introduced in wake field calculations by such an approximation can be reduced by increasing the mesh resolution, thereby increasing the cost of computing. Another approach, validated here, deforms Ampere and Faraday contours near a media interface so as to conform to the interface. These improvements of the FDTD method result in better accuracy of the fields at asymptotically no computational cost. This method is also capable of modeling thin wires as found in beam profile monitors, and slots and cracks as found in resistive wall motions. 4 refs., 5 figs.
Czech Academy of Sciences Publication Activity Database
Setnička, V.; Hlaváček, Jan; Urbanová, J.
2009-01-01
Roč. 15, č. 8 (2009), s. 533-539 ISSN 1075-2617 Institutional research plan: CEZ:AV0Z40550506 Keywords : cationic peptides * conformation * thermal stability * vibrational circular dichroism Subject RIV: CC - Organic Chemistry Impact factor: 1.807, year: 2009
Chojniak, Rubens; Carneiro, Dominique Piacenti; Moterani, Gustavo Simonetto Peres; Duarte, Ivone da Silva; Bitencourt, Almir Galvão Vieira; Muglia, Valdair Francisco; D'Ippolito, Giuseppe
2017-01-01
To map the different methods for diagnostic imaging instruction at medical schools in Brazil. In this cross-sectional study, a questionnaire was sent to each of the coordinators of 178 Brazilian medical schools. The following characteristics were assessed: teaching model; total course hours; infrastructure; numbers of students and professionals involved; themes addressed; diagnostic imaging modalities covered; and education policies related to diagnostic imaging. Of the 178 questionnaires sent, 45 (25.3%) were completed and returned. Of those 45 responses, 17 (37.8%) were from public medical schools, whereas 28 (62.2%) were from private medical schools. Among the 45 medical schools evaluated, the method of diagnostic imaging instruction was modular at 21 (46.7%), classic (independent discipline) at 13 (28.9%), hybrid (classical and modular) at 9 (20.0%), and none of the preceding at 3 (6.7%). Diagnostic imaging is part of the formal curriculum at 36 (80.0%) of the schools, an elective course at 3 (6.7%), and included within another modality at 6 (13.3%). Professors involved in diagnostic imaging teaching are radiologists at 43 (95.5%) of the institutions. The survey showed that medical courses in Brazil tend to offer diagnostic imaging instruction in courses that include other content and at different time points during the course. Radiologists are extensively involved in undergraduate medical education, regardless of the teaching methodology employed at the institution.
Extinction time of a stochastic predator-prey model by the generalized cell mapping method
Han, Qun; Xu, Wei; Hu, Bing; Huang, Dongmei; Sun, Jian-Qiao
2018-03-01
The stochastic response and extinction time of a predator-prey model with Gaussian white noise excitations are studied by the generalized cell mapping (GCM) method based on the short-time Gaussian approximation (STGA). The methods for stochastic response probability density functions (PDFs) and extinction time statistics are developed. The Taylor expansion is used to deal with non-polynomial nonlinear terms of the model for deriving the moment equations with Gaussian closure, which are needed for the STGA in order to compute the one-step transition probabilities. The work is validated with direct Monte Carlo simulations. We have presented the transient responses showing the evolution from a Gaussian initial distribution to a non-Gaussian steady-state one. The effects of the model parameter and noise intensities on the steady-state PDFs are discussed. It is also found that the effects of noise intensities on the extinction time statistics are opposite to the effects on the limit probability distributions of the survival species.
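The GCM/STGA idea can be illustrated on a simple one-dimensional stochastic system: the state space is partitioned into cells, and each cell's one-step transition probabilities come from a short-time Gaussian whose mean and variance follow from an Euler step of the drift and diffusion. This sketch uses a linear drift dx = -x dt + sigma dW rather than the paper's predator-prey model; all names are illustrative.

```python
import numpy as np
from math import erf, sqrt

def gcm_transition_matrix(f, sigma, edges, dt):
    """One-step GCM transition matrix under the short-time Gaussian
    approximation: starting at a cell center x, the state after dt is
    Gaussian with mean x + f(x)*dt and variance sigma**2 * dt. Mass in
    each destination cell is a difference of Gaussian CDFs at its edges."""
    centers = 0.5 * (edges[:-1] + edges[1:])
    sd = sigma * np.sqrt(dt)
    P = np.zeros((len(centers), len(centers)))
    for i, x in enumerate(centers):
        m = x + f(x) * dt
        cdf = np.array([0.5 * (1 + erf((e - m) / (sd * sqrt(2)))) for e in edges])
        P[i] = np.diff(cdf)
        P[i] /= P[i].sum()  # renormalize mass clipped outside the domain
    return P

# evolve an initially uniform PDF toward the stationary distribution
edges = np.linspace(-4, 4, 81)
P = gcm_transition_matrix(lambda x: -x, 1.0, edges, 0.05)
p = np.full(80, 1 / 80)
for _ in range(2000):
    p = p @ P
```

For this linear system the stationary density is Gaussian with variance close to sigma**2 / 2 = 0.5, so the cell-mapped PDF can be checked against the analytic answer; the same machinery, iterated from a localized initial cell, yields the transient PDFs and first-passage (extinction-time) statistics studied in the paper.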
Spectral map-analysis: a method to analyze gene expression data
Bijnens, Luc J.M.; Lewi, Paul J.; Göhlmann, Hinrich W.; Molenberghs, Geert; Wouters, Luc
2004-01-01
bioinformatics; biplot; correspondence factor analysis; data mining; data visualization; gene expression data; microarray data; multivariate exploratory data analysis; principal component analysis; Spectral map analysis
A comparison of contour maps derived from independent methods of measuring lunar magnetic fields
Lichtenstein, B. R.; Coleman, P. J., Jr.; Russell, C. T.
1978-01-01
Computer-generated contour maps of strong lunar remanent magnetic fields are presented and discussed. The maps, obtained by previously described techniques (Eliason and Soderblom, 1977), are derived from a variety of direct and indirect measurements from Apollo 15 and 16 and Explorer 35 magnetometer and electron reflection data. A common display format is used to facilitate comparison of the maps over regions of overlapping coverage. Most large-scale features of both weak and strong magnetic field regions are found to correlate fairly well across all the maps considered.
Stochastic geometry of critical curves, Schramm-Loewner evolutions and conformal field theory
International Nuclear Information System (INIS)
Gruzberg, Ilya A
2006-01-01
Conformally invariant curves that appear at critical points in two-dimensional statistical mechanics systems, and their fractal geometry, have received a lot of attention in recent years. On the one hand, Schramm (2000 Israel J. Math. 118 221 (Preprint math.PR/9904022)) has invented a new rigorous as well as practical calculational approach to critical curves, based on a beautiful unification of conformal maps and stochastic processes, by now known as Schramm-Loewner evolution (SLE). On the other hand, Duplantier (2000 Phys. Rev. Lett. 84 1363; Fractal Geometry and Applications: A Jubilee of Benoît Mandelbrot: Part 2 (Proc. Symp. Pure Math. vol 72) (Providence, RI: American Mathematical Society) p 365 (Preprint math-ph/0303034)) has applied boundary quantum gravity methods to calculate exact multifractal exponents associated with critical curves. In the first part of this paper, I provide a pedagogical introduction to SLE. I present mathematical facts from the theory of conformal maps and stochastic processes related to SLE. Then I review basic properties of SLE and provide practical derivations of various interesting quantities related to critical curves, including fractal dimensions and crossing probabilities. The second part of the paper is devoted to a way of describing critical curves using boundary conformal field theory (CFT) in the so-called Coulomb gas formalism. This description provides an alternative (to quantum gravity) way of obtaining the multifractal spectrum of critical curves using only traditional methods of CFT based on free bosonic fields.
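The unification of conformal maps and stochastic processes can be made concrete numerically: a chordal SLE trace is obtained by discretizing the Loewner equation with a piecewise-constant driving function W_t = sqrt(kappa)·B_t and composing inverses of the incremental slit maps. The following is a minimal sketch; the step size, kappa value, and branch handling are implementation choices assumed here, not taken from the text.

```python
import numpy as np

def sle_trace(kappa=2.0, n_steps=300, dt=1e-3, seed=1):
    """Approximate a chordal SLE(kappa) trace in the upper half-plane.
    With constant driving W on a step, the Loewner map is
    g(z) = W + sqrt((z - W)^2 + 4*dt); the trace tip at step n is the
    point W_n pulled back through the inverses of the earlier maps."""
    rng = np.random.default_rng(seed)
    # Brownian driving function, piecewise constant over each step
    W = np.concatenate([[0.0],
                        np.cumsum(np.sqrt(kappa * dt) * rng.normal(size=n_steps))])
    trace = np.empty(n_steps, dtype=complex)
    for n in range(1, n_steps + 1):
        z = W[n] + 2j * np.sqrt(dt)          # tip under the n-th inverse map
        for k in range(n - 1, 0, -1):        # pull back through earlier maps
            s = np.sqrt((z - W[k]) ** 2 - 4 * dt)
            if s.imag < 0:                   # pick the root in the upper half-plane
                s = -s
            z = W[k] + s
        trace[n - 1] = z
    return trace

trace = sle_trace()
```

The cost is O(n²) in the number of steps, which is acceptable for short illustrative traces; production codes use faster map-composition schemes.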
First 3D thermal mapping of an active volcano using an advanced photogrammetric method
Antoine, Raphael; Baratoux, David; Lacogne, Julien; Lopez, Teodolina; Fauchard, Cyrille; Bretar, Frédéric; Arab-Sedze, Mélanie; Staudacher, Thomas; Jacquemoud, Stéphane; Pierrot-Deseilligny, Marc
2014-05-01
Thermal infrared data acquired in the 7-14 micron spectral range are widely used across the Earth sciences. Such studies are based exclusively on the analysis of 2D information. In that case, quantitative analysis of the surface energy budget remains limited, as it is difficult, without a precise DEM, to estimate the radiative contribution of the topography, the thermal influence of winds on the surface, or potential imprints of subsurface flows on the soil. Draping a thermal image over a recent DEM is a common way to obtain a 3D thermal map of a surface. However, this method has several disadvantages: (i) errors in the orientation of the thermal images can be significant, owing to the lack of tie points between the images and the DEM; (ii) the use of a recent DEM implies the use of another remote sensing technique to quantify the topography; and (iii) characterizing the evolution of a surface requires simultaneous acquisition of thermal and topographic data, which may be expensive in most cases. Stereophotogrammetry reconstructs the relief of an object from photographs taken from different positions. Recently, substantial progress has been made in generating high-spatial-resolution topographic surfaces by stereophotogrammetry. However, shadows, homogeneous textures, and/or weak contrasts in the visible spectrum (e.g., flowing lavas, uniform lithologies) may preclude the use of such methods because of the difficulty of finding tie points in each image. Such situations are more favorable in the thermal infrared spectrum, as any variation in the thermal properties or geometric orientation of a surface may induce temperature contrasts detectable with a thermal camera. Thermal cameras, usually built around a focal plane array sensor and an optical device, have geometric characteristics similar to those of digital cameras. Thus, it may be possible
International Nuclear Information System (INIS)
Fijalkowski, M.; Bialas, B.; Maciejewski, B.; Bystrzycka, J.; Slosarek, K.
2005-01-01
Recently, a system for conformal real-time high-dose-rate brachytherapy has been developed, dedicated primarily to the treatment of prostate cancer. The aim of this paper is to present the 3D-conformal real-time brachytherapy technique introduced into clinical practice at the Institute of Oncology in Gliwice. The equipment and technique of 3D-conformal real-time brachytherapy (3D-CBRT) are presented in detail and compared with conventional high-dose-rate brachytherapy. Step-by-step treatment planning procedures are described, including our own modifications. 3D-CBRT offers the following advantages: (1) on-line continuous visualization of the prostate and acquisition of a series of US images during the entire procedure of planning and treatment; (2) high precision in defining and contouring the target volume and the healthy organs at risk (urethra, rectum, bladder) based on 3D transrectal continuous ultrasound images; (3) interactive on-line dose optimization with real-time corrections of the dose-volume histograms (DVHs) until an optimal dose distribution is achieved; (4) the possibility of overcoming internal prostate motion and set-up inaccuracies by stable positioning of the prostate with needles fixed to the template; (5) significant shortening of the overall treatment time; (6) cost reduction, as the treatment can be provided as an outpatient procedure. 3D real-time CBRT can be regarded as an ideal conformal boost technique, integrated or interdigitated with pelvic conformal external beam radiotherapy, or as monotherapy for prostate cancer. (author)
Directory of Open Access Journals (Sweden)
Jörg Rekittke
2011-10-01
Full Text Available The research project “Grassroots GIS” focuses on the development of low-cost mapping and publishing methods for slums and slum-upgrading projects in Manila. In this project smartphones, collaborative mapping and 3D visualization applications are systematically employed to support landscape architectural analysis and design work in the context of urban poverty and urban informal settlements. In this paper we focus on the description of the developed methods and present preliminary results of this work-in-progress.
Multiresolution Computation of Conformal Structures of Surfaces
Directory of Open Access Journals (Sweden)
Xianfeng Gu
2003-10-01
Full Text Available An efficient multiresolution method to compute the global conformal structures of triangle meshes of nonzero genus is introduced. The homology and cohomology groups of the mesh are computed explicitly; then a basis of harmonic one-forms and a basis of holomorphic one-forms are constructed. A progressive mesh is generated to represent the original surface at different resolutions. The conformal structure is computed first for the coarse level and then used as an estimate for the finer level, where it is refined to the finer level's conformal structure by the conjugate gradient method.
Methods for geographical mapping of agricultural activities and the related environmental impact
DEFF Research Database (Denmark)
Dalgaard, Tommy; Jensen, Jørgen Dejgaard
2011-01-01
This study presents a three-step methodology to generate, map and simulate indicators of agricultural activity for use in landscape-scale analyses. Step one is the farm data set up combining digital agricultural registers and national statistics. Step two is the geographical mapping based discrete...
Topographic mapping on large-scale tidal flats with an iterative approach on the waterline method
Kang, Yanyan; Ding, Xianrong; Xu, Fan; Zhang, Changkuan; Ge, Xiaoping
2017-05-01
Tidal flats, which are both a natural ecosystem and a type of landscape, are of significant importance to ecosystem function and land resource potential. Morphologic monitoring of tidal flats has become increasingly important with respect to achieving sustainable development targets. Remote sensing is an established technique for the measurement of topography over tidal flats; of the available methods, the waterline method is particularly effective for constructing a digital elevation model (DEM) of intertidal areas. However, application of the waterline method is more limited in large-scale, shifting tidal-flat areas, where the tides are not synchronized and the waterline is not a quasi-contour line. For this study, a topographic map of the intertidal regions within the Radial Sand Ridges (RSR) along the Jiangsu Coast, China, was generated using an iterative approach on the waterline method. A series of 21 multi-temporal satellite images (18 HJ-1A/B CCD and three Landsat TM/OLI) of the RSR area, collected at different water levels within a five-month period (31 December 2013-28 May 2014), was used to extract waterlines by feature extraction techniques followed by manual modification. These 'remotely sensed waterlines' were combined with the corresponding water levels from the 'model waterlines' simulated by a hydrodynamic model with an initial generalized DEM of the exposed tidal flats. Based on the 21 heighted 'remotely sensed waterlines', a DEM was constructed using the ANUDEM interpolation method. This new DEM was then re-entered into the hydrodynamic model, and a new round of water-level assignment to the waterlines was performed. A third and final output DEM was generated, covering approximately 1900 km² of tidal flats in the RSR. The water-level simulation accuracy of the hydrodynamic model was within 0.15 m based on five real-time tide stations, and the height accuracy (root mean square error) of the final DEM was 0.182 m.
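The alternation at the heart of the iterative waterline approach — assign water levels to waterlines with a hydrodynamic model, interpolate a DEM, feed the DEM back into the model — can be sketched as below. The inverse-distance interpolation stands in for ANUDEM, the level-assignment callback stands in for the hydrodynamic model, and the toy geometry is invented; all of these are illustrative assumptions, not the study's implementation.

```python
import numpy as np

def idw_interp(pts, z, q, power=2.0):
    """Inverse-distance-weighted interpolation of scattered heights
    (a simple stand-in for the ANUDEM interpolation)."""
    d2 = ((q[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
    w = 1.0 / np.maximum(d2, 1e-12) ** (power / 2.0)
    return (w * z[None, :]).sum(1) / w.sum(1)

def iterate_dem(waterlines, assign_levels, query, n_iter=3):
    """Alternate water-level assignment (hydrodynamic-model stand-in)
    and DEM interpolation, as in the iterative waterline method."""
    dem = None
    for _ in range(n_iter):
        levels = assign_levels(dem, waterlines)   # one water level per waterline
        pts = np.vstack(waterlines)
        z = np.concatenate([np.full(len(w), h)
                            for w, h in zip(waterlines, levels)])
        dem = idw_interp(pts, z, query)           # re-interpolate the DEM
    return dem

# Toy example: two synthetic waterlines at known tide levels
w1 = np.column_stack([np.linspace(0, 1, 21), np.full(21, 0.2)])  # level 1.0 m
w2 = np.column_stack([np.linspace(0, 1, 21), np.full(21, 0.8)])  # level 2.0 m
assign = lambda dem, wls: [1.0, 2.0]   # a real model would use the current DEM
dem = iterate_dem([w1, w2], assign, np.array([[0.5, 0.5]]))
```

By symmetry the interpolated height midway between the two waterlines is the mean of their levels, 1.5 m.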
Directory of Open Access Journals (Sweden)
Jurado-Piña, R.
2014-12-01
Full Text Available When designing a tension structure, the shape is not known at the beginning of the process. Form-finding methods allow the designer to obtain an initial shape from given boundary conditions. Several form-finding methods for tension structures are available in the technical literature; all of them possess certain limitations and drawbacks, and no single method is optimal for all problems. The engineer may select the combination of methods best suited to the design needs. This paper proposes a combined method for achieving satisfactory equilibrium configurations for fabric tension structures. The force density method (FDM), implemented with topological mapping (TM), is used as a search engine for the preliminary design, and a procedure employing nonlinear structural analysis is proposed for final refinement of the initial equilibrium configuration, thus allowing the same analysis tool to be used both for refinement of the solution and for analysis under loading.
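The force density method used as the search engine reduces form finding to a linear system: with the branch-node matrix C split into free and fixed columns Cf and Cb, and a diagonal matrix Q of force densities, the free node positions solve (Cf^T Q Cf) x = p - (Cf^T Q Cb) x_fixed for each coordinate. A minimal sketch follows; the net topology and force density values are illustrative assumptions, not from the paper.

```python
import numpy as np

def fdm(xyz_fixed, free_count, edges, q, loads=None):
    """Force density method: solve (Cf^T Q Cf) x = p - (Cf^T Q Cb) x_fixed.
    Node indices 0..free_count-1 are free; the rest are fixed."""
    n = free_count + len(xyz_fixed)
    C = np.zeros((len(edges), n))            # branch-node incidence matrix
    for k, (i, j) in enumerate(edges):
        C[k, i], C[k, j] = 1.0, -1.0
    Q = np.diag(q)                           # force densities on the diagonal
    Cf, Cb = C[:, :free_count], C[:, free_count:]
    D = Cf.T @ Q @ Cf
    p = np.zeros((free_count, 3)) if loads is None else np.asarray(loads, float)
    rhs = p - Cf.T @ Q @ Cb @ np.asarray(xyz_fixed, float)
    return np.linalg.solve(D, rhs)

# One free node tied to four fixed corners with equal force densities:
corners = [(0, 0, 0), (1, 0, 0), (1, 1, 1), (0, 1, 1)]
edges = [(0, 1), (0, 2), (0, 3), (0, 4)]     # free node 0 to fixed nodes 1..4
free = fdm(corners, 1, edges, q=np.ones(4))
# equilibrium position is the centroid of the corners: [0.5, 0.5, 0.5]
```

Because the system is linear for fixed force densities, the method gives an equilibrium shape in one solve; the nonlinear refinement described above then improves this initial configuration.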
New methods to enhance cerebral flow maps made by the stable xenon/CT technique
International Nuclear Information System (INIS)
Wist, A.O.; Fatouros, P.P.; Kishore, P.R.S.; Weiss, J.; Cothran, S.J.
1987-01-01
The authors developed several new techniques to extract the important information from the high-resolution flow maps generated by their improved stable Xe/CT technique. First, they adapted a new morphologic filtering technique to separate white, white/gray, and gray matter. Second, they generated iso-flow lines using the same filtering technique for easier reading of the values in the flow map. Third, by combining the information in both maps, they constructed a new map showing the areas of high, normal, and low blood flow for the whole brain. When combined with anatomic information, this map can indicate probable pathologic areas. Fourth, they reduced the flow calculation time by almost a factor of 10 by developing a new, faster algorithm.
The importance of magnetic methods for soil mapping and process modelling. Case study in Ukraine
Menshov, Oleksandr; Pereira, Paulo; Kruglov, Oleksandr; Sukhorada, Anatoliy
2016-04-01
The correct planning of agricultural areas is fundamental for a sustainable future in Ukraine. After the recent political problems in Ukraine, new challenges have emerged regarding sustainability. At the same time, soil mapping and modelling are developing intensively all over the world (Pereira et al., 2015; Brevik et al., in press). Magnetic susceptibility (MS) methods are low-cost and accurate for developing maps of agricultural areas, which are fundamental for Ukraine's economy. They allow a great amount of soil data to be collected, useful for a better understanding of the spatial distribution of soil properties. Recently, this method has been applied in other works in Ukraine and elsewhere (Jordanova et al., 2011; Menshov et al., 2015). The objective of this work is to study the spatial distribution of MS and humus content in the topsoil (0-5 cm) in two areas, the first in the Poltava region and the second in the Kharkiv region. The results showed that MS depends on soil type, topography, and anthropogenic influence. For the interpretation of the MS spatial distribution in topsoil, we consider the frequency of and time since the last tillage, tilth depth, fertilizing, and the puddling associated with the vehicle model. On average, the topsoil MS in these two cases is about 30-70×10⁻⁸ m³/kg. In the Poltava region, undisturbed soil has average MS values of 40-50×10⁻⁸ m³/kg; in the Kharkiv region, 50-60×10⁻⁸ m³/kg. Tilled soil has an average MS of 60×10⁻⁸ m³/kg in the Poltava region and 70×10⁻⁸ m³/kg in the Kharkiv region. MS is higher in non-tilled soils than in the tilled ones. The correlation between MS and soil humus content is very high (up to 0.90) in both cases. Brevik, E., Baumgarten, A., Calzolari, C., Miller, B., Pereira, P., Kabala, C., Jordán, A. Soil mapping, classification, and modelling: history and future directions. Geoderma (in press), doi:10.1016/j.geoderma.2015.05.017 Jordanova D., Jordanova N., Atanasova A., Tsacheva T., Petrov P
Selective stimulation of conformational conversions in free molecules
International Nuclear Information System (INIS)
Ismailzade, G.I.; Movsumov, I.Z.; Menzeleev, M.R.; Kazymova, S.B.
2014-01-01
Application of double-resonance (RF-MW, IR-MW, MW-MW) methods to enhance studies of unstable isomeric structures was discussed. The use of infrared pump radiation to excite conformational energy levels in order to stimulate selectively conformational conversions and to correct spectral line intensities of separate conformations was substantiated. (authors)
Energy Technology Data Exchange (ETDEWEB)
Eley, John G.; Hogstrom, Kenneth R.; Matthews, Kenneth L.; Parker, Brent C.; Price, Michael J. [Department of Physics and Astronomy, Louisiana State University and Agricultural and Mechanical College, 202 Nicholson Hall, Tower Drive, Baton Rouge, Louisiana 70803-4001 (United States); Mary Bird Perkins Cancer Center, 4950 Essen Lane, Baton Rouge, Louisiana 70809-3482 (United States)
2011-12-15
Purpose: The purpose of this work was to investigate the potential of discrete Gaussian edge feathering of the higher energy electron fields for improving abutment dosimetry in the planning volume when using an electron multileaf collimator (eMLC) to deliver segmented-field electron conformal therapy (ECT). Methods: A discrete (five-step) Gaussian edge spread function was used to match dose penumbras of differing beam energies (6-20 MeV) at a specified depth in a water phantom. Software was developed to define the leaf eMLC positions of an eMLC that most closely fit each electron field shape. The effect of 1D edge feathering of the higher energy field on dose homogeneity was computed and measured for segmented-field ECT treatment plans for three 2D PTVs in a water phantom, i.e., depth from the water surface to the distal PTV surface varied as a function of the x-axis (parallel to leaf motion) and remained constant along the y-axis (perpendicular to leaf motion). Additionally, the effect of 2D edge feathering was computed and measured for one radially symmetric, 3D PTV in a water phantom, i.e., depth from the water surface to the distal PTV surface varied as a function of both axes. For the 3D PTV, the feathering scheme was evaluated for 0.1-1.0-cm leaf widths. Dose calculations were performed using the pencil beam dose algorithm in the Pinnacle{sup 3} treatment planning system. Dose verification measurements were made using a prototype eMLC (1-cm leaf width). Results: 1D discrete Gaussian edge feathering reduced the standard deviation of dose in the 2D PTVs by 34, 34, and 39%. In the 3D PTV, the broad leaf width (1 cm) of the eMLC hindered the 2D application of the feathering solution to the 3D PTV, and the standard deviation of dose increased by 10%. However, 2D discrete Gaussian edge feathering with simulated eMLC leaf widths of 0.1-0.5 cm reduced the standard deviation of dose in the 3D PTV by 33-28%, respectively. Conclusions: A five-step discrete Gaussian edge
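The discrete Gaussian edge spread function can be sketched as follows: the sharp edge of the higher-energy field is replaced by a few staggered sub-field edges whose cumulative intensity weights sample the error-function (Gaussian-convolved) penumbra profile. The sigma, step count, and span below are hypothetical parameters for illustration, not the paper's penumbra-matching calibration.

```python
import numpy as np
from math import erf, sqrt

def feather_weights(sigma, n_steps=5, span=3.0):
    """Return n_steps staggered edge offsets and per-step intensity weights
    whose cumulative sum tracks the error-function edge profile
    F(x) = 0.5*(1 + erf(x / (sqrt(2)*sigma))) of a Gaussian penumbra."""
    offsets = np.linspace(-span * sigma, span * sigma, n_steps)
    mids = (offsets[1:] + offsets[:-1]) / 2.0       # between successive edges
    F = lambda x: 0.5 * (1.0 + erf(x / (sqrt(2.0) * sigma)))
    # per-step weights: differences of the cumulative profile, summing to 1
    cum = np.concatenate([[0.0], [F(m) for m in mids], [1.0]])
    return offsets, np.diff(cum)

offsets, weights = feather_weights(sigma=0.5)       # sigma in cm, hypothetical
```

Delivering the sub-fields with these relative weights smooths the abutment dose falloff of the higher-energy field, which is the mechanism behind the reduced dose standard deviation reported above.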
Conformal sequestering simplified
International Nuclear Information System (INIS)
Schmaltz, Martin; Sundrum, Raman
2006-01-01
Sequestering is important for obtaining flavor-universal soft masses in models where supersymmetry breaking is mediated at high scales. We construct a simple and robust class of hidden sector models which sequester themselves from the visible sector due to strong and conformally invariant hidden dynamics. Masses for hidden matter eventually break the conformal symmetry and lead to supersymmetry breaking by the mechanism recently discovered by Intriligator, Seiberg and Shih. We give a unified treatment of subtleties due to global symmetries of the CFT. There is enough review for the paper to constitute a self-contained account of conformal sequestering