Partial distance correlation with methods for dissimilarities
Székely, Gábor J.; Rizzo, Maria L.
2014-01-01
Distance covariance and distance correlation are scalar coefficients that characterize independence of random vectors in arbitrary dimension. Properties, extensions, and applications of distance correlation have been discussed in the recent literature, but the problem of defining the partial distance correlation has remained an open question of considerable interest. The problem of partial distance correlation is more complex than partial correlation partly because the squared distance covari...
Distance correlation methods for discovering associations in large astrophysical databases
International Nuclear Information System (INIS)
Martínez-Gómez, Elizabeth; Richards, Mercedes T.; Richards, Donald St. P.
2014-01-01
High-dimensional, large-sample astrophysical databases of galaxy clusters, such as the Chandra Deep Field South COMBO-17 database, provide measurements on many variables for thousands of galaxies and a range of redshifts. Current understanding of galaxy formation and evolution rests sensitively on relationships between different astrophysical variables; hence an ability to detect and verify associations or correlations between variables is important in astrophysical research. In this paper, we apply a recently defined statistical measure called the distance correlation coefficient, which can be used to identify new associations and correlations between astrophysical variables. The distance correlation coefficient applies to variables of any dimension, can be used to determine smaller sets of variables that provide equivalent astrophysical information, is zero only when variables are independent, and is capable of detecting nonlinear associations that are undetectable by the classical Pearson correlation coefficient. Hence, the distance correlation coefficient provides more information than the Pearson coefficient. We analyze numerous pairs of variables in the COMBO-17 database with the distance correlation method and with the maximal information coefficient. We show that the Pearson coefficient can be estimated with higher accuracy from the corresponding distance correlation coefficient than from the maximal information coefficient. For given values of the Pearson coefficient, the distance correlation method has a greater ability than the maximal information coefficient to resolve astrophysical data into highly concentrated horseshoe- or V-shapes, which enhances classification and pattern identification. These results are observed over a range of redshifts beyond the local universe and for galaxies from elliptical to spiral.
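The key property claimed above, that distance correlation is zero only under independence and detects nonlinear associations invisible to Pearson's coefficient, can be illustrated with a minimal NumPy sketch of the empirical estimator (double-centered pairwise distance matrices). This is an illustration on synthetic data, not the COMBO-17 analysis itself:

```python
import numpy as np

def distance_correlation(x, y):
    """Empirical distance correlation (Szekely et al. 2007) for 1-D samples."""
    x = np.asarray(x, float)[:, None]
    y = np.asarray(y, float)[:, None]
    def centered(z):
        d = np.abs(z - z.T)  # pairwise Euclidean distance matrix
        return d - d.mean(0) - d.mean(1)[:, None] + d.mean()
    A, B = centered(x), centered(y)
    dcov2 = (A * B).mean()                               # squared distance covariance
    dvar2 = np.sqrt((A * A).mean() * (B * B).mean())     # product of distance variances
    return np.sqrt(dcov2 / dvar2) if dvar2 > 0 else 0.0

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 2000)
y = x**2  # nonlinear, symmetric dependence: Pearson's r is near zero
pearson = np.corrcoef(x, y)[0, 1]
print(f"Pearson r = {pearson:.3f}, distance correlation = {distance_correlation(x, y):.3f}")
```

For this quadratic relation, Pearson's r is close to zero while the distance correlation is clearly positive, which is exactly the kind of association the abstract describes.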
Entropy correlation distance method. The Euro introduction effect on the Consumer Price Index
Miśkiewicz, Janusz
2010-04-01
The idea of entropy was introduced in thermodynamics, but it can be used in time series analysis. There are various ways to define and measure the entropy of a system. Here the so-called Theil index, which is often used in economics and finance, is applied as if it were an entropy measure. In this study the time series are remapped through the Theil index. Then the linear correlation coefficient between the remapped time series is evaluated as a function of time and time window size, and the corresponding statistical distance is defined. The results are compared with the usual correlation distance measure for the time series themselves. As an example, this entropy correlation distance method (ECDM) is applied to several series, such as those of the Consumer Price Index (CPI), in order to test some so-called globalisation processes. Distance matrices are calculated in order to construct two network structures, which are next analysed. The role of the two different time scales introduced by the Theil index and the correlation coefficient is also discussed. The evolution of the mean distance between the most developed countries is presented and the globalisation periods of the prices discussed. It is finally shown that the evolution of the mean distance between the most developed countries on several networks follows the process of introducing the European currency, the Euro. This is contrasted with a GDP-based analysis. It is stressed that the entropy correlation distance measure is more suitable for detecting significant changes, such as a globalisation process, than the usual statistical (correlation-based) measure.
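As a rough sketch of the ECDM idea (the exact windowing, Theil-index variant and distance normalisation used by the author may differ), one can remap two positive-valued series through a rolling Theil index and turn the linear correlation of the remapped series into a distance; the synthetic "CPI" series below are illustrative only:

```python
import numpy as np

def theil_index(x):
    """Theil index of a positive-valued series segment."""
    x = np.asarray(x, float)
    m = x.mean()
    return np.mean((x / m) * np.log(x / m))

def ecdm_distance(a, b, window=12):
    """Entropy correlation distance (sketch): remap each series through a
    rolling Theil index, then map the linear correlation c of the remapped
    series to a distance in [0, 1] via sqrt((1 - c) / 2)."""
    ta = np.array([theil_index(a[i:i + window]) for i in range(len(a) - window)])
    tb = np.array([theil_index(b[i:i + window]) for i in range(len(b) - window)])
    c = np.corrcoef(ta, tb)[0, 1]
    return np.sqrt(0.5 * (1.0 - c))  # c = 1 -> 0, c = -1 -> 1

rng = np.random.default_rng(1)
base = np.cumsum(rng.normal(0.2, 1.0, 300)) + 100.0    # shared price level
cpi_a = base + rng.normal(0, 0.5, 300)                 # co-moving economy
cpi_b = base + rng.normal(0, 0.5, 300)                 # co-moving economy
cpi_c = np.cumsum(rng.normal(0.2, 1.0, 300)) + 100.0   # independent economy
print(ecdm_distance(cpi_a, cpi_b), ecdm_distance(cpi_a, cpi_c))
```

The two co-moving series end up at a smaller entropy correlation distance than the independent pair, mirroring how the method separates "globalised" from unrelated economies.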
Directory of Open Access Journals (Sweden)
V. L. Kozlov
2018-01-01
To solve the problem of increasing the accuracy of restoring a three-dimensional picture of space from two-dimensional digital images, it is necessary to use new, effective techniques and algorithms for the processing and correlation analysis of digital images. Tools are actively being developed that reduce the time costs of processing stereo images, improve the quality of depth-map construction, and automate that construction. The aim of this work is to investigate the possibilities of using various digital image processing techniques to improve the measurement accuracy of a rangefinder based on correlation analysis of a stereo image. The results of studies of the influence of colour-channel mixing techniques on distance measurement accuracy are presented for various functions realizing the correlation processing of images. Studies analysing the possibility of using an integral representation of images to reduce the time cost of constructing a depth map are presented, as are studies of the possibility of pre-filtering images before correlation processing when measuring distance by stereo imaging. It is found that uniform mixing of channels minimizes the total number of measurement errors, while extracting brightness according to the sRGB standard increases the number of errors for all of the correlation processing techniques considered. An integral representation of the image makes it possible to accelerate the correlation processing, but this method is useful for depth-map calculation only in images of no more than 0.5 megapixels. Filtering an image before correlation processing can provide, depending on the filter parameters, either an increase in the value of the correlation function, which is useful for analysing noisy images, or compression of the correlation function.
Shinogle-Decker, Heather; Martinez-Rivera, Noraida; O'Brien, John; Powell, Richard D.; Joshi, Vishwas N.; Connell, Samuel; Rosa-Molinar, Eduardo
2018-02-01
A new correlative Förster Resonance Energy Transfer (FRET) microscopy method using FluoroNanogold™, a fluorescent immunoprobe with a covalently attached Nanogold® particle (1.4 nm Au), overcomes resolution limitations in determining distances within synaptic nanoscale architecture. FRET by acceptor photobleaching has long been used as a method to increase fluorescence resolution. The transfer of energy from a donor to an acceptor generally occurs over 10-100 Å, the relative distance between the donor molecule and the acceptor molecule. For the correlative FRET microscopy method using FluoroNanogold™, we immuno-labeled Connexin 35 (Cx35) in GFP-tagged Cx35-expressing HeLa cells with anti-GFP and anti-Cx35/36 antibodies, and then photobleached the Cx before processing the sample for electron microscopic imaging. Preliminary studies reveal that use of Alexa Fluor® 594 FluoroNanogold™ slightly increases the FRET distance to 70 Å, in contrast to the 62.5 Å obtained using Alexa Fluor® 594. Preliminary studies also show that using a FluoroNanogold™ probe inhibits photobleaching. After one photobleaching session, Alexa Fluor® 594 fluorescence dropped to 19% of its original intensity; in contrast, after one photobleaching session, Alexa Fluor® 594 FluoroNanogold™ fluorescence dropped to 53% of its original intensity. This result confirms that Alexa Fluor® 594 FluoroNanogold™ is a much better donor probe than Alexa Fluor® 594. The new method (a) provides a double-confirmation method for determining the structure and orientation of synaptic architecture, (b) allows development of a two-dimensional in vitro model to be used for precise testing of multiple parameters, and (c) increases throughput. Future work will include development of FluoroNanogold™ probes with different sizes of gold for additional correlative microscopy studies.
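The 10-100 Å working range quoted above comes from the Förster relation between transfer efficiency and donor-acceptor distance. A minimal sketch of that relation (the Förster radius R0 below is a hypothetical value, not one reported in this study):

```python
# Förster relation between transfer efficiency E and donor-acceptor
# distance r, given the Förster radius R0 at which E = 0.5.
def fret_efficiency(r, r0):
    """E = R0^6 / (R0^6 + r^6)."""
    return r0**6 / (r0**6 + r**6)

def fret_distance(e, r0):
    """Invert the Förster relation: r = R0 * ((1 - E) / E)^(1/6)."""
    return r0 * ((1.0 - e) / e) ** (1.0 / 6.0)

r0 = 60.0  # hypothetical Förster radius in angstroms
e = fret_efficiency(62.5, r0)
print(f"E at 62.5 A: {e:.3f}; recovered distance: {fret_distance(e, r0):.1f} A")
```

Because efficiency falls off as the sixth power of distance, FRET is only informative near R0, which is why the measured 62.5 Å versus 70 Å shift in the study is a meaningful difference.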
Zhang, Qingyang
2018-05-16
Differential co-expression analysis, as a complement to differential expression analysis, offers significant insights into changes in the molecular mechanisms of different phenotypes. A prevailing approach to detecting differentially co-expressed genes is to compare Pearson's correlation coefficients in two phenotypes. However, due to the limitations of Pearson's correlation measure, this approach lacks the power to detect the nonlinear changes in gene co-expression which are common in gene regulatory networks. In this work, a new nonparametric procedure is proposed to search for differentially co-expressed gene pairs in different phenotypes in large-scale data. Our computational pipeline consists of two main steps, a screening step and a testing step. The screening step reduces the search space by filtering out all the independent gene pairs using the distance correlation measure. In the testing step, we compare the gene co-expression patterns in different phenotypes by a recently developed edge-count test. Both steps are distribution-free and target nonlinear relations. We illustrate the promise of the new approach by analyzing the Cancer Genome Atlas data and the METABRIC data for breast cancer subtypes. Compared with some existing methods, the new method is more powerful in detecting nonlinear types of differential co-expression. The distance correlation screening can greatly improve computational efficiency, facilitating its application to large data sets.
Correlation function of the luminosity distances
Energy Technology Data Exchange (ETDEWEB)
Biern, Sang Gyu; Yoo, Jaiyul, E-mail: sgbiern@physik.uzh.ch, E-mail: jyoo@physik.uzh.ch [Center for Theoretical Astrophysics and Cosmology, Institute for Computational Science, University of Zürich, Winterthurerstrasse 190, CH-8057, Zürich (Switzerland)
2017-09-01
We present the correlation function of the luminosity distances in a flat ΛCDM universe. Decomposing the luminosity distance fluctuation into the velocity, gravitational potential, and lensing contributions in linear perturbation theory, we study their individual contributions to the correlation function. The lensing contribution is important at high redshift (z ≳ 0.5) but only for small angular separations (θ ≲ 3°), while the velocity contribution dominates over the other contributions at low redshift or at larger separations. However, the gravitational potential contribution is always subdominant at all scales if the correct gauge-invariant expression is used. The correlation function of the luminosity distances depends significantly on the matter content, especially for the lensing contribution, thus providing a novel tool for estimating cosmological parameters.
Distance sampling methods and applications
Buckland, S T; Marques, T A; Oedekoven, C S
2015-01-01
In this book, the authors cover the basic methods and advances within distance sampling that are most valuable to practitioners and in ecology more broadly. This is the fourth book dedicated to distance sampling. In the decade since the last book published, there have been a number of new developments. The intervening years have also shown which advances are of most use. This self-contained book covers topics from the previous publications, while also including recent developments in method, software and application. Distance sampling refers to a suite of methods, including line and point transect sampling, in which animal density or abundance is estimated from a sample of distances to detected individuals. The book illustrates these methods through case studies; data sets and computer code are supplied to readers through the book’s accompanying website. Some of the case studies use the software Distance, while others use R code. The book is in three parts. The first part addresses basic methods, the ...
Measuring and testing dependence by correlation of distances
Székely, Gábor J.; Rizzo, Maria L.; Bakirov, Nail K.
2007-01-01
Distance correlation is a new measure of dependence between random vectors. Distance covariance and distance correlation are analogous to product-moment covariance and correlation, but unlike the classical definition of correlation, distance correlation is zero only if the random vectors are independent. The empirical distance dependence measures are based on certain Euclidean distances between sample elements rather than sample moments, yet have a compact representation analogous to the clas...
Determining distances using asteroseismic methods
DEFF Research Database (Denmark)
Aguirre, Victor Silva; Casagrande, L.; Basu, Sarbani
2013-01-01
Asteroseismology has been extremely successful in determining the properties of stars in different evolutionary stages with a remarkable level of precision. However, to fully exploit its potential, robust methods for estimating stellar parameters are required, as is independent verification of the results ... fluxes, and thus distances for field stars in a self-consistent manner. Applying our method to a sample of solar-like oscillators in the Kepler field that have accurate Hipparcos parallaxes, we find agreement in our distance determinations to better than 5%. Comparison with measurements ...
INTERPRETING THE DISTANCE CORRELATION RESULTS FOR THE COMBO-17 SURVEY
International Nuclear Information System (INIS)
Richards, Mercedes T.; Richards, Donald St. P.; Martínez-Gómez, Elizabeth
2014-01-01
The accurate classification of galaxies in large-sample astrophysical databases of galaxy clusters depends sensitively on the ability to distinguish between morphological types, especially at higher redshifts. This capability can be enhanced through a new statistical measure of association and correlation, called the distance correlation coefficient, which has more statistical power to detect associations than does the classical Pearson measure of linear relationships between two variables. The distance correlation measure offers a more precise alternative to the classical measure since it is capable of detecting nonlinear relationships that may appear in astrophysical applications. We showed recently that the comparison between the distance and Pearson correlation coefficients can be used effectively to isolate potential outliers in various galaxy data sets, and this comparison has the ability to confirm the level of accuracy associated with the data. In this work, we elucidate the advantages of distance correlation when applied to large databases. We illustrate how the distance correlation measure can be used effectively as a tool to confirm nonlinear relationships between various variables in the COMBO-17 database, including the lengths of the major and minor axes, and the alternative redshift distribution. For these outlier pairs, the distance correlation coefficient is routinely higher than the Pearson coefficient since it is easier to detect nonlinear relationships with distance correlation. The V-shaped scatter plots of Pearson versus distance correlation coefficients also reveal the patterns with increasing redshift and the contributions of different galaxy types within each redshift range.
A study of metrics of distance and correlation between ranked lists for compositionality detection
DEFF Research Database (Denmark)
Lioma, Christina; Hansen, Niels Dalum
2017-01-01
... affects the measurement of semantic similarity. We propose a new compositionality detection method that represents phrases as ranked lists of term weights. Our method approximates the semantic similarity between two ranked list representations using a range of well-known distance and correlation metrics ... of compositionality using any of the distance and correlation metrics considered.
International Nuclear Information System (INIS)
Ferguson, A.J.
1974-01-01
An outline of the theory of angular correlations is presented, and the difference between the modern density matrix method and the traditional wave function method is stressed. Comments are offered on particular angular correlation theoretical techniques. A brief discussion is given of recent studies of gamma ray angular correlations of reaction products recoiling with high velocity into vacuum. Two methods for optimization to obtain the most accurate expansion coefficients of the correlation are discussed. (1 figure, 53 references) (U.S.)
Correlation measure to detect time series distances, whence economy globalization
Miśkiewicz, Janusz; Ausloos, Marcel
2008-11-01
An instantaneous time series distance is defined through the equal-time correlation coefficient. The idea is applied to the Gross Domestic Product (GDP) yearly increments of 21 rich countries between 1950 and 2005 in order to test the process of economic globalisation. Some data discussion is first presented to decide what (EKS, GK, or derived) GDP series should be studied. Distances are then calculated from the correlation coefficient values between pairs of series. The role of time averaging of the distances over finite size windows is discussed. Three network structures are next constructed based on the hierarchy of distances. It is shown that the mean distance between the most developed countries on several networks actually decreases in time, which we consider as a proof of globalization. An empirical law is found for the evolution after 1990, similar to that found in flux creep. The optimal observation time window size is found to be ≃15 years.
Modern Geometric Methods of Distance Determination
Thévenin, Frédéric; Falanga, Maurizio; Kuo, Cheng Yu; Pietrzyński, Grzegorz; Yamaguchi, Masaki
2017-11-01
Building a 3D picture of the Universe at any distance is one of the major challenges in astronomy, from the nearby Solar System to distant quasars and galaxies. This goal has forced astronomers to develop techniques to estimate or to measure the distances of point sources on the sky. While most distance estimates used since the beginning of the 20th century are based on our understanding of the physics of objects in the Universe (stars, galaxies, QSOs), direct measures of distance are based on geometric methods as developed in ancient Greece: the parallax, which was first applied to stars in the mid-19th century. In this review, different techniques of geometrical astrometry applied to various stellar and cosmological (megamaser) objects are presented. They consist of parallax measurements from ground-based equipment or from space missions, but also of studies of binary stars or, as we shall see, of binary systems in distant extragalactic sources using radio telescopes. The Gaia mission is presented in the context of stellar physics and galactic structure, because this key space mission will bring a breakthrough in our understanding of stars, galaxies and the Universe, in their nature and their evolution with time. Measuring the distance to a star is the starting point for an unbiased description of its physics and for the estimation of fundamental parameters such as its age. Applying these studies to standard candles such as the Cepheids will impact our large-distance studies and the calibration of other candles. The text is constructed as follows: after introducing the parallax concept and its measurement, we briefly present the Gaia satellite, which will provide the base catalogue of stellar astronomy in the near future. Cepheids are discussed next to demonstrate the state of the art in distance measurements in the Universe with these variable stars, with the objective of 1% error in distances that could be applied to our closest ...
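The parallax method at the heart of this review reduces, for small angles, to d [pc] = 1/p [arcsec]; a one-line sketch (ignoring measurement-error effects, which dominate at small parallaxes and require more careful inference):

```python
def parallax_to_distance_pc(parallax_mas):
    """Distance in parsecs from an annual parallax in milliarcseconds,
    d [pc] = 1000 / p [mas]; valid only for small relative parallax errors."""
    return 1000.0 / parallax_mas

# Proxima Centauri's parallax is roughly 768 mas, i.e. about 1.3 pc.
print(parallax_to_distance_pc(768.0))
```

Gaia parallaxes are reported in milliarcseconds, hence the factor of 1000; by definition, a parallax of one arcsecond corresponds to one parsec.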
Long-distance asymptotics of temperature correlators of the impenetrable Bose gas
International Nuclear Information System (INIS)
Its, A.R.; Izergin, A.G.; Korepin, V.E.
1989-06-01
The inverse scattering method is applied to the integrable nonlinear system describing temperature correlators of impenetrable bosons in one space dimension. The corresponding matrix Riemann problems are constructed for two-point as well as for multi-point correlators. Long-distance asymptotics of the two-point correlators are calculated. (author). 8 refs
Long-distance behavior of temperature correlation functions in the one-dimensional Bose gas
Energy Technology Data Exchange (ETDEWEB)
Kozlowski, K.K. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Maillet, J.M. [UMR 5672 du CNRS, ENS Lyon (France). Lab. de Physique; Slavnov, N.A. [Steklov Mathematical Institute, Moscow (Russian Federation)
2010-12-15
We describe a Bethe ansatz based method to derive, starting from a multiple integral representation, the long-distance asymptotic behavior at finite temperature of the density-density correlation function in the interacting one-dimensional Bose gas. We compute the correlation lengths in terms of solutions of non-linear integral equations of the thermodynamic Bethe ansatz type. Finally, we establish a connection between the results obtained in our approach and the correlation lengths stemming from the quantum transfer matrix method. (orig.)
Neural correlates of the numerical distance effect in children
Directory of Open Access Journals (Sweden)
Christophe eMussolin
2013-10-01
In number comparison tasks, performance is better when the distance between the two numbers to compare increases. During development, this so-called numerical distance effect decreases with age, and the neuroanatomical correlates of these age-related changes are poorly known. Using functional magnetic resonance imaging, we recorded brain activity changes in children aged 8 to 14 years while they performed a number comparison task on pairs of Arabic digits and a control colour comparison task on non-numerical symbols. On the one hand, we observed developmental changes in the recruitment of frontal regions and the left intraparietal sulcus, with lower activation as age increased. On the other hand, we found that a behavioural index of selective sensitivity to the numerical distance effect was positively correlated with higher brain activity in a right-lateralized occipito-temporo-parietal network including the intraparietal sulcus. This leads us to propose that the left intraparietal sulcus is engaged in the refinement of the cognitive processes involved in number comparison during development, while the right intraparietal sulcus underlies the semantic representation of numbers, its activation being mainly affected by the numerical proximity between them.
Interplay between strong correlation and adsorption distances: Co on Cu(001)
Bahlke, Marc Philipp; Karolak, Michael; Herrmann, Carmen
2018-01-01
Adsorbed transition metal atoms can have partially filled d or f shells due to strong on-site Coulomb interaction. Capturing all effects originating from electron correlation in such strongly correlated systems is a challenge for electronic structure methods. It requires a sufficiently accurate description of the atomistic structure (in particular bond distances and angles), which is usually obtained from first-principles Kohn-Sham density functional theory (DFT); due to the approximate nature of the exchange-correlation functional, this may provide an unreliable description of strongly correlated systems. To elucidate the consequences of this popular procedure, we apply a combination of DFT with the Anderson impurity model (AIM), as well as DFT+U, to a calculation of the potential energy surface along the Co/Cu(001) adsorption coordinate, and compare the results with those obtained from DFT. The adsorption minimum is shifted towards larger distances by applying DFT+AIM, or the much cheaper DFT+U method, compared to the corresponding spin-polarized DFT results, by a magnitude comparable to variations between different approximate exchange-correlation functionals (0.08 to 0.12 Å). This shift originates from an increasing correlation energy at larger adsorption distances, which can be traced back to the Co 3d_xy and 3d_z² orbitals being more correlated as the adsorption distance is increased. We show that such considerations are important, as they may strongly affect electronic properties such as the Kondo temperature.
Phylo_dCor: distance correlation as a novel metric for phylogenetic profiling.
Sferra, Gabriella; Fratini, Federica; Ponzi, Marta; Pizzi, Elisabetta
2017-09-05
Elaboration of powerful methods to predict functional and/or physical protein-protein interactions from genome sequences is one of the main tasks of the post-genomic era. Phylogenetic profiling allows the prediction of protein-protein interactions at a whole-genome level in both Prokaryotes and Eukaryotes, and for this reason it is considered one of the most promising methods. Here, we propose an improvement of phylogenetic profiling that enables the handling of large genomic datasets and the inference of global protein-protein interactions. This method uses the distance correlation as a new measure of phylogenetic profile similarity. We constructed and assessed robust reference sets and developed Phylo-dCor, a parallelized version of the algorithm for calculating the distance correlation that makes it applicable to large genomic data. Using Saccharomyces cerevisiae and Escherichia coli genome datasets, we showed that Phylo-dCor outperforms previously described phylogenetic profiling methods based on mutual information and Pearson's correlation as measures of profile similarity. Two R scripts that can be run on a wide range of machines are available upon request.
REPRESENTATIONS OF DISTANCE: DIFFERENCES IN UNDERSTANDING DISTANCE ACCORDING TO TRAVEL METHOD
Directory of Open Access Journals (Sweden)
Gunvor Riber Larsen
2017-12-01
This paper explores how Danish tourists represent distance in relation to their holiday mobility, and how these representations of distance result from being aero-mobile as opposed to land-mobile. Based on interviews with Danish tourists whose holiday mobility ranges from the European continent to global destinations, the first part of this qualitative study identifies three categories of representations of distance that show how distance is 'translated' by the tourists into non-geometric forms: distance as resources, distance as accessibility, and distance as knowledge. The representations of distance articulated by the Danish tourists show that distance is often not viewed in 'just' kilometres. Rather, it is understood in forms that express how transcending physical distance through holiday mobility depends on individual social and economic contexts, and on whether the journey was undertaken by air or land. The analysis also shows that being aero-mobile is the holiday transportation mode that removes tourists the furthest from physical distance, resulting in the distance travelled by air being represented in ways that have, in the tourists' minds, the least correlation with physical distance measured in kilometres.
Method of measuring distance between fuel element
International Nuclear Information System (INIS)
Urata, Megumu.
1991-01-01
The distance between fuel elements contained in a pool is measured in a contactless manner, even for a narrow gap of less than 1 mm. The equipment for measuring the distance between the elements of a spent fuel assembly in a nuclear reactor comprises an optical fiber scope, a lens, an industrial TV camera and a monitor TV. The tip of the optical fiber scope is inserted between the fuel elements to be measured, and the scene is displayed on the TV screen to measure the distance between the fuel elements. The measured results are compared with a previously prepared calibration curve to determine the distance between the fuel elements. The distance between the fuel elements can thus be determined in the pool of a power plant without dismantling the fuel assembly, in order to investigate the state of bending and estimate the fuel's working life. (I.S.)
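The calibration-curve lookup described above can be sketched as a simple interpolation; the spacing/pixel-width pairs below are hypothetical, not values from this record:

```python
import numpy as np

# Hypothetical calibration curve: apparent gap width on the TV screen
# (pixels) recorded for known fuel-element spacings (mm).
cal_spacing_mm = np.array([0.2, 0.4, 0.6, 0.8, 1.0])
cal_width_px = np.array([14.0, 26.0, 39.0, 51.0, 64.0])

def spacing_from_pixels(width_px):
    """Read a spacing off the calibration curve by linear interpolation."""
    return np.interp(width_px, cal_width_px, cal_spacing_mm)

print(spacing_from_pixels(32.5))  # halfway between the 0.4 and 0.6 mm points
```

In practice the curve would be built from reference targets imaged through the same fiber scope and optics, so that optical distortions are folded into the calibration.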
A Novel Method for Short Distance Measurements
International Nuclear Information System (INIS)
Fernandez, M.G.; Ferrando, A.; Josa, M.I.; Molinero, A.; Oller, J.C.; Arce, P.; Calvo, E.; Figueroa, C.F.; Garcia, C.F.; Rodigrido, T.; Vila, I.; Virto, A.L.
1998-01-01
A new, accurate and inexpensive device for measuring short distances, intended for monitoring in LHC experiments, is presented. Data taken with a very simple prototype are shown and its performance is extracted. (Author) 4 refs
A unitary correlation operator method
International Nuclear Information System (INIS)
Feldmeier, H.; Neff, T.; Roth, R.; Schnack, J.
1997-09-01
The short range repulsion between nucleons is treated by a unitary correlation operator which shifts the nucleons away from each other whenever their uncorrelated positions are within the repulsive core. By formulating the correlation as a transformation of the relative distance between particle pairs, general analytic expressions for the correlated wave functions and correlated operators are given. The decomposition of correlated operators into irreducible n-body operators is discussed. The one- and two-body-irreducible parts are worked out explicitly and the contribution of three-body correlations is estimated to check convergence. Ground state energies of nuclei up to mass number A=48 are calculated with a spin-isospin-dependent potential and single Slater determinants as uncorrelated states. They show that the deduced energy- and mass-number-independent correlated two-body Hamiltonian reproduces all "exact" many-body calculations surprisingly well. (orig.)
Directory of Open Access Journals (Sweden)
Bhuwan Sareen
2015-01-01
Background and Aims: The optimal visualisation of the vocal cords during fibreoptic intubation may be utilised to estimate the nares-vocal cord distance (NVD). The present study was conducted to measure the NVD and to correlate it with various external body parameters. Methods: This study was conducted on 50 males and 50 females. We measured the NVD and analysed its relationship with height, nares to tragus of ear distance (NED), nares to angle of mandible distance (NMD), sternal length (SL), thyro-mental distance (TMD), sterno-mental distance (SMD) and arm span (AS). Results: The mean NVD of the males was 18.5 ± 1.5 cm, and that of the females was 15.9 ± 1.1 cm. The relationships between the NVD and body height (males P = 0.001, r = 0.463; females P < 0.001, r = 0.555), SL (males P < 0.001, r = 0.463; females P < 0.001, r = 0.801) and AS (males P < 0.001, r = 0.561; females P < 0.001, r = 0.499) showed significant correlations, but NED, NMD, TMD and SMD did not. After combining the male and female groups (n = 100), the correlations of the NVD with the external body parameters were as follows: SL (r = 0.887), height (r = 0.791), AS (r = 0.769), weight (r = 0.531), SMD (r = 0.466), NED (r = 0.459), NMD (r = 0.391), TMD (r = 0.379). Conclusion: The relationship of the NVD to external body parameters showed strong correlations for all parameters in the combined group, whereas when gender was taken into consideration the NVD correlated significantly only with SL, height and AS.
Distance Measurement Methods for Improved Insider Threat Detection
Directory of Open Access Journals (Sweden)
Owen Lo
2018-01-01
Full Text Available Insider threats are a considerable problem within cyber security and it is often difficult to detect these threats using signature detection. Increasingly, machine learning can provide a solution, but such methods often fail to take into account changes in user behaviour. This work builds on a published method of detecting insider threats, applies a Hidden Markov method to a CERT data set (CERT r4.2), and analyses a number of distance vector methods (Damerau–Levenshtein distance, cosine distance, and Jaccard distance) in order to detect changes of behaviour, which are shown to have success in determining different insider threats.
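The distance measures named above are standard; a minimal sketch (not the authors' implementation) of two of them, applied to hypothetical per-day user action sequences, might look like:

```python
from collections import Counter
import math

def jaccard_distance(a, b):
    """Jaccard distance between the sets of actions observed in two sessions."""
    sa, sb = set(a), set(b)
    if not sa and not sb:
        return 0.0
    return 1.0 - len(sa & sb) / len(sa | sb)

def cosine_distance(a, b):
    """Cosine distance between action-frequency vectors of two sessions."""
    ca, cb = Counter(a), Counter(b)
    keys = set(ca) | set(cb)
    dot = sum(ca[k] * cb[k] for k in keys)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    if na == 0 or nb == 0:
        return 1.0
    return 1.0 - dot / (na * nb)

# Two daily action sequences for a hypothetical user:
day1 = ["logon", "email", "email", "file", "logoff"]
day2 = ["logon", "email", "usb", "usb", "file", "logoff"]
print(jaccard_distance(day1, day2))  # 0.2: one new action type out of five
print(cosine_distance(day1, day2))
```

A large jump in either distance between consecutive days would then be a candidate behaviour change to feed into the threat-scoring stage.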
Correlation Between Cometary Gas/Dust Ratios and Heliocentric Distance
Harrington, Olga; Womack, Maria; Lastra, Nathan
2017-10-01
We compiled CO-based gas/dust ratios for several comets out to heliocentric distances, rh, of 8 au to probe whether there is a noticeable change in comet behavior over the range where water-ice sublimation starts. Previously, gas/dust ratios were calculated for an ensemble of comets using Q(CO2)/efp values derived from infrared measurements, which showed that the gas/dust ratio follows an rh^-2 dependence within 4 au but is flat at greater distances (Bauer et al. 2015). Our project focuses on gas/dust ratios for which CO is assumed to be the dominant gas, in order to test whether similar breaks in slope occur for CO. The gas/dust ratios were calculated from measurements of CO production rates (mostly from millimeter-wavelength spectroscopy) and reflected sunlight of comets (mostly via reported visual magnitudes of dusty comets). We present our new CO-based gas/dust ratios at different heliocentric distances, compare them to existing CO2-based gas/dust ratios, and discuss implications for CO-driven and CO2-driven activity. O.H. acknowledges support from the Hartmann Student Travel Grant program. M.W. acknowledges support from NSF grant AST-1615917.
Metric distances derived from cosine similarity and Pearson and Spearman correlations
van Dongen, Stijn; Enright, Anton J.
2012-01-01
We investigate two classes of transformations of cosine similarity and Pearson and Spearman correlations into metric distances, utilising the simple tool of metric-preserving functions. The first class puts anti-correlated objects maximally far apart. Previously known transforms fall within this class. The second class collates correlated and anti-correlated objects. An example of such a transformation that yields a metric distance is the sine function when applied to centered data.
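As an illustration of the two classes (the exact transforms studied in the paper may differ), two commonly used metric transforms of the Pearson correlation r are sqrt(2(1-r)), which places anti-correlated objects maximally far apart, and sqrt(1-r^2), which collates correlated and anti-correlated objects:

```python
import math

def pearson(x, y):
    """Pearson correlation of two equal-length samples (non-constant inputs assumed)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def dist_anticorrelated_far(x, y):
    # First class: r = -1 maps to the maximal distance 2.
    return math.sqrt(2.0 * (1.0 - pearson(x, y)))

def dist_anticorrelated_near(x, y):
    # Second class ("sine" form): both r = 1 and r = -1 map to distance 0.
    r = pearson(x, y)
    return math.sqrt(1.0 - r * r)

x = [1.0, 2.0, 3.0]
print(dist_anticorrelated_far(x, [-v for v in x]))   # 2.0
print(dist_anticorrelated_near(x, [-v for v in x]))  # 0.0
```

Since r equals the cosine of the angle between centered vectors, the second transform is literally the sine of that angle, matching the example given in the abstract.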
Correlation of Spatially Filtered Dynamic Speckles in Distance Measurement Application
International Nuclear Information System (INIS)
Semenov, Dmitry V.; Nippolainen, Ervin; Kamshilin, Alexei A.; Miridonov, Serguei V.
2008-01-01
In this paper the statistical properties of spatially filtered dynamic speckles are considered. This phenomenon has not been sufficiently studied, although spatial filtering is an important instrument for speckle-velocity measurements. With spatial filtering, speckle-velocity information is derived from the modulation frequency of the filtered light power, which is measured by a photodetector. A typical photodetector output is a narrow-band random noise signal that includes non-informative intervals; precise frequency measurement therefore requires averaging, which in turn assumes uncorrelated samples. However, in the course of this research we found that correlation is a typical property not only of dynamic speckle patterns but also of spatially filtered speckles. With spatial filtering, the correlation is observed as a response of measurements applied to the same part of the object surface, or when several adjacent photodetectors are used simultaneously. The correlations found cannot be explained by the properties of unfiltered dynamic speckles alone. As we demonstrate, the subject of this paper is important not only from a purely theoretical point of view but also for applied speckle metrology: for example, using a single spatial filter and an array of photodetectors can greatly improve the accuracy of speckle-velocity measurements.
Optimizing distance-based methods for large data sets
Scholl, Tobias; Brenner, Thomas
2015-10-01
Distance-based methods for measuring the spatial concentration of industries have gained increasing popularity in the spatial econometrics community. However, a limiting factor for using these methods is their computational complexity, since both their memory requirements and running times are O(n^2). In this paper, we present an algorithm with constant memory requirements and a shorter running time, enabling distance-based methods to deal with large data sets. We discuss three recent distance-based methods in spatial econometrics: the D&O-Index by Duranton and Overman (Rev Econ Stud 72(4):1077-1106, 2005), the M-function by Marcon and Puech (J Econ Geogr 10(5):745-762, 2010) and the Cluster-Index by Scholl and Brenner (Reg Stud (ahead-of-print):1-15, 2014). Finally, we present an alternative calculation for the latter index that allows the use of data sets with millions of firms.
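The memory argument can be illustrated with a simple sketch (not the authors' algorithm, which also reduces the running time): pairwise distances are streamed into a fixed number of histogram bins, so the n x n distance matrix is never stored and memory stays constant in n:

```python
import math, random

def distance_histogram(points, bin_width, n_bins):
    """Histogram of all pairwise distances using O(n_bins) memory:
    pairs are streamed one at a time instead of materialising the
    full n x n distance matrix."""
    counts = [0] * n_bins
    n = len(points)
    for i in range(n):
        xi, yi = points[i]
        for j in range(i + 1, n):
            d = math.hypot(points[j][0] - xi, points[j][1] - yi)
            b = min(int(d / bin_width), n_bins - 1)  # clamp into last bin
            counts[b] += 1
    return counts

random.seed(0)
pts = [(random.random(), random.random()) for _ in range(200)]
hist = distance_histogram(pts, bin_width=0.1, n_bins=15)
print(sum(hist))  # 200*199/2 = 19900 pairs
```

Density-based concentration indices of the D&O type are then computed from such binned distance counts rather than from the raw matrix.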
Distance Based Method for Outlier Detection of Body Sensor Networks
Directory of Open Access Journals (Sweden)
Haibin Zhang
2016-01-01
Full Text Available We propose a distance based method for the outlier detection of body sensor networks. First, we use Kernel Density Estimation (KDE) to calculate the probability of the distance to the k nearest neighbors for diagnosed data. If the probability is less than a threshold, and the distance of this data point to its left and right neighbors is greater than a pre-defined value, the diagnosed data point is declared an outlier. Further, we formalize a sliding-window-based method to improve the outlier detection performance. Finally, to estimate the KDE from training sensor readings with errors, we introduce a Hidden Markov Model (HMM) based method to estimate the most probable ground-truth values, which have the maximum probability of producing the training data. Simulation results show that the proposed method possesses good detection accuracy with a low false alarm rate.
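A minimal sketch of the first stage under assumed parameter values and a hypothetical body-temperature sensor (the sliding-window and HMM refinements are omitted, and a density threshold stands in for the paper's probability threshold):

```python
import math

def gaussian_kde(samples, bandwidth):
    """Minimal 1-D Gaussian kernel density estimator (pure Python)."""
    n = len(samples)
    def density(x):
        return sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2)
                   for s in samples) / (n * bandwidth * math.sqrt(2.0 * math.pi))
    return density

def knn_distance(value, series, k):
    """Mean distance from `value` to its k nearest neighbours in `series`."""
    d = sorted(abs(value - s) for s in series)
    return sum(d[:k]) / k

# Hypothetical training readings from a body-temperature sensor:
train = [36.5, 36.6, 36.7, 36.6, 36.8, 36.5, 36.7, 36.6]
train_knn = [knn_distance(v, train[:i] + train[i + 1:], 3)
             for i, v in enumerate(train)]
kde = gaussian_kde(train_knn, bandwidth=0.05)

def is_outlier(reading, left, right, density_threshold, gap_threshold, k=3):
    """Flag a reading whose k-NN distance is improbable under the KDE *and*
    which is far from both of its temporal neighbours."""
    improbable = kde(knn_distance(reading, train, k)) < density_threshold
    isolated = (abs(reading - left) > gap_threshold
                and abs(reading - right) > gap_threshold)
    return improbable and isolated

print(is_outlier(41.2, left=36.6, right=36.7,
                 density_threshold=0.5, gap_threshold=1.0))  # True
```

The neighbour-gap condition is what separates genuine physiological spikes (which move the neighbouring readings too) from isolated sensor faults.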
A Method for Improving Galactic Cepheid Reddenings and Distances
Energy Technology Data Exchange (ETDEWEB)
Madore, Barry F. [The Observatories Carnegie Institution for Science 813 Santa Barbara St., Pasadena, CA 91101 (United States); Freedman, Wendy L.; Moak, Sandy, E-mail: barry.f.madore@gmail.com, E-mail: sandymoak@gmail.com, E-mail: wfreedman@uchicago.edu [Dept. of Astronomy and Astrophysics, University of Chicago, Chicago, IL (United States)
2017-06-10
We present a new photometric method by which improved high-precision reddenings and true distance moduli can be determined for individual Galactic Cepheids once distance measurements are available. We illustrate that the relative positioning of stars in the Cepheid period–luminosity (PL) relation (Leavitt law) is preserved as a function of wavelength. This information then provides a powerful constraint for determining reddenings to individual Cepheids, as well as their distances. As a first step, we apply this method to the 59 Cepheids in the compilation of Fouqué et al. Updated reddenings, distance moduli (or parallaxes), and absolute magnitudes in seven (optical through near-infrared) bands are given. From these intrinsic quantities, multiwavelength PL and color–color relations are derived. We find that the V-band period–luminosity–color relation has an rms scatter of only 0.06 mag, so that individual Cepheid distances can be measured to 3%, compared with dispersions of 6 to 13% for the one-parameter K through B PL relations, respectively. This method will be especially useful in conjunction with the new accurate parallax sample upcoming from Gaia.
Application of digital image correlation for long-distance bridge deflection measurement
Tian, Long; Pan, Bing; Cai, Youfa; Liang, Hui; Zhao, Yan
2013-06-01
Due to its advantages of non-contact, full-field and high-resolution measurement, the digital image correlation (DIC) method has gained wide acceptance and found numerous applications in the field of experimental mechanics. In this paper, the application of DIC to real-time long-distance bridge deflection detection in outdoor environments is studied. Bridge deflection measurement using DIC in outdoor environments is more challenging than regular DIC measurements performed under laboratory conditions. First, much more image noise, due to variations in ambient light, is present in images recorded outdoors. Second, how to select the target area becomes a key factor, because long-distance imaging results in a large field of view of the test object. Finally, the image acquisition speed of the camera must be high enough (greater than 100 fps) to capture the real-time dynamic motion of a bridge. In this work, these challenging issues are addressed and several improvements are made to the DIC method. The applicability is demonstrated by real experiments. Experimental results indicate that the DIC method has great potential for motion measurement in various large building structures.
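At the core of DIC is matching a reference subset against the deformed image by maximizing a correlation criterion. A minimal 1-D sketch using zero-normalized cross-correlation (ZNCC, a criterion commonly used in DIC, though the paper's exact criterion is not stated here) on made-up intensity values:

```python
import math

def zncc(f, g):
    """Zero-normalized cross-correlation of two equal-length grey-level
    subsets; 1.0 means a perfect (affine-intensity) match."""
    n = len(f)
    mf, mg = sum(f) / n, sum(g) / n
    num = sum((a - mf) * (b - mg) for a, b in zip(f, g))
    df = math.sqrt(sum((a - mf) ** 2 for a in f))
    dg = math.sqrt(sum((b - mg) ** 2 for b in g))
    if df == 0.0 or dg == 0.0:   # constant subset: correlation undefined
        return -1.0
    return num / (df * dg)

def best_match(ref, target, size):
    """Slide a reference subset over a 1-D target signal; return the
    integer displacement with the highest ZNCC score."""
    best_u, best_score = 0, -2.0
    for u in range(len(target) - size + 1):
        s = zncc(ref, target[u:u + size])
        if s > best_score:
            best_u, best_score = u, s
    return best_u, best_score

signal = [3, 9, 27, 81, 27, 9, 3, 1, 1, 1]
ref = signal[2:6]             # reference subset taken at position 2
shifted = [0, 0, 0] + signal  # simulate a 3-pixel rigid motion
u, score = best_match(ref, shifted, len(ref))
print(u, round(score, 3))     # 5 1.0
```

Because ZNCC is invariant to offset and gain in intensity, it tolerates the ambient-light variations that make outdoor DIC harder than laboratory DIC; real implementations also refine the integer match to sub-pixel accuracy.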
Strongly Correlated Systems Theoretical Methods
Avella, Adolfo
2012-01-01
The volume presents, for the very first time, an exhaustive collection of those modern theoretical methods specifically tailored for the analysis of Strongly Correlated Systems. Many novel materials, with functional properties emerging from macroscopic quantum behaviors at the frontier of modern research in physics, chemistry and materials science, belong to this class of systems. Each technique is presented in great detail by its own inventor or by one of the world-wide recognized main contributors. The exposition has a clear pedagogical cut and fully reports on the most relevant case studies where the specific technique proved very successful in describing and enlightening the puzzling physics of a particular strongly correlated system. The book is intended for advanced graduate students and post-docs in the field as a textbook and/or main reference, but also for other researchers in the field who appreciate consulting a single but comprehensive source, or who wish to get acquainted with the field in as painless a way as possible.
Strongly correlated systems numerical methods
Mancini, Ferdinando
2013-01-01
This volume presents, for the very first time, an exhaustive collection of those modern numerical methods specifically tailored for the analysis of Strongly Correlated Systems. Many novel materials, with functional properties emerging from macroscopic quantum behaviors at the frontier of modern research in physics, chemistry and materials science, belong to this class of systems. Each technique is presented in great detail by its own inventor or by one of the world-wide recognized main contributors. The exposition has a clear pedagogical cut and fully reports on the most relevant case studies where the specific technique proved very successful in describing and enlightening the puzzling physics of a particular strongly correlated system. The book is intended for advanced graduate students and post-docs in the field as a textbook and/or main reference, but also for other researchers in the field who appreciate consulting a single but comprehensive source, or who wish to get acquainted with the field in as painless a way as possible.
Directory of Open Access Journals (Sweden)
Raknes, Guttorm; Hunskaar, Steinar
2014-01-01
Full Text Available We describe a method that uses crowdsourced postcode coordinates and Google maps to estimate average distance and travel time for inhabitants of a municipality to a casualty clinic in Norway. The new method was compared with methods based on population centroids, median distance and town hall location, and we used it to examine how distance affects the utilisation of out-of-hours primary care services. At short distances our method showed good correlation with mean travel time and distance. The utilisation of out-of-hours services correlated with postcode based distances similar to previous research. The results show that our method is a reliable and useful tool for estimating average travel distances and travel times.
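A simplified sketch of the averaging step, substituting great-circle (haversine) distance for the Google Maps travel distances and times used in the paper; the postcodes, coordinates and populations below are hypothetical:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two points in decimal degrees."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = p2 - p1
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def mean_distance_km(postcodes, clinic, population):
    """Population-weighted mean distance from postcode centroids to a clinic.
    `postcodes` maps code -> (lat, lon); `population` maps code -> inhabitants."""
    total = sum(population.values())
    return sum(population[c] * haversine_km(*postcodes[c], *clinic)
               for c in postcodes) / total

# Hypothetical postcode centroids near Bergen and a clinic location:
codes = {"5003": (60.393, 5.324), "5045": (60.370, 5.350)}
pop = {"5003": 1200, "5045": 800}
clinic = (60.389, 5.330)
print(round(mean_distance_km(codes, clinic, pop), 2))
```

Replacing `haversine_km` with a routing query (as the paper does via Google Maps) turns the same weighted average into a travel-time estimate.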
A distance limited method for sampling downed coarse woody debris
Jeffrey H. Gove; Mark J. Ducey; Harry T. Valentine; Michael S. Williams
2012-01-01
A new sampling method for downed coarse woody debris is proposed based on limiting the perpendicular distance from individual pieces to a randomly chosen sample point. Two approaches are presented that allow different protocols to be used to determine field measurements; estimators for each protocol are also developed. Both protocols are compared via simulation against...
PYRAMID METHOD OF DISTANCE LEARNING IN HIGHER EDUCATION
Directory of Open Access Journals (Sweden)
Дмитрий Васильевич Сенашенко
2017-12-01
Full Text Available The article deals with modern methods of distance learning in the corporate sector. Based on the specifics of their application, the described methods are classified and their differences are reviewed, taking into account the features of organizing teaching in higher education; conclusions are drawn about the preferred aspects of each that can be used in distance education. Taking these factors into account, the article then proposes an innovative method for the formation of educational programs. In view of its resemblance to a pyramid, the proposed technique is named the "pyramid" method. The technique offered by the authors synthesizes the best features of the distance-teaching methods described earlier in the article. A detailed description is given, and a preliminary analysis is conducted of the applicability of this technique to the training process in the Russian Federation. The analysis covers eight problems of distance education in higher school, identified by the authors, that this method can help to solve.
Kong, Jing
This thesis includes 4 pieces of work. In Chapter 1, we present the work with a method for examining mortality as it is seen to run in families, and lifestyle factors that are also seen to run in families, in a subpopulation of the Beaver Dam Eye Study that had died by 2011. We find significant distance correlations between death ages, lifestyle factors, and family relationships. Considering only sib pairs compared to unrelated persons, the distance correlation between siblings and mortality is, not surprisingly, stronger than that between more distantly related family members and mortality. Chapter 2 introduces a feature screening procedure using distance correlation and covariance. We demonstrate a property of distance covariance, which is incorporated in a novel feature screening procedure based on distance correlation as a stopping criterion. The approach is further applied to two real examples, namely the famous small round blue cell tumors data and the Cancer Genome Atlas ovarian cancer data. Chapter 3 turns to right-censored human longevity data and the estimation of lifetime expectancy. We propose a general framework of backward multiple imputation for estimating the conditional lifetime expectancy function and the variance of the estimator in the right-censoring setting and prove the properties of the estimator. In addition, we apply the method to the Beaver Dam Eye Study data to study human longevity, where the expected human lifetime is modeled with smoothing spline ANOVA based on covariates including baseline age, gender, lifestyle factors and disease variables. Chapter 4 compares two imputation methods for right-censored data, namely the famous Buckley-James estimator and the backward imputation method proposed in Chapter 3, and shows that the backward imputation method is less biased and more robust under heterogeneity.
Precision lifetime measurements using the recoil distance method
International Nuclear Information System (INIS)
Kruecken, R.
2000-01-01
The recoil distance method (RDM) for the measurements of lifetimes of excited nuclear levels in the range from about 1 ps to 1,000 ps is reviewed. The New Yale Plunger Device for RDM experiments is introduced and the Differential Decay Curve Method for their analysis is reviewed. Results from recent RDM experiments on SD bands in the mass-190 region, shears bands in the neutron deficient lead isotopes, and ground state bands in the mass-130 region are presented. Perspectives for the use of RDM measurements in the study of neutron-rich nuclei are discussed
Precision Lifetime Measurements Using the Recoil Distance Method
Krücken, R.
2000-01-01
The recoil distance method (RDM) for the measurements of lifetimes of excited nuclear levels in the range from about 1 ps to 1000 ps is reviewed. The New Yale Plunger Device for RDM experiments is introduced and the Differential Decay Curve Method for their analysis is reviewed. Results from recent RDM experiments on SD bands in the mass-190 region, shears bands in the neutron deficient lead isotopes, and ground state bands in the mass-130 region are presented. Perspectives for the use of RDM measurements in the study of neutron-rich nuclei are discussed. PMID:27551587
Directory of Open Access Journals (Sweden)
Nur Ateyya Natasha Mohd Zali
2018-01-01
Full Text Available In edentulous treatment, relocating the anterior teeth in their pre-existing natural position is of the utmost importance. It is necessary to refer to significant anatomical landmarks, one of which is the incisive papilla. To make the arrangement more efficient both functionally and biologically, the teeth are arranged in a particular geometric manner known as a dental arch. The authors chose to conduct the research among the Malay race, represented by Malay undergraduate students. The purpose of this study was to evaluate the correlation between the distance from the maxillary central incisors to the incisive papilla (CI-IP) in different arch forms and genders. Maxillary impressions of 34 dentate individuals were taken, and the measurements were performed using a digital caliper. The results showed the CI-IP distance ranged between 7.65 and 9.90 mm, with an average of 8.77 mm. There was no significant difference in the CI-IP distance between males and females regardless of their arch forms (p > 0.05). Individuals with ovoid and tapered arch forms, however, showed a significant difference in the CI-IP distance between males and females (p < 0.05). It can be concluded that the gender factor was irrelevant to the CI-IP distance regardless of the individual arch form; however, there was a correlation between the CI-IP distance and arch form in both the male and female samples.
A new rapid method for rockfall energies and distances estimation
Giacomini, Anna; Ferrari, Federica; Thoeni, Klaus; Lambert, Cedric
2016-04-01
Rockfalls are characterized by long travel distances and significant energies. Over the last decades, three main methods have been proposed in the literature to assess the rockfall runout: empirical, process-based and GIS-based methods (Dorren, 2003). Process-based methods take into account the physics of rockfall by simulating the motion of a falling rock along a slope, and they are generally based on a probabilistic rockfall modelling approach that allows the uncertainties associated with the rockfall phenomenon to be taken into account. Their application has the advantage of evaluating the energies, bounce heights and distances along the path of a falling block, hence providing valuable information for the design of mitigation measures (Agliardi et al., 2009); however, the implementation of rockfall simulations can be time-consuming and data-demanding. This work focuses on the development of a new methodology for estimating the expected kinetic energies and distances of the first impact at the base of a rock cliff, subject to the conditions that the geometry of the cliff and the properties of the representative block are known. The method is based on an extensive two-dimensional sensitivity analysis, conducted by means of kinematic simulations based on probabilistic modelling of two-dimensional rockfall trajectories (Ferrari et al., 2016). To account for the uncertainty associated with the estimation of the input parameters, the study was based on 78400 rockfall scenarios performed by systematically varying the input parameters that are likely to affect the block trajectory, its energy and distance at the base of the rock wall. The variation of the geometry of the rock cliff (in terms of height and slope angle), the roughness of the rock surface and the properties of the outcropping material were considered. A simplified and idealized rock wall geometry was adopted. The analysis of the results allowed finding empirical laws that relate impact energies
Estimating adhesive seed-dispersal distances : field experiments and correlated random walks
Mouissie, AM; Lengkeek, W; van Diggelen, R
1. In this study we aimed to estimate distance distributions of adhesively dispersed seeds and the factors that determine them. 2. Seed attachment and detachment were studied using field experiments with a real sheep, a sheep dummy and a cattle dummy. Seed-retention data were used in correlated
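A correlated random walk of the kind used to model dispersal can be simulated in a few lines; the step length, turning-angle spread and sample sizes below are illustrative assumptions, not parameters fitted in the study:

```python
import math, random

def crw_distance(n_steps, step_len, turn_sd, rng):
    """Net displacement of a 2-D correlated random walk: each turning
    angle is a small normal perturbation of the previous heading, so
    successive steps are directionally correlated."""
    x = y = 0.0
    heading = rng.uniform(0.0, 2.0 * math.pi)
    for _ in range(n_steps):
        heading += rng.gauss(0.0, turn_sd)
        x += step_len * math.cos(heading)
        y += step_len * math.sin(heading)
    return math.hypot(x, y)

rng = random.Random(42)
# Distribution of seed dispersal distances over many simulated carriers:
distances = [crw_distance(n_steps=100, step_len=1.0, turn_sd=0.5, rng=rng)
             for _ in range(1000)]
print(round(sum(distances) / len(distances), 1))
```

Combining such walks with empirically measured seed retention times (the attachment/detachment experiments above) yields the dispersal-distance distribution.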
Correlation methods in cutting arcs
Energy Technology Data Exchange (ETDEWEB)
Prevosto, L; Kelly, H, E-mail: prevosto@waycom.com.ar [Grupo de Descargas Electricas, Departamento Ing. Electromecanica, Universidad Tecnologica Nacional, Regional Venado Tuerto, Laprida 651, Venado Tuerto (2600), Santa Fe (Argentina)
2011-05-01
The present work applies similarity theory to the plasma emanating from transferred arc, gas-vortex stabilized plasma cutting torches, to analyze the existing correlation between the arc temperature and the physical parameters of such torches. It has been found that the enthalpy number significantly influences the temperature of the electric arc. The obtained correlation shows an average deviation of 3% from the temperature data points. Such correlation can be used, for instance, to predict changes in the peak value of the arc temperature at the nozzle exit of a geometrically similar cutting torch due to changes in its operation parameters.
Correlation methods in cutting arcs
International Nuclear Information System (INIS)
Prevosto, L; Kelly, H
2011-01-01
The present work applies similarity theory to the plasma emanating from transferred arc, gas-vortex stabilized plasma cutting torches, to analyze the existing correlation between the arc temperature and the physical parameters of such torches. It has been found that the enthalpy number significantly influences the temperature of the electric arc. The obtained correlation shows an average deviation of 3% from the temperature data points. Such correlation can be used, for instance, to predict changes in the peak value of the arc temperature at the nozzle exit of a geometrically similar cutting torch due to changes in its operation parameters.
International Nuclear Information System (INIS)
Roga, W; Illuminati, F; Spehner, D
2016-01-01
We investigate and compare three distinguished geometric measures of bipartite quantum correlations that have been recently introduced in the literature: the geometric discord, the measurement-induced geometric discord, and the discord of response, each one defined according to three contractive distances on the set of quantum states, namely the trace, Bures, and Hellinger distances. We establish a set of exact algebraic relations and inequalities between the different measures. In particular, we show that the geometric discord and the discord of response based on the Hellinger distance are easy to compute analytically for all quantum states whenever the reference subsystem is a qubit. These two measures thus provide the first instance of discords that are simultaneously fully computable, reliable (since they satisfy all the basic axioms that must be obeyed by a proper measure of quantum correlations), and operationally viable (in terms of state distinguishability). We apply the general mathematical structure to determine the closest classical-quantum state of a given state and the maximally quantum-correlated states at fixed global state purity according to the different distances, as well as a necessary condition for a channel to be quantumness breaking. (paper)
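The Hellinger distance underlying the computable measures can be evaluated directly for low-dimensional states. A sketch using one common convention, D_H^2(rho, sigma) = 2(1 - Tr[sqrt(rho) sqrt(sigma)]) (normalizations vary in the literature):

```python
import numpy as np

def mat_sqrt(rho):
    """Square root of a positive semidefinite Hermitian matrix via eigh."""
    w, v = np.linalg.eigh(rho)
    w = np.clip(w, 0.0, None)          # guard against tiny negative eigenvalues
    return (v * np.sqrt(w)) @ v.conj().T

def hellinger_distance(rho, sigma):
    """Quantum Hellinger distance D_H = sqrt(2 (1 - Tr[sqrt(rho) sqrt(sigma)]))."""
    affinity = np.trace(mat_sqrt(rho) @ mat_sqrt(sigma)).real
    return np.sqrt(max(2.0 * (1.0 - affinity), 0.0))

# Two single-qubit states: the pure state |0><0| and the maximally mixed state I/2
rho = np.array([[1.0, 0.0], [0.0, 0.0]])
sigma = np.eye(2) / 2
print(round(hellinger_distance(rho, sigma), 4))  # sqrt(2 - sqrt(2)) ~ 0.7654
```

The analytic computability claimed in the abstract refers to closed-form expressions when one subsystem is a qubit; the numerical route above works for any pair of density matrices.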
The relationship between glass ceiling and power distance as a cultural variable by a new method
Naide Jahangirov; Guler Saglam Ari; Seymur Jahangirov; Nuray Guneri Tosunoglu
2015-01-01
Glass ceiling symbolizes a variety of barriers and obstacles that arise from gender inequality in business life. With this in mind, culture influences gender dynamics. The purpose of this research was to examine the relationship between the glass ceiling and power distance as a cultural variable within organizations. Gender is taken as a moderator variable in the relationship between the concepts. In addition to conventional correlation analysis, we employed a new method to investigate ...
Fast Computing for Distance Covariance
Huo, Xiaoming; Szekely, Gabor J.
2014-01-01
Distance covariance and distance correlation have been widely adopted in measuring dependence of a pair of random variables or random vectors. If the computation of distance covariance and distance correlation is implemented directly according to its definition, then its computational complexity is O($n^2$), which is a disadvantage compared to other faster methods. In this paper we show that the computation of distance covariance and distance correlation of real valued random variables can be...
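The direct, definition-based computation (the O(n^2) baseline the paper improves upon) can be sketched for real-valued samples as follows:

```python
import math

def dcor(x, y):
    """Sample distance correlation of two real-valued samples, computed
    directly from the definition: double-centred distance matrices A and B,
    dCov^2 = mean(A*B), dCor = dCov / sqrt(dVar_x dVar_y). O(n^2) memory/time."""
    n = len(x)
    def centred(v):
        d = [[abs(a - b) for b in v] for a in v]
        row = [sum(r) / n for r in d]
        grand = sum(row) / n
        return [[d[i][j] - row[i] - row[j] + grand for j in range(n)]
                for i in range(n)]
    A, B = centred(x), centred(y)
    dcov2 = max(sum(A[i][j] * B[i][j] for i in range(n) for j in range(n)) / n**2,
                0.0)  # guard against float round-off; the V-statistic is >= 0
    dvarx = sum(a * a for r in A for a in r) / n**2
    dvary = sum(b * b for r in B for b in r) / n**2
    if dvarx == 0.0 or dvary == 0.0:
        return 0.0
    return math.sqrt(dcov2 / math.sqrt(dvarx * dvary))

x = [1.0, 2.0, 3.0, 4.0, 5.0]
print(round(dcor(x, [2 * v + 1 for v in x]), 6))  # 1.0 for a linear relation
print(round(dcor(x, [v * v for v in x]), 3))      # positive despite nonlinearity
```

The two nested distance matrices are exactly what the fast algorithm avoids materialising, which is how the quadratic cost is reduced.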
Distance-Based Phylogenetic Methods Around a Polytomy.
Davidson, Ruth; Sullivant, Seth
2014-01-01
Distance-based phylogenetic algorithms attempt to solve the NP-hard least-squares phylogeny problem by mapping an arbitrary dissimilarity map representing biological data to a tree metric. The set of all dissimilarity maps is a Euclidean space properly containing the space of all tree metrics as a polyhedral fan. Outputs of distance-based tree reconstruction algorithms such as UPGMA and neighbor-joining are points in the maximal cones in the fan. Tree metrics with polytomies lie at the intersections of maximal cones. A phylogenetic algorithm divides the space of all dissimilarity maps into regions based upon which combinatorial tree is reconstructed by the algorithm. Comparison of phylogenetic methods can be done by comparing the geometry of these regions. We use polyhedral geometry to compare the local nature of the subdivisions induced by least-squares phylogeny, UPGMA, and neighbor-joining when the true tree has a single polytomy with exactly four neighbors. Our results suggest that in some circumstances, UPGMA and neighbor-joining poorly match least-squares phylogeny.
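UPGMA itself is compact enough to sketch; a minimal implementation over a dissimilarity map (the input values are illustrative, not data from the paper) is:

```python
def upgma(dist, labels):
    """Minimal UPGMA agglomeration over a dissimilarity map.
    `dist[(i, j)]` holds the dissimilarity between labels i and j.
    Returns the reconstructed topology as nested tuples."""
    d = {frozenset(k): v for k, v in dist.items()}
    size = {l: 1 for l in labels}
    nodes = list(labels)
    while len(nodes) > 1:
        # merge the closest pair of current clusters
        a, b = min(((p, q) for i, p in enumerate(nodes) for q in nodes[i + 1:]),
                   key=lambda pair: d[frozenset(pair)])
        merged = (a, b)
        size[merged] = size[a] + size[b]
        for c in nodes:
            if c != a and c != b:
                # size-weighted average of cluster distances = mean over
                # all leaf pairs, which is the UPGMA update rule
                dac, dbc = d[frozenset((a, c))], d[frozenset((b, c))]
                d[frozenset((merged, c))] = (size[a] * dac + size[b] * dbc) \
                                            / (size[a] + size[b])
        nodes = [c for c in nodes if c != a and c != b] + [merged]
    return nodes[0]

dist = {("A", "B"): 2, ("A", "C"): 6, ("A", "D"): 6.5,
        ("B", "C"): 6, ("B", "D"): 6.5, ("C", "D"): 3}
print(upgma(dist, ["A", "B", "C", "D"]))  # (('A', 'B'), ('C', 'D'))
```

Perturbing such an input map around a point where several merge choices tie is precisely what moves it between the polyhedral regions compared in the paper.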
Directory of Open Access Journals (Sweden)
Matthew R. Longo
2016-11-01
Full Text Available Both tactile distance perception and position sense are believed to require that immediate afferent signals be referenced to a stored representation of body size and shape (the body model. For both of these abilities, recent studies have reported that the stored body representations involved are highly distorted, at least in the case of the hand, with the hand dorsum represented as wider and squatter than it actually is. Here, we investigated whether individual differences in the magnitude of these distortions are shared between tactile distance perception and position sense, as would be predicted by the hypothesis that a single distorted body model underlies both tasks. We used an established task to measure distortions of the represented shape of the hand dorsum. Consistent with previous results, in both cases there were clear biases to overestimate distances oriented along the medio-lateral axis of the hand compared to the proximo-distal axis. Moreover, within each task there were clear split-half correlations, demonstrating that both tasks show consistent individual differences. Critically, however, there was no correlation between the magnitudes of distortion in the two tasks. This casts doubt on the proposal that a common body model underlies both tactile distance perception and position sense.
Relative ordering of square-norm distance correlations in open quantum systems
International Nuclear Information System (INIS)
Wu Tao; Song Xue-Ke; Ye Liu
2014-01-01
We investigate the square-norm distance correlation dynamics of the Bell-diagonal states under different local decoherence channels, including phase flip, bit flip, and bit-phase flip channels, by employing the geometric discord (GD) and its modified geometric discord (MGD) as the measures of the square-norm distance correlations. Moreover, an explicit comparison between them is made in detail. The results show that there is no distinct dominant relative ordering between them. Furthermore, we find that the GD gradually decreases to zero, while the MGD initially has a large freezing interval and then changes suddenly during the evolution; the longer the freezing interval, the smaller the MGD. Interestingly, it is shown that the dynamic behaviors of the two geometric discords under the three noisy environments for the Werner-type initial states are the same. (general)
Lifetime measurements using the recoil distance method - achievements and perspectives
International Nuclear Information System (INIS)
Kruecken, R.
2001-01-01
The recoil distance method (RDM) for measuring pico-second nuclear level lifetimes and its use in nuclear structure studies is reviewed and perspectives for the future are presented. High precision measurements in the mass-130 region, studies of multi-phonon states in rare earth nuclei, the investigation of shape coexistence and the recently discovered phenomenon of 'magnetic rotation' are reviewed. Prospects for lifetime measurements in exotic regions of nuclei such as the measurement of lifetimes in neutron rich nuclei populated via spontaneous and heavy-ion induced fission are discussed. Other prospects include the use of the RDM technique in conjunction with recoil separators. The relevance of these techniques for experiments with radioactive ion beams will be discussed
Cognitive assessment in mathematics with the least squares distance method.
Ma, Lin; Çetin, Emre; Green, Kathy E
2012-01-01
This study investigated the validation of comprehensive cognitive attributes of an eighth-grade mathematics test using the least squares distance method and compared performance on attributes by gender and region. A sample of 5,000 students was randomly selected from the data of the 2005 Turkish national mathematics assessment of eighth-grade students. Twenty-five math items were assessed for the presence or absence of 20 cognitive attributes (content, cognitive processes, and skill). Four attributes were found to be misspecified or nonpredictive. However, results demonstrated the validity of the cognitive attributes in terms of the revised set of 17 attributes. The girls performed similarly to the boys on the attributes. The students from the two eastern regions significantly underperformed on most attributes.
Current correlators in QCD: Operator product expansion versus large distance dynamics
International Nuclear Information System (INIS)
Shevchenko, V.I.; Simonov, Yu.A.
2004-01-01
We analyze the structure of current-current correlators in coordinate space in the large-N_c limit, when the corresponding spectral density takes the form of an infinite sum over hadron poles. The latter are computed in the QCD string model with quarks at the ends, including the lowest states, for all channels. The corresponding correlators demonstrate reasonable qualitative agreement with the lattice data without any additional fits. Different issues concerning the structure of the short-distance operator product expansion are discussed.
Directory of Open Access Journals (Sweden)
J. Yazdani
2005-02-01
Statement of Problem: In spite of the limitations of radiography, diagnosing periodontal diseases without accurate radiographs is inadequate, because radiography provides the clinician with a visible image of the supporting bone and serves as a fixed measure of the supporting bone during the study. Purpose: The aim of this study is to compare the precision of periapical, bitewing and panoramic radiographs in determining the distance between the alveolar crest (AC) and the cementoenamel junction (CEJ) of teeth. Materials and Methods: Statistically this is a survey study, in which 120 interproximal surfaces of teeth were measured during surgery by periodontal probing and recorded as the actual measurement. Then 40 sites underwent bitewing, 40 sites periapical and 40 others panoramic radiography, and the distance from the CEJ to the alveolar crest of bone was measured on the radiographs with a periodontal probe and recorded. Each group was then analyzed separately, and Pearson's correlation coefficient was computed for the data. Results: The results of this study showed that when the thickness of the remaining bone within a millimeter is important to the surgeon, bitewing radiography is of prime importance; but when bone loss is moderate, panoramic radiography, showing 89% of the cases close to the actual measure, can be acceptable. On the other hand, in anterior sites, for determining bone alteration, periapical radiography, with a correlation coefficient of 0.93, is superior to panoramic radiography, with a correlation coefficient of 0.72, and we suggest it for examining bone changes in these sites. Conclusion: Whenever the bone alteration is moderate or severe, bitewing radiography seems to be of particular importance; but when the bone loss is little, panoramic radiography can be used and there is no need to expose the patient to unnecessary radiation.
Correlates of distances traveled to use recreational facilities for physical activity behaviors
Directory of Open Access Journals (Sweden)
Bulsara Max
2006-07-01
Abstract Background Information regarding how far people are willing to travel to use destinations for different types of recreational physical activity is limited. This study examines the demographic characteristics, neighborhood opportunities and specific physical activity behaviors associated with distances traveled to destinations used for recreational physical activity. Methods A secondary analysis was undertaken of data (n = 1006) from a survey of Western Australian adults. Road network distances between respondents' homes and (1) formal recreational facilities; (2) beaches and rivers; and (3) parks and ovals used for physical activity were determined. Associations between distances to destinations and demographic characteristics, neighborhood opportunity (number of destinations within 1600 meters of the household), and physical activity behaviors were examined. Results Overall, 56.3% of respondents had used a formal recreational facility, 39.9% a beach or river, and 38.7% a park or oval. The mean distance traveled to all destinations used for physical activity was 5463 ± 5232 meters (m). Distances traveled to formal recreational facilities, beaches and rivers, and parks and ovals differed depending on the physical activity undertaken. Younger adults traveled further than older adults (7311.8 vs. 6012.6 m, p = 0.03) to use beaches and rivers, as did residents of socio-economically disadvantaged areas compared with those in advantaged areas (8118.0 vs. 7311.8 m, p = 0.02). Club members traveled further than non-members to use parks and ovals (4156.3 vs. 3351.6 m, p = 0.02). The type of physical activity undertaken at a destination and the number of neighborhood opportunities were also associated with distance traveled for all destination types. Conclusion The distances adults travel to a recreational facility depend on demographic characteristics, destination type, the physical activity behavior undertaken at that destination, and the number of neighborhood opportunities.
Macia, Didac; Pujol, Jesus; Blanco-Hinojo, Laura; Martínez-Vilavella, Gerard; Martín-Santos, Rocío; Deus, Joan
2018-04-24
There is ample evidence from basic research in neuroscience of the importance of local cortico-cortical networks. Millimetric resolution is achievable with current functional MRI (fMRI) scanners and sequences, and consequently a number of "local" activity similarity measures have been defined to describe patterns of segregation and integration at this spatial scale. We have introduced the use of Iso-Distant local Average Correlation (IDAC), easily defined as the average fMRI temporal correlation of a given voxel with other voxels placed at increasingly separated iso-distant intervals, to characterize the curve of local fMRI signal similarities. IDAC curves can be statistically compared using parametric multivariate statistics. Furthermore, by using RGB color-coding to jointly display IDAC values belonging to three different distance lags, IDAC curves can also be displayed as multi-distance IDAC maps. We applied IDAC analysis to a sample of 41 subjects scanned under two different conditions, a resting state and an audio-visual continuous stimulation. Multi-distance IDAC mapping was able to discriminate between gross anatomo-functional cortical areas and, moreover, was sensitive to modulation between the two brain conditions in areas known to activate and de-activate during audio-visual tasks. Unlike previous fMRI local similarity measures already in use, our approach draws special attention to the continuous smooth pattern of local functional connectivity.
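As a concrete illustration of the IDAC definition above, a minimal sketch on synthetic data (the voxel grid, time series, and distance bins are invented here; this is not the authors' pipeline):

```python
import numpy as np

def idac_curve(ts, coords, seed_idx, bin_edges):
    """Iso-Distant local Average Correlation: mean temporal correlation of a
    seed voxel with all voxels whose distance from it falls in each bin.
    ts: (n_voxels, n_timepoints), coords: (n_voxels, 3)."""
    d = np.linalg.norm(coords - coords[seed_idx], axis=1)
    # temporal correlation of the seed with every voxel's time series
    r = np.array([np.corrcoef(ts[seed_idx], ts[i])[0, 1] for i in range(len(ts))])
    curve = []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        mask = (d >= lo) & (d < hi) & (np.arange(len(ts)) != seed_idx)
        curve.append(r[mask].mean() if mask.any() else np.nan)
    return np.array(curve)

# toy 5x5x5 voxel grid with random time series; seed at the grid center
rng = np.random.default_rng(0)
coords = np.array([(x, y, z) for x in range(5) for y in range(5) for z in range(5)], float)
ts = rng.standard_normal((125, 200))
curve = idac_curve(ts, coords, seed_idx=62, bin_edges=[0.5, 1.5, 2.5, 3.5])
print(curve)  # one IDAC value per distance lag
```

Three such values per voxel, mapped to RGB channels, would give the multi-distance map described above.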
New developments of the recoil distance doppler-shift method
Energy Technology Data Exchange (ETDEWEB)
Fransen, Christoph; Blazhev, Andrey; Braunroth, Thomas; Dewald, Alfred; Goldkuhle, Alina; Jolie, Jan; Litzinger, Julia; Mueller-Gatermann, Claus; Woelk, Dorothea; Zell, Karl-Oskar [Institut fuer Kernphysik, Universitaet zu Koeln (Germany)
2016-07-01
The recoil distance Doppler-shift (RDDS) method is a very valuable technique for measuring lifetimes of excited nuclear states in the picosecond range, to deduce absolute transition strengths between nuclear excitations independent of the reaction mechanism. Dedicated plunger devices were built by our group for measurements with this method over a broad range of beam energies, from a few MeV/u up to relativistic energies of the order of 100 MeV/u. These were designed to match the constraints defined by state-of-the-art γ-ray spectrometers such as AGATA, Galileo and Gammasphere. Here we give an overview of recent experiments by our group to determine transition strengths from level lifetimes in exotic nuclei, in which recoil separators or mass spectrographs were also used to identify the recoiling reaction products. The aim is to learn about phenomena such as shape phase coexistence in exotic regions and the evolution of the shell structure far from the valley of stability. We also review new plunger devices being developed by our group for future experimental campaigns with stable and radioactive beams in different energy regimes, e.g., a plunger for HIE-ISOLDE.
Analysis of the Argonne distance tabletop exercise method.
Energy Technology Data Exchange (ETDEWEB)
Tanzman, E. A.; Nieves, L. A.; Decision and Information Sciences
2008-02-14
The purpose of this report is to summarize and evaluate the Argonne Distance Tabletop Exercise (DISTEX) method. DISTEX is intended to facilitate multi-organization, multi-objective tabletop emergency response exercises that permit players to participate from their own facility's incident command center. This report is based on experience during its first use during the FluNami 2007 exercise, which took place from September 19-October 17, 2007. FluNami 2007 exercised the response of local public health officials and hospitals to a hypothetical pandemic flu outbreak. The underlying purpose of the DISTEX method is to make tabletop exercising more effective and more convenient for playing organizations. It combines elements of traditional tabletop exercising, such as scenario discussions and scenario injects, with distance learning technologies. This distance-learning approach also allows playing organizations to include a broader range of staff in the exercise. An average of 81.25 persons participated in each weekly webcast session from all playing organizations combined. The DISTEX method required development of several components. The exercise objectives were based on the U.S. Department of Homeland Security's Target Capabilities List. The ten playing organizations included four public health departments and six hospitals in the Chicago area. An extent-of-play agreement identified the objectives applicable to each organization. A scenario was developed to drive the exercise over its five-week life. Weekly problem-solving task sets were designed to address objectives that could not be addressed fully during webcast sessions, as well as to involve additional playing organization staff. Injects were developed to drive play between webcast sessions, and, in some cases, featured mock media stories based in part on player actions as identified from the problem-solving tasks. The weekly 90-minute webcast sessions were discussions among the playing organizations
Brain dynamics that correlate with effects of learning on auditory distance perception
Directory of Open Access Journals (Sweden)
Matthew G. Wisniewski
2014-12-01
Accuracy in auditory distance perception can improve with practice and varies for sounds differing in familiarity. Here, listeners were trained to judge the distances of English, Bengali, and backwards speech sources pre-recorded at near (2-m) and far (30-m) distances. Listeners' accuracy was tested before and after training. Improvements from pre-test to post-test were greater for forward speech, demonstrating a learning advantage for forward speech sounds. Independent component (IC) processes identified in electroencephalographic (EEG) data collected during pre- and post-testing revealed three clusters of ICs across subjects with stimulus-locked spectral perturbations related to learning and accuracy. One cluster exhibited a transient stimulus-locked increase in 4-8 Hz power (theta event-related synchronization; ERS) that was smaller after training and largest for backwards speech. For a left temporal cluster, 8-12 Hz decreases in power (alpha event-related desynchronization; ERD) were greatest for English speech and less prominent after training. In contrast, a cluster of IC processes centered at or near anterior portions of the medial frontal cortex showed learning-related enhancement of sustained increases in 10-16 Hz power (upper-alpha/low-beta ERS). The degree of this enhancement was positively correlated with the degree of behavioral improvement. Results suggest that neural dynamics in non-auditory cortical areas support distance judgments. Further, frontal cortical networks associated with attentional and/or working memory processes appear to play a role in perceptual learning of source distance.
Method of drying long-distance pipelines in sections
Energy Technology Data Exchange (ETDEWEB)
Steinhaus, H.; Meiners, D.
1989-04-11
This invention provides a method of drying long-distance pipelines in sections using a vacuum, giving high-quality drying over the whole length of the pipeline in a manageable and easily monitored process. Evacuation of the pipeline is effected by means of a vacuum pump located at least at one point of the section of pipeline. The section is subsequently scavenged or flooded with scavenging gas. After a predetermined reduced pressure is reached, and while the vacuum pump continues to draw off, scavenging is effected from the end or ends remote from the evacuation point, with a molar flow rate of the stream of scavenging gas that is equal to or less than the evacuation stream in throughput, at least initially. The scavenging is effected not from the evacuation point but from a remote point, and with a feed speed or feed amount that is throttled at least initially. This ensures that no condensation occurs, even on the inner walls of the pipeline.
The relationship between glass ceiling and power distance as a cultural variable by a new method
Directory of Open Access Journals (Sweden)
Naide Jahangirov
2015-12-01
Glass ceiling symbolizes a variety of barriers and obstacles that arise from gender inequality in business life. With this in mind, culture influences gender dynamics. The purpose of this research was to examine the relationship between the glass ceiling and power distance as a cultural variable within organizations. Gender is taken as a moderator variable in the relationship between these concepts. In addition to conventional correlation analysis, we employed a new method to investigate this relationship in detail. The survey data were obtained from 109 people working at a research center operated as part of a non-profit private university in Ankara, Turkey. The relationship between the variables was revealed by a new method developed as an addition to the correlation analysis of the survey. The analysis revealed that female staff perceived the glass ceiling and power distance more intensely than male staff. In addition, a medium-level relationship was found between power distance and glass ceiling perception among female staff.
Directory of Open Access Journals (Sweden)
Mihaela Marin
2011-09-01
The aim of this study is to assess the influence of different interimplant distances on prosthetic complications in two-implant mandibular overdenture treatments, as well as possible correlations between such complications and certain anatomic and functional characteristics of the patients. Materials and method. An observational clinical study was conducted between October 2008 and March 2010 in the Clinics of Dental Prosthetics of the “Carol Davila” UMF of Bucuresti on 32 completely edentulous patients treated with two-implant mandibular overdentures. The patients (24 women and 8 men, with ages between 49 and 83 years) were divided into 2 groups according to the position of the implants, inserted at the level of the lateral incisor (group 1) or in posterior position versus the canine (group 2). The prosthetic aspects, the peri-implant tissues and the anchoring systems were evaluated after 6 months and 1 year of treatment, and all prosthetic or biological complications, as well as the number of visits necessary for solving them, were recorded. Results. As to the general characteristics of the group of patients, the average age was 63.8 years; most patients demonstrated severe resorption of the crest, oval in shape in the mandibular frontal area, and belonged mostly to the IInd hypo- or hyper-divergent skeletal class, with or without a tendency towards propulsion. For both groups of patients, a total of 114 prosthetic complications were registered after one year, the most frequent being occlusal problems (23.68%), the presence of decubitus lesions (21.05%) and deactivation of matrices accompanied by reduced retention (19.29%). Lower ratios were recorded for: the necessity of prosthesis relining (14%), loss of matrices (12.28%), fracturing of the prosthesis (8.77%), presence of peri-implant gingival hyperplasias (7%) and loosening of the patrix screw (5
Evaluation of an inverse distance weighting method for patching ...
African Journals Online (AJOL)
2016-07-03
Directory of Open Access Journals (Sweden)
Weitere Markus
2009-06-01
Abstract Background Although some mechanisms of habitat adaptation of conspecific populations have recently been elucidated, the evolution of female preference has rarely been addressed as a force driving habitat adaptation in natural settings. Habitat adaptation of fire salamanders (Salamandra salamandra), as found in Middle Europe (Germany), can be framed in an explicit phylogeographic framework that allows the evolution of habitat adaptation between distinct populations to be traced. Typically, females of S. salamandra deposit their larvae only in small permanent streams. However, some populations of the western post-glacial recolonization lineage use small temporary ponds as larval habitats. Pond larvae display several habitat-specific adaptations that are absent in stream-adapted larvae. We conducted mate preference tests with females from three distinct German populations in order to determine the influence of habitat adaptation versus neutral genetic distance on female mate choice. Two populations that we tested belong to the western post-glacial recolonization group, but are adapted to either stream or pond habitats. The third population is adapted to streams but represents the eastern recolonization lineage. Results Despite large genetic distances, with F_ST values around 0.5, the stream-adapted females preferred males from the same habitat type regardless of genetic distance. Conversely, pond-adapted females did not prefer males from their own population when compared to stream-adapted individuals of either lineage. Conclusion A comparative analysis of our data showed that habitat adaptation, rather than neutral genetic distance, correlates with female preference in these salamanders, and that habitat-dependent female preference of a specific pond-reproducing population may have been lost during adaptation to the novel environmental conditions of ponds.
Directory of Open Access Journals (Sweden)
Jinhong Noh
2016-04-01
Obstacle avoidance methods require knowledge of the distance between a mobile robot and obstacles in the environment. However, in stochastic environments, distance determination is difficult because objects have position uncertainty. The purpose of this paper is to determine the distance between a robot and obstacles represented by probability distributions. Distance determination for obstacle avoidance should consider position uncertainty, computational cost and collision probability. The proposed method considers all of these conditions, unlike conventional methods. It determines the obstacle region using a collision probability density threshold. Furthermore, it defines a minimum distance function to the boundary of the obstacle region with a Lagrange multiplier method. Finally, it computes the distance numerically. Simulations were executed in order to compare the performance of the distance determination methods. Our method demonstrated faster and more accurate performance than conventional methods. It may help overcome position uncertainty issues pertaining to obstacle avoidance, such as low-accuracy sensors, environments with poor visibility, or unpredictable obstacle motion.
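A minimal sketch of the idea, assuming a 2-D Gaussian position distribution: the obstacle region is the set where the density exceeds a threshold, and here its elliptical boundary is sampled densely instead of solving the paper's Lagrange-multiplier system:

```python
import numpy as np

def distance_to_obstacle(robot, mean, cov, density_thresh, n=3600):
    """Minimum distance from a robot position to the boundary of the obstacle
    region {x : gaussian_pdf(x) >= density_thresh} in 2-D. The boundary is an
    ellipse of constant Mahalanobis distance, sampled densely here."""
    det = np.linalg.det(cov)
    inv = np.linalg.inv(cov)
    norm_const = 1.0 / (2 * np.pi * np.sqrt(det))
    # pdf(x) = thresh  <=>  squared Mahalanobis distance = m2
    m2 = -2.0 * np.log(density_thresh / norm_const)
    if m2 <= 0:
        raise ValueError("threshold above density peak: empty obstacle region")
    diff = robot - mean
    if diff @ inv @ diff <= m2:
        return 0.0  # robot is inside the obstacle region
    L = np.linalg.cholesky(cov)
    t = np.linspace(0, 2 * np.pi, n, endpoint=False)
    boundary = mean + (np.sqrt(m2) * (L @ np.vstack([np.cos(t), np.sin(t)]))).T
    return float(np.min(np.linalg.norm(boundary - robot, axis=1)))

d = distance_to_obstacle(np.array([5.0, 0.0]), np.zeros(2), np.eye(2), 0.05)
print(d)
```

For an isotropic Gaussian this reduces to the Euclidean distance minus the threshold radius, which makes the sketch easy to sanity-check.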
Image correlation method for DNA sequence alignment.
Curilem Saldías, Millaray; Villarroel Sassarini, Felipe; Muñoz Poblete, Carlos; Vargas Vásquez, Asticio; Maureira Butler, Iván
2012-01-01
The complexity of searches and the volume of genomic data make sequence alignment one of bioinformatics' most active research areas. New alignment approaches have incorporated digital signal processing techniques. Among these, correlation methods are highly sensitive. This paper proposes a novel sequence alignment method based on 2-dimensional images, where each nucleic acid base is represented as a pixel of fixed gray intensity. Query and known database sequences are coded to their pixel representations, and sequence alignment is handled as an object-recognition-in-a-scene problem: the query and database become the object and scene, respectively. An image correlation process is carried out in order to search for the best match between them. Given that this procedure can be implemented in an optical correlator, the correlation could eventually be accomplished at light speed. This paper presents an initial research stage in which results were "digitally" obtained by simulating an optical correlation of DNA sequences represented as images. A total of 303 queries (with lengths varying from 50 to 4500 base pairs) and 100 scenes, each represented by a 100 x 100 image (in total, a one-million-base-pair database), were considered for the image correlation analysis. The results showed that correlations reached very high sensitivity (99.01%) and specificity (98.99%) and outperformed BLAST when mutation numbers increased. However, digital correlation processes were a hundred times slower than BLAST. We are currently starting an initiative to evaluate the correlation speed of a real experimental optical correlator. By doing this, we expect to fully exploit the light-speed properties of optical correlation. As the optical correlator works jointly with the computer, the digital algorithms should also be optimized. The results presented in this paper are encouraging and support the study of image correlation methods for sequence alignment.
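A 1-D stand-in for the described image-correlation idea (the gray-level coding below is hypothetical; the paper uses 2-D images and does not specify these intensity values):

```python
import numpy as np

# Hypothetical gray-level coding for the four bases (illustrative only).
GRAY = {"A": 0.25, "C": 0.50, "G": 0.75, "T": 1.00}

def encode(seq):
    return np.array([GRAY[b] for b in seq])

def best_alignment(query, database):
    """Slide the query over the database and return the offset with the
    highest correlation score: a 1-D stand-in for the paper's 2-D
    object-in-scene image correlation."""
    mean_level = np.mean(list(GRAY.values()))
    q = encode(query) - mean_level      # zero-mean templates avoid a bias
    d = encode(database) - mean_level   # toward bright (T/G-rich) windows
    scores = np.correlate(d, q, mode="valid")
    return int(np.argmax(scores))

db = "GGGGACGTACGTGGGG"
offset = best_alignment("ACGTACGT", db)
print(offset)  # -> 4
```

An optical correlator would perform the same sliding inner product in the Fourier domain, which is where the claimed light-speed advantage comes from.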
COOPERATIVE LEARNING IN DISTANCE LEARNING: A MIXED METHODS STUDY
Directory of Open Access Journals (Sweden)
Lori Kupczynski
2012-07-01
Distance learning has facilitated innovative means of including Cooperative Learning (CL) in virtual settings. This study, conducted at a Hispanic-Serving Institution, compared the effectiveness of online CL strategies in discussion forums with that of traditional online forums. Quantitative and qualitative data were collected from 56 graduate student participants. Quantitative results revealed no significant difference in student success between the CL and traditional formats. The qualitative data revealed that students in the cooperative learning groups found more learning benefits than those in the traditional group. The study will help instructors and students in distance learning improve teaching and learning practices in the virtual classroom.
New method for distance-based close following safety indicator.
Sharizli, A A; Rahizar, R; Karim, M R; Saifizul, A A
2015-01-01
The increase in the number of fatalities caused by road accidents involving heavy vehicles every year has raised the level of concern and awareness of road safety in developing countries like Malaysia. Changes in vehicle dynamic characteristics such as gross vehicle weight, travel speed, and vehicle classification will affect a heavy vehicle's braking performance and its ability to stop safely in emergency situations. As such, the aim of this study is to establish a more realistic distance-based safety indicator, the minimum safe distance gap (MSDG), which incorporates vehicle classification (VC), speed, and gross vehicle weight (GVW). Commercial multibody dynamics simulation software was used to generate braking distance data for various heavy vehicle classes under various loads and speeds. By applying nonlinear regression analysis to the simulation results, a mathematical expression for the MSDG was established. The results show that the MSDG changes dynamically according to GVW, VC, and speed. It is envisaged that this new distance-based safety indicator will provide a more realistic depiction of the real traffic situation for safety analysis.
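The regression step can be sketched as follows, on synthetic braking-distance data (the quadratic model form, coefficients and noise level are illustrative assumptions, not the paper's simulation output):

```python
import numpy as np

# Synthetic braking-distance data standing in for the paper's multibody
# simulation output, for one vehicle class and gross vehicle weight.
rng = np.random.default_rng(1)
speeds = np.linspace(10, 30, 50)     # travel speed, m/s
true_a, true_b = 0.065, 0.9          # hypothetical coefficients
braking = true_a * speeds**2 + true_b * speeds + rng.normal(0, 0.5, speeds.size)

# Least-squares fit of d(v) = a*v^2 + b*v: the kind of regression used to
# turn simulated braking distances into a closed-form MSDG expression.
X = np.column_stack([speeds**2, speeds])
(a_hat, b_hat), *_ = np.linalg.lstsq(X, braking, rcond=None)

def msdg(v, reaction_time=1.0):
    """Minimum safe distance gap: reaction distance plus fitted braking distance."""
    return reaction_time * v + a_hat * v**2 + b_hat * v

print(msdg(25.0))
```

In the study, separate fits per vehicle class and GVW would make the gap a function of all three factors, as the abstract describes.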
Correlation of Lip Prints with Gender, ABO Blood Groups and Intercommissural Distance.
Verma, Pradhuman; Sachdeva, Suresh K; Verma, Kanika Gupta; Saharan, Swati; Sachdeva, Kompal
2013-07-01
In forensics, the mouth allows for a myriad of possibilities. A lip print on a glass or cigarette butt found at a crime scene may link to a suspect; hence, a dentist has an active role to play in personal identification and criminal investigation. The aim was to investigate the uniqueness of lip print patterns in relation to gender, ABO blood groups and intercommissural distance (ICD). The study was conducted on 208 randomly selected students. The lip print of each subject was obtained and the pattern was analyzed according to the Tsuchihashi classification. The blood group and the ICD at rest position were recorded for each subject. The study showed Type II (branched) to be the most prominent lip pattern. The B+ blood group was the most common in both genders, and the ICD was higher in males. However, the lip print pattern showed no correlation with gender, ABO blood group, or ICD. Further studies with larger samples are required to establish the statistical significance of this correlation.
Generalized correlation integral vectors: A distance concept for chaotic dynamical systems
Energy Technology Data Exchange (ETDEWEB)
Haario, Heikki, E-mail: heikki.haario@lut.fi [School of Engineering Science, Lappeenranta University of Technology, Lappeenranta (Finland); Kalachev, Leonid, E-mail: KalachevL@mso.umt.edu [Department of Mathematical Sciences, University of Montana, Missoula, Montana 59812-0864 (United States); Hakkarainen, Janne [Earth Observation Unit, Finnish Meteorological Institute, Helsinki (Finland)
2015-06-15
Several concepts of fractal dimension have been developed to characterise properties of attractors of chaotic dynamical systems. Numerical approximations of them must be calculated from finite samples of simulated trajectories. In principle, the quantities should not depend on the choice of the trajectory, as long as it provides properly distributed samples of the underlying attractor. In practice, however, the trajectories are sensitive to varying initial values, small changes of the model parameters, the choice of solver, numerical tolerances, etc. The purpose of this paper is to present a statistically sound approach to quantify this variability. We modify the concept of the correlation integral to produce a vector that summarises the variability at all selected scales. The distribution of this stochastic vector can be estimated, and it provides a statistical distance concept between trajectories. Here, we demonstrate the use of the distance for the purpose of estimating model parameters of a chaotic dynamic model. The methodology is illustrated using computational examples for the Lorenz 63 and Lorenz 95 systems, together with a framework for Markov chain Monte Carlo sampling to produce posterior distributions of model parameters.
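A minimal sketch of a correlation integral vector for a Lorenz 63 sample (simple Euler integration and arbitrary radii chosen here for illustration; the paper's statistical machinery around this vector is not reproduced):

```python
import numpy as np

def lorenz63_trajectory(n, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Integrate Lorenz 63 with a plain Euler scheme (adequate for a sketch)."""
    x = np.array([1.0, 1.0, 1.0])
    out = np.empty((n, 3))
    for i in range(n):
        dx = np.array([sigma * (x[1] - x[0]),
                       x[0] * (rho - x[2]) - x[1],
                       x[0] * x[1] - beta * x[2]])
        x = x + dt * dx
        out[i] = x
    return out

def correlation_integral_vector(points, radii):
    """C(r) = fraction of point pairs closer than r, evaluated at each radius:
    a vector over selected scales that summarises the sampled attractor."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    pairs = d[np.triu_indices(len(points), k=1)]
    return np.array([np.mean(pairs < r) for r in radii])

traj = lorenz63_trajectory(2000)[::10]   # thin the sample to 200 points
C = correlation_integral_vector(traj, radii=[1.0, 2.0, 5.0, 10.0, 20.0])
print(C)  # non-decreasing in r, each entry in [0, 1]
```

Comparing such vectors across repeated trajectories (different initial values, solvers, parameters) is what yields the statistical distance concept described above.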
The correlation between cherry picking and the distance that consumers travel to do grocery shopping
Directory of Open Access Journals (Sweden)
Louise Van Scheers
2013-04-01
Retailers often use price promotions to discriminate between consumers who can shift purchases over time and those who cannot. Retailers consistently tend to charge lower prices than necessary, pricing defensively to prevent loyal customers from cherry picking or shifting to competitors. Knowledge about cherry-picking behaviour will enable retailers to obtain a higher share of disposable income from even price-sensitive shoppers, while at the same time charging higher prices. Recent studies indicate that effective cherry picking entails saving costs through price searching over time, price searching across stores, or both. This study examines the relationship between cherry picking and the distance that consumers travel to do grocery shopping. Interviews were conducted at ten different retail outlets over three days, and the results show that there is a highly significant correlation between cherry picking and the distance that consumers travel to do grocery shopping. These results should help retailers benefit from cherry picking by taking a proactive approach to store switching and store location, two of the main influences on cherry-picking behaviour.
Geerligs, Linda; Cam-Can; Henson, Richard N
2016-07-15
Studies of brain-wide functional connectivity or structural covariance typically use measures like the Pearson correlation coefficient, applied to data that have been averaged across voxels within regions of interest (ROIs). However, averaging across voxels may result in biased connectivity estimates when there is inhomogeneity within those ROIs, e.g., sub-regions that exhibit different patterns of functional connectivity or structural covariance. Here, we propose a new measure based on "distance correlation", a test of multivariate dependence between high-dimensional vectors which allows for both linear and non-linear dependencies. We used simulations to show how distance correlation out-performs Pearson correlation in the face of inhomogeneous ROIs. To evaluate this new measure on real data, we use resting-state fMRI scans and T1 structural scans from 2 sessions on each of 214 participants from the Cambridge Centre for Ageing & Neuroscience (Cam-CAN) project. Pearson correlation and distance correlation showed similar average connectivity patterns, for both functional connectivity and structural covariance. Nevertheless, distance correlation was shown to be 1) more reliable across sessions, 2) more similar across participants, and 3) more robust to different sets of ROIs. Moreover, we found that the similarity between functional connectivity and structural covariance estimates was higher for distance correlation compared to Pearson correlation. We also explored the relative effects of different preprocessing options and motion artefacts on functional connectivity. Because distance correlation is easy to implement and fast to compute, it is a promising alternative to Pearson correlation for investigating ROI-based brain-wide connectivity patterns, for functional as well as structural data.
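The distance correlation statistic itself is compact to implement. A minimal sketch on synthetic data, also illustrating the nonlinear-dependence claim against Pearson correlation (this is not the Cam-CAN analysis pipeline):

```python
import numpy as np

def dcor(x, y):
    """Sample distance correlation (Szekely-Rizzo-Bakirov): double-center the
    pairwise distance matrix of each sample, then correlate the two matrices.
    x, y: arrays of shape (n, p) and (n, q)."""
    def centered(a):
        d = np.linalg.norm(a[:, None, :] - a[None, :, :], axis=-1)
        return d - d.mean(axis=0) - d.mean(axis=1)[:, None] + d.mean()
    A, B = centered(x), centered(y)
    dcov2 = (A * B).mean()  # squared sample distance covariance (non-negative)
    return np.sqrt(dcov2 / np.sqrt((A * A).mean() * (B * B).mean()))

rng = np.random.default_rng(0)
x = rng.standard_normal((500, 1))
y = x**2 + 0.1 * rng.standard_normal((500, 1))   # nonlinear dependence
pearson = np.corrcoef(x[:, 0], y[:, 0])[0, 1]
print(pearson, dcor(x, y))  # Pearson near zero, distance correlation clearly not
```

Because the inputs are whole (n, p) matrices, the same function applies directly to multi-voxel ROI data without averaging, which is the point made above.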
Bal, A.; Alam, M. S.; Aslan, M. S.
2006-05-01
Often sensor ego-motion or fast target movement causes the target to temporarily go out of the field of view, leading to the reappearing-target detection problem in target tracking applications. Since the target leaves the current frame and reenters at a later frame, the reentering location and the variations in rotation, scale, and other 3D orientations of the target are not known, which complicates detection. A detection algorithm has been developed using the Fukunaga-Koontz Transform (FKT) and a distance classifier correlation filter (DCCF). The detection algorithm uses target and background information, extracted from training samples, to detect possible candidate target images. The detected candidate target images are then introduced into the second algorithm, the DCCF, called the clutter rejection module; once a candidate is confirmed, the target coordinates are detected and the tracking algorithm is initiated. The performance of the proposed FKT-DCCF based target detection algorithm has been tested using real-world forward looking infrared (FLIR) video sequences.
Directory of Open Access Journals (Sweden)
Flávia Melo Rodrigues
1998-06-01
The geographic structure of genetic distances among local populations within species, based on allozyme data, has usually been evaluated by clustering genetic distances with hierarchical algorithms, such as the unweighted pair-group method with arithmetic averages (UPGMA). The distortion produced in the clustering process is estimated by the cophenetic correlation coefficient. This hierarchical approach, however, can fail to produce an accurate representation of genetic distances among populations in a low-dimensional space, especially when continuous (clinal) or reticulate patterns of variation exist. In the present study, we analyzed 50 genetic distance matrices from the literature, for animal taxa ranging from Platyhelminthes to Mammalia, in order to determine in which situations UPGMA is useful for understanding patterns of genetic variation among populations. The cophenetic correlation coefficients, derived from UPGMA based on three types of genetic distance coefficients, were correlated with other parameters of each matrix, including number of populations, loci, and alleles, maximum geographic distance among populations, relative magnitude of the first eigenvalue of the covariance matrix among alleles, and logarithm of body size. Most cophenetic correlations were higher than 0.80, and the highest values appeared for Nei's and Rogers' genetic distances. The relationship between the cophenetic correlation coefficients and the other parameters analyzed was defined by an "envelope space", forming triangles in which higher values of cophenetic correlation are found for higher values of the parameters, though low parameter values do not necessarily correspond to high cophenetic correlations. We concluded that UPGMA is useful for describing genetic distances based on large distance matrices (in terms of number of populations or alleles), when the dimensionality of the system is low (matrices with large first eigenvalues) or when local populations are separated
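The UPGMA-plus-cophenetic-correlation procedure can be sketched with SciPy on an invented matrix of genetic distances (the values are illustrative, not from the 50 matrices analyzed):

```python
import numpy as np
from scipy.cluster.hierarchy import average, cophenet
from scipy.spatial.distance import squareform

# A small synthetic matrix of Nei-style genetic distances among five
# populations: populations 0-1 and 2-4 form two well-separated groups.
D = np.array([
    [0.00, 0.05, 0.30, 0.35, 0.32],
    [0.05, 0.00, 0.28, 0.33, 0.31],
    [0.30, 0.28, 0.00, 0.08, 0.10],
    [0.35, 0.33, 0.08, 0.00, 0.07],
    [0.32, 0.31, 0.10, 0.07, 0.00],
])
condensed = squareform(D)               # condensed form required by linkage
tree = average(condensed)               # UPGMA clustering
r_coph, _ = cophenet(tree, condensed)   # cophenetic correlation coefficient
print(round(r_coph, 3))
```

For cleanly hierarchical data like this, the cophenetic correlation is close to 1; clinal or reticulate variation would pull it down, which is the distortion the abstract discusses.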
On the problem of earthquake correlation in space and time over large distances
Georgoulas, G.; Konstantaras, A.; Maravelakis, E.; Katsifarakis, E.; Stylios, C. D.
2012-04-01
A quick examination of geographical maps with the epicenters of earthquakes marked on them reveals a strong tendency of these points to form compact clusters of irregular shapes and various sizes, often intersecting other clusters. According to [Saleur et al. 1996], "earthquakes are correlated in space and time over large distances". This implies that seismic sequences are not formed randomly but follow a spatial pattern with consequent triggering of events. Seismic cluster formation is believed to be due to underlying geological natural hazards, which: a) act as the energy storage elements of the phenomenon, and b) tend to form a complex network of numerous interacting faults [Vallianatos and Tzanis, 1998]. Therefore, it is imperative to "isolate" meaningful structures (clusters) in order to mine information regarding the underlying mechanism and, at a second stage, to test the causality effect implied by what is known as the Domino theory [Burgman, 2009]. Ongoing work by Konstantaras et al. 2011 and Katsifarakis et al. 2011 on clustering seismic sequences in the area of the Southern Hellenic Arc, and progressively throughout the Greek vicinity and the entire Mediterranean region, based on an explicit segmentation of the data by both their temporal and spatial stamps, following modelling assumptions proposed by Dobrovolsky et al. 1989 and Drakatos et al. 2001, managed to identify geologically validated seismic clusters. These results suggest that the time component should be included as a dimension during the clustering process, as seismic cluster formation is dynamic and the emerging clusters propagate in time. Another issue that has not yet been investigated explicitly is the role of the magnitude of each seismic event. In other words, a major seismic event should be treated differently from pre- or post-seismic sequences. Moreover, the sometimes irregular and elongated shapes that appear on geophysical maps mean that clustering algorithms
Elbakary, M. I.; Alam, M. S.; Aslan, M. S.
2008-03-01
In a FLIR image sequence, a target may disappear permanently or may reappear after some frames, and crucial information such as the direction, position and size of the target is lost. If the target reappears in a later frame, it may not be tracked again because the 3D orientation, size and location of the target might have changed. To obtain information about the target before it disappears and to detect it after it reappears, the distance classifier correlation filter (DCCF) is trained manually by selecting a number of chips at random. This paper introduces a novel idea that eliminates the manual intervention in the training phase of the DCCF. Instead of selecting the training chips manually and choosing their number at random, we adopt the K-means algorithm to cluster the training frames and, based on the number of clusters, select one training chip per cluster. To detect and track the target after it reappears in the field of view, TBF and DCCF are employed. The conducted experiments using real FLIR sequences show results similar to the traditional algorithm, but eliminating the manual intervention is the advantage of the proposed algorithm.
Directory of Open Access Journals (Sweden)
Xiaobo Guo
Full Text Available Nonlinear dependence is common in the regulation mechanisms of gene regulatory networks (GRNs). It is vital to properly measure or test nonlinear dependence in real data for reconstructing GRNs and understanding the complex regulatory mechanisms within the cellular system. A recently developed measure called the distance correlation (DC) has been shown to be powerful and computationally efficient for capturing nonlinear dependence in many situations. In this work, we incorporate the DC into inferring GRNs from gene expression data without any underlying distribution assumptions. We propose three DC-based GRN inference algorithms: CLR-DC, MRNET-DC and REL-DC, and then compare them with mutual information (MI)-based algorithms by analyzing two simulated data sets—benchmark GRNs from the DREAM challenge and GRNs generated by the SynTReN network generator—as well as an experimentally determined SOS DNA repair network in Escherichia coli. According to both the receiver operating characteristic (ROC) curve and the precision-recall (PR) curve, our proposed algorithms significantly outperform the MI-based algorithms in GRN inference.
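The distance correlation statistic at the core of these DC-based algorithms can be computed directly from pairwise distance matrices. The following is a minimal sketch of the sample distance correlation of Székely and Rizzo for one-dimensional samples; it illustrates only the dependence measure itself, not the authors' full CLR-DC/MRNET-DC/REL-DC pipelines:

```python
import numpy as np

def distance_correlation(x, y):
    """Sample distance correlation between two 1-D samples.

    Returns a value in [0, 1]; the population version is 0
    only when x and y are independent.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    # Pairwise absolute-difference (Euclidean) distance matrices
    a = np.abs(x[:, None] - x[None, :])
    b = np.abs(y[:, None] - y[None, :])
    # Double-center each distance matrix
    A = a - a.mean(axis=0) - a.mean(axis=1)[:, None] + a.mean()
    B = b - b.mean(axis=0) - b.mean(axis=1)[:, None] + b.mean()
    # Sample distance covariance and distance variances
    dcov2 = (A * B).mean()
    dvar_x = (A * A).mean()
    dvar_y = (B * B).mean()
    if dvar_x * dvar_y == 0:
        return 0.0
    return float(np.sqrt(dcov2 / np.sqrt(dvar_x * dvar_y)))

rng = np.random.default_rng(0)
x = rng.normal(size=500)
print(distance_correlation(x, x**2))  # nonlinear dependence Pearson r would miss
```

In a GRN setting, this statistic would replace mutual information as the pairwise dependence score between the expression profiles of gene pairs.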
Gao, Bin; Liu, Wanyu; Wang, Liang; Liu, Zhengjun; Croisille, Pierre; Delachartre, Philippe; Clarysse, Patrick
2016-12-01
Cine-MRI is widely used for the analysis of cardiac function in clinical routine because of its high soft tissue contrast and relatively short acquisition time in comparison with other cardiac MRI techniques. The gray level distribution in cardiac cine-MRI is relatively homogeneous within the myocardium, which can make motion quantification difficult. To ensure that the motion estimation problem is well posed, more image features have to be considered. This work is inspired by a method previously developed for color image processing. The monogenic signal provides a framework to estimate the local phase, orientation, and amplitude of an image, three features which locally characterize the 2D intensity profile. The independent monogenic features are combined into a 3D matrix for motion estimation. To improve motion estimation accuracy, we chose the zero-mean normalized cross-correlation as a matching measure, and implemented a bilateral filter for denoising and edge preservation. The monogenic features distance is used in lieu of the color space distance in the bilateral filter. Results obtained from four realistic simulated sequences outperformed two other state-of-the-art methods, even in the presence of noise. The motion estimation errors (end-point error) using our proposed method were reduced by about 20% in comparison with those obtained by the other tested methods. The new methodology was evaluated on four clinical sequences from patients presenting with cardiac motion dysfunctions and one healthy volunteer. The derived strain fields compared favorably in their ability to identify myocardial regions with impaired motion.
Assessment of Groundwater Chemical Quality, Using Inverse Distance Weighted Method
Directory of Open Access Journals (Sweden)
Sh. Ashraf
2013-04-01
Full Text Available An interpolation technique, ordinary Inverse Distance Weighted (IDW), was used to obtain the spatial distribution of groundwater quality parameters in the Damghan plain of Iran. According to the Scofield guidelines for TDS, 60% of the water samples were harmful for irrigation purposes. Regarding the EC parameter, more than 60% of the studied area lay in the bad range for irrigation purposes. The most dominant anion was Cl-, and 10% of the water samples fell into a very hazardous class. According to the Doneen guidelines for chloride, 100% of the water collected from the aquifer had slight to moderate problems for irrigation purposes. The predominant cations in the Damghan plain aquifer followed the order Na+ > Ca++ > Mg++ > K+. Sodium was the dominant cation and, regarding the Na+ content guidelines, almost all groundwater samples were problematic for foliar application. The calcium ion distribution was within the usual range. The magnesium ion concentration is generally lower than sodium and calcium, and the majority of the samples showed Mg++ amounts within the usual range. The K+ value ranged from 0.1 to 0.23 meq/L, and all the water samples had potassium values within the permissible limit. Based on the SAR criterion, 80% of the collected water had slight to moderate problems. The SSP values ranged from 2.87 to 6.87%. According to the SAR value, thirty percent of the groundwater samples fell in the doubtful class. The estimated RSC values ranged from 0.4 to 2 and, based on the RSC criterion, twenty percent of the groundwater samples had slight to moderate problems.
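Ordinary inverse distance weighting estimates a value at an unsampled location as a distance-weighted average of nearby measurements. A minimal sketch follows, with hypothetical well coordinates and TDS values (not the study's data):

```python
import numpy as np

def idw(known_xy, known_vals, query_xy, power=2.0, eps=1e-12):
    """Ordinary inverse-distance-weighted interpolation at query points."""
    known_xy = np.asarray(known_xy, dtype=float)
    vals = np.asarray(known_vals, dtype=float)
    out = []
    for q in np.asarray(query_xy, dtype=float):
        d = np.linalg.norm(known_xy - q, axis=1)
        if d.min() < eps:            # query coincides with a sampled well
            out.append(vals[d.argmin()])
            continue
        w = 1.0 / d ** power         # weights decay with distance
        out.append(float((w * vals).sum() / w.sum()))
    return np.array(out)

# Hypothetical wells (x, y in km) with TDS measurements (mg/L)
wells = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
tds = [800.0, 1200.0, 900.0, 1500.0]
print(idw(wells, tds, [(5.0, 5.0), (0.0, 0.0)]))
```

At the center (5, 5) all wells are equidistant, so the estimate reduces to the plain average; at a sampled point the measured value is returned exactly.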
Williams, C.J.; Heglund, P.J.
2009-01-01
Habitat association models are commonly developed for individual animal species using generalized linear modeling methods such as logistic regression. We considered the issue of grouping species based on their habitat use so that management decisions can be based on sets of species rather than individual species. This research was motivated by a study of western landbirds in northern Idaho forests. The method we examined was to fit models to each species separately and to use a generalized Mahalanobis distance between coefficient vectors to create a distance matrix among species. Clustering methods were used to group species from the distance matrix, and multidimensional scaling methods were used to visualize the relations among species groups. Methods were also discussed for evaluating the sensitivity of the conclusions to outliers or influential data points. We illustrate these methods with data from the landbird study conducted in northern Idaho. Simulation results are presented to compare the success of this method to alternative methods using Euclidean distance between coefficient vectors and to methods that do not use habitat association models. These simulations demonstrate that our Mahalanobis-distance-based method was nearly always better than Euclidean-distance-based methods or methods not based on habitat association models. The methods used to develop candidate species groups are easily explained to other scientists and resource managers since they mainly rely on classical multivariate statistical methods. © 2008 Springer Science+Business Media, LLC.
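The core step, computing a Mahalanobis-type distance between fitted coefficient vectors, can be sketched as follows. The coefficient values and the pooled covariance matrix below are hypothetical illustrations, not data from the landbird study (where the covariance would come from the fitted models themselves):

```python
import numpy as np

# Hypothetical coefficient vectors (one row per species) from separately
# fitted habitat-association models; values are illustrative only.
coefs = np.array([
    [0.8, -1.2, 0.3],
    [0.9, -1.0, 0.4],
    [-0.5, 0.7, 1.1],
    [-0.4, 0.9, 1.0],
])

# Hypothetical pooled covariance matrix for the coefficients
cov = np.array([[1.0, 0.2, 0.0],
                [0.2, 1.0, 0.1],
                [0.0, 0.1, 1.0]])
cov_inv = np.linalg.inv(cov)

n = len(coefs)
D = np.zeros((n, n))  # generalized (Mahalanobis) distance matrix
for i in range(n):
    for j in range(n):
        diff = coefs[i] - coefs[j]
        D[i, j] = np.sqrt(diff @ cov_inv @ diff)

print(np.round(D, 2))
```

Hierarchical clustering would then be applied to the matrix `D` to form candidate species groups, with multidimensional scaling of `D` used for visualization.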
Directory of Open Access Journals (Sweden)
Lambert Marie-Ève
2012-06-01
Full Text Available Abstract Background Porcine reproductive and respiratory syndrome (PRRS) is a viral disease with a major economic impact on the swine industry. Its control is mostly directed towards preventing its spread, which requires a better understanding of the mechanisms of transmission of the virus between herds. The objectives of this study were to describe the genetic diversity and to assess the correlation among genetic, Euclidean and temporal distances and ownership, to better understand pathways of transmission. Results A cross-sectional study was conducted on sites located in a high-density area of swine production in Quebec. Geographical coordinates (longitude/latitude), date of submission and ownership were obtained for each site. ORF5 sequencing was attempted on PRRSV-positive sites. The proportion of pairwise combinations of strains having ≥98% genetic homology was analysed according to Euclidean distance and ownership. Correlations between genetic, Euclidean and temporal distances and ownership were assessed using Mantel tests on continuous and binary matrices. The sensitivity of the correlations between genetic and Euclidean as well as temporal distances was evaluated for different Euclidean and temporal distance thresholds. An ORF5 sequence was identified for 132 of the 176 (75%) PRRSV-positive sites; 122 were wild-type strains. The mean (min-max) genetic, Euclidean and temporal pairwise distances were 11.6% (0-18.7), 15.0 km (0.04-45.7) and 218 days (0-852), respectively. Significant positive correlations were observed between genetic and ownership distances, between genetic and Euclidean distances, and between genetic and temporal binary distances. The relationship between genetic distance and ownership suggests common sources of animals or semen, employees, technical services or vehicles, whereas that between genetic and Euclidean binary distances is compatible with area spread of the virus. The latter correlation was observed only up to 5 km. Conclusions This study
THE BOLOCAM GALACTIC PLANE SURVEY. VIII. A MID-INFRARED KINEMATIC DISTANCE DISCRIMINATION METHOD
Energy Technology Data Exchange (ETDEWEB)
Ellsworth-Bowers, Timothy P.; Glenn, Jason; Battersby, Cara; Ginsburg, Adam; Bally, John [CASA, University of Colorado, UCB 389, University of Colorado, Boulder, CO 80309 (United States); Rosolowsky, Erik [Department of Physics and Astronomy, University of British Columbia Okanagan, 3333 University Way, Kelowna, BC V1V 1V7 (Canada); Mairs, Steven [Department of Physics and Astronomy, University of Victoria, 3800 Finnerty Road, Victoria, BC V8P 1A1 (Canada); Evans, Neal J. II [Department of Astronomy, University of Texas, 1 University Station C1400, Austin, TX 78712 (United States); Shirley, Yancy L., E-mail: timothy.ellsworthbowers@colorado.edu [Steward Observatory, University of Arizona, 933 North Cherry Avenue, Tucson, AZ 85721 (United States)
2013-06-10
We present a new distance estimation method for dust-continuum-identified molecular cloud clumps. Recent (sub-)millimeter Galactic plane surveys have cataloged tens of thousands of these objects, plausible precursors to stellar clusters, but detailed study of their physical properties requires robust distance determinations. We derive Bayesian distance probability density functions (DPDFs) for 770 objects from the Bolocam Galactic Plane Survey in the Galactic longitude range 7.5° ≤ l ≤ 65°. The DPDF formalism is based on kinematic distances, and uses any number of external data sets to place prior distance probabilities to resolve the kinematic distance ambiguity (KDA) for objects in the inner Galaxy. We present here priors related to the mid-infrared absorption of dust in dense molecular regions and the distribution of molecular gas in the Galactic disk. By assuming a numerical model of Galactic mid-infrared emission and simple radiative transfer, we match the morphology of (sub-)millimeter thermal dust emission with mid-infrared absorption to compute a prior DPDF for distance discrimination. Selecting objects first from (sub-)millimeter source catalogs avoids a bias towards the darkest infrared dark clouds (IRDCs), extends the range of heliocentric distance probed by mid-infrared extinction, and includes lower-contrast sources. We derive well-constrained KDA resolutions for 618 molecular cloud clumps, with approximately 15% placed at or beyond the tangent distance. Objects with mid-infrared contrast sufficient to be cataloged as IRDCs are generally placed at the near kinematic distance. Distance comparisons with Galactic Ring Survey KDA resolutions yield a 92% agreement. A face-on view of the Milky Way using resolved distances reveals sections of the Sagittarius and Scutum-Centaurus Arms. This KDA-resolution method for large catalogs of sources through the combination of (sub-)millimeter and mid-infrared observations of molecular
The effect of uncertainties in distance-based ranking methods for multi-criteria decision making
Jaini, Nor I.; Utyuzhnikov, Sergei V.
2017-08-01
Data in multi-criteria decision making are often imprecise and changeable; therefore, it is important to carry out a sensitivity analysis of the multi-criteria decision making problem. This paper presents a sensitivity analysis for several ranking techniques based on distance measures in multi-criteria decision making. Two types of uncertainty are considered for the sensitivity analysis. The first relates to the input data, while the second concerns the decision maker's preferences (weights). The ranking techniques considered in this study are TOPSIS, the relative distance method and the trade-off ranking method. TOPSIS and the relative distance method measure the distance from an alternative to the ideal and anti-ideal solutions. In turn, trade-off ranking calculates the distance of an alternative to the extreme solutions and to the other alternatives. Several test cases are considered to study the performance of each ranking technique under both types of uncertainty.
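Of the three techniques, TOPSIS is the most standard and illustrates the distance-based ranking idea: alternatives are scored by their distances to the ideal and anti-ideal solutions. A minimal sketch with illustrative data (the decision matrix, weights and criterion types are assumptions, not from the paper):

```python
import numpy as np

def topsis(decision, weights, benefit):
    """Rank alternatives with TOPSIS.

    decision : (m alternatives x n criteria) matrix
    weights  : criterion weights summing to 1
    benefit  : True for benefit criteria, False for cost criteria
    Returns closeness coefficients in [0, 1]; higher is better.
    """
    decision = np.asarray(decision, dtype=float)
    # Vector normalization, then weighting
    norm = decision / np.linalg.norm(decision, axis=0)
    v = norm * np.asarray(weights)
    # Ideal and anti-ideal solutions per criterion
    benefit = np.asarray(benefit)
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_plus = np.linalg.norm(v - ideal, axis=1)   # distance to ideal
    d_minus = np.linalg.norm(v - anti, axis=1)   # distance to anti-ideal
    return d_minus / (d_plus + d_minus)

# Illustrative data: 3 alternatives, two benefit criteria, one cost criterion
scores = topsis([[7, 9, 300], [8, 7, 250], [9, 6, 400]],
                weights=[0.4, 0.3, 0.3],
                benefit=[True, True, False])
print(scores.argsort()[::-1])  # ranking, best first
```

A sensitivity test of the kind described in the abstract would perturb the decision matrix entries or the weights and observe whether the resulting ranking changes.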
Quasar Parallax: a Method for Determining Direct Geometrical Distances to Quasars
Elvis, Martin; Karovska, Margarita
2002-01-01
We describe a novel method to determine direct geometrical distances to quasars that can measure the cosmological constant, Lambda, with minimal assumptions. This method is equivalent to geometric parallax, with the `standard length' being the size of the quasar broad emission line region (BELR) as determined from the light travel time measurements of reverberation mapping. The effect of non-zero Lambda on angular diameter is large, 40% at z=2, so mapping angular diameter distances vs. redshi...
Gastroesophageal reflux - correlation between diagnostic methods
International Nuclear Information System (INIS)
Cruz, Maria das Gracas de Almeida; Penas, Maria Exposito; Fonseca, Lea Mirian Barbosa; Lemme, Eponina Maria O.; Martinho, Maria Jose Ribeiro
1999-01-01
A group of 97 individuals with typical symptoms of gastroesophageal reflux disease (GERD) was submitted to gastroesophageal reflux scintigraphy (GERS), and the results were compared with those obtained from endoscopy, histopathology and 24-hour pH-metry. Twenty-four healthy individuals served as a control group and underwent only the GERS. The results showed that: a) the difference in the reflux index (RI) between the control group and the patients was statistically significant (p < 0.0001); b) the comparison of GERS with the other methods gave the following results: sensitivity, 84%; specificity, 95%; positive predictive value, 98%; negative predictive value, 67%; accuracy, 87%. We conclude that the scintigraphic method should be used to confirm the diagnosis of GERD and is also recommended as an initial investigative procedure. (author)
Laaksonen, Sauli; Jokelainen, Pikka; Pusenius, Jyrki; Oksanen, Antti
2017-03-15
Slaughter reindeer are exposed to stress caused by gathering, handling, loading and unloading, and by conditions in vehicles during transport. These stress factors can lead to compromised welfare and trauma such as bruises or fractures, aspiration of rumen content, and abnormal odour in carcasses, causing condemnations in meat inspection and lower meat quality. We investigated the statistical association of slaughter transport distance with these indices using meat inspection data from the years 2004-2016, covering 669,738 reindeer originating from Finnish reindeer herding areas. Increased stress and decreased welfare of reindeer, as indicated by a higher incidence of carcass condemnation due to bruises or fractures, aspiration of rumen content, or abnormal odour, were associated with systems involving shorter transport distances to abattoirs. Significant differences in the incidence of condemnations were also detected between abattoirs and between reindeer herding cooperatives. This study indicates that the short-distance transports of reindeer in particular merit more attention. The results suggest that factors associated with long-distance transport, such as driver education, truck design, veterinary supervision and specialist equipment, may help reduce pre-slaughter stress in reindeer compared with short-distance transport systems, which use a variety of vehicle types and may be operated by untrained handlers. Further work is required to elucidate the causal factors behind these results.
Directory of Open Access Journals (Sweden)
Ertekin Öztekin Öztekin
2015-12-01
Full Text Available The spacing of bolts and the distance of bolts to the edges of connection plates are designed using the minimum and maximum boundary values proposed by structural codes. In this study, the reliabilities of those distances were investigated. For this purpose, loading types, bolt types and plate thicknesses were taken as variable parameters. The Monte Carlo Simulation (MCS) method was used in the reliability computations performed for all combinations of those parameters. At the end of the study, the reliability index values for all those distances are presented in graphics and tables. The results obtained were compared with the values proposed by some structural codes, and evaluations of those comparisons were made. Finally, it was emphasized that using the same bolt distances in both traditional designs and higher-reliability designs would be inappropriate.
Iterative method to compute the Fermat points and Fermat distances of multiquarks
International Nuclear Information System (INIS)
Bicudo, P.; Cardoso, M.
2009-01-01
The multiquark confining potential is proportional to the total distance of the fundamental strings linking the quarks and antiquarks. We address the computation of the total string distance and of the Fermat points where the different strings meet. For a meson the distance is trivially the quark-antiquark distance. For a baryon the problem was solved geometrically from the outset by Fermat and by Torricelli; it can be determined with just a ruler and a compass, and we briefly review it. However, we also show that for tetraquarks, pentaquarks, hexaquarks, etc., the geometrical solution is much more complicated. Here we provide an iterative method, converging rapidly to the correct Fermat points and total distances, which is relevant for the multiquark potentials.
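The authors' iterative scheme for coupled multiquark string junctions is not reproduced here, but the basic idea can be illustrated with the classical Weiszfeld iteration, which converges to the single Fermat (Torricelli) point of a set of positions, i.e. the point minimizing the total string length to three quarks:

```python
import numpy as np

def fermat_point(points, iters=200, eps=1e-12):
    """Weiszfeld iteration for the point minimizing the total distance
    to a set of points (the Fermat/Torricelli point for a triangle)."""
    pts = np.asarray(points, dtype=float)
    x = pts.mean(axis=0)  # start at the centroid
    for _ in range(iters):
        d = np.linalg.norm(pts - x, axis=1)
        d = np.maximum(d, eps)          # guard against division by zero
        w = 1.0 / d
        x = (w[:, None] * pts).sum(axis=0) / w.sum()
    return x

# Equilateral triangle of side 1: the Fermat point is the centroid
tri = [(0.0, 0.0), (1.0, 0.0), (0.5, np.sqrt(3) / 2)]
p = fermat_point(tri)
total = sum(np.linalg.norm(np.asarray(v) - p) for v in tri)
print(p, total)
```

For tetraquarks and beyond, several such junction points are coupled to one another, which is precisely what makes the purely geometrical solution intractable and motivates an iterative approach.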
Energy Technology Data Exchange (ETDEWEB)
Kozlowski, K.K. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Terras, V. [CNRS, ENS Lyon (France). Lab. de Physique
2010-12-15
We present a new method allowing us to derive the long-time and large-distance asymptotic behavior of the correlation functions of quantum integrable models from their exact representations. Starting from the form factor expansion of the correlation functions in finite volume, we explain how to reduce the complexity of the computation in the so-called interacting integrable models to that appearing in free-fermion-equivalent models. We apply our method to the time-dependent zero-temperature current-current correlation function in the non-linear Schroedinger model and compute the first few terms of its asymptotic expansion. Our result goes beyond the predictions of conformal field theory: in the time-dependent case, excitations other than those on the Fermi surface contribute to the leading orders of the asymptotics. (orig.)
Practical method of calculating time-integrated concentrations at medium and large distances
International Nuclear Information System (INIS)
Cagnetti, P.; Ferrara, V.
1980-01-01
Previous reports have covered the possibility of calculating time-integrated concentrations (TICs) for a prolonged release based on concentration estimates for a brief release. This study proposes a simple method for evaluating concentrations in the air at medium and large distances for a brief release. It is known that the stability of the atmospheric layers close to ground level influences diffusion only over short distances. Beyond some tens of kilometers, as the pollutant cloud progressively reaches higher layers, diffusion is affected by factors other than the stability at ground level, such as wind shear at intermediate distances and the divergence and rotational motion of air masses towards the upper limit of the mesoscale and on the synoptic scale. Using the data available in the literature, expressions for σy and σz are proposed for transfer times corresponding to distances of up to several thousand kilometres, for two initial diffusion situations (up to distances of 10-20 km), characterized by stable and neutral conditions respectively. With this method, simple hand calculations can be made for any problem relating to the diffusion of radioactive pollutants over long distances.
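The kind of hand calculation the method enables can be illustrated with the standard Gaussian plume expression for a ground-level release. The numerical values below are hypothetical, and in practice the σy, σz appropriate to a given transfer time would come from the expressions proposed in the report:

```python
import math

def ground_level_tic(Q, u, sigma_y, sigma_z, y=0.0):
    """Time-integrated ground-level concentration (Bq*s/m^3) at lateral
    offset y (m) from the plume axis, for a ground-level brief release
    of Q Bq into wind speed u (m/s): standard Gaussian plume formula."""
    return (Q / (math.pi * sigma_y * sigma_z * u)
            * math.exp(-y ** 2 / (2.0 * sigma_y ** 2)))

# Hypothetical values: 1e12 Bq release, 5 m/s wind,
# sigma_y = 2000 m and sigma_z = 500 m at some tens of km downwind
print(ground_level_tic(1e12, 5.0, 2000.0, 500.0))
```

The TIC for a prolonged release would then be obtained, as the abstract notes, by combining such brief-release estimates over the release duration.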
Genomic signal processing methods for computation of alignment-free distances from DNA sequences.
Borrayo, Ernesto; Mendizabal-Ruiz, E Gerardo; Vélez-Pérez, Hugo; Romo-Vázquez, Rebeca; Mendizabal, Adriana P; Morales, J Alejandro
2014-01-01
Genomic signal processing (GSP) refers to the use of digital signal processing (DSP) tools for analyzing genomic data such as DNA sequences. A possible application of GSP that has not been fully explored is the computation of the distance between a pair of sequences. In this work we present GAFD, a novel GSP alignment-free distance computation method. We introduce a DNA sequence-to-signal mapping function based on doublet values, which increases the number of possible amplitude values of the generated signal. Additionally, we explore the use of three DSP distance metrics as descriptors for categorizing DNA signal fragments. Our results indicate the feasibility of employing GAFD for computing sequence distances and of using the descriptors for characterizing DNA fragments.
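The general scheme, mapping a sequence to a numeric signal via dinucleotide (doublet) values and then comparing signals with a DSP metric, can be sketched as below. The doublet mapping and the spectral distance used here are illustrative assumptions, not the actual GAFD definitions:

```python
import numpy as np

# Illustrative doublet-to-amplitude map: each of the 16 dinucleotides
# gets a distinct integer value (the actual GAFD mapping may differ).
BASES = "ACGT"
DOUBLET_VALUE = {a + b: 4 * i + j
                 for i, a in enumerate(BASES)
                 for j, b in enumerate(BASES)}

def to_signal(seq):
    """Map a DNA string to a numeric signal via overlapping doublets."""
    return np.array([DOUBLET_VALUE[seq[k:k + 2]]
                     for k in range(len(seq) - 1)], dtype=float)

def spectral_distance(s1, s2, n=64):
    """Euclidean distance between FFT magnitude spectra: a simple
    alignment-free DSP distance (signals zero-padded to length n)."""
    f1 = np.abs(np.fft.rfft(s1, n))
    f2 = np.abs(np.fft.rfft(s2, n))
    return float(np.linalg.norm(f1 - f2))

a = to_signal("ACGTACGTACGT")
b = to_signal("ACGTACGTACGA")
c = to_signal("GGGGGGCCCCCC")
print(spectral_distance(a, b), spectral_distance(a, c))
```

Because the comparison happens in the spectral domain, no alignment between the two sequences is needed, which is the defining property of alignment-free distances.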
Jealousy: novel methods and neural correlates.
Harmon-Jones, Eddie; Peterson, Carly K; Harris, Christine R
2009-02-01
Because of the difficulties surrounding the evocation of jealousy, past research has relied on reactions to hypothetical scenarios and recall of past experiences of jealousy. Both methodologies have limitations, however. The present research was designed to develop a method of evoking jealousy in the laboratory that would be well controlled, ethically permissible, and psychologically meaningful. Study 1 demonstrated that jealousy could be evoked in a modified version of K. D. Williams' (2007) Cyberball ostracism paradigm in which the rejecting person was computer-generated. Study 2, the first to examine neural activity during the active experience of jealousy, tested whether experienced jealousy was associated with greater relative left or right frontal cortical activation. The findings revealed that the experience of jealousy was correlated with greater relative left frontal cortical activation toward the "sexually" desired partner. This pattern of activation suggests that jealousy is associated with approach motivation. Taken together, the present studies developed a laboratory paradigm for the study of jealousy that should help foster research on one of the most social of emotions. (c) 2009 APA, all rights reserved
Revisitation of FRET methods to measure intraprotein distances in Human Serum Albumin
Energy Technology Data Exchange (ETDEWEB)
Santini, S.; Bizzarri, A.R.; Cannistraro, S., E-mail: cannistr@unitus.it
2016-11-15
We revisited the FRET methods to measure the intraprotein distance between Trp-214 (used as donor) of Human Serum Albumin and its Cys-34, labelled with 1,5-IAEDANS (used as acceptor). Variation of the Trp fluorescence emission, in terms of both intensity and lifetime, as well as the enhancement of the acceptor fluorescence emission upon Trp excitation, have been monitored. A careful statistical analysis of the fluorescence results from ten independently prepared samples, combined with suitable spectral corrections, provided reproducible distance estimates from each of the three methods. Although monitoring the donor lifetime variation in the presence of the acceptor best reproduces the crystallographic data, allowing even sub-nanometre distance variations to be appreciated, we suggest that a comparative analysis of all three methods, applied with statistical significance, should be preferred to achieve better reliability of the FRET technique.
International Nuclear Information System (INIS)
Mullaji, Arun; Shetty, Gautam M.; Kanna, Raj; Sharma, Amit
2010-01-01
The anterior superior iliac spine (ASIS) is commonly used to estimate the centre of the femoral head and to assess limb alignment during surgical procedures. This study aimed to determine the range of inter-anterior superior iliac spine distances (IADs) and inter-femoral head centre distances (IFDs) among individuals, and to ascertain whether the IFD correlates with the IAD. We also sought to determine whether gender, height and body mass index (BMI) had any influence on IAD and IFD. We prospectively measured IAD and IFD in 200 adults, using transverse computed tomography (CT) scans performed for medical reasons. We also calculated the distance between the pelvic midline and the centre of the femoral head (XY distance) from the measured IFD. The overall mean IAD, IFD and XY distances were 22.7 ± 1.6 cm, 16.0 ± 0.8 cm and 8.0 ± 0.4 cm, respectively. There was wide variation in the IAD range, with 50% (100/200) of the subjects having an IAD within ±10 mm of the mean, compared to 75.5% (151/200) of the subjects with an IFD within ±10 mm of the mean. The probability that the mean XY distance would fall within 10 mm of the true femoral head centre was 100% in all subjects. The gender difference in IAD and IFD was statistically significant (P = 0.03 and P < 0.001, respectively), height and BMI had no influence, and the correlation of IAD with IFD was weak (0.35). Although the range of IADs showed wide variation among subjects, this study clearly demonstrated the narrow range of the XY distance and IFD in the study population and provides a useful and accurate basis for a new method to determine the femoral head centre clinically and intraoperatively. (orig.)
Dynamic Time Warping Distance Method for Similarity Test of Multipoint Ground Motion Field
Directory of Open Access Journals (Sweden)
Yingmin Li
2010-01-01
Full Text Available The reasonableness of artificial multi-point ground motions and the identification of abnormal records in seismic array observations are two important issues in the application and analysis of multi-point ground motion fields. Based on the dynamic time warping (DTW) distance method, this paper discusses the application of similarity measurement to the similarity analysis of simulated multi-point ground motions and actual seismic array records. Analysis results show that the DTW distance method not only quantitatively reflects the similarity of a simulated ground motion field, but also offers advantages in clustering analysis and singularity recognition for actual multi-point ground motion fields.
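The DTW alignment at the core of this similarity test can be sketched in a few lines; the sequences below are synthetic, not seismic records:

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)  # cumulative cost with inf boundary
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three admissible warping steps.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Identical records have zero DTW distance; a phase-shifted copy stays close
# because warping absorbs the time shift.
s1 = np.sin(np.linspace(0, 2 * np.pi, 50))
s2 = np.sin(np.linspace(0, 2 * np.pi, 50) + 0.3)
print(dtw_distance(s1, s1))  # 0.0
print(dtw_distance(s1, s2))  # small, and never above the unwarped point-wise cost
```

Because the minimization includes the purely diagonal path, the DTW distance is never larger than the point-by-point absolute difference sum.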
The extraction of lifetimes of weakly-populated nuclear levels in recoil distance method experiments
International Nuclear Information System (INIS)
Kennedy, D.L.; Stuchbery, A.E.; Bolotin, H.H.
1979-01-01
Two analytic techniques are described which extend the conventional analysis of recoil-distance method (RDM) data. The first technique utilizes the enhanced counting statistics of the composite spectrum formed by adding all γ-ray spectra recorded at the different target-to-stopper distances employed, in order to extract the lifetimes of levels whose observed depopulating γ-ray transitions have insufficient statistics to permit conventional analysis. The second technique analyzes peak centroids rather than peak areas to account for flight-distance-dependent background contamination. The results from a recent study of the low-lying excited states in ¹⁹⁶,¹⁹⁸Pt, for those levels whose lifetimes could be extracted by conventional RDM analysis, are shown to be in good agreement with those obtained using the new methods of analysis.
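The distance dependence that RDM analysis exploits can be illustrated with the standard exponential decay-in-flight relation; the recoil velocity and lifetime below are illustrative values, not results from this study:

```python
import numpy as np

v = 6.0e-6   # recoil velocity in m/ps (illustrative, roughly 2% of c)
tau = 10.0   # level lifetime in ps (illustrative)

def stopped_fraction(d, v, tau):
    """Fraction of nuclei surviving the flight distance d and decaying at rest
    in the stopper: exp(-d / (v * tau))."""
    return np.exp(-d / (v * tau))

# The stopped/in-flight intensity ratio changes with target-to-stopper distance,
# which is what a conventional RDM fit inverts to obtain tau.
for d in [10e-6, 50e-6, 100e-6, 500e-6]:  # gaps in metres
    print(f"{d * 1e6:6.0f} um -> stopped fraction {stopped_fraction(d, v, tau):.3f}")
```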
Distance-based microfluidic quantitative detection methods for point-of-care testing.
Tian, Tian; Li, Jiuxing; Song, Yanling; Zhou, Leiji; Zhu, Zhi; Yang, Chaoyong James
2016-04-07
Equipment-free devices with quantitative readout are of great significance to point-of-care testing (POCT), which provides real-time readout to users and is especially important in low-resource settings. Among various equipment-free approaches, distance-based visual quantitative detection methods rely on reading the visual signal length for corresponding target concentrations, thus eliminating the need for sophisticated instruments. The distance-based methods are low-cost, user-friendly and can be integrated into portable analytical devices. Moreover, such methods enable quantitative detection of various targets by the naked eye. In this review, we first introduce the concept and history of distance-based visual quantitative detection methods. Then, we summarize the main methods for translation of molecular signals to distance-based readout and discuss different microfluidic platforms (glass, PDMS, paper and thread) in terms of applications in biomedical diagnostics, food safety monitoring, and environmental analysis. Finally, the potential and future perspectives are discussed.
Schwab, Karen Walker
2013-01-01
This paper is a literature review using the Douglas-Wildavsky Grid/Group theory as a framework to examine, from a cross-cultural perspective, preferred parental disciplinary methods. The four rival cultures defined in the Grid/Group theory mirror the cultural dimensions of individualism-collectivism and power distance described by Geert Hofstede…
Jeffrey H. Gove; Mark J. Ducey; Harry T. Valentine; Michael S. Williams
2013-01-01
Many new methods for sampling down coarse woody debris have been proposed in the last dozen or so years. One of the most promising in terms of field application, perpendicular distance sampling (PDS), has several variants that have been progressively introduced in the literature. In this study, we provide an overview of the different PDS variants and comprehensive...
Stuberg, W A; Colerick, V L; Blanke, D J; Bruce, W
1988-08-01
The purpose of this study was to compare a clinical gait analysis method using videography and temporal-distance measures with 16-mm cinematography in a gait analysis laboratory. Ten children with a diagnosis of cerebral palsy (mean age = 8.8 ± 2.7 years) and 9 healthy children (mean age = 8.9 ± 2.4 years) participated in the study. Stride length, walking velocity, and goniometric measurements of the hip, knee, and ankle were recorded using the two gait analysis methods. A multivariate analysis of variance was used to determine significant differences between the data collected using the two methods. Pearson product-moment correlation coefficients were determined to examine the relationship between the measurements recorded by the two methods. The consistency of performance of the subjects during walking was examined by intraclass correlation coefficients. No significant differences were found between the methods for the variables studied. Pearson product-moment correlation coefficients ranged from .79 to .95, and intraclass coefficients ranged from .89 to .97. The clinical gait analysis method was found to be a valid tool in comparison with 16-mm cinematography for the variables that were studied.
A spectral method to detect community structure based on distance modularity matrix
Yang, Jin-Xuan; Zhang, Xiao-Dong
2017-08-01
There are many community organizations in social and biological networks, and how to identify community structure in complex networks has become a hot issue. In this paper, an algorithm to detect the community structure of networks is proposed using the spectrum of the distance modularity matrix. The proposed algorithm focuses on the distance of vertices within communities, rather than on the most weakly connected vertex pairs or the number of edges between communities. The experimental results show that our method identifies community structure more effectively for a variety of real-world and computer-generated networks, at only slightly greater computational cost.
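The spectral step can be illustrated with Newman's classical modularity matrix B = A - k kᵀ/(2m); the paper's distance modularity matrix is a variant of this construction, which is not reproduced here:

```python
import numpy as np

def spectral_bisection(A):
    """Split a graph into two communities using the leading eigenvector of the
    modularity matrix B = A - k k^T / (2m) (Newman's spectral method)."""
    k = A.sum(axis=1)
    B = A - np.outer(k, k) / k.sum()           # k.sum() equals 2m
    eigvals, eigvecs = np.linalg.eigh(B)       # eigenvalues in ascending order
    leading = eigvecs[:, -1]                   # eigenvector of the largest one
    return (leading >= 0).astype(int)          # sign pattern gives the split

# Two triangles joined by a single bridge edge: the split recovers them.
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0
labels = spectral_bisection(A)
print(labels)  # one label for vertices 0-2, the other for 3-5
```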
Skórnik-Pokarowska, Urszula; Orłowski, Arkadiusz
2004-12-01
We calculate the ultrametric distance between the pairs of stocks that belong to the same portfolio. The ultrametric distance allows us to distinguish groups of shares that are related. In this way, we can construct a portfolio taxonomy that can be used for constructing an efficient portfolio. We also construct a portfolio taxonomy based not only on stock prices but also on economic indices such as liquidity ratio, debt ratio and sales profitability ratio. We show that a good investment strategy can be obtained by applying to the portfolio chosen by the taxonomy method the so-called Constant Rebalanced Portfolio.
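The construction can be sketched with Mantegna's correlation distance d = sqrt(2(1 - ρ)) and single-linkage clustering, whose merge heights form the subdominant ultrametric; the return series below are synthetic, not real stock or index data:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

rng = np.random.default_rng(0)
# Synthetic daily returns: two hypothetical sectors driven by common factors.
factor_a, factor_b = rng.normal(size=(2, 500))
returns = np.column_stack(
    [factor_a + 0.5 * rng.normal(size=500) for _ in range(3)]
    + [factor_b + 0.5 * rng.normal(size=500) for _ in range(3)]
)

rho = np.corrcoef(returns.T)
d = np.sqrt(np.clip(2.0 * (1.0 - rho), 0.0, None))  # Mantegna's distance
np.fill_diagonal(d, 0.0)
# Single linkage on the distance matrix yields the subdominant ultrametric.
Z = linkage(squareform(d, checks=False), method="single")
clusters = fcluster(Z, t=2, criterion="maxclust")
print(clusters)  # the two sectors separate into two groups of three
```

The resulting dendrogram is the portfolio taxonomy; cutting it at different heights yields groups of related shares.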
[Near infrared distance sensing method for Chang'e-3 alpha particle X-ray spectrometer].
Liang, Xiao-Hua; Wu, Ming-Ye; Wang, Huan-Yu; Peng, Wen-Xi; Zhang, Cheng-Mo; Cui, Xing-Zhu; Wang, Jin-Zhou; Zhang, Jia-Yu; Yang, Jia-Wei; Fan, Rui-Rui; Gao, Min; Liu, Ya-Qing; Zhang, Fei; Dong, Yi-Fan; Guo, Dong-Ya
2013-05-01
Alpha particle X-ray spectrometer (APXS) is one of the payloads of the Chang'E-3 lunar rover; its scientific objective is in-situ observation and off-line analysis of lunar regolith and rock. Distance measurement is one of the important functions for APXS to perform effective detection on the moon. The present paper first gives a brief introduction to APXS, then analyzes the specific requirements and constraints on distance measurement, and finally presents a new near-infrared distance sensing algorithm that uses the inflection point of the response curve. Theoretical analysis and experimental results verify the feasibility of this algorithm. Although the theoretical analysis shows that the method is not sensitive to the operating temperature or to the reflectance of the lunar surface, the solar infrared radiant intensity may saturate the photosensor. The solutions are to reduce the gain of the device and to avoid direct exposure to sunlight.
Directory of Open Access Journals (Sweden)
Xiao Luo
2017-01-01
Full Text Available An intuitionistic fuzzy VIKOR (IF-VIKOR method is proposed based on a new distance measure considering the waver of intuitionistic fuzzy information. The method aggregates all individual decision-makers’ assessment information based on intuitionistic fuzzy weighted averaging operator (IFWA, determines the weights of decision-makers and attributes objectively using intuitionistic fuzzy entropy, calculates the group utility and individual regret by the new distance measure, and then reaches a compromise solution. It can be effectively applied to multiattribute decision-making (MADM problems where the weights of decision-makers and attributes are completely unknown and the attribute values are intuitionistic fuzzy numbers (IFNs. The validity and stability of this method are verified by example analysis and sensitivity analysis, and its superiority is illustrated by the comparison with the existing method.
Song, Yoonah; Lee, Seunghun; Lee, Bong Gun; Joo, Young Bin; Song, Soon-Young
2018-01-01
To correlate the acromiohumeral distance (AHD) using tomosynthesis with rotator cuff (RC) pathology and various anatomical indices, and to assess the diagnostic reproducibility of tomosynthesis for the evaluation of subacromial impingement. A retrospective review of 63 patients with clinically suspected subacromial impingement was conducted. Two musculoskeletal radiologists independently measured the following quantitative data: the AHD on plain radiographs and the AHD at three compartments (anterior, middle, and posterior) using tomosynthesis, computed tomography (CT) arthrography, or magnetic resonance (MR) arthrography. To investigate the association between the AHD and RC pathology and various anatomical indices, we reviewed the arthroscopic operation record as the reference standard. The size of rotator cuff tear (RCT) in full-thickness tears displayed a significant inverse correlation with the middle and posterior tomosynthetic AHDs, the AHD measurements were consistent across tomosynthesis and CT or MR arthrography, and tomosynthesis is reproducible compared with other modalities.
Using mark-recapture distance sampling methods on line transect surveys
Burt, Louise M.; Borchers, David L.; Jenkins, Kurt J.; Marques, Tigao A
2014-01-01
Mark–recapture distance sampling (MRDS) methods are widely used for density and abundance estimation when the conventional DS assumption of certain detection at distance zero fails, as they allow detection at distance zero to be estimated and incorporated into the overall probability of detection to better estimate density and abundance. However, incorporating MR data in DS models raises survey and analysis issues not present in conventional DS. Conversely, incorporating DS assumptions in MR models raises issues not present in conventional MR. As a result, being familiar with either conventional DS methods or conventional MR methods does not on its own put practitioners in a good position to apply MRDS methods appropriately. This study explains the sometimes subtly different varieties of MRDS survey methods and the associated concepts underlying MRDS models. This is done as far as possible without giving mathematical details – in the hope that this will make the key concepts underlying the methods accessible to a wider audience than if we were to present the concepts via equations.
LifePrint: a novel k-tuple distance method for construction of phylogenetic trees
Directory of Open Access Journals (Sweden)
Fabián Reyes-Prieto
2011-01-01
Full Text Available Fabián Reyes-Prieto¹, Adda J García-Chéquer¹, Hueman Jaimes-Díaz¹, Janet Casique-Almazán¹, Juana M Espinosa-Lara¹, Rosaura Palma-Orozco², Alfonso Méndez-Tenorio¹, Rogelio Maldonado-Rodríguez¹, Kenneth L Beattie³ (¹Laboratory of Biotechnology and Genomic Bioinformatics, Department of Biochemistry, National School of Biological Sciences, and ²Superior School of Computer Sciences, National Polytechnic Institute, Mexico City, Mexico; ³Amerigenics Inc, Crossville, Tennessee, USA). Purpose: Here we describe LifePrint, a sequence-alignment-independent k-tuple distance method to estimate relatedness between complete genomes. Methods: We designed a representative sample of all possible DNA tuples of length 9 (9-tuples). The final sample comprises 1878 tuples (called the LifePrint set of 9-tuples; LPS9) that are distinct from each other by at least two internal and noncontiguous nucleotide differences. For validation of our k-tuple distance method, we analyzed several real and simulated viroid genomes. Using different distance metrics, we scrutinized diverse viroid genomes to estimate the k-tuple distances between these genomic sequences. We then used the estimated genomic k-tuple distances to construct phylogenetic trees using the neighbor-joining algorithm. A comparison of the accuracy of LPS9 and the previously reported 5-tuple method was made using symmetric differences between the trees estimated from each method and a simulated "true" phylogenetic tree. Results: The identified optimal search scheme for LPS9 allows only up to two nucleotide differences between each 9-tuple and the scrutinized genome. Similarity search results for simulated viroid genomes indicate that, in most cases, LPS9 is able to detect single-base substitutions between genomes efficiently. Analysis of simulated genomic variants with a high proportion of base substitutions indicates that LPS9 is able to discern relationships between genomic variants with up to 40% of nucleotide
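A toy analogue of the alignment-free idea, using all k-tuples of a sequence rather than the curated LPS9 sample, can be written as:

```python
def ktuple_profile(seq, k=3):
    """Set of k-tuples present in a sequence (a toy stand-in for LPS9 matching)."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def ktuple_distance(s1, s2, k=3):
    """Jaccard-style distance on shared k-tuples; alignment-free, as in LifePrint."""
    a, b = ktuple_profile(s1, k), ktuple_profile(s2, k)
    return 1.0 - len(a & b) / len(a | b)

g1 = "ATGCGATACGCTTAGGCTA"
g2 = "ATGCGATACGCTTAGGCTA"   # identical toy genome
g3 = "ATGCGAAACGCTTAGGCTA"   # single-base substitution
print(ktuple_distance(g1, g2))      # 0.0
print(ktuple_distance(g1, g3) > 0)  # True: the substitution is detected
```

A matrix of such pairwise distances is what the neighbor-joining step would consume to build the tree.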
A Multiple Criteria Decision Making Method Based on Relative Value Distances
Directory of Open Access Journals (Sweden)
Shyur Huan-jyh
2015-12-01
Full Text Available This paper proposes a new multiple criteria decision-making method called ERVD (election based on relative value distances). The S-shaped value function is adopted to replace the expected utility function to describe the risk-averse and risk-seeking behavior of decision makers. Comparisons and experiments contrasting with the TOPSIS (Technique for Order Preference by Similarity to the Ideal Solution) method are carried out to verify the feasibility of using the proposed method to represent decision makers' preferences in the decision-making process. Our experimental results show that the proposed approach is an appropriate and effective MCDM method.
Directory of Open Access Journals (Sweden)
Alejo J Irigoyen
Full Text Available Underwater visual census (UVC) is the most common approach for estimating diversity, abundance and size of reef fishes in shallow and clear waters. Abundance estimation through UVC is particularly problematic for species occurring at low densities and/or in high aggregation because of their high variability at both spatial and temporal scales. The statistical power of experiments involving UVC techniques may be increased by augmenting the number of replicates or the area surveyed. In this work we present and test the efficiency of a UVC method based on diver-towed GPS, the Tracked Roaming Transect (TRT), designed to maximize transect length (and thus the surveyed area) with respect to diving time invested in monitoring, as compared to Conventional Strip Transects (CST). Additionally, we analyze the effect of increasing transect width and length on the precision of density estimates by comparing TRT vs. CST methods using different fixed widths of 6 and 20 m (FW3 and FW10, respectively) and the Distance Sampling (DS) method, in which the perpendicular distance of each fish or group of fishes to the transect line is estimated by divers up to 20 m from the transect line. The TRT was 74% more time- and cost-efficient than the CST (all transect widths considered together) and, for a given time, the use of TRT and/or increasing the transect width increased the precision of density estimates. In addition, since with the DS method distances of fishes to the transect line have to be estimated, and not measured directly as in terrestrial environments, errors in the estimation of perpendicular distances can seriously affect DS density estimates. To assess the occurrence of distance estimation errors and their dependence on the observer's experience, a field experiment using wooden fish models was performed. We tested the precision and accuracy of density estimators based on fixed widths and the DS method. The accuracy of the estimates was measured comparing the actual
Limitations of correlation-based redatuming methods
Barrera P, D. F.; Schleicher, J.; van der Neut, J.
2017-12-01
Redatuming aims to correct seismic data for the consequences of an acquisition far from the target. That includes the effects of an irregular acquisition surface and of complex geological structures in the overburden such as strong lateral heterogeneities or layers with low or very high velocity. Interferometric techniques can be used to relocate sources to positions where only receivers are available and have been used to move acquisition geometries to the ocean bottom or transform data between surface-seismic and vertical seismic profiles. Even if no receivers are available at the new datum, the acquisition system can be relocated to any datum in the subsurface to which the propagation of waves can be modeled with sufficient accuracy. By correlating the modeled wavefield with seismic surface data, one can carry the seismic acquisition geometry from the surface closer to geologic horizons of interest. Specifically, we show the derivation and approximation of the one-sided seismic interferometry equation for surface-data redatuming, conveniently using Green’s theorem for the Helmholtz equation with density variation. Our numerical examples demonstrate that correlation-based single-boundary redatuming works perfectly in a homogeneous overburden. If the overburden is inhomogeneous, primary reflections from deeper interfaces are still repositioned with satisfactory accuracy. However, in this case artifacts are generated as a consequence of incorrectly redatumed overburden multiples. These artifacts get even worse if the complete wavefield is used instead of the direct wavefield. Therefore, we conclude that correlation-based interferometric redatuming of surface-seismic data should always be applied using direct waves only, which can be approximated with sufficient quality if a smooth velocity model for the overburden is available.
Camp, Christopher L; Heidenreich, Mark J; Dahm, Diane L; Bond, Jeffrey R; Collins, Mark S; Krych, Aaron J
2016-03-01
Tibial tubercle-trochlear groove (TT-TG) distance is a variable that helps guide surgical decision-making in patients with patellar instability. The purpose of this study was to compare the accuracy and reliability of an MRI TT-TG measuring technique using a simple external alignment method against a previously validated gold-standard technique that requires advanced software read by radiologists. TT-TG was calculated by MRI on 59 knees with a clinical diagnosis of patellar instability, in a blinded and randomized fashion, by two musculoskeletal radiologists using advanced software and by two orthopaedists using the study technique, which utilizes measurements taken on a simple electronic imaging platform. Interrater reliability between the two radiologists and the two orthopaedists and inter-method reliability between the two techniques were calculated using intraclass correlation coefficients (ICC) and concordance correlation coefficients (CCC). ICC and CCC values greater than 0.75 were considered to represent excellent agreement. The mean TT-TG distance was 14.7 mm (standard deviation (SD) 4.87 mm) and 15.4 mm (SD 5.41 mm) as measured by the radiologists and orthopaedists, respectively. Excellent interobserver agreement was noted between the radiologists (ICC 0.941; CCC 0.941), the orthopaedists (ICC 0.978; CCC 0.976), and the two techniques (ICC 0.941; CCC 0.933). The simple TT-TG distance measurement technique analysed in this study showed excellent agreement and reliability compared to the gold-standard technique. This method can predictably be performed by orthopaedic surgeons without advanced radiologic software. Level of evidence: II.
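The concordance statistic used here for agreement is Lin's CCC; a minimal implementation on hypothetical rater data (not the study's measurements) is:

```python
import numpy as np

def lins_ccc(x, y):
    """Lin's concordance correlation coefficient:
    CCC = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    cov = np.cov(x, y, bias=True)[0, 1]          # population covariance
    return 2 * cov / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

# Hypothetical TT-TG distances (mm) from two raters, for illustration only.
rater1 = np.array([12.1, 14.7, 15.0, 18.2, 20.5, 13.3])
rater2 = np.array([12.4, 14.5, 15.6, 18.0, 20.9, 13.1])
print(round(lins_ccc(rater1, rater2), 3))  # near 1, i.e. excellent agreement
```

Unlike Pearson's r, the CCC penalizes a systematic offset between raters through the squared mean-difference term.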
Directory of Open Access Journals (Sweden)
Ju-Yeun Lee
Full Text Available The aim of this study was to investigate whether the limbus-insertion distance (LID) of the lateral rectus (LR) muscle can be a useful indicator for predicting the surgical effect of recession surgery in intermittent exotropia (IXT). Patients who underwent unilateral or bilateral LR recession for the basic type of IXT were included. The distance between the corneal limbus and the posterior edge of the insertion of the LR muscle (limbus-insertion distance) was measured intraoperatively using surgical calipers (graded with 0.25 mm precision). We calculated the actual dose-response effect as the difference between the angle of preoperative deviation and the angle of postoperative deviation, divided by the total amount of recession, at postoperative months 1, 3, and 6. The correlation between the LID of the LR muscle and each dose-response effect was statistically analyzed. A total of 60 subjects were enrolled in this study. The mean LID of the LR muscle was 5.8±0.7 mm. The dose-response effect was 3.2±1.0 prism diopters (PD)/mm at postoperative month 1, 3.4±1.0 PD/mm at postoperative month 3, and 3.4±1.1 PD/mm at postoperative month 6. The LID of the LR muscle was significantly correlated with the dose-response effect in cases of unilateral and bilateral LR recession at postoperative months 3 and 6 (P = 0.01, <0.01, 0.04 and <0.01, respectively). As the LID of the LR muscle increased by 1 mm, the dose-response effect increased by 0.2 PD/mm in unilateral LR recession and by 0.4 PD/mm in bilateral LR recession at postoperative month 6. In conclusion, the LID of the LR muscle can be used as one predictor of the recession effect to assist in surgical planning for IXT. Moreover, undercorrection at the time of LR recession might be considered in patients with a long LID of the LR muscle.
A novel three-stage distance-based consensus ranking method
Aghayi, Nazila; Tavana, Madjid
2018-05-01
In this study, we propose a three-stage weighted sum method for identifying the group ranks of alternatives. In the first stage, a rank matrix, similar to the cross-efficiency matrix, is obtained by computing the individual rank position of each alternative based on importance weights. In the second stage, a secondary goal is defined to limit the vector of weights since the vector of weights obtained in the first stage is not unique. Finally, in the third stage, the group rank position of alternatives is obtained based on a distance of individual rank positions. The third stage determines a consensus solution for the group so that the ranks obtained have a minimum distance from the ranks acquired by each alternative in the previous stage. A numerical example is presented to demonstrate the applicability and exhibit the efficacy of the proposed method and algorithms.
Recoil distance method lifetime measurements at TRIUMF-ISAC using the TIGRESS Integrated Plunger
Chester, A.; Ball, G. C.; Bernier, N.; Cross, D. S.; Domingo, T.; Drake, T. E.; Evitts, L. J.; Garcia, F. H.; Garnsworthy, A. B.; Hackman, G.; Hallam, S.; Henderson, J.; Henderson, R.; Krücken, R.; MacConnachie, E.; Moukaddam, M.; Padilla-Rodal, E.; Paetkau, O.; Pore, J. L.; Rizwan, U.; Ruotsalainen, P.; Shoults, J.; Smallcombe, J.; Smith, J. K.; Starosta, K.; Svensson, C. E.; Van Wieren, K.; Williams, J.; Williams, M.
2018-02-01
The TIGRESS Integrated Plunger device (TIP) has been developed for recoil distance method (RDM) lifetime measurements using the TIGRESS array of HPGe γ-ray detectors at TRIUMF's ISAC-II facility. A commissioning experiment was conducted utilizing a 250 MeV ⁸⁴Kr beam at ≈ 2 × 10⁸ particles per second. The ⁸⁴Kr beam was Coulomb excited to the 2₁⁺ state on a movable ²⁷Al target. A thin Cu foil fixed downstream from the target was used as a degrader. Excited nuclei emerged from the target and decayed by γ-ray emission at a distance determined by their velocity and the lifetime of the 2₁⁺ state. The ratio of decays which occur between the target and degrader to those occurring after traversing the degrader changes as a function of the target-degrader separation distance. Gamma-ray spectra at 13 target-degrader separation distances were measured and compared to simulated lineshapes to extract the lifetime. The result of τ = 5.541 ± 0.013 (stat.) ± 0.063 (sys.) ps is shorter than the literature value of 5.84 ± 0.18 ps, with a reduction in uncertainty by a factor of approximately two. The TIP plunger device, experimental technique, analysis tools, and result are discussed.
An Initialization Method Based on Hybrid Distance for k-Means Algorithm.
Yang, Jie; Ma, Yan; Zhang, Xiangfen; Li, Shunbao; Zhang, Yuping
2017-11-01
The traditional k-means algorithm has been widely used as a simple and efficient clustering method. However, the performance of this algorithm is highly dependent on the selection of initial cluster centers. Therefore, the method adopted for choosing initial cluster centers is extremely important. In this letter, we redefine the density of points according to the number of their neighbors, as well as the distance between points and their neighbors. In addition, we define a new distance measure that considers both Euclidean distance and density. Based on that, we propose an algorithm for selecting initial cluster centers that can dynamically adjust the weighting parameter. Furthermore, we propose a new internal clustering validation measure, the clustering validation index based on the neighbors (CVN), which can be exploited to select the optimal result among multiple clustering results. Experimental results show that the proposed algorithm outperforms existing initialization methods on real-world data sets and demonstrates the adaptability of the proposed algorithm to data sets with various characteristics.
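The flavour of density-aware seeding can be sketched as follows; the density and hybrid-score definitions here are illustrative stand-ins, not the letter's exact formulas:

```python
import numpy as np

def hybrid_init(X, k, n_neighbors=5):
    """Pick initial k-means centers favouring dense, mutually distant points.
    Density here is the inverse mean distance to the nearest neighbors; the
    hybrid score multiplies it by the distance to already-chosen centers."""
    D = np.linalg.norm(X[:, None] - X[None, :], axis=2)
    nn = np.sort(D, axis=1)[:, 1:n_neighbors + 1]   # skip self-distance 0
    density = 1.0 / nn.mean(axis=1)
    centers = [int(np.argmax(density))]             # densest point first
    for _ in range(k - 1):
        # Next center: far from chosen centers AND in a dense region.
        score = D[:, centers].min(axis=1) * density
        score[centers] = -np.inf                    # never re-pick a center
        centers.append(int(np.argmax(score)))
    return X[centers]

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(c, 0.3, size=(50, 2)) for c in ((0, 0), (5, 5), (0, 5))])
centers = hybrid_init(X, 3)
print(centers)  # one seed near each of the three blobs
```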
Andrusyszyn, M A; Cragg, C E; Humbert, J
2001-04-01
The relationships among multiple distance delivery methods, preferred learning style, content, and achievement were examined for primary care nurse practitioner students. A researcher-designed questionnaire was completed by 86 (71%) participants, and 6 engaged in follow-up interviews. Participants preferred learning by "considering the big picture", "setting own learning plans", and "focusing on concrete examples". Several positive associations were found: learning on one's own with learning by reading and with setting one's own learning plans; small group with learning through discussion; large group with learning new things through hearing and with having learning plans set by others. The most preferred method was print-based material and the least preferred was audio tape. The methods most suited to content included video teleconferencing for counseling, political action, and transcultural issues, and video tape for physical assessment. Convenience, self-direction, and timing of learning were more important than delivery method or learning style. The preferred order of learning was reading, discussing, observing, doing, and reflecting. Recommended considerations when designing distance courses include a mix of delivery methods, specific content, outcomes, learner characteristics, and the state of technology.
Distance determination method of dust particles using Rosetta OSIRIS NAC and WAC data
Drolshagen, E.; Ott, T.; Koschny, D.; Güttler, C.; Tubiana, C.; Agarwal, J.; Sierks, H.; Barbieri, C.; Lamy, P. I.; Rodrigo, R.; Rickman, H.; A'Hearn, M. F.; Barucci, M. A.; Bertaux, J.-L.; Bertini, I.; Cremonese, G.; da Deppo, V.; Davidsson, B.; Debei, S.; de Cecco, M.; Deller, J.; Feller, C.; Fornasier, S.; Fulle, M.; Gicquel, A.; Groussin, O.; Gutiérrez, P. J.; Hofmann, M.; Hviid, S. F.; Ip, W.-H.; Jorda, L.; Keller, H. U.; Knollenberg, J.; Kramm, J. R.; Kührt, E.; Küppers, M.; Lara, L. M.; Lazzarin, M.; Lopez Moreno, J. J.; Marzari, F.; Naletto, G.; Oklay, N.; Shi, X.; Thomas, N.; Poppe, B.
2017-09-01
The ESA Rosetta spacecraft has been tracking its target, the Jupiter-family comet 67P/Churyumov-Gerasimenko, in close vicinity for over two years. It hosts the OSIRIS instruments: the Optical, Spectroscopic, and Infrared Remote Imaging System composed of two cameras, see e.g. Keller et al. (2007). In some imaging sequences dedicated to observe dust particles in the comet's coma, the two cameras took images at the same time. The aim of this work is to use these simultaneous double camera observations to calculate the dust particles' distance to the spacecraft. As the two cameras are mounted on the spacecraft with an offset of 70 cm, the distance of particles observed by both cameras can be determined by a shift of the particles' apparent trails on the images. This paper presents first results of the ongoing work, introducing the distance determination method for the OSIRIS instrument and the analysis of an example particle. We note that this method works for particles in the range of about 500-6000 m from the spacecraft.
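The geometry reduces to small-angle parallax over the 70 cm camera baseline (the baseline is stated in the abstract; the angular shift used below is a hypothetical example value):

```python
BASELINE = 0.70  # NAC-WAC mounting offset on the spacecraft, in metres

def particle_distance(baseline_m, angular_shift_rad):
    """Small-angle parallax: distance ~= baseline / angular shift between the
    particle's apparent trail positions in the two simultaneous images."""
    return baseline_m / angular_shift_rad

# A particle whose trail is shifted by 0.2 mrad between the two cameras:
print(particle_distance(BASELINE, 2.0e-4))  # 3500 m, inside the 500-6000 m range
```

The quoted working range of roughly 500-6000 m corresponds to apparent shifts between about 1.4 mrad and 0.12 mrad over this baseline.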
Precision Distances with the Tip of the Red Giant Branch Method
Beaton, Rachael Lynn; Carnegie-Chicago Hubble Program Team
2018-01-01
The Carnegie-Chicago Hubble Program (CCHP) aims to construct a distance ladder that utilizes old stellar populations in the outskirts of galaxies to produce a high-precision measurement of the Hubble Constant that is independent of Cepheids. The CCHP uses the tip of the red giant branch (TRGB) method, a statistical measurement technique that utilizes the termination of the red giant branch. Two innovations combine to make the TRGB a competitive route to the Hubble Constant: (i) the large-scale measurement of trigonometric parallax by the Gaia mission and (ii) the development of both precise and accurate means of determining the TRGB in both nearby (~1 Mpc) and distant (~20 Mpc) galaxies. Here I will summarize our progress in developing these standardized techniques, focusing on both our edge-detection algorithm and our field selection strategy. Using these methods, the CCHP has determined equally precise (~2%) distances to galaxies in the Local Group (< 1 Mpc) and across the Local Volume (< 20 Mpc). The TRGB is thus an incredibly powerful and straightforward means to determine distances to galaxies of any Hubble type, and therefore has enormous potential for placing a wide range of astrophysical phenomena on an absolute scale.
Directory of Open Access Journals (Sweden)
Hujun He
2017-01-01
Full Text Available The prediction and risk classification of collapse is an important issue in the process of highway construction in mountainous regions. Based on the principles of information entropy and Mahalanobis distance discriminant analysis, we have produced a collapse hazard prediction model. We used the entropy measure method to reduce the indexes influencing collapse activity and extracted the nine main indexes as the discriminant factors of the distance discriminant analysis model (slope shape, aspect, gradient, and height, along with exposure of the structural face, stratum lithology, relationship between weakness face and free face, vegetation cover rate, and degree of rock weathering). We employ post-earthquake collapse data relating to construction of the Yingxiu-Wolong highway, Hanchuan County, China, as training samples for the analysis. The results were analyzed using the back-substitution estimation method, showing high accuracy and no errors, and were consistent with the prediction results of the uncertainty measure method. Results show that the classification model based on information entropy and distance discriminant analysis achieves the purpose of index optimization and has excellent performance, high prediction accuracy, and a zero false-positive rate. The model can be used as a tool for future evaluation of collapse risk.
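The core of distance discriminant analysis is assignment to the group with the smallest Mahalanobis distance; the two-index training data below are synthetic, and the nine real discriminant factors are not modelled:

```python
import numpy as np

def mahalanobis_classify(x, groups):
    """Assign x to the group minimizing the Mahalanobis distance
    d^2 = (x - mu)^T S^{-1} (x - mu), estimated per group."""
    best, best_d2 = None, np.inf
    for label, samples in groups.items():
        mu = samples.mean(axis=0)
        S_inv = np.linalg.inv(np.cov(samples.T))  # inverse sample covariance
        diff = x - mu
        d2 = diff @ S_inv @ diff
        if d2 < best_d2:
            best, best_d2 = label, d2
    return best

rng = np.random.default_rng(2)
# Hypothetical two-index training data for "stable" vs "collapse-prone" slopes.
groups = {
    "stable": rng.normal([1.0, 1.0], 0.2, size=(40, 2)),
    "collapse-prone": rng.normal([3.0, 3.0], 0.2, size=(40, 2)),
}
print(mahalanobis_classify(np.array([2.9, 3.1]), groups))  # collapse-prone
```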
Methods to ensure optimal off-bottom and drill bit distance under pellet impact drilling
Kovalyov, A. V.; Isaev, Ye D.; Vagapov, A. R.; Urnish, V. V.; Ulyanova, O. S.
2016-09-01
The paper describes pellet impact drilling, which could be used to increase the drilling speed and the rate of penetration when drilling hard rock for various purposes. Pellet impact drilling implies rock destruction by metal pellets with high kinetic energy in the immediate vicinity of the earth formation encountered. The pellets are circulated in the bottom hole by a high-velocity fluid jet, which is the principal component of the ejector pellet impact drill bit. The paper presents a survey of methods for ensuring an optimal off-bottom drill bit distance. The analysis of these methods shows that the issue is topical and requires further research.
A MONTE-CARLO METHOD FOR ESTIMATING THE CORRELATION EXPONENT
MIKOSCH, T; WANG, QA
We propose a Monte Carlo method for estimating the correlation exponent of a stationary ergodic sequence. The estimator can be considered as a bootstrap version of the classical Hill estimator. A simulation study shows that the method yields reasonable estimates.
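The classical Hill estimator that the proposed bootstrap method builds on can be sketched as follows; this illustrates only the base estimator, not the authors' Monte Carlo scheme, and the Pareto test data are synthetic:

```python
import numpy as np

def hill_estimator(sample, k):
    """Hill estimator of the tail index alpha from the k largest order statistics."""
    x = np.sort(np.asarray(sample))[::-1]         # descending order statistics
    log_excesses = np.log(x[:k]) - np.log(x[k])   # log X_(i) - log X_(k+1)
    return k / log_excesses.sum()                 # alpha_hat = 1 / mean log-excess
```

For an exact Pareto sample with tail index 2, the estimate concentrates near 2 as the sample size and k grow.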
A new method to determine large scale structure from the luminosity distance
International Nuclear Information System (INIS)
Romano, Antonio Enea; Chiang, Hsu-Wen; Chen, Pisin
2014-01-01
The luminosity distance can be used to determine the properties of large scale structure around the observer. To this purpose we develop a new inversion method to map the luminosity distance to a Lemaitre–Tolman–Bondi (LTB) metric, based on the use of the exact analytical solution of the Einstein equations. The main advantages of this approach are improved numerical accuracy and stability, an exact analytical setting of the initial conditions for the differential equations which need to be solved, and validity for any sign of the functions determining the LTB geometry. Given the fully analytical form of the differential equations, this method also simplifies the calculation of the redshift expansion around the apparent horizon point, where the numerical solution becomes unstable. We test the method by inverting the supernovae Ia luminosity distance function corresponding to the best fit ΛCDM model. We find that only a limited range of initial conditions is compatible with observations; outside this range, a transition from redshift to blueshift occurs at relatively low redshift. Although LTB solutions without a cosmological constant have been shown to be incompatible with the different sets of available observational data, those studies normally fit data assuming a special functional ansatz for the inhomogeneity profile, which often depends on only a few parameters. Inversion methods, on the contrary, are able to fully explore the freedom in fixing the functions which determine an LTB solution. Another important application treats LTB solutions not as cosmological models, but as tools to study the effects on the observations made by a generic observer located in an inhomogeneous region of the Universe, where a fully non-perturbative treatment involving exact solutions of the Einstein equations is required. (paper)
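For context, the ΛCDM luminosity distance used as the inversion input can be computed by a simple numerical integration; below is a minimal flat-ΛCDM sketch, where the parameter values are illustrative assumptions, not the paper's best-fit values:

```python
import numpy as np

C_KM_S = 299792.458  # speed of light in km/s

def luminosity_distance(z, h0=70.0, om=0.3, n=10000):
    """Flat LCDM luminosity distance in Mpc: d_L = (1+z) * (c/H0) * int_0^z dz'/E(z')."""
    zs = np.linspace(0.0, z, n)
    inv_e = 1.0 / np.sqrt(om * (1 + zs) ** 3 + (1 - om))           # 1 / E(z)
    integral = np.sum((inv_e[:-1] + inv_e[1:]) / 2 * np.diff(zs))  # trapezoid rule
    return (1 + z) * (C_KM_S / h0) * integral
```

At low redshift this reduces to the Hubble law, d_L ≈ cz/H0.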
IMPLEMENTATION OF THE CASE STUDY METHOD IN DISTANCE LEARNING OF ENTREPRENEURSHIP FUNDAMENTALS
Directory of Open Access Journals (Sweden)
O. R. Chepyuk
2016-01-01
Full Text Available The aim of the presented publication is to show new opportunities for applying the case study method (educational situations) in the modern educational process of higher school in general, and in particular in teaching the fundamentals of business and economics, where this method has gained special popularity. Methods and results. By means of aggregation, deduction, and logical synthesis, the authors developed the principles of organizing distance training in economic disciplines on the basis of the case study method. The structure of educational cases is designated, and the standard set of accompanying materials is designed. These practical problems were solved within the Tempus project «Acquisition of Professional and Entrepreneurial Skills by means of Education of Entrepreneurial Spirit and Consultation of the Beginning Entrepreneurs». Possible types and forms of cases were studied; several options for adapting their content to the electronic training environment, which has both restrictions and extensive additional educational potential, are described. Various types of cases are shown based on specific examples: illustrating processes and concepts; imitating sample processes; describing original situations in real business with decisions already realized in practice; and cases with an uncertain answer to the posed problematic issue. The choice of a particular type of case study task is determined by the educational purposes and the required level of mastery of a discipline. Cases supplement each other in forming the fund of evaluative means. Scientific novelty. The majority of studies define the case study method as a group discussion of a problem situation for educational purposes and a collective search for its solution, i.e., application of this method assumes classroom full-time courses. The question of using the case study method in a distance format for individual
International Nuclear Information System (INIS)
Kohli, Vandana; King, Michael A.; Glick, Stephen J.; Pan, Tin-Su
1998-01-01
The goal of this investigation was to compare resolution recovery versus noise level of two methods for compensation of distance-dependent resolution (DDR) in SPECT imaging. The two methods of compensation were restoration filtering based on the frequency-distance relationship (FDR) prior to iterative reconstruction, and modelling DDR in the projector/backprojector pair employed in iterative reconstruction. FDR restoration filtering was computationally faster than modelling the detector response in iterative reconstruction. Using Gaussian diffusion to model the detector response in iterative reconstruction sped up the process by a factor of 2.5 over frequency domain filtering in the projector/backprojector pair. Gaussian diffusion modelling resulted in a better resolution versus noise tradeoff than either FDR restoration filtering or solely modelling attenuation in the projector/backprojector pair of iterative reconstruction. For the pixel size investigated herein (0.317 cm), accounting for DDR in the projector/backprojector pair by Gaussian diffusion, or by applying a blurring function based on the distance from the face of the collimator at each distance, resulted in very similar resolution recovery and slice noise level. (author)
Edwards, Jonathan; Lallier, Florent; Caumon, Guillaume; Carpentier, Cédric
2018-02-01
We discuss the sampling and the volumetric impact of stratigraphic correlation uncertainties in basins and reservoirs. From an input set of wells, we evaluate the probability that two stratigraphic units are associated using an analog stratigraphic model. In the presence of multiple wells, this method sequentially updates a stratigraphic column defining the stratigraphic layering for each possible set of realizations. The resulting correlations are then used to create stratigraphic grids in three dimensions. We apply this method to a set of synthetic wells sampling a forward stratigraphic model built with Dionisos. To cross-validate the method, we introduce a distance comparing the relative geological time of two models at each geographic position, and we compare the models in terms of volumes. Results show the ability of the method to automatically generate stratigraphic correlation scenarios, and also highlight some challenges when sampling stratigraphic uncertainties from multiple wells.
Dawson, Debra Ann; Lam, Jack; Lewis, Lindsay B; Carbonell, Felix; Mendola, Janine D; Shmuel, Amir
2016-02-01
Numerous studies have demonstrated functional magnetic resonance imaging (fMRI)-based resting-state functional connectivity (RSFC) between cortical areas. Recent evidence suggests that synchronous fluctuations in blood oxygenation level-dependent fMRI reflect functional organization at a scale finer than that of visual areas. In this study, we investigated whether RSFC within and between lower visual areas is retinotopically organized, and whether retinotopically organized RSFC merely reflects cortical distance. Subjects underwent retinotopic mapping and, separately, resting-state fMRI. Visual areas V1, V2, and V3 were subdivided into regions of interest (ROIs) according to quadrants and visual field eccentricity. Functional connectivity (FC) was computed based on Pearson's linear correlation (correlation) and Pearson's linear partial correlation (the correlation between two time courses after the time courses from all other regions in the network are regressed out). Within a quadrant, within visual areas, all correlation and nearly all partial correlation FC measures showed statistical significance. Consistently in V1, V2, and to a lesser extent in V3, correlation decreased with increasing eccentricity separation. Consistent with previously reported monkey anatomical connectivity, correlation/partial correlation values between regions from adjacent areas (V1-V2 and V2-V3) were higher than those between nonadjacent areas (V1-V3). Within a quadrant, partial correlation showed consistent significance between regions from two different areas with the same or adjacent eccentricities. Pairs of ROIs with similar eccentricity showed higher correlation/partial correlation than pairs distant in eccentricity. Between dorsal and ventral quadrants, partial correlation between common and adjacent eccentricity regions within a visual area showed statistical significance; this extended to more distant eccentricity regions in V1. Within and between quadrants, correlation decreased
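The partial correlation used here — correlating two time courses after regressing out all other regions — can be sketched with synthetic data. The variable names are illustrative; `others` stands in for the time courses of the remaining network regions:

```python
import numpy as np

def partial_corr(x, y, others):
    """Pearson correlation between x and y after regressing out `others` (with intercept)."""
    design = np.column_stack([np.ones(len(x)), others])
    rx = x - design @ np.linalg.lstsq(design, x, rcond=None)[0]  # residual of x
    ry = y - design @ np.linalg.lstsq(design, y, rcond=None)[0]  # residual of y
    return np.corrcoef(rx, ry)[0, 1]
```

Two signals driven by a common source correlate strongly, but their partial correlation given that source is near zero, which is exactly what distinguishes direct from indirect connectivity.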
Finite element formulation for a digital image correlation method
International Nuclear Information System (INIS)
Sun Yaofeng; Pang, John H. L.; Wong, Chee Khuen; Su Fei
2005-01-01
A finite element formulation for a digital image correlation method is presented that directly determines the complete, two-dimensional displacement field during the image correlation process on digital images. The entire image area of interest is discretized into finite elements that are involved in the common image correlation process by use of our algorithms. This image correlation method with finite element formulation has an advantage over subset-based image correlation methods because it satisfies the requirements of displacement continuity and derivative continuity among elements on images. Numerical studies and a real experiment are used to verify the proposed formulation. Results show that image correlation with the finite element formulation is computationally efficient, accurate, and robust.
Shaw, Andrew J; Ingham, Stephen A; Atkinson, Greg; Folland, Jonathan P
2015-01-01
A positive relationship between running economy and maximal oxygen uptake (V̇O2max) has been postulated in trained athletes, but previous evidence is equivocal and could have been confounded by statistical artefacts. Whether this relationship is preserved in response to running training (changes in running economy and V̇O2max) has yet to be explored. This study examined the relationships of (i) running economy and V̇O2max between runners, and (ii) the changes in running economy and V̇O2max that occur within runners in response to habitual training. 168 trained distance runners (males, n = 98, V̇O2max 73.0 ± 6.3 mL∙kg-1∙min-1; females, n = 70, V̇O2max 65.2 ± 5.9 mL∙kg-1∙min-1) performed a discontinuous submaximal running test to determine running economy (kcal∙km-1). A continuous incremental treadmill running test to volitional exhaustion was used to determine V̇O2max. 54 participants (males, n = 27; females, n = 27) also completed at least one follow-up assessment. Partial correlation analysis revealed small positive relationships between running economy and V̇O2max (males r = 0.26, females r = 0.25), and a moderate positive relationship between the changes in running economy and V̇O2max in response to habitual training (r = 0.35). With >85% of the variance in these parameters unexplained by this relationship, these findings reaffirm that running economy and V̇O2max are primarily determined independently.
Directory of Open Access Journals (Sweden)
Peek-Asa Corinne
2011-01-01
Full Text Available Abstract Background The need to estimate the distance from an individual to a service provider is common in public health research. However, estimated distances are often imprecise and, we suspect, biased due to a lack of specific residential location data. In many cases, to protect subject confidentiality, data sets contain only a ZIP Code or a county. Results This paper describes an algorithm, known as "the probabilistic sampling method" (PSM), which was used to create a distribution of estimated distances to a health facility for a person whose region of residence was known, but for which demographic details and centroids were known for smaller areas within the region. From this distribution, the median distance is the most likely distance to the facility. The algorithm, using Monte Carlo sampling methods, drew a probabilistic sample of all the smaller areas (Census blocks) within each participant's reported region (ZIP Code), weighting these areas by the number of residents in the same age group as the participant. To test the PSM, we used data from a large cross-sectional study that screened women at a clinic for intimate partner violence (IPV). We had data on each woman's age and ZIP Code, but no precise residential address. We used the PSM to select a sample of census blocks, then calculated network distances from each census block's centroid to the closest IPV facility, resulting in a distribution of distances from these locations to the geocoded locations of known IPV services. We selected the median distance as the most likely distance traveled and computed confidence intervals that describe the shortest and longest distance within which any given percent of the distance estimates lie. We compared our results to those obtained using two other geocoding approaches. We show that one method overestimated the most likely distance and the other underestimated it. Neither of the alternative methods produced confidence intervals for the distance
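The core of the PSM can be sketched as follows. This toy version draws blocks with probability proportional to a weight and uses straight-line distances; the study itself used network distances and age-group-specific population weights:

```python
import numpy as np

def psm_distance(block_centroids, block_weights, facility, n_draws=10000, ci=90, seed=42):
    """Draw census blocks weighted by population, compute the distance from each
    drawn centroid to the facility, and return the median plus a central
    `ci`-percent interval of the resulting distance distribution."""
    rng = np.random.default_rng(seed)
    p = np.asarray(block_weights, dtype=float)
    p = p / p.sum()                                   # normalize weights to probabilities
    idx = rng.choice(len(p), size=n_draws, p=p)       # Monte Carlo sample of blocks
    d = np.linalg.norm(np.asarray(block_centroids)[idx] - np.asarray(facility), axis=1)
    tail = (100 - ci) / 2
    lo, hi = np.percentile(d, [tail, 100 - tail])
    return float(np.median(d)), (float(lo), float(hi))
```

The median of the sampled distances is the "most likely" distance the abstract describes, and the percentile interval plays the role of its confidence interval.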
Nuclear spin measurement using the angular correlation method
International Nuclear Information System (INIS)
Schapira, J.-P.
The double angular correlation method is defined by a semi-classical approach (Biedenharn). The equivalent formulas in quantum mechanics are discussed for coherent and incoherent angular momentum mixing; the correlations are described using the density and efficiency matrices (Fano). The ambiguities in double angular correlations can sometimes be suppressed (emission of particles with a high orbital momentum l) by using triple correlations between levels with well-defined spin and parity. Triple correlations are applied to the case where the direction of linear polarization of γ-rays is detected [fr]
Fekete, Gábor; Fodor, Emese; Pesznyák, Csilla
2015-03-08
A novel method has been put forward for very large electron beam profile measurement. With this method, absorbed dose profiles can be measured at any depth in a solid phantom for total skin electron therapy. Electron beam dose profiles were collected with two different methods. Profile measurements were performed at 0.2 and 1.2 cm depths with a parallel plate and a thimble chamber, respectively. Electron beams of 108 cm × 108 cm and 45 cm × 45 cm projected size were scanned by vertically moving the phantom and detector at 300 cm source-to-surface distance with 90° and 270° gantry angles. The profiles collected this way were used as reference. Afterwards, the phantom was fixed on the central axis and the gantry was rotated in fixed angular steps. After applying corrections for the different source-to-detector distances and angles of incidence, the profiles measured in the two setups were compared, and a correction formalism was developed. The agreement between the cross profiles taken at the depth of maximum dose with the 'classical' scanning and with the new moving gantry method was better than 0.5% in the measuring range from zero to 71.9 cm. Inverse square and attenuation corrections had to be applied. The profiles measured with the parallel plate chamber agree to better than 1%, except for the penumbra region, where the maximum difference is 1.5%. With the moving gantry method, very large electron field profiles can be measured at any depth in a solid phantom with high accuracy and reproducibility and with much less time per step. No special instrumentation is needed. The method can be used for commissioning of very large electron beams for computer-assisted treatment planning, for designing beam modifiers to improve dose uniformity, and for verification of computed dose profiles.
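The inverse-square part of the correction formalism can be illustrated with a toy function; the optional air-attenuation term and its coefficient `mu` are assumptions for illustration, not values from the study:

```python
import math

def correct_to_reference(dose, sdd, ref_sdd, mu=0.0):
    """Rescale a dose measured at source-to-detector distance `sdd` (cm) to the
    reference distance `ref_sdd` (cm): inverse-square scaling, plus an optional
    exponential attenuation term over the extra path length (mu in 1/cm)."""
    inverse_square = (sdd / ref_sdd) ** 2          # fluence falls off as 1/r^2
    attenuation = math.exp(mu * (sdd - ref_sdd))   # undo attenuation over extra path
    return dose * inverse_square * attenuation
```

Doubling the source-to-detector distance reduces the reading fourfold, so the correction multiplies it back by four.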
Directory of Open Access Journals (Sweden)
Amalia SAPRIATI
2010-04-01
Full Text Available This paper addresses the use of computer-based testing in distance education, based on the experience of Universitas Terbuka (UT), Indonesia. Computer-based testing has been developed at UT to meet specific needs of distance students, namely: (i) students' inability to sit for the scheduled test, (ii) conflicting test schedules, and (iii) students' flexibility to take examinations to improve their grades. In 2004, UT initiated a pilot project to develop a system and program for a computer-based testing method. In 2005 and 2006, tryouts of computer-based testing methods were conducted in 7 Regional Offices that were considered to have sufficient supporting resources. The results of the tryouts revealed that students were enthusiastic about taking computer-based tests and expected that the method would be provided by UT as an alternative to the traditional paper-and-pencil test method. UT then implemented the computer-based testing method in 6 and 12 Regional Offices in 2007 and 2008, respectively. The computer-based testing was administered in the city of the designated Regional Office and was supervised by the Regional Office staff. The development of computer-based testing began with tests conducted on computers in a networked configuration. The system has been continually improved, and it currently uses devices linked to the internet or the World Wide Web. The construction of a test involves the generation and selection of test items from the item bank collection of the UT Examination Center, so that the combination of selected items satisfies the test specification. Currently UT offers 250 courses involving the use of computer-based testing. Students expect that more courses will be offered with computer-based testing in Regional Offices within easy access by students.
ACCELERATION RENDERING METHOD ON RAY TRACING WITH ANGLE COMPARISON AND DISTANCE COMPARISON
Directory of Open Access Journals (Sweden)
Liliana liliana
2007-01-01
Full Text Available In computer graphics applications, a method often used to produce realistic images is ray tracing. Ray tracing models not only local illumination but also global illumination. Local illumination accounts for ambient, diffuse, and specular effects only, whereas global illumination also accounts for mirroring and transparency; local illumination considers effects from the lamp(s), while global illumination considers effects from other object(s) too. The objects usually modeled are primitive objects and mesh objects. The advantage of mesh modeling is its varied, interesting, and realistic shapes. A mesh contains many primitive objects such as triangles or (rarely) squares. A problem in mesh object modeling is long rendering time, because every ray must be checked against many triangles of the mesh. Together with checks for rays from other objects, the number of rays traced increases, which increases rendering time. To solve this problem, this research develops new methods to make the rendering process of mesh objects faster: angle comparison and distance comparison. These methods are used to reduce the number of ray checks: rays predicted not to intersect the mesh are never tested for intersection against it. With angle comparison, if a small comparison angle is used, the rendering process is fast; the disadvantage is that if the triangles are large, some triangles will be corrupted. If the comparison angle is larger, mesh corruption can be avoided, but the rendering time will be longer than without comparison. With distance comparison, the rendering time is less than without comparison, and no triangle is corrupted.
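A minimal sketch of the angle-comparison test: a triangle is skipped when the angle between the ray direction and the direction to the triangle's centre exceeds a chosen threshold. Function and variable names are illustrative, not taken from the paper:

```python
import math
import numpy as np

def may_hit(ray_origin, ray_dir, tri_center, max_angle_deg):
    """Return True if the triangle's centre lies within `max_angle_deg` of the ray
    direction; only such triangles are passed to the exact intersection test."""
    to_tri = tri_center - ray_origin
    to_tri = to_tri / np.linalg.norm(to_tri)   # unit vector towards the triangle
    d = ray_dir / np.linalg.norm(ray_dir)      # unit ray direction
    return float(d @ to_tri) >= math.cos(math.radians(max_angle_deg))
```

A small threshold prunes more triangles (faster, but risks clipping large triangles whose centre lies off-axis); a larger one is safer but slower, which is exactly the trade-off the abstract describes.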
Fabrication of 94Zr thin target for recoil distance doppler shift method of lifetime measurement
International Nuclear Information System (INIS)
Gupta, C.K.; Rohilla, Aman; Abhilash, S.R.; Kabiraj, D.; Singh, R.P.; Mehta, D.; Chamoli, S.K.
2014-01-01
A thin isotopic 94Zr target of thickness 520 μg/cm2 has been prepared for recoil distance Doppler shift method (RDM) lifetime measurement by using an electron beam deposition method on tantalum backing of 3.5 mg/cm2 thickness at Inter University Accelerator Center (IUAC), New Delhi. To meet the special requirement of smoothness of surface for RDM lifetime measurement and also to protect the outer layer of 94Zr from peeling off, a very thin layer of gold has been evaporated on the 94Zr target on a specially designed substrate holder. In all, 143 mg of 99.6% enriched 94Zr target material was utilized for the fabrication of the 94Zr targets. The target has been successfully used in a recent RDM lifetime measurement experiment at IUAC.
Development of digital image correlation method to analyse crack ...
Indian Academy of Sciences (India)
Poirier, Jean-Nicolas; Cooley, Jeffrey R; Wessely, Michelle; Guebert, Gary M; Petrocco-Napuli, Kristina
2014-10-01
Objective: The purpose of this study was to evaluate the perceived effectiveness and learning potential of 3 Web-based educational methods in a postgraduate radiology setting. Methods: Three chiropractic radiology faculty from diverse geographic locations led mini-courses using asynchronous discussion boards, synchronous Web conferencing, and asynchronous voice-over case presentations formatted for Web viewing. At the conclusion of each course, participants filled out a 14-question survey (using a 5-point Likert scale) designed to evaluate the effectiveness of each method in achieving specified course objectives and goals and their satisfaction when considering the learning potential of each method. The mean, standard deviation, and percentage agreements were tabulated. Results: Twenty, 15, and 10 participants completed the discussion board, Web conferencing, and case presentation surveys, respectively. All educational methods demonstrated a high level of agreement regarding the course objective (total mean rating >4.1). The case presentations had the highest overall rating for achieving the course goals; however, all but one method still had total mean ratings >4.0 and overall agreement levels of 70%-100%. The strongest potential for interactive learning was found with Web conferencing and discussion boards, while case presentations rated very low in this regard. Conclusions: The perceived effectiveness in achieving the course objective and goals was high for each method. Residency-based distance education may be a beneficial adjunct to current methods of training, allowing for international collaboration. When considering all aspects tested, there does not appear to be a clear advantage to any one method. Utilizing various methods may be most appropriate.
Directory of Open Access Journals (Sweden)
Q. Zhou
2017-07-01
Full Text Available Visual Odometry (VO) is a critical component for planetary robot navigation and safety. It estimates the ego-motion using stereo images frame by frame. Feature point extraction and matching is one of the key steps in robotic motion estimation, and it largely influences precision and robustness. In this work, we choose the Oriented FAST and Rotated BRIEF (ORB) features, considering both accuracy and speed. For more robustness in challenging environments, e.g., rough terrain or planetary surfaces, this paper presents a robust outlier elimination method based on a Euclidean Distance Constraint (EDC) and the Random Sample Consensus (RANSAC) algorithm. In the matching process, a set of ORB feature points is extracted from the current left and right synchronous images, and the Brute Force (BF) matcher is used to find correspondences between the two images for space intersection. The EDC and RANSAC algorithms are then applied to eliminate mismatches whose distances exceed a predefined threshold. Similarly, when the next left image is matched against the current left image, EDC and RANSAC are performed iteratively. In some cases mismatched points still remain after these steps, so RANSAC is applied a third time to eliminate the effect of those outliers on the estimation of the ego-motion parameters (interior and exterior orientation). The proposed approach has been tested on a real-world vehicle dataset, and the results demonstrate its high robustness.
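The RANSAC-with-distance-threshold idea can be sketched in a simplified setting where the inter-frame motion is modeled as a pure 2-D translation. This is a deliberate simplification of the paper's full interior/exterior orientation estimation; the names and thresholds are illustrative:

```python
import numpy as np

def ransac_translation(pts_a, pts_b, thresh=2.0, iters=200, seed=0):
    """Hypothesize a translation from one putative match, keep matches whose
    Euclidean residual is below `thresh` (the distance constraint), and refit
    the translation on the best inlier set."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(pts_a), dtype=bool)
    for _ in range(iters):
        i = rng.integers(len(pts_a))
        t = pts_b[i] - pts_a[i]                        # one-match hypothesis
        resid = np.linalg.norm(pts_a + t - pts_b, axis=1)
        inliers = resid < thresh                       # Euclidean distance cutoff
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    t = (pts_b[best_inliers] - pts_a[best_inliers]).mean(axis=0)
    return t, best_inliers
```

Mismatched correspondences fail the distance test against the consensus motion and are discarded before the motion parameters are refit.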
Mean protein evolutionary distance: a method for comparative protein evolution and its application.
Directory of Open Access Journals (Sweden)
Michael J Wise
Full Text Available Proteins are under tight evolutionary constraints, so if a protein changes it can only do so in ways that do not compromise its function. In addition, the proteins in an organism evolve at different rates. Leveraging the history of patristic distance methods, a new method for analysing comparative protein evolution, called Mean Protein Evolutionary Distance (MeaPED), measures differential resistance to evolutionary pressure across viral proteomes and is thereby able to point to the proteins' roles. Different species' proteomes can also be compared because the results, consistent across virus subtypes, concisely reflect the very different lifestyles of the viruses. The MeaPED method is here applied to influenza A virus, hepatitis C virus, human immunodeficiency virus (HIV), dengue virus, rotavirus A, polyomavirus BK and measles, which span the positive and negative single-stranded, double-stranded and reverse transcribing RNA viruses, and double-stranded DNA viruses. From this analysis, host interaction proteins including hemagglutinin (influenza), and viroporins agnoprotein (polyomavirus), p7 (hepatitis C) and VPU (HIV) emerge as evolutionary hot-spots. By contrast, RNA-directed RNA polymerase proteins including L (measles), PB1/PB2 (influenza) and VP1 (rotavirus), and internal serine proteases such as NS3 (dengue and hepatitis C virus) emerge as evolutionary cold-spots. The hot spot influenza hemagglutinin protein is contrasted with the related cold spot H protein from measles. It is proposed that evolutionary cold-spot proteins can become significant targets for second-line anti-viral therapeutics, in cases where front-line vaccines are not available or have become ineffective due to mutations in the hot-spot, generally more antigenically exposed proteins. The MeaPED package is available from www.pam1.bcs.uwa.edu.au/~michaelw/ftp/src/meaped.tar.gz.
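The central MeaPED quantity — a mean evolutionary distance per protein, compared across the proteome — can be sketched as follows. The input format (one pairwise distance matrix per protein) and the function name are illustrative assumptions, not the package's actual API:

```python
import numpy as np

def mean_evolutionary_distance(distance_matrices):
    """Average all pairwise distances in each protein's matrix; relatively high
    means suggest evolutionary hot-spots, relatively low means cold-spots."""
    scores = {}
    for protein, d in distance_matrices.items():
        d = np.asarray(d, dtype=float)
        iu = np.triu_indices(d.shape[0], k=1)   # upper triangle: each pair once
        scores[protein] = float(d[iu].mean())
    return scores
```

Ranking proteins by this score within one proteome reproduces the hot-spot/cold-spot contrast the abstract draws between, e.g., hemagglutinin and the polymerase proteins.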
Mean protein evolutionary distance: a method for comparative protein evolution and its application.
Wise, Michael J
2013-01-01
Székely, Gábor J.; Rizzo, Maria L.
2010-01-01
Distance correlation is a new class of multivariate dependence coefficients applicable to random vectors of arbitrary and not necessarily equal dimension. Distance covariance and distance correlation are analogous to product-moment covariance and correlation, but generalize and extend these classical bivariate measures of dependence. Distance correlation characterizes independence: it is zero if and only if the random vectors are independent. The notion of covariance with...
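The coefficient described in this record can be computed directly from its definition. Below is a minimal numpy sketch (an illustration, not code from the cited work): the two pairwise Euclidean distance matrices are double-centered, and the distance correlation is the normalized mean of their elementwise product.

```python
import numpy as np

def distance_correlation(x, y):
    """Sample distance correlation of two equally sized samples; the samples
    may be scalars or vectors (of any, possibly different, dimensions)."""
    x = np.asarray(x, dtype=float).reshape(len(x), -1)
    y = np.asarray(y, dtype=float).reshape(len(y), -1)
    a = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    b = np.linalg.norm(y[:, None, :] - y[None, :, :], axis=-1)
    # Double-center each pairwise distance matrix
    A = a - a.mean(0) - a.mean(1)[:, None] + a.mean()
    B = b - b.mean(0) - b.mean(1)[:, None] + b.mean()
    dcov2 = (A * B).mean()                        # squared distance covariance
    denom = np.sqrt((A * A).mean() * (B * B).mean())
    return float(np.sqrt(dcov2 / denom)) if denom > 0 else 0.0
```

Unlike the Pearson coefficient, this statistic is nonzero for purely nonlinear dependence such as y = x², which is what makes it useful for detecting associations the classical coefficient misses.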
Representing distance, consuming distance
DEFF Research Database (Denmark)
Larsen, Gunvor Riber
Title: Representing Distance, Consuming Distance Abstract: Distance is a condition for corporeal and virtual mobilities, for desired and actual travel, yet it has received relatively little attention as a theoretical entity in its own right. Understandings of and assumptions about distance...... are being consumed in the contemporary society, in the same way as places, media, cultures and status are being consumed (Urry 1995, Featherstone 2007). An exploration of distance and its representations through contemporary consumption theory could expose what role distance plays in forming...
Mohamed, Abdel-Baset A.
2018-04-01
In this paper, some non-classical correlations are investigated for bipartite partitions of two qubits trapped in two spatially separated cavities connected by an optical fiber. The results show that the trace distance discord and Bell's non-locality introduce quantum correlations beyond entanglement. Moreover, the correlation functions of the trace distance discord and Bell's non-locality are very sensitive to the initial correlations, the coupling strengths, and the dissipation rates of the cavities. Fluctuations of the correlation functions between their initial values and their gained (lost) values appear due to the unitary evolution of the system. These fluctuations depend on the chosen initial correlations between the two subsystems. The maximal violations of Bell's inequality occur when the logarithmic negativity and the trace distance discord reach certain values. It is shown that the robustness of the non-classical correlations against the dissipation rates of the cavities depends on the reduced density matrices of the system's bipartite partitions, and is also greatly enhanced by choosing appropriate coupling strengths.
Amiroch, S.; Pradana, M. S.; Irawan, M. I.; Mukhlash, I.
2017-09-01
Multiple Alignment (MA) is a particularly important tool for studying a viral genome and determining the evolutionary process of a specific virus. Applying MA to the spread of the Severe Acute Respiratory Syndrome (SARS) epidemic is of particular interest because this virus epidemic spread so quickly a few years ago that it drew medical attention in many countries. Although a lot of software already exists to process multiple sequences, the use of pairwise alignment to build the MA is very important to consider. Previous research performed the pairwise alignments underlying the MA with the Super Pairwise Alignment algorithm; in this study, the Needleman-Wunsch dynamic programming algorithm was used, simulated in Matlab. The MA analysis yields stable and unstable regions that indicate the positions where mutations occur, a phylogenetic tree of the SARS epidemic constructed by a distance method, and a network of the mutations.
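The Needleman-Wunsch algorithm named above is standard global alignment by dynamic programming. A minimal Python sketch follows; the scoring values (match/mismatch/gap) are illustrative assumptions, not taken from the study.

```python
def needleman_wunsch(s, t, match=1, mismatch=-1, gap=-2):
    """Global alignment by dynamic programming (scores are illustrative)."""
    n, m = len(s), len(t)
    F = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        F[i][0] = i * gap
    for j in range(1, m + 1):
        F[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = match if s[i - 1] == t[j - 1] else mismatch
            F[i][j] = max(F[i - 1][j - 1] + sub,   # substitution
                          F[i - 1][j] + gap,       # gap in t
                          F[i][j - 1] + gap)       # gap in s
    # Traceback, preferring substitutions over gaps
    a, b, i, j = [], [], n, m
    while i > 0 or j > 0:
        sub = match if i and j and s[i - 1] == t[j - 1] else mismatch
        if i > 0 and j > 0 and F[i][j] == F[i - 1][j - 1] + sub:
            a.append(s[i - 1]); b.append(t[j - 1]); i -= 1; j -= 1
        elif i > 0 and F[i][j] == F[i - 1][j] + gap:
            a.append(s[i - 1]); b.append('-'); i -= 1
        else:
            b.append(t[j - 1]); a.append('-'); j -= 1
    return ''.join(reversed(a)), ''.join(reversed(b)), F[n][m]
```

The quadratic DP table is what makes pairwise alignment tractable; a multiple alignment is then typically assembled progressively from such pairwise results.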
Srivastava, Madhur; Freed, Jack H
2017-11-16
Regularization is often utilized to elicit the desired physical results from experimental data. The recent development of a denoising procedure yielding about 2 orders of magnitude of improvement in SNR obviates the need for regularization, which achieves a compromise between canceling effects of noise and obtaining an estimate of the desired physical results. We show how singular value decomposition (SVD) can be employed directly on the denoised data, using pulse dipolar electron spin resonance experiments as an example. Such experiments are useful in measuring distances and their distributions, P(r), between spin labels on proteins. In noise-free model cases exact results are obtained, but even a small amount of noise (e.g., SNR = 850 after denoising) corrupts the solution. We develop criteria that precisely determine an optimum approximate solution, which can readily be automated. This method is applicable to any signal that is currently processed with regularization of its SVD analysis.
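The core operation described above, applying SVD directly to the (denoised) data and keeping only an optimal number of components, can be illustrated with a truncated-SVD solve of a linear model K·P = S. This is a generic sketch under assumed names (kernel matrix K, signal S, truncation rank), not the authors' implementation; their criteria for choosing the rank are in the paper.

```python
import numpy as np

def tsvd_solve(K, S, rank):
    """Solve K @ P = S keeping only the leading `rank` singular components;
    the discarded small singular values are the ones that amplify noise."""
    U, s, Vt = np.linalg.svd(K, full_matrices=False)
    inv = np.zeros_like(s)
    inv[:rank] = 1.0 / s[:rank]
    return Vt.T @ (inv * (U.T @ S))
```

With full rank this reduces to the pseudoinverse solution; lowering the rank trades fidelity for noise suppression, which is the compromise the optimum-rank criteria automate.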
Kremmer, Stephan; Keienburg, Marcus; Anastassiou, Gerasimos; Schallenberg, Maurice; Steuhl, Klaus-Peter; Selbach, J Michael
2012-01-01
To compare the performance of scanning laser topography (SLT) and scanning laser polarimetry (SLP) on the rim of the optic nerve head and its surrounding area, and thereby to evaluate whether these imaging technologies are influenced by factors beyond the thickness of the retinal nerve fiber layer (RNFL). A total of 154 eyes from 5 different groups were examined: young healthy subjects (YNorm), old healthy subjects (ONorm), patients with normal tension glaucoma (NTG), patients with open-angle glaucoma and early glaucomatous damage (OAGE), and patients with open-angle glaucoma and advanced glaucomatous damage (OAGA). SLT and SLP measurements were taken. Four concentric circles were superimposed on each of the images: the first measuring at the rim of the optic nerve head (1.0 ONHD), the next measuring at 1.25 optic nerve head diameters (ONHD), then at 1.5 ONHD and at 1.75 ONHD. The aligned images were analyzed using GDx/NFA software. Both methods showed peaks of RNFL thickness in the superior and inferior segments of the ONH. The maximum thickness registered by the SLT device was at the ONH rim, where the SLP device tended to measure the lowest values. SLT measurements at the ONH were influenced by tissues besides the RNFL, such as blood vessels and glial tissue. SLT and SLP were most strongly correlated at distances of 1.25 and 1.5 ONHD. While both imaging technologies are valuable tools in detecting glaucoma, measurements at the ONH rim should be interpreted critically since both methods might provide misleading results there. For the assessment of the retinal nerve fiber layer we recommend, for both imaging technologies, measurements at distances of 1.25 and 1.5 ONHD from the rim of the optic nerve head.
International Nuclear Information System (INIS)
Alhossen, I; Bugarin, F; Segonds, S; Villeneuve-Faure, C; Baudoin, F
2017-01-01
Previous studies have demonstrated that the electrostatic force distance curve (EFDC) is a relevant way of probing injected charge in 3D. However, the EFDC needs a thorough investigation to be accurately analyzed and to provide information about charge localization. Interpreting the EFDC in terms of charge distribution is not straightforward from an experimental point of view. In this paper, a sensitivity analysis of the EFDC is studied using buried electrodes as a first approximation. In particular, the influence of input factors such as the electrode width, depth and applied potential are investigated. To reach this goal, the EFDC is fitted to a law described by four parameters, called logistic law, and the influence of the electrode parameters on the law parameters has been investigated. Then, two methods are applied—Sobol’s method and the factorial design of experiment—to quantify the effect of each factor on each parameter of the logistic law. Complementary results are obtained from both methods, demonstrating that the EFDC is not the result of the superposition of the contribution of each electrode parameter, but that it exhibits a strong contribution from electrode parameter interaction. Furthermore, thanks to these results, a matricial model has been developed to predict EFDCs for any combination of electrode characteristics. A good correlation is observed with the experiments, and this is promising for charge investigation using an EFDC. (paper)
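The paper's exact four-parameter "logistic law" is not reproduced in this record; as an illustrative assumption, a generic four-parameter logistic F(z) ≈ A + (B − A)/(1 + (z/z0)^p) can be fitted by a grid search over (z0, p) with a linear least-squares solve for (A, B) at each grid point, since the model is linear in those two parameters.

```python
import numpy as np

def fit_logistic_law(z, F, z0_grid, p_grid):
    """Grid-search fit of F(z) ~ A + (B - A) / (1 + (z / z0)**p).
    For each candidate (z0, p) the model is linear in (A, B), so those
    two parameters are obtained by ordinary least squares."""
    best = None
    for z0 in z0_grid:
        for p in p_grid:
            w = 1.0 / (1.0 + (z / z0) ** p)   # -> 1 as z -> 0, -> 0 as z -> inf
            X = np.column_stack([1.0 - w, w])  # columns multiply A and B
            coef, *_ = np.linalg.lstsq(X, F, rcond=None)
            err = float(np.sum((X @ coef - F) ** 2))
            if best is None or err < best[0]:
                best = (err, coef[0], coef[1], z0, p)
    _, A, B, z0, p = best
    return A, B, z0, p
```

A sensitivity analysis (Sobol' indices or a factorial design, as in the paper) would then be run on how the fitted (A, B, z0, p) respond to the electrode width, depth and applied potential.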
Alhossen, I.; Villeneuve-Faure, C.; Baudoin, F.; Bugarin, F.; Segonds, S.
2017-01-01
Yap, Jonathan; Lim, Fang Yi; Gao, Fei; Teo, Ling Li; Lam, Carolyn Su Ping; Yeo, Khung Keong
2015-10-01
Functional status assessment is the cornerstone of heart failure management and trials. The New York Heart Association (NYHA) classification and 6-minute walk distance (6MWD) are commonly used tools; however, the correlation between them is not well understood. We hypothesised that the relationship between the NYHA classification and 6MWD might vary across studies. A systematic literature search was performed to identify all studies reporting both NYHA class and 6MWD. Two reviewers independently assessed study eligibility and extracted data. Thirty-seven studies involving 5678 patients were included. There was significant heterogeneity across studies in 6MWD within all NYHA classes: I (n = 16, Q = 934.2; P < 0.001), II (n = 25, Q = 1658.3; P < 0.001), III (n = 30, Q = 1020.1; P < 0.001), and IV (n = 6, Q = 335.5; P < 0.001). There was no significant difference in average 6MWD between NYHA I and II (420 m vs 393 m; P = 0.416). There was a significant difference in average 6MWD between NYHA II and III (393 m vs 321 m; P = 0.014) and III and IV (321 m vs 224 m; P = 0.027). This remained significant after adjusting for region of study, age, and sex. Although there is an inverse correlation between NYHA II-IV and 6MWD, there is significant heterogeneity across studies in 6MWD within each NYHA class and overlap in 6MWD between NYHA I and II. The NYHA classification performs well in more symptomatic patients (NYHA III/IV) but less so in asymptomatic/mildly symptomatic patients (NYHA I/II). Nonetheless, the NYHA classification is an easily applied first-line tool in everyday clinical practice, but its potential subjectivity should be considered when performing comparisons across studies. © 2015 Wiley Periodicals, Inc.
Atmospheric pollution measurement by optical cross correlation methods - A concept
Fisher, M. J.; Krause, F. R.
1971-01-01
Method combines standard spectroscopy with statistical cross correlation analysis of two narrow light beams for remote sensing to detect foreign matter of given particulate size and consistency. Method is applicable in studies of generation and motion of clouds, nuclear debris, ozone, and radiation belts.
Correlation between different methods of intra-abdominal pressure ...
African Journals Online (AJOL)
This study aimed to determine the correlation between transvesical ... circumstances may arise where this method is not viable and alternative methods ..... The polycompartment syndrome: A concise state-of-the-art review. ... hypertension in a mixed population of critically ill patients: A multiple-center epidemiological study.
Correlations between different methods of UO2 pellet density measurement
International Nuclear Information System (INIS)
Yanagisawa, Kazuaki
1977-07-01
Density of UO2 pellets was measured by three different methods, i.e., geometrical, water-immersion and meta-xylene-immersion, and the results were treated statistically to find the correlations between the methods. The UO2 pellets were of six kinds but with the same specifications. The correlations are linear 1:1 for pellets of 95% theoretical density and above, but no such correlations exist below that level, where the results vary statistically due to interaction between open and closed pores. (auth.)
Isotope correlations for safeguards surveillance and accountancy methods
International Nuclear Information System (INIS)
Persiani, P.J.; Kalimullah.
1982-01-01
Isotope correlations corroborated by experiments, coupled with measurement methods for nuclear material in the fuel cycle, have the potential as a safeguards surveillance and accountancy system. The ICT allows the verification of: fabricator's uranium and plutonium content specifications, shipper/receiver differences between fabricator output and reactor input, reactor plant inventory changes, reprocessing batch specifications, and shipper/receiver differences between reactor output and reprocessing plant input. The investigation indicates that there exist predictable functional relationships (i.e., correlations) between isotopic concentrations over a range of burnup. Several cross-correlations serve to establish the initial fuel assembly-averaged compositions. The selection of the more effective correlations will depend not only on the level of reliability of ICT for verification, but also on the capability, accuracy and difficulty of developing measurement methods. The propagation of measurement errors through the correlations has been examined to identify the sensitivity of the isotope correlations to measurement errors, and to establish criteria for measurement accuracy in the development and selection of measurement methods. 6 figures, 3 tables
International Nuclear Information System (INIS)
Ganguly, P.; Infante, C.; Siddiqi, S.A.; Sreedhar, K.
1990-05-01
The infra-red spectra of a large number of ternary Cu(II) oxides with at least a quasi-square-planar coordination of oxygen around the copper ions have been studied. The frequency of the highest-frequency band, ν_max, is found to correlate extremely well with the shortest Cu-O distance. ν_max increases at an impressive rate of ∼20 cm⁻¹ per 0.01 Å when the Cu-O distance becomes less than 1.97 Å, which is the Cu²⁺-O²⁻ distance in square-planar CuO4 complexes as obtained from empirical ionic-radii considerations. The marked sensitivity may be used as a "titration" procedure not only to assign bands but also to obtain diagnostic information about local coordination in compounds derived, for example, from the YBa2Cu3O7-d structure, such as LaCaBaCu3O7-d. The only example where this correlation fails is in the two-layer non-superconducting oxides derived from La2(Ca,Sr)Cu2O6. The significance of this result is discussed. The marked dependence of frequency on bond distance is qualitatively examined in terms of increased electron-phonon coupling, to account for the observed tendency of the superconducting transition temperature to go through a maximum as the average basal-plane Cu-O distance decreases. (author). 52 refs, 6 figs
Isotope correlations for safeguards surveillance and accountancy methods
International Nuclear Information System (INIS)
Persiani, P.J.; Kalimullah.
1983-01-01
Isotope correlations corroborated by experiments, coupled with measurement methods for nuclear material in the fuel cycle, have the potential as a safeguards surveillance and accountancy system. The US/DOE/OSS Isotope Correlations for Surveillance and Accountancy Methods (ICSAM) program has been structured into three phases: (1) the analytical development of the Isotope Correlation Technique (ICT) for actual power reactor fuel cycles; (2) the development of a dedicated portable ICT computer system for in-field implementation; and (3) the experimental program for measurement of U and Pu isotopics in representative spent fuel rods of the initial 3 or 4 burnup cycles of the Commonwealth Edison Zion-1 and -2 PWR power plants. Since any particular correlation could generate different curves depending upon the type and positioning of the fuel assembly, a 3-D reactor model and 2-group cross-section depletion calculation for the first cycle of ZION-2 was performed with each fuel assembly as a depletion block. It is found that for a given PWR all assemblies with a unique combination of enrichment zone and number of burnable poison rods (BPRs) generate one coincident curve. Some correlations are found to generate a single curve for assemblies of all enrichments and numbers of BPRs. The 8 axial segments of the 3-D calculation generate one coincident curve for each correlation. For some correlations the curve for the full assembly homogenized over core height deviates from the curve for the 8 axial segments, and for other correlations it coincides with the curve for the segments. The former behavior is primarily due to the transmutation lag between the end segment and the middle segments. The experimental implication is that the isotope correlations exhibiting this behavior can be determined by dissolving a full assembly but not by dissolving only an axial segment, or pellets
A numerical method to account for distance in a farmer's willingness to pay for land
Bakker, Martha M.; Heuvelink, Gerard B.M.; Vrugt, Jasper A.; Polman, Nico; Brookhuis, Bart; Kuhlman, Tom
2018-01-01
Land transactions between farmers are responsible for landscape changes in rural areas. The price a farmer is willing to pay (WTP) for vacant land depends on the distance of the parcel to the farmstead. Detailed quantitative knowledge of this WTP-distance relationship is of utmost importance for
3D Rigid Registration by Cylindrical Phase Correlation Method
Czech Academy of Sciences Publication Activity Database
Bican, Jakub; Flusser, Jan
2009-01-01
Vol. 30, No. 10 (2009), pp. 914-921 ISSN 0167-8655 R&D Projects: GA MŠk 1M0572; GA ČR GA102/08/1593 Grant - others: GAUK(CZ) 48908 Institutional research plan: CEZ:AV0Z10750506 Keywords: 3D registration * correlation methods * image registration Subject RIV: BD - Theory of Information Impact factor: 1.303, year: 2009 http://library.utia.cas.cz/separaty/2009/ZOI/bican-3d digit registration by cylindrical phase correlation method.pdf
Directory of Open Access Journals (Sweden)
Patrick D Schloss
Full Text Available Pyrosequencing of PCR-amplified fragments that target variable regions within the 16S rRNA gene has quickly become a powerful method for analyzing the membership and structure of microbial communities. This approach has revealed and introduced questions that were not fully appreciated by those carrying out traditional Sanger sequencing-based methods. These include the effects of alignment quality, the best method of calculating pairwise genetic distances for 16S rRNA genes, whether it is appropriate to filter variable regions, and how the choice of variable region relates to the genetic diversity observed in full-length sequences. I used a diverse collection of 13,501 high-quality full-length sequences to assess each of these questions. First, alignment quality had a significant impact on distance values and downstream analyses. Specifically, the greengenes alignment, which does a poor job of aligning variable regions, predicted higher genetic diversity, richness, and phylogenetic diversity than the SILVA and RDP-based alignments. Second, the effect of different gap treatments in determining pairwise genetic distances was strongly affected by the variation in sequence length for a region; however, the effect of different calculation methods was subtle when determining the sample's richness or phylogenetic diversity for a region. Third, applying a sequence mask to remove variable positions had a profound impact on genetic distances by muting the observed richness and phylogenetic diversity. Finally, the genetic distances calculated for each of the variable regions did a poor job of correlating with the full-length gene. Thus, while it is tempting to apply traditional cutoff levels derived for full-length sequences to these shorter sequences, it is not advisable. Analysis of beta-diversity metrics showed that each of these factors can have a significant impact on the comparison of community membership and structure. Taken together, these results
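The effect of gap treatment on pairwise genetic distances, one of the questions examined above, can be illustrated with an uncorrected p-distance between two aligned sequences; whether gap columns count as differences or are skipped is exactly the choice at issue. This is a simplified sketch, not the study's pipeline:

```python
def pairwise_distance(a, b, count_gaps=True):
    """Uncorrected p-distance between two aligned sequences of equal length.
    With count_gaps=False, columns where either sequence has a gap are
    ignored; with count_gaps=True, a gap column counts as a difference."""
    diffs = total = 0
    for x, y in zip(a, b):
        if x == '-' or y == '-':
            if count_gaps:
                total += 1
                diffs += 1  # treat the gap as a difference
            continue
        total += 1
        if x != y:
            diffs += 1
    return diffs / total if total else 0.0
```

As the abstract notes, the impact of this choice grows with the variation in sequence length within a region, since more gap columns then enter the comparison.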
Absolute Distances to Nearby Type Ia Supernovae via Light Curve Fitting Methods
Vinkó, J.; Ordasi, A.; Szalai, T.; Sárneczky, K.; Bányai, E.; Bíró, I. B.; Borkovits, T.; Hegedüs, T.; Hodosán, G.; Kelemen, J.; Klagyivik, P.; Kriskovics, L.; Kun, E.; Marion, G. H.; Marschalkó, G.; Molnár, L.; Nagy, A. P.; Pál, A.; Silverman, J. M.; Szakáts, R.; Szegedi-Elek, E.; Székely, P.; Szing, A.; Vida, K.; Wheeler, J. C.
2018-06-01
We present a comparative study of absolute distances to a sample of very nearby, bright Type Ia supernovae (SNe) derived from high cadence, high signal-to-noise, multi-band photometric data. Our sample consists of four SNe: 2012cg, 2012ht, 2013dy and 2014J. We present new homogeneous, high-cadence photometric data in Johnson–Cousins BVRI and Sloan g′r′i′z′ bands taken from two sites (Piszkesteto and Baja, Hungary), and the light curves are analyzed with publicly available light curve fitters (MLCS2k2, SNooPy2 and SALT2.4). When comparing the best-fit parameters provided by the different codes, it is found that the distance moduli of moderately reddened SNe Ia agree within ≲0.2 mag, and the agreement is even better (≲0.1 mag) for the highest signal-to-noise BVRI data. For the highly reddened SN 2014J the dispersion of the inferred distance moduli is slightly higher. These SN-based distances are in good agreement with the Cepheid distances to their host galaxies. We conclude that the current state-of-the-art light curve fitters for Type Ia SNe can provide consistent absolute distance moduli having less than ∼0.1–0.2 mag uncertainty for nearby SNe. Still, there is room for future improvements to reach the desired ∼0.05 mag accuracy in the absolute distance modulus.
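The link between a fitted distance modulus and a physical distance follows from the standard relation μ = m − M = 5 log₁₀(d / 10 pc), so the quoted ∼0.1 mag agreement translates directly into a fractional distance uncertainty. A small sketch:

```python
def distance_from_modulus(mu):
    """Distance in parsecs from the distance modulus mu = m - M."""
    return 10 ** (mu / 5.0 + 1.0)
```

A 0.1 mag shift in μ changes the inferred distance by a factor of 10^0.02 ≈ 1.047, i.e. about 5%, which is why ∼0.05 mag accuracy is the stated goal.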
Method of vacuum correlation functions: Results and prospects
International Nuclear Information System (INIS)
Badalian, A. M.; Simonov, Yu. A.; Shevchenko, V. I.
2006-01-01
Basic results obtained within the QCD method of vacuum correlation functions over the past 20 years in the context of investigations into strong-interaction physics at the Institute of Theoretical and Experimental Physics (ITEP, Moscow) are formulated. Emphasis is placed primarily on the prospects of the general theory developed within QCD by employing both nonperturbative and perturbative methods. On the basis of ab initio arguments, it is shown that the lowest two field correlation functions play a dominant role in QCD dynamics. A quantitative theory of confinement and deconfinement, as well as of the spectra of light and heavy quarkonia, glueballs, and hybrids, is given in terms of these two correlation functions. Perturbation theory in a nonperturbative vacuum (background perturbation theory) plays a significant role, not possessing the drawbacks of conventional perturbation theory and leading to the infrared freezing of the coupling constant α_s.
Accurate measurements of E2 lifetimes using the coincidence recoil-distance method
Bhalla, R. K.; Poletti, A. R.
1984-05-01
Mean lives of four E2 transitions in the (2s, 1d) shell have been measured using the recoil-distance method (RDM). γ-rays de-exciting the level of interest were detected in coincidence with particles detected in an annular detector at a backward angle, thereby reducing the background and producing a beam of recoiling nuclei of well-defined energy and recoil direction. Lifetimes measured were: 22Ne, 1.275 MeV level (2+ → 0+), 5.16±0.13 ps; 26Mg, 3.588 MeV level (0+ → 2+), 9.29±0.23 ps; 30Si, 3.788 MeV level (0+ → 2+), 12.00±0.70 ps; 38Ar, 3.377 MeV level (0+ → 2+), 34.5±1.5 ps. The present measurements are compared to those of previous investigators. For the 22Ne level, averaged results from four different measurement techniques are compared and found to be in good agreement. The experimental results are compared to shell-model calculations.
Accurate measurements of E2 lifetimes using the coincidence recoil-distance method
International Nuclear Information System (INIS)
Bhalla, R.K.; Poletti, A.R.
1984-01-01
Improvement of correlated sampling Monte Carlo methods for reactivity calculations
International Nuclear Information System (INIS)
Nakagawa, Masayuki; Asaoka, Takumi
1978-01-01
Two correlated Monte Carlo methods, the similar flight path and the identical flight path methods, have been improved to evaluate up to the second-order change of the reactivity perturbation. Secondary fission neutrons produced by neutrons that have passed through perturbed regions in both the unperturbed and perturbed systems are followed in such a way as to maintain a strong correlation between the secondary neutrons in the two systems. These techniques are incorporated into the general-purpose Monte Carlo code MORSE, so that the statistical error of the calculated reactivity change can also be estimated. The control rod worths measured in the FCA V-3 assembly are analyzed with the present techniques, which are shown to predict the measured values within the standard deviations. The identical flight path method has revealed itself more useful than the similar flight path method for the analysis of the control rod worth. (auth.)
Neutron Star masses from the Field Correlator Method Equation of State
Directory of Open Access Journals (Sweden)
Zappalà D.
2014-04-01
Full Text Available We analyse the hadron-quark phase transition in neutron stars by confronting the hadronic Equation of State (EoS), obtained according to the microscopic Brueckner-Hartree-Fock many-body theory, with the quark matter EoS derived within the Field Correlator Method. In particular, the latter EoS is parametrized only in terms of the gluon condensate and the large-distance quark-antiquark potential, so that the comparison of the results of this analysis with the most recent measurements of heavy neutron star masses provides physical constraints on these two parameters.
Fast electronic structure methods for strongly correlated molecular systems
International Nuclear Information System (INIS)
Head-Gordon, Martin; Beran, Gregory J O; Sodt, Alex; Jung, Yousung
2005-01-01
A short review is given of newly developed fast electronic structure methods that are designed to treat molecular systems with strong electron correlations, such as diradicaloid molecules, for which standard electronic structure methods such as density functional theory are inadequate. These new local correlation methods are based on coupled cluster theory within a perfect pairing active space, containing either a linear or quadratic number of pair correlation amplitudes, to yield the perfect pairing (PP) and imperfect pairing (IP) models. This reduces the scaling of the coupled cluster iterations to no worse than cubic, relative to the sixth power dependence of the usual (untruncated) coupled cluster doubles model. A second order perturbation correction, PP(2), to treat the neglected (weaker) correlations is formulated for the PP model. To ensure minimal prefactors, in addition to favorable size-scaling, highly efficient implementations of PP, IP and PP(2) have been completed, using auxiliary basis expansions. This yields speedups of almost an order of magnitude over the best alternatives using 4-center 2-electron integrals. A short discussion of the scope of accessible chemical applications is given
Tracing Method with Intra and Inter Protocols Correlation
Directory of Open Access Journals (Sweden)
Marin Mangri
2009-05-01
Full Text Available MEGACO or H.248 is a protocol enabling a centralized Softswitch (or MGC) to control MGs between Voice over Packet (VoP) networks and traditional ones. To analyze real implementations in more depth, it is useful to use a tracing system with intra- and inter-protocol correlation. For this reason, in the case of MEGACO-H.248 it is necessary to find the appropriate method of correlation with all protocols involved. Starting from Rel4, a separation of CP (Control Plane) and UP (User Plane) management within the networks appears. The MEGACO protocol plays an important role in the migration to the new releases or from a monolithic platform to a network with distributed components.
Correlation expansion: a powerful alternative multiple scattering calculation method
International Nuclear Information System (INIS)
Zhao Haifeng; Wu Ziyu; Sebilleau, Didier
2008-01-01
We introduce a powerful alternative expansion method to perform multiple scattering calculations. In contrast to the standard multiple scattering (MS) series expansion, where the scattering contributions are grouped in terms of scattering order and may diverge in the low energy region, this expansion, called correlation expansion, partitions the scattering process into contributions from different small atom groups and converges at all energies. It converges faster than the MS series expansion when the latter is convergent. Furthermore, it takes less memory than the full MS method, so it can be used in the near edge region without any divergence problem, even for large clusters. The correlation expansion framework we derive here is very general and can serve to calculate all the elements of the scattering path operator matrix. Photoelectron diffraction calculations in a cluster containing 23 atoms are presented to test the method and compare it to full MS and standard MS series expansion.
Local Field Response Method Phenomenologically Introducing Spin Correlations
Tomaru, Tatsuya
2018-03-01
The local field response (LFR) method is a way of searching for the ground state in a similar manner to quantum annealing. However, the LFR method operates on a classical machine, and quantum effects are introduced through a priori information and through phenomenological means reflecting the states during the computations. The LFR method has been treated with a one-body approximation, and therefore, the effect of entanglement has not been sufficiently taken into account. In this report, spin correlations are phenomenologically introduced as one of the effects of entanglement, by which multiple tunneling at anticrossing points is taken into account. As a result, the accuracy of solutions for a 128-bit system increases by 31% compared with that without spin correlations.
Total focusing method with correlation processing of antenna array signals
Kozhemyak, O. A.; Bortalevich, S. I.; Loginov, E. L.; Shinyakov, Y. A.; Sukhorukov, M. P.
2018-03-01
The article proposes a method of preliminary correlation processing of a complete set of antenna array signals used in the image reconstruction algorithm. The results of experimental studies of 3D reconstruction of various reflectors with and without correlation processing are presented in the article. Software ‘IDealSystem3D’ by IDeal-Technologies was used for experiments. Copper wires of different diameters located in a water bath were used as reflectors. The use of correlation processing makes it possible to obtain a more accurate reconstruction of the image of the reflectors and to increase the signal-to-noise ratio. The experimental results were processed using an original program. This program allows varying the parameters of the antenna array and the sampling frequency.
Recoil Distance Method lifetime measurements via gamma-ray and charged-particle spectroscopy at NSCL
Voss, Philip Jonathan
The Recoil Distance Method (RDM) is a well-established technique for measuring lifetimes of electromagnetic transitions. Transition matrix elements derived from the lifetimes provide valuable insight into nuclear structure. Recent RDM investigations at NSCL present a powerful new model-independent tool for the spectroscopy of nuclei with extreme proton-to-neutron ratios that exhibit surprising behavior. Neutron-rich 18C is one such example, where a small B(E2; 2+1 → 0+gs) represented a dramatic shift from the expected inverse relationship between the B(E2) and 2+1 excitation energy. To shed light on the nature of this quadrupole excitation, the RDM lifetime technique was applied with the Koln/NSCL plunger. States in 18C were populated by the one-proton knockout reaction of a 19N secondary beam. De-excitation gamma rays were detected with the Segmented Germanium Array in coincidence with reaction residues at the focal plane of the S800 Magnetic Spectrometer. The deduced B(E2) and excitation energy were both well described by ab initio no-core shell model calculations. In addition, a novel extension of RDM lifetime measurements via charged-particle spectroscopy of exotic proton emitters has been investigated. Substituting the reaction residue degrader of the Koln/NSCL plunger with a thin silicon detector permits the study of short-lived nuclei beyond the proton dripline. A proof of concept measurement of the mean lifetime of the two-proton emitter 19Mg was conducted. The results indicated a sub-picosecond lifetime, one order of magnitude smaller than the published results, and validate this new technique for lifetime measurements of charged-particle emitters.
Correlation of energy balance method to dynamic pipe rupture analysis
International Nuclear Information System (INIS)
Kuo, H.H.; Durkee, M.
1983-01-01
When using an energy balance approach in the design of pipe rupture restraints for nuclear power plants, the NRC specifies in its Standard Review Plan 3.6.2 that the input energy to the system must be multiplied by a factor of 1.1 unless a lower value can be justified. Since the energy balance method is already quite conservative, an across-the-board use of 1.1 to amplify the energy input appears unnecessary. The paper's purpose is to show that this 'correlation factor' could be substantially less than unity if certain design parameters are met. In this paper, results of nonlinear dynamic analyses were compared to the results of the corresponding analyses based on the energy balance method, which assumes constant blowdown forces and rigid plastic material properties. The appropriate correlation factors required to match the energy balance results with the dynamic analysis results were correlated to design parameters such as restraint location from the break, yield strength of the energy absorbing component, and the restraint gap. It is shown that the correlation factor is related to a single nondimensional design parameter and can be limited to a value below unity if appropriate design parameters are chosen. It is also shown that the deformation of the restraints can be related to dimensionless system parameters. This, therefore, allows the maximum restraint deformation to be evaluated directly for design purposes. (orig.)
Dériaz, Olivier; Najafi, Bijan; Ballabeni, Pierluigi; Crettenand, Antoinette; Gobelet, Charles; Aminian, Kamiar; Rizzoli, René; Gremion, Gerald
2010-01-01
The beneficial effect of physical exercise on bone mineral density (BMD) is at least partly explained by the forces exerted directly on the bones. Male runners present generally higher BMD than sedentary individuals. We postulated that the proximal tibia BMD is related to the running distance, as well as to the magnitude of the shocks (while running) in male runners. A prospective study (three yearly measurements) included 81 healthy male subjects: 16 sedentary lean subjects, and 3 groups of ...
The Distance Standard Deviation
Edelmann, Dominic; Richards, Donald; Vogel, Daniel
2017-01-01
The distance standard deviation, which arises in distance correlation analysis of multivariate data, is studied as a measure of spread. New representations for the distance standard deviation are obtained in terms of Gini's mean difference and in terms of the moments of spacings of order statistics. Inequalities for the distance variance are derived, proving that the distance standard deviation is bounded above by the classical standard deviation and by Gini's mean difference. Further, it is ...
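The bounds stated in this abstract are easy to check numerically. Below is a minimal sketch (my illustration, not the authors' code) of the distance standard deviation of a univariate sample in V-statistic form, computed from the double-centered pairwise distance matrix, together with the empirical Gini mean difference:

```python
import numpy as np

def distance_sd(x):
    """Distance standard deviation of a univariate sample (V-statistic form):
    double-center the pairwise |xi - xj| distance matrix and take the
    root-mean-square of its entries."""
    x = np.asarray(x, dtype=float)
    d = np.abs(x[:, None] - x[None, :])                        # pairwise distances
    a = d - d.mean(axis=0) - d.mean(axis=1)[:, None] + d.mean()  # double centering
    return np.sqrt((a * a).mean())

rng = np.random.default_rng(42)
x = rng.normal(size=500)
dsd = distance_sd(x)
gini = np.abs(x[:, None] - x[None, :]).mean()  # empirical Gini mean difference
sd = x.std()                                   # classical (population-form) sd
```

At the empirical distribution the abstract's inequalities hold: `dsd` does not exceed `sd` or `gini`, and a constant sample has distance standard deviation zero.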
Lowe, David J.; Pearce, Nicholas J. G.; Jorgensen, Murray A.; Kuehn, Stephen C.; Tryon, Christian A.; Hayward, Chris L.
2017-11-01
We define tephras and cryptotephras and their components (mainly ash-sized particles of glass ± crystals in distal deposits) and summarize the basis of tephrochronology as a chronostratigraphic correlational and dating tool for palaeoenvironmental, geological, and archaeological research. We then document and appraise recent advances in analytical methods used to determine the major, minor, and trace elements of individual glass shards from tephra or cryptotephra deposits to aid their correlation and application. Protocols developed recently for the electron probe microanalysis of major elements in individual glass shards help to improve data quality and standardize reporting procedures. A narrow electron beam (diameter ∼3-5 μm) can now be used to analyze smaller glass shards than previously attainable. Reliable analyses of 'microshards' (defined here as glass shards T2 test). Randomization tests can be used where distributional assumptions such as multivariate normality underlying parametric tests are doubtful. Compositional data may be transformed and scaled before being subjected to multivariate statistical procedures including calculation of distance matrices, hierarchical cluster analysis, and PCA. Such transformations may make the assumption of multivariate normality more appropriate. A sequential procedure using Mahalanobis distance and the Hotelling two-sample T2 test is illustrated using glass major element data from trachytic to phonolitic Kenyan tephras. All these methods require a broad range of high-quality compositional data which can be used to compare 'unknowns' with reference (training) sets that are sufficiently complete to account for all possible correlatives, including tephras with heterogeneous glasses that contain multiple compositional groups. Currently, incomplete databases are tending to limit correlation efficacy. The development of an open, online global database to facilitate progress towards integrated, high
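The sequential Mahalanobis-distance / Hotelling two-sample procedure mentioned above rests on a standard statistic; here is a hedged numpy sketch (the function name and the synthetic "major element" data are illustrative, not taken from the paper):

```python
import numpy as np

def hotelling_t2(g1, g2):
    """Two-sample Hotelling T^2 for comparing mean compositions of two
    glass-shard data sets g1, g2 of shape (n_i, p):
    T^2 = n1*n2/(n1+n2) * D_M^2, with D_M the Mahalanobis distance
    between the group means under the pooled covariance."""
    n1, n2 = len(g1), len(g2)
    diff = g1.mean(axis=0) - g2.mean(axis=0)
    sp = ((n1 - 1) * np.cov(g1, rowvar=False)
          + (n2 - 1) * np.cov(g2, rowvar=False)) / (n1 + n2 - 2)  # pooled covariance
    d2 = diff @ np.linalg.solve(sp, diff)  # squared Mahalanobis distance of the means
    return n1 * n2 / (n1 + n2) * d2

rng = np.random.default_rng(0)
base  = rng.normal(size=(40, 3))                 # 40 shards, 3 element oxides
same  = base + rng.normal(scale=0.1, size=(40, 3))   # compositionally similar tephra
other = base + np.array([2.0, 0.0, 0.0])             # clearly shifted mean
t2_same, t2_other = hotelling_t2(base, same), hotelling_t2(base, other)
```

A large T^2 flags a mean-composition difference; in a sequential correlation procedure, candidate correlatives with small T^2 are retained for further comparison.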
Rinzler, Charles C.; Gray, William C.; Faircloth, Brian O.; Zediker, Mark S.
2016-02-23
A monitoring and detection system for use on high power laser systems, long distance high power laser systems and tools for performing high power laser operations. In particular, the monitoring and detection systems provide break detection and continuity protection for performing high power laser operations on, and in, remote and difficult to access locations.
Using Future Research Methods in Analysing Policies Relating to Open Distance Education in Africa
Makoe, Mpine Elizabeth
2018-01-01
Many African countries have developed policies to reform their education system in order to widen participation in higher education. To achieve this, open, online and distance education based models have been advocated as the most viable delivery tools in expanding access to higher education. However, the policy analysis of Kenya, Rwanda and…
Distance Education Teaching Methods and Student Responses in the Animal Sciences
Bing, Jada Quinome
2012-01-01
The overall objective of this dissertation is to observe whether or not an Anatomy & Physiology Distance Education (DistEd) course offered in the Animal Science Department will prove to be valuable in the learning process for students. Study 1 was conducted to determine whether gross anatomy of animals could be taught effectively at the…
Gerber, Gwendolyn L.
The diagnosis and evaluation of families applying for family therapy is necessary to identify problems in their relationships that can be effectively integrated into treatment. In the Family Distance Doll Placement technique, a diagnostic tool for assessing patterns of closeness and distance within the family, parents are asked to make up stories…
Petascale Many Body Methods for Complex Correlated Systems
Pruschke, Thomas
2012-02-01
Correlated systems constitute an important class of materials in modern condensed matter physics. Correlations among electrons are at the heart of all ordering phenomena and many intriguing novel aspects, such as quantum phase transitions or topological insulators, observed in a variety of compounds. Yet, theoretically describing these phenomena is still a formidable task, even if one restricts the models used to the smallest possible set of degrees of freedom. Here, modern computer architectures play an essential role, and the joint effort to devise efficient algorithms and implement them on state-of-the-art hardware has become an extremely active field in condensed-matter research. To tackle this task single-handed is quite obviously not possible. The NSF-OISE funded PIRE collaboration ``Graduate Education and Research in Petascale Many Body Methods for Complex Correlated Systems'' is a successful initiative to bring together leading experts around the world to form a virtual international organization for addressing these emerging challenges and educating the next generation of computational condensed matter physicists. The collaboration includes research groups developing novel theoretical tools to reliably and systematically study correlated solids, experts in efficient computational algorithms needed to solve the emerging equations, and those able to use modern heterogeneous computer architectures to make them into working tools for the growing community.
Partial correlation analysis method in ultrarelativistic heavy-ion collisions
Olszewski, Adam; Broniowski, Wojciech
2017-11-01
We argue that statistical data analysis of two-particle longitudinal correlations in ultrarelativistic heavy-ion collisions may be efficiently carried out with the technique of partial covariance. In this method, the spurious event-by-event fluctuations due to imprecise centrality determination are eliminated by projecting out the component of the covariance influenced by the centrality fluctuations. We bring up the relationship of the partial covariance to the conditional covariance. Importantly, in the superposition approach, where hadrons are produced independently from a collection of sources, the framework allows us to impose centrality constraints on the number of sources rather than on hadrons, thereby unfolding the trivial fluctuations from statistical hadronization and focusing better on the initial-state physics. We show, using simulated data from hydrodynamics followed with statistical hadronization, that the technique is practical and very simple to use, giving insight into the correlations generated in the initial stage. We also discuss the issues related to separation of the short- and long-range components of the correlation functions and show that in our example the short-range component from the resonance decays is largely reduced by considering pions of the same sign. We demonstrate the method explicitly on the cases where centrality is determined with a single central control bin or with two peripheral control bins.
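In the Gaussian-linear setting, the projection referred to in the abstract is the textbook partial covariance cov(a,b|c) = cov(a,b) - cov(a,c) cov(c,c)^{-1} cov(c,b). A toy numpy sketch (the common "centrality" variable and the noise scales are invented for illustration):

```python
import numpy as np

def partial_covariance(a, b, c):
    """Covariance of a and b with the linear influence of the single
    control variable c projected out."""
    cab = np.cov(a, b)[0, 1]
    cac = np.cov(a, c)[0, 1]
    cbc = np.cov(b, c)[0, 1]
    return cab - cac * cbc / np.var(c, ddof=1)

rng = np.random.default_rng(1)
n = 50_000
centrality = rng.normal(size=n)                  # fluctuating number of sources
a = centrality + rng.normal(scale=0.3, size=n)   # multiplicity in forward bin
b = centrality + rng.normal(scale=0.3, size=n)   # multiplicity in backward bin
raw  = np.cov(a, b)[0, 1]                        # dominated by centrality fluctuations
part = partial_covariance(a, b, centrality)      # spurious component projected out
```

The raw covariance is of order one here purely from the shared centrality fluctuation, while the partial covariance is consistent with zero, which is the effect the method exploits.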
Directory of Open Access Journals (Sweden)
Javad Nematian
2015-04-01
Full Text Available Vertex and p-center problems are two well-known types of the center problem. In this paper, a p-center problem with uncertain demand-weighted distance is introduced in which the demands are considered as fuzzy random variables (FRVs) and the objective of the problem is to minimize the maximum distance between a node and its nearest facility. Then the proposed problem is converted into deterministic integer programming (IP) problems by new methods obtained through the implementation of possibility theory and fuzzy random chance-constrained programming (FRCCP). Finally, the proposed methods are applied to locating bicycle stations in the city of Tabriz in Iran as a real case study. The computational results of our study show that these methods can be implemented for the center problem with uncertain frameworks.
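The deterministic core of the p-center problem (before the fuzzy random extension described above) can be sketched by brute force; this is my illustration of the minimax objective, not the authors' algorithm, and the line-of-stations instance is invented:

```python
import itertools

def p_center(dist, p):
    """Brute-force vertex p-center: choose p facility nodes minimizing the
    maximum distance from any node to its nearest chosen facility."""
    n = len(dist)
    best = (float("inf"), None)
    for fac in itertools.combinations(range(n), p):
        cost = max(min(dist[i][j] for j in fac) for i in range(n))
        best = min(best, (cost, fac))
    return best  # (optimal cost, chosen facility nodes)

pts = [0.0, 1.0, 10.0, 11.0]                    # candidate station sites on a line
dist = [[abs(u - v) for v in pts] for u in pts]
cost, chosen = p_center(dist, 2)                # one facility per cluster, cost 1.0
```

Exhaustive search is exponential in p; the point of the paper's IP reformulations is precisely to make realistic instances (with uncertain demand weights) tractable.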
An improved method for estimating the frequency correlation function
Chelli, Ali; Pätzold, Matthias
2012-01-01
For time-invariant frequency-selective channels, the transfer function is a superposition of waves having different propagation delays and path gains. In order to estimate the frequency correlation function (FCF) of such channels, the frequency averaging technique can be utilized. The obtained FCF can be expressed as a sum of auto-terms (ATs) and cross-terms (CTs). The ATs are caused by the autocorrelation of individual path components. The CTs are due to the cross-correlation of different path components. These CTs have no physical meaning and lead to an estimation error. We propose a new estimation method aiming to improve the estimation accuracy of the FCF of a band-limited transfer function. The basic idea behind the proposed method is to introduce a kernel function aiming to reduce the CT effect, while preserving the ATs. In this way, we can improve the estimation of the FCF. The performance of the proposed method and the frequency averaging technique is analyzed using a synthetically generated transfer function. We show that the proposed method is more accurate than the frequency averaging technique. The accurate estimation of the FCF is crucial for the system design. In fact, we can determine the coherence bandwidth from the FCF. The exact knowledge of the coherence bandwidth is beneficial in both the design as well as optimization of frequency interleaving and pilot arrangement schemes. © 2012 IEEE.
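The baseline frequency-averaging estimator that the paper improves upon can be sketched on a synthetic multipath transfer function (gains, delays, and bandwidth below are invented; the paper's kernel function is not reproduced here):

```python
import numpy as np

# Synthetic time-invariant frequency-selective transfer function:
# H(f) = sum_l g_l * exp(-2j*pi*f*tau_l)
gains  = np.array([1.0, 0.7, 0.4])
delays = np.array([0.0, 1.0e-6, 2.5e-6])   # path delays [s]
f = np.arange(0.0, 5e6, 1e4)               # 5 MHz band on a 10 kHz grid
H = (gains * np.exp(-2j * np.pi * np.outer(f, delays))).sum(axis=1)

def fcf_frequency_averaging(H, max_lag):
    """Estimate the FCF r(k) = <H(f) H*(f + k*df)> by averaging over frequency.
    This baseline estimate contains both the auto-terms and the cross-terms
    discussed in the abstract; the proposed kernel would damp the latter."""
    return np.array([(H[: len(H) - k] * np.conj(H[k:])).mean()
                     for k in range(max_lag)])

r = fcf_frequency_averaging(H, 100)   # r[0] is the mean power, real-valued
```

The coherence bandwidth is then read off as the lag at which |r(k)| first drops below a chosen fraction of r(0).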
Methods for converging correlation energies within the dielectric matrix formalism
Dixit, Anant; Claudot, Julien; Gould, Tim; Lebègue, Sébastien; Rocca, Dario
2018-03-01
Within the dielectric matrix formalism, the random-phase approximation (RPA) and analogous methods that include exchange effects are promising approaches to overcome some of the limitations of traditional density functional theory approximations. The RPA-type methods however have a significantly higher computational cost, and, similarly to correlated quantum-chemical methods, are characterized by a slow basis set convergence. In this work we analyzed two different schemes to converge the correlation energy, one based on a more traditional complete basis set extrapolation and one that converges energy differences by accounting for the size-consistency property. These two approaches have been systematically tested on the A24 test set, for six points on the potential-energy surface of the methane-formaldehyde complex, and for reaction energies involving the breaking and formation of covalent bonds. While both methods converge to similar results at similar rates, the computation of size-consistent energy differences has the advantage of not relying on the choice of a specific extrapolation model.
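A minimal sketch of the "traditional complete basis set extrapolation" referred to above, assuming the common inverse-power model E_X = E_CBS + A / X^3 for basis-set cardinal number X (the model choice and the numbers are illustrative, not from the paper):

```python
def cbs_two_point(e_x, e_y, x, y, power=3):
    """Two-point extrapolation of correlation energies assuming
    E_X = E_CBS + A / X**power; solving the two equations for E_CBS gives
    (x**p * E_x - y**p * E_y) / (x**p - y**p)."""
    return (x**power * e_x - y**power * e_y) / (x**power - y**power)

# if the energies follow the model exactly, the CBS limit is recovered:
e_cbs, a = -1.0, 0.5
e3 = e_cbs + a / 3**3   # cardinal number X = 3
e4 = e_cbs + a / 4**3   # cardinal number X = 4
est = cbs_two_point(e3, e4, 3, 4)
```

The paper's alternative scheme instead converges size-consistent energy *differences*, which sidesteps the choice of an extrapolation model altogether.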
Linear extrapolation distance for a black cylindrical control rod with the pulsed neutron method
International Nuclear Information System (INIS)
Loewenhielm, G.
1978-03-01
The objective of this experiment was to measure the linear extrapolation distance for a central black cylindrical control rod in a cylindrical water moderator. The radius of both the control rod and the moderator was varied. The pulsed neutron technique was used and the decay constant was measured for both a homogeneous and a heterogeneous system. From the difference in the decay constants the extrapolation distance could be calculated. The conclusion is that within experimental error it is safe to use the approximate formula given by Pellaud or the more exact one given by Kavenoky. We can also conclude that linear anisotropic scattering is accounted for in a correct way in the approximate formula given by Pellaud and by Prinja and Williams.
Directory of Open Access Journals (Sweden)
DOGARU Gabriela
2014-02-01
Full Text Available Experts predicted as early as 10 years ago that starting with 2010, at least 15% of health care services worldwide would be provided from a distance, using telemedicine (1). The expansion of the fields of medicine at a distance has been possible due to the progress made in the area of communication and information technologies. These can increase the capacity of health care services, improve service provision and allow people to better take care of their health. In a society undergoing a significant transformation process at the beginning of the third millennium, the use of modern communication means for the improvement of the quality of life of disadvantaged social groups is a necessity.
A Task-Oriented Disaster Information Correlation Method
Linyao, Q.; Zhiqiang, D.; Qing, Z.
2015-07-01
With the rapid development of sensor networks and Earth observation technology, a large quantity of disaster-related data is available, such as remotely sensed data, historic data, case data, simulated data, and disaster products. However, the efficiency of current data management and service systems has become increasingly inadequate due to the task variety and heterogeneous data. For emergency task-oriented applications, data searches primarily rely on human experience based on simple metadata indices; their high time consumption and low accuracy cannot satisfy the speed and veracity requirements for disaster products. In this paper, a task-oriented correlation method is proposed for efficient disaster data management and intelligent service, with the objectives of 1) putting forward a disaster task ontology and a data ontology to unify the different semantics of multi-source information, 2) identifying the semantic mapping from emergency tasks to multiple data sources on the basis of the uniform description in 1), and 3) linking task-related data automatically and calculating the correlation between each data set and a certain task. The method goes beyond traditional static management of disaster data and establishes a basis for intelligent retrieval and active dissemination of disaster information. The case study presented in this paper illustrates the use of the method on an example flood emergency relief task.
Energy Technology Data Exchange (ETDEWEB)
Doncel, M. [Universidad de Salamanca, Laboratorio de Radiaciones Ionizantes, Salamanca (Spain); Royal Institute of Technology, Department of Physics, Stockholm (Sweden); University of Liverpool, Department of Physics, Oliver Lodge Laboratory, Liverpool (United Kingdom); Gadea, A. [CSIC-University of Valencia, Istituto de Fisica Corpuscular, Valencia (Spain); Valiente-Dobon, J.J. [INFN, Laboratori Nazionali di Legnaro, Legnaro (Italy); Quintana, B. [Universidad de Salamanca, Laboratorio de Radiaciones Ionizantes, Salamanca (Spain); Modamio, V. [INFN, Laboratori Nazionali di Legnaro, Legnaro (Italy); University of Oslo, Oslo (Norway); Mengoni, D. [Dipartimento di Fisica e Astronomia, Universita di Padova, Padova (Italy); Istituto Nazionale di Fisica Nucleare, Sezione di Padova, Padova (Italy); Moeller, O.; Pietralla, N. [Technische Universitaet Darmstadt, Institut fuer Kernphysik, Darmstadt (Germany); Dewald, A. [Institut fuer Kernphysik, Universitaet Koeln (Germany)
2017-10-15
The current work presents the determination of lifetimes of nuclear excited states using the Recoil Distance Doppler Shift Method, in combination with spectrometers for ion identification, normalizing the intensity of the peaks by the ions detected in the spectrometer as a valid technique that produces results comparable to the ones obtained by the conventional shifted-to-unshifted peak ratio method. The technique has been validated using data measured with the γ-ray array AGATA, the PRISMA spectrometer and the Cologne plunger setup. In this paper a test performed with the AGATA-PRISMA setup at LNL and the advantages of this new approach with respect to the conventional Recoil Distance Doppler Shift Method are discussed. (orig.)
Monte Carlo burnup codes acceleration using the correlated sampling method
International Nuclear Information System (INIS)
Dieudonne, C.
2013-01-01
For several years, Monte Carlo burnup/depletion codes have appeared which couple Monte Carlo codes, to simulate the neutron transport, to deterministic methods, which handle the medium depletion due to the neutron flux. Solving the Boltzmann and Bateman equations in such a way allows one to track fine 3-dimensional effects and to get rid of the multi-group hypotheses made by deterministic solvers. The counterpart is the prohibitive calculation time due to the Monte Carlo solver being called at each time step. In this document we present an original methodology to avoid the repetitive and time-expensive Monte Carlo simulations and to replace them by perturbation calculations: indeed, the different burnup steps may be seen as perturbations of the isotopic concentration of an initial Monte Carlo simulation. We first present this method and provide details on the perturbative technique used, namely correlated sampling. We then develop a theoretical model to study the features of the correlated sampling method and to understand its effects on depletion calculations. Next, the implementation of this method in the TRIPOLI-4 code is discussed, as well as the precise calculation scheme used to bring an important speed-up of the depletion calculation. We validate and optimize the perturbed depletion scheme with the calculation of a PWR-like fuel cell depletion, and then use this technique to calculate the depletion of a PWR-like assembly, studied at the beginning of its cycle. After having validated the method against a reference calculation, we show that it can speed up standard Monte Carlo depletion codes by nearly an order of magnitude. (author)
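The idea of replacing a second transport run by re-weighting the histories of the first can be shown in one dimension (a toy illustration of correlated sampling, not TRIPOLI-4's scheme; the densities and the observable are invented):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
# "histories" sampled once from the reference density p(x) = exp(-x), x > 0
x = rng.exponential(scale=1.0, size=n)

# perturbed density p'(x) = 1.1 * exp(-1.1 * x): instead of a second
# independent simulation, re-weight the SAME histories with w = p'(x) / p(x)
w = 1.1 * np.exp(-0.1 * x)

ref_estimate  = x.mean()          # estimates E_p[x]  = 1
pert_estimate = (w * x).mean()    # estimates E_p'[x] = 1 / 1.1
```

Because both estimates are built from the same random histories, their statistical fluctuations are strongly correlated, so the *difference* (the perturbation effect) is obtained with far less variance than from two independent runs.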
Fast methods for spatially correlated multilevel functional data
Staicu, A.-M.
2010-01-19
We propose a new methodological framework for the analysis of hierarchical functional data when the functions at the lowest level of the hierarchy are correlated. For small data sets, our methodology leads to a computational algorithm that is orders of magnitude more efficient than its closest competitor (seconds versus hours). For large data sets, our algorithm remains fast and has no current competitors. Thus, in contrast to published methods, we can now conduct routine simulations, leave-one-out analyses, and nonparametric bootstrap sampling. Our methods are inspired by and applied to data obtained from a state-of-the-art colon carcinogenesis scientific experiment. However, our models are general and will be relevant to many new data sets where the objects of inference are functions or images that remain dependent even after conditioning on the subject on which they are measured. Supplementary materials are available at Biostatistics online.
Generalized Bregman distances and convergence rates for non-convex regularization methods
International Nuclear Information System (INIS)
Grasmair, Markus
2010-01-01
We generalize the notion of Bregman distance using concepts from abstract convexity in order to derive convergence rates for Tikhonov regularization with non-convex regularization terms. In particular, we study the non-convex regularization of linear operator equations on Hilbert spaces, showing that the conditions required for the application of the convergence rates results are strongly related to the standard range conditions from the convex case. Moreover, we consider the setting of sparse regularization, where we show that a rate of order δ^{1/p} holds, if the regularization term has a slightly faster growth at zero than |t|^p.
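For reference, the classical notion being generalized is the Bregman distance of a convex regularization term R with respect to a subgradient ξ:

```latex
D_{\xi}(y, x) = R(y) - R(x) - \langle \xi,\, y - x \rangle,
\qquad \xi \in \partial R(x).
```

For non-convex R the subdifferential ∂R(x) may be empty, so this definition breaks down; replacing convexity by abstract-convexity notions is precisely what restores a usable distance in the paper's setting.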
Research on Signature Verification Method Based on Discrete Fréchet Distance
Fang, J. L.; Wu, W.
2018-05-01
This paper proposes a multi-feature signature template based on the discrete Fréchet distance, which breaks through the limitation of traditional signature authentication that relies on a single signature feature. It addresses the heavy computational workload of extracting global feature templates in online handwritten signature authentication, as well as the problem of unreasonable signature feature selection. In this experiment, the false recognition rate (FAR) and the false rejection rate (FRR) of the signatures are measured, and the average equal error rate (AEER) is calculated. The feasibility of the combined template scheme is verified by comparing the average equal error rates of the combined template and the original template.
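The discrete Fréchet distance underlying the template comparison is the standard coupling recursion of Eiter and Mannila; a minimal sketch (the toy points stand in for sampled pen trajectories; a long signature would call for an iterative DP rather than recursion):

```python
import math
from functools import lru_cache

def discrete_frechet(P, Q):
    """Discrete Frechet distance between polylines P and Q (lists of 2-D
    points), via the standard coupling-distance dynamic program."""
    @lru_cache(maxsize=None)
    def c(i, j):
        d = math.dist(P[i], Q[j])
        if i == 0 and j == 0:
            return d
        if i == 0:
            return max(c(0, j - 1), d)
        if j == 0:
            return max(c(i - 1, 0), d)
        # the coupling may advance on P, on Q, or on both
        return max(min(c(i - 1, j), c(i - 1, j - 1), c(i, j - 1)), d)

    return c(len(P) - 1, len(Q) - 1)

# two parallel horizontal strokes one unit apart
d_parallel = discrete_frechet(((0, 0), (1, 0), (2, 0)),
                              ((0, 1), (1, 1), (2, 1)))
```

A genuine signature should yield a small distance to the enrolled template, a forgery a larger one; the paper combines several such per-feature distances into one template.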
Diaz-Balteiro, L; Belavenutti, P; Ezquerro, M; González-Pachón, J; Ribeiro Nobre, S; Romero, C
2018-05-15
There is an important body of literature using multi-criteria distance function methods for the aggregation of a battery of sustainability indicators in order to obtain a composite index. This index is considered to be a proxy of the sustainability goodness of a natural system. Although this approach has been profusely used in the literature, it is not exempt from difficulties and potential pitfalls. Thus, in this paper, a significant number of critical issues have been identified showing different procedures capable of avoiding, or at least of mitigating, the inherent potential pitfalls associated with each one. The recommendations made in the paper could increase the theoretical soundness of the multi-criteria distance function methods when this type of approach is applied in the sustainability field, thus increasing the accuracy and realism of the sustainability measurements obtained. Copyright © 2018 Elsevier Ltd. All rights reserved.
Effective detection method for falls according to the distance between two tri-axial accelerometers
Kim, Jae-Hyung; Park, Geun-Chul; Kim, Soo-Hong; Kim, Soo-Sung; Lee, Hae-Rim; Jeon, Gye-Rok
2016-04-01
Falls and fall-related injuries are a significant problem in the elderly population. A number of different approaches for detecting falls and activities of daily living (ADLs) have been investigated in recent years. However, distinguishing between real falls and certain fall-like ADLs is often difficult. The aim of this study is to discriminate falls from fall-like ADLs such as jogging, jumping, and jumping down. The distance between two tri-axial accelerometers attached to the abdomen and the sternum was increased from 10 to 30 cm in 10-cm intervals. Experiments for falls and ADLs were performed to investigate the feasibility of the fall detection system developed in this study. When the distances between the two tri-axial accelerometers were 20 and 30 cm, fall-like ADLs were effectively distinguished from falls. Thresholds were set for three parameters (SVM, Diff_Z, and Sum_diff_Z); falls could be distinguished from ADL action sequences when the SVM value was larger than 4 g (TH1), the Diff_Z parameter was larger than 1.25 g (TH2), and the Sum_diff_Z parameter was larger than 15 m/s (TH3). In particular, when the SVM, Diff_Z, and Sum_diff_Z parameters were sequentially tested against these thresholds (TH1, TH2, and TH3), fall-like ADL action sequences were accurately discriminated from falls.
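The sequential threshold cascade quoted in the abstract is easy to sketch. The thresholds are the paper's; how Diff_Z and Sum_diff_Z are built from the two sensors' z-axes is an assumption of this sketch, as are the toy signals:

```python
import numpy as np

# Thresholds from the abstract: SVM [g], Diff_Z [g], Sum_diff_Z [m/s]
TH1, TH2, TH3 = 4.0, 1.25, 15.0

def is_fall(acc, diff_z, sum_diff_z):
    """Sequential cascade: test SVM first, then Diff_Z, then Sum_diff_Z.
    acc is an (n, 3) array of one sensor's tri-axial samples in g."""
    svm = np.sqrt((acc ** 2).sum(axis=1))  # signal vector magnitude per sample
    if svm.max() <= TH1:
        return False
    if diff_z.max() <= TH2:
        return False
    return bool(sum_diff_z.max() > TH3)

# invented signals: an impact-like event exceeding all three thresholds...
fall = is_fall(np.array([[0.0, 0.0, 1.0], [3.0, 2.0, 3.0]]),
               np.array([0.2, 1.5]), np.array([5.0, 20.0]))
# ...and a quiet ADL segment that fails the first test
adl = is_fall(np.array([[0.0, 0.0, 1.0]]),
              np.array([0.1]), np.array([2.0]))
```

Testing the cheap SVM condition first means most ADL epochs are rejected without evaluating the two-sensor parameters at all.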
International Nuclear Information System (INIS)
Keil, Fabian
2014-01-01
Investigating the structure of human cerebral white matter is gaining interest in the neurological as well as the neuroscientific community. Many studies have demonstrated that white matter is a very dynamic structure rather than a static construct that does not change over a lifetime. That is, structural changes within white matter can be observed even on short timescales, e.g. in the course of normal ageing, neurodegenerative diseases, or even during learning processes. To investigate these changes, one method of choice is texture analysis of images of white matter. In this regard, MRI plays a distinguished role as it provides a completely non-invasive way of acquiring in vivo images of human white matter. This thesis adapted a statistical texture analysis method, known as variography, to quantify the spatial correlation of human cerebral white matter based on MR images. This method, originally introduced in geoscience, relies on the idea of spatial correlation in geological phenomena: in naturally grown structures, near things are more strongly correlated than distant things. This work reveals that the geological principle of spatial correlation can be applied to MR images of human cerebral white matter and shows that variography is an adequate method to quantify alterations therein. Since the process of MRI data acquisition is completely different from the measuring process used to quantify geological phenomena, the variographic analysis had to be adapted carefully to MR methods in order to provide a correctly working methodology. Therefore, theoretical considerations were evaluated with numerical samples in a first step and validated with real measurements in a second. It was shown that MR variography makes it possible to reduce the information stored in the texture of a white matter image to a few highly significant parameters, thereby quantifying heterogeneity and spatial correlation distance with an accuracy better than 5
Perpendicular distance sampling: an alternative method for sampling downed coarse woody debris
Michael S. Williams; Jeffrey H. Gove
2003-01-01
Coarse woody debris (CWD) plays an important role in many forest ecosystem processes. In recent years, a number of new methods have been proposed to sample CWD. These methods select individual logs into the sample using some form of unequal probability sampling. One concern with most of these methods is the difficulty in estimating the volume of each log. A new method...
A definition of distance and method of making space-time measurements
International Nuclear Information System (INIS)
Brisson, D.W.
1980-01-01
The paper explores an extended definition of the absolute value of a complex number and thus a new definition of distance. This new definition, called the nabsolute value of a complex number, is written |Z|, where Z = (a or ia) + (b or ib), so that |Z| = (α² + β²)^(1/2), with α = a or ia and β = b or ib. This is shown on superimposed X,Y and iX,iY plots so that four dimensions are represented in a plane. The application of this scheme to space-time measurement is then identified with the Minkowski plane, which has properties identical to the complex plane under this new interpretation of the absolute value of a complex number. (Auth.)
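Cleaning up the notation, the definition can be restated as follows (assuming the garbled exponent in the abstract was 1/2, i.e. an ordinary square root):

```latex
% Nabsolute value of Z = \alpha + \beta,
% with \alpha \in \{a, ia\} and \beta \in \{b, ib\}:
\[
  |Z| \;=\; \left(\alpha^{2} + \beta^{2}\right)^{1/2}.
\]
% Choosing an imaginary component, e.g. \alpha = ia and \beta = b,
% flips a sign inside the root:
\[
  |Z| \;=\; \left(b^{2} - a^{2}\right)^{1/2},
\]
% an indefinite, Minkowski-type interval, which is the connection
% to space-time measurement that the paper identifies.
```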
Wang, Xuan-yu; Hu, Rui; Wang, Rui-xin
2015-10-01
A simple method has been developed to quickly measure emissivity with an infrared thermal imaging system over a small distance, based on the theory of infrared temperature measurement, which rests on the Planck radiation law and the Lambert-Beer law. The object's temperature is raised and held by a heater so that a temperature difference forms between the target and the environment. The emissivity of human skin, galvanized iron plate, black rubber, and liquid water was measured with the instrument's emissivity setting at 1.0 and a testing distance of 1 m. Exploiting the near-constancy of human body temperature, a testing curve was established describing how the thermal-imaging temperature varies as the emissivity setting is changed from 0.9 to 1.0; in this way the method was verified. The results show that the emissivity of human skin is 0.95. The emissivity of galvanized iron plate, black rubber, and liquid water decreases as the object's temperature increases; the emissivity of galvanized iron plate is far smaller than that of human skin, black rubber, or water; and the emissivity of water decreases slowly and linearly with increasing temperature. In summary, over a small distance and in a clean atmosphere, the infrared emissivity of objects can be conveniently measured with an infrared thermal imaging system by raising the object's temperature above the environment's and then simultaneously measuring the environmental temperature, the object's real temperature, and its thermal-imaging temperature with the emissivity set to 1.0 at a testing distance of 1.0 m.
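The measurement principle can be illustrated with a common radiometric approximation. Note the assumption: this sketch uses a Stefan-Boltzmann total-radiance balance rather than the paper's exact Planck-law treatment.

```python
def estimate_emissivity(t_apparent, t_object, t_ambient):
    """Estimate surface emissivity from a thermal-imaging reading
    taken with the camera's emissivity setting at 1.0.

    Approximation (total radiance, Stefan-Boltzmann): the camera's
    apparent reading mixes emitted and reflected components,
        T_app^4 ~= eps * T_obj^4 + (1 - eps) * T_amb^4,
    so eps can be solved for directly. Temperatures in kelvin.
    """
    a4, o4, m4 = t_apparent**4, t_object**4, t_ambient**4
    return (a4 - m4) / (o4 - m4)
```

A target must be warmer (or cooler) than the environment for the denominator to be well conditioned, which is exactly why the method heats the object first.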
Transforming han: a correlational method for psychology and religion.
Oh, Whachul
2015-06-01
Han is a destructive feeling in Korea. Although Korea has accomplished significant exterior growth, Korean society still experiences the dark aspects of han, as evidenced by the highest suicide rate in Asia; one reason for this may be the fragmentation between North and South Korea. If han can be transformed, it can become constructive. I was challenged to think of possibilities for transforming han internally, which brings me to the correlational method through psychological and religious interpretation. This study aims to challenge and encourage the many han-ridden people in Korean society. Through a psychological and religious understanding of han, people who suffer can positively transform their han: they can relate to han more subjectively, which means the han-ridden psyche has an innate sacredness with the potential to transform.
Dornacher, Daniel; Reichel, Heiko; Lippacher, Sabine
2014-10-01
Excessive tibial tuberosity-trochlear groove distance (TT-TG) is considered one of the major risk factors in patellofemoral instability (PFI). TT-TG characterises the lateralisation of the tibial tuberosity and the medialisation of the trochlear groove in the case of trochlear dysplasia. The aim of this study was to assess the inter- and intraobserver reliability of the measurement of TT-TG dependent on the grade of trochlear dysplasia. Magnetic resonance imaging (MRI) scans of 99 consecutive knee joints were analysed retrospectively. Of these, 61 knee joints presented with a history of PFI and 38 had no symptoms of PFI. After synopsis of the axial MRI scans with true lateral radiographs of the knee, the 61 knees presenting with PFI were assessed in terms of trochlear dysplasia. The knees were distributed according to the four-type classification system described by Dejour. Regarding interobserver correlation for the measurements of TT-TG in trochlear dysplasia, we found r=0.89 (type A), r=0.90 (type B), r=0.74 (type C) and r=0.62 (type D) for Pearson's correlation coefficient. Regarding intraobserver correlation, we calculated r=0.89 (type A), r=0.91 (type B), r=0.77 (type C) and r=0.71 (type D), respectively. Pearson's correlation coefficient for the measurement of TT-TG in normal knees resulted in r=0.87 for interobserver correlation and r=0.90 for intraobserver correlation. Decreasing inter- and intraobserver correlation for the measurement of TT-TG with increasing severity of trochlear dysplasia was detected. In our opinion, the measurement of TT-TG is of significance in low-grade trochlear dysplasia. The final decision to perform a distal realignment procedure based on a pathological TT-TG in the presence of high-grade trochlear dysplasia should be reassessed carefully. Retrospective study, Level II.
Carp, Stefan A; Farzam, Parisa; Redes, Norin; Hueber, Dennis M; Franceschini, Maria Angela
2017-09-01
Frequency-domain near-infrared spectroscopy (FD-NIRS) and diffuse correlation spectroscopy (DCS) have emerged as synergistic techniques for the non-invasive assessment of tissue health. By combining FD-NIRS oximetry with DCS measures of blood flow, the tissue oxygen metabolic rate can be quantified, a parameter more closely linked to underlying physiology and pathology than either NIRS or DCS estimates alone. Here we describe the first commercially available integrated instrument, called the "MetaOx", designed to enable simultaneous FD-NIRS and DCS measurements at rates of 10+ Hz and offering real-time data evaluation. We show simultaneously acquired characterization data demonstrating performance equivalent to individual devices, and sample in vivo measurements of pulsation-resolved blood flow, forearm-occlusion hemodynamic changes, and muscle oxygen metabolic rate monitoring during stationary-bike exercise.
Distance Between the Malleoli and the Ground: A New Clinical Method to Measure Leg-Length Discrepancy.
Aguilar, Estela Gomez; Domínguez, Águeda Gómez; Peña-Algaba, Carolina; Castillo-López, José M
2017-03-01
The aim of this work is to introduce a useful method for the clinical diagnosis of leg-length inequality: the distance between the malleoli and the ground (DMG). A transversal observational study was performed on 17 patients with leg-length discrepancy. Leg-length inequality was determined with different clinical methods: with a tape measure in a supine position from the anterior superior iliac spine (ASIS) to the internal and external malleoli, as the difference between the iliac crests when standing (pelvimeter), and as asymmetry between ASISs (PALpation Meter [PALM]; A&D Medical Products Healthcare, San Jose, California). The Foot Posture Index (FPI) and the navicular drop test were also used. The DMG with Perthes rule (perpendicular to the foot when standing), the distance between the internal malleolus and the ground (DIMG), and the distance between the external malleolus and the ground were designed by the authors. The DIMG is directly related to the traditional ASIS-external malleolus measurement (P = .003), the FPI (P = .010), and the navicular drop test. The DMG is useful for diagnosing leg-length discrepancy and is related to the ASIS-external malleolus measurement. The DIMG is significantly inversely proportional to the degree of pronation according to the FPI. Conversely, determination of leg-length discrepancy with a tape measure from the ASIS to the malleoli cannot be performed interchangeably at the level of the internal or external malleolus.
A hybrid measure-correlate-predict method for long-term wind condition assessment
International Nuclear Information System (INIS)
Zhang, Jie; Chowdhury, Souma; Messac, Achille; Hodge, Bri-Mathias
2014-01-01
Highlights:
• A hybrid measure-correlate-predict (MCP) methodology with greater accuracy is developed.
• Three sets of performance metrics are proposed to evaluate the hybrid MCP method.
• Both wind speed and direction are considered in the hybrid MCP method.
• The best combination of MCP algorithms is determined.
• The developed hybrid MCP method is uniquely helpful for long-term wind resource assessment.
Abstract: This paper develops a hybrid measure-correlate-predict (MCP) strategy to assess long-term wind resource variations at a farm site. The hybrid MCP method uses recorded data from multiple reference stations to estimate long-term wind conditions at a target wind plant site with greater accuracy than is possible with data from a single reference station. The weight of each reference station in the hybrid strategy is determined by the (i) distance and (ii) elevation differences between the target farm site and each reference station. In this case, the wind data is divided into sectors according to the wind direction, and the MCP strategy is implemented for each wind direction sector separately. The applicability of the proposed hybrid strategy is investigated using five MCP methods: (i) the linear regression; (ii) the variance ratio; (iii) the Weibull scale; (iv) the artificial neural networks; and (v) the support vector regression. To implement the hybrid MCP methodology, we use hourly averaged wind data recorded at five stations in the state of Minnesota between 07-01-1996 and 06-30-2004. Three sets of performance metrics are used to evaluate the hybrid MCP method. The first set of metrics analyze the statistical performance, including the mean wind speed, wind speed variance, root mean square error, and mean absolute error. The second set of metrics evaluate the distribution of long-term wind speed; to this end, the Weibull distribution and the Multivariate and Multimodal Wind Distribution models are adopted. The third set of metrics analyze
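The station-weighting idea can be sketched as follows. The inverse-product weight form is an assumption for illustration, since the abstract only states that distance and elevation difference determine the weights.

```python
import numpy as np

def station_weights(distances_km, elev_diffs_m, p=1.0):
    """Weight reference stations by (i) distance and (ii) elevation
    difference to the target site, as in a hybrid MCP strategy.
    The inverse-product form 1 / (d^p * |dz|^p) is an illustrative
    assumption, not the paper's exact formula."""
    d = np.asarray(distances_km, dtype=float)
    e = np.abs(np.asarray(elev_diffs_m, dtype=float))
    raw = 1.0 / ((d ** p) * (e ** p))
    return raw / raw.sum()

def hybrid_predict(per_station_predictions, weights):
    """Blend the long-term wind-speed predictions obtained from each
    reference station (e.g. via linear regression per direction
    sector) into a single weighted estimate for the target site."""
    return float(np.dot(per_station_predictions, weights))
```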
Directory of Open Access Journals (Sweden)
Olga Miščenko
2014-12-01
The purpose of this article is to analyze the experience of the first years of teaching students who study at a distance, to compare it with other authors' experience, and to examine the advantages of the Moodle virtual learning environment (VLE) while searching for new applications of it. The relevance of e-learning is noted, and it is argued that metacognitive learning strategies are typical of foreign-language learning in a virtual environment. The Internet is the tool that makes distance studies possible; distance-based qualification courses allow a responsible employee to improve foreign-language skills as part of lifelong learning. The adaptability of the VLE for teaching and studying English is discussed: the Internet conditions all types of methods in the virtual environment, and its existence expands and deepens the learning approach. The paper argues that the function of the Moodle VLE is to improve the learning process so as to ensure a high level of expertise and objectivity of assessment. Conventional study and study in the virtual environment are briefly compared, and the objectives of applying the Moodle VLE to learning outcomes are defined, emphasizing the importance of traditional teaching methods, the student's responsibility, and attention to the learning process and system characteristics. It is noted that learning in the virtual environment is based on the principles of epistemology, and the Moodle system therefore meets didactic requirements. The possibilities of the virtual learning environment ensure very good feedback and increase students' motivation, which in turn provides better knowledge. It is emphasized that in distance teaching the teacher's responsibility and role in developing the educational material and course tasks have increased. Some specific cases for various forms of studies and exercises to perform in the
National Research Council Canada - National Science Library
Braddock, Joseph
1997-01-01
A study reviewing the existing Army Distance Learning Plan (ADLP) and current Distance Learning practices, with a focus on the Army's training and educational challenges and the benefits of applying Distance Learning techniques...
Directory of Open Access Journals (Sweden)
T. K. Pinegina
2001-01-01
Along the eastern coast of Kamchatka, at a number of localities, we have identified and attempted to assign ages to deposits of both historic and prehistoric (paleo-) tsunamis. These deposits are dated and correlated using tephrochronology from Holocene marker tephra and local volcanic ash layers. Because the historical record of earthquakes and tsunamis on Kamchatka is so short, these investigations can make important contributions to evaluating tsunami hazards. Moreover, because even the historical record is spotty, our work helps add to and evaluate tsunami catalogues for Kamchatka. Furthermore, tsunami deposits provide a proxy record for large earthquakes and thus are important paleoseismological tools. The combined, preserved record of tsunami deposits and of numerous marker tephra on Kamchatka offers an unprecedented opportunity to study tsunami frequency. Using combined stratigraphic sections, we can examine both the average frequency of events for each locality, and also changes in frequency through time. Moreover, using key marker tephra as time lines, we can compare tsunami frequency and intensity records along the Kamchatka subduction zone. Preliminary results suggest real variations in frequency on a millennial time scale, with the period from about 0 to 1000 A.D. being particularly active at some localities.
Directory of Open Access Journals (Sweden)
Germana Jayme Borges
2015-01-01
The objective of the present study was to assess cone-beam computed tomography (CBCT) as a diagnostic method for determination of gingival thickness (GT) and distance between gingival margin and vestibular (GMBC-V) and interproximal bone crests (GMBC-I). GT and GMBC-V were measured in 348 teeth and GMBC-I was measured in 377 tooth regions of 29 patients with gummy smile. GT was assessed using transgingival probing (TP), ultrasound (US), and CBCT, whereas GMBC-V and GMBC-I were assessed by transsurgical clinical evaluation (TCE) and CBCT. Statistical analyses used independent t-test, Pearson's correlation coefficient, and simple linear regression. Difference was observed for GT: between TP, CBCT, and US considering all teeth; between TP and CBCT and between TP and US in incisors and canines; between TP and US in premolars and first molars. TP presented the highest means for GT. Positive correlation and linear regression were observed between TP and CBCT, TP and US, and CBCT and US. Difference was observed for GMBC-V and GMBC-I using TCE and CBCT, considering all teeth. Correlation and linear regression results were significant for GMBC-V and GMBC-I in incisors, canines, and premolars. CBCT is an effective diagnostic method to visualize and measure GT, GMBC-V, and GMBC-I.
International Nuclear Information System (INIS)
Zhang Weigang
2000-01-01
Based on the concept of correlative degree, a new method of high-order collective-flow measurement is constructed, with which azimuthal correlations, correlations of final-state transverse-momentum magnitude, and transverse correlations can be inspected separately. Using the new method, the contributions of the azimuthal correlations of the particle distribution and of the correlations of the transverse-momentum magnitude of final-state particles to high-order collective-flow correlations are analyzed with 4π experimental events for 1.2 A GeV Ar + BaI2 collisions at the Bevalac streamer chamber. Compared with the correlations of transverse-momentum magnitude, the azimuthal correlations of the final-state particle distribution dominate the high-order collective-flow correlations in the experimental samples. The correlations of the transverse-momentum magnitude of final-state particles not only enhance the strength of the high-order correlations of the particle group but also provide important information for measuring the collectivity of collective flow within a more constrained region.
van den Hoven, Mariëtte; Kole, Jos
2015-01-01
The method of reflective equilibrium (RE) is well known within the domain of moral philosophy, but hardly discussed as a method in professional ethics education. We argue that an interpersonal version of RE is very promising for professional ethics education. We offer several arguments to support this claim. The first group of arguments focus on a…
Directory of Open Access Journals (Sweden)
P. Phani Bushan Rao
2011-01-01
Ranking fuzzy numbers is an important aspect of decision making in a fuzzy environment. Since the inception of fuzzy sets in 1965, many authors have proposed different methods for ranking fuzzy numbers. However, no method gives a satisfactory result in all situations; most of the methods proposed so far are non-discriminating and counterintuitive. This paper proposes a new method for ranking fuzzy numbers based on the circumcenter of centroids, using an index of optimism to reflect the decision maker's optimistic attitude and an index of modality to represent the decision maker's neutrality. The method ranks various types of fuzzy numbers, including normal, generalized trapezoidal, and triangular fuzzy numbers, along with crisp numbers, treating crisp numbers as particular cases of fuzzy numbers.
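The shared geometric step, finding the circumcenter of three centroid points, can be sketched as follows; how the three centroids are derived from a fuzzy number's sub-regions is specific to the proposed method and is not reproduced here.

```python
def circumcenter(p1, p2, p3):
    """Circumcenter of three non-collinear points in the plane,
    via the standard determinant formula. In the ranking method
    above, the three points would be the centroids of the
    sub-regions of a (generalized) fuzzy number."""
    (ax, ay), (bx, by), (cx, cy) = p1, p2, p3
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if d == 0:
        raise ValueError("points are collinear")
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return ux, uy
```

Ranking indices in such methods are then typically built from the circumcenter's coordinates (e.g. its distance from the origin).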
Tian, Jinyan; Li, Xiaojuan; Duan, Fuzhou; Wang, Junqian; Ou, Yang
2016-05-10
The rapid development of Unmanned Aerial Vehicle (UAV) remote sensing conforms to the increasing demand for low-altitude very high resolution (VHR) image data. However, high processing speed for massive UAV data has become an indispensable prerequisite for its applications in various industry sectors. In this paper, we developed an effective and efficient seam elimination approach for UAV images based on Wallis dodging and Gaussian distance weight enhancement (WD-GDWE). The method encompasses two major steps: first, Wallis dodging was introduced to adjust the difference in brightness between the two matched images, and the parameters of the algorithm were derived in this study. Second, a Gaussian distance weight distribution method was proposed to fuse the two matched images in the overlap region based on the First Law of Geography, which spreads the local dislocation at the seam across the whole overlap region, producing a smooth transition. The method was validated at a study site located in Hanwang (Sichuan, China), an area seriously damaged in the 12 May 2008 Wenchuan Earthquake. A performance comparison between WD-GDWE and five classical seam elimination algorithms in terms of efficiency and effectiveness was then conducted. Results showed that WD-GDWE is not only efficient but also satisfactorily effective. The method is promising for advancing UAV applications, especially in emergency situations.
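The Gaussian distance-weight fusion step can be sketched for a grayscale overlap region. The exact kernel the authors use is not given here, so a 1-D Gaussian over the horizontal distance into the overlap is an assumption, and the Wallis dodging step is taken to have equalized brightness already.

```python
import numpy as np

def gaussian_blend(img_a, img_b, sigma=None):
    """Fuse two matched grayscale images (same shape) over their
    overlap region with Gaussian distance weights: pixels near
    image A's side of the overlap (x = 0) take most of their value
    from A, and symmetrically for B, yielding a smooth transition."""
    h, w = img_a.shape
    x = np.arange(w, dtype=float)
    if sigma is None:
        sigma = w / 4.0
    # Gaussian weights decaying with distance from each image's edge
    wa = np.exp(-(x ** 2) / (2 * sigma ** 2))
    wb = np.exp(-((w - 1 - x) ** 2) / (2 * sigma ** 2))
    alpha = wa / (wa + wb)          # per-column blend factor in [0, 1]
    return alpha * img_a + (1 - alpha) * img_b
```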
The efficacy of downhill running as a method to enhance running economy in trained distance runners.
Shaw, Andrew J; Ingham, Stephen A; Folland, Jonathan P
2018-06-01
Running downhill, in comparison to running on the flat, appears to involve an exaggerated stretch-shortening cycle (SSC) due to greater impact loads and higher vertical velocity on landing, whilst also incurring a lower metabolic cost. Therefore, downhill running could facilitate higher volumes of training at higher speeds whilst performing an exaggerated SSC, potentially inducing favourable adaptations in running mechanics and running economy (RE). This investigation assessed the efficacy of a supplementary 8-week programme of downhill running as a means of enhancing RE in well-trained distance runners. Nineteen athletes completed supplementary downhill (-5% gradient; n = 10) or flat (n = 9) run training twice a week for 8 weeks within their habitual training. Participants trained at a standardised intensity based on the velocity of lactate turnpoint (vLTP), with training volume increased incrementally between weeks. Changes in the energy cost of running (E_C) and vLTP were assessed on both flat and downhill gradients, in addition to maximal oxygen uptake (VO2max). No changes in E_C were observed during flat running following downhill (1.22 ± 0.09 vs 1.20 ± 0.07 kcal kg^-1 km^-1, P = .41) or flat run training (1.21 ± 0.13 vs 1.19 ± 0.12 kcal kg^-1 km^-1). Moreover, no changes in E_C during downhill running were observed in either condition (P > .23). vLTP increased following both downhill (16.5 ± 0.7 vs 16.9 ± 0.6 km h^-1, P = .05) and flat run training (16.9 ± 0.7 vs 17.2 ± 1.0 km h^-1, P = .05), though no differences in responses were observed between groups (P = .53). Therefore, a short programme of supplementary downhill run training does not appear to enhance RE in already well-trained individuals.
Advanced cluster methods for correlated-electron systems
Energy Technology Data Exchange (ETDEWEB)
Fischer, Andre
2015-04-27
In this thesis, quantum cluster methods are used to calculate electronic properties of correlated-electron systems. A special focus lies on determining the ground-state properties of a 3/4-filled triangular lattice within the one-band Hubbard model. At this filling, the electronic density of states exhibits a so-called van Hove singularity and the Fermi surface becomes perfectly nested, causing an instability towards a variety of spin-density-wave (SDW) and superconducting states. While chiral d+id-wave superconductivity has been proposed as the ground state in the weak-coupling limit, the situation towards strong interactions is unclear. Additionally, quantum cluster methods are used here to investigate the interplay of Coulomb interactions and symmetry-breaking mechanisms within the nematic phase of iron-pnictide superconductors. The transition from a tetragonal to an orthorhombic phase is accompanied by a significant change in electronic properties, while long-range magnetic order is not yet established. The driving force of this transition may be not only phonons but also magnetic or orbital fluctuations. The signatures of these scenarios are studied with quantum cluster methods to identify the most important effects. Here, cluster perturbation theory (CPT) and its variational extension, the variational cluster approach (VCA), are used to treat the respective systems at a level beyond mean-field theory. Short-range correlations are incorporated numerically exactly by exact diagonalization (ED). In the VCA, long-range interactions are included by variational optimization of a fictitious symmetry-breaking field based on a self-energy functional approach. Due to limitations of ED, cluster sizes are limited to a small number of degrees of freedom. For the 3/4-filled triangular lattice, the VCA is performed for different cluster symmetries. A strong symmetry dependence and finite-size effects make a comparison of the results from different clusters difficult.
Yokoi, Naoaki; Kawahara, Yasuhiro; Hosaka, Hiroshi; Sakata, Kenji
Focusing on the Personal Handy-phone System (PHS) positioning service used in physical-distribution logistics, a positioning-error offset method for improving positioning accuracy was developed. A disadvantage of PHS positioning is that measurement errors caused by the fluctuation of radio waves due to buildings around the terminal are large, ranging from several tens to several hundreds of meters. In this study, an error offset method is developed that learns, in advance, patterns of positioning results (latitude and longitude) containing errors together with the highest signal strength at major logistic points, and matches them with new data measured in actual distribution processes according to the Mahalanobis distance. The matching resolution is thereby improved to 1/40 that of the conventional error offset method.
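The matching step can be sketched with the standard Mahalanobis distance; the per-point means and covariances would be learned in advance from training fixes at each logistic point, as described above. The three-feature layout (latitude, longitude, signal strength) follows the abstract; the rest is an illustrative sketch.

```python
import numpy as np

def match_point(measurement, point_means, point_covs):
    """Match a new PHS fix (latitude, longitude, signal strength)
    to the nearest learned logistic point by Mahalanobis distance.

    point_means : list of (3,) mean vectors, one per logistic point
    point_covs  : list of (3, 3) covariance matrices, one per point
    Returns the index of the best-matching logistic point.
    """
    x = np.asarray(measurement, dtype=float)
    best, best_d2 = -1, np.inf
    for i, (mu, cov) in enumerate(zip(point_means, point_covs)):
        diff = x - np.asarray(mu, dtype=float)
        # Squared Mahalanobis distance: diff^T * cov^-1 * diff
        d2 = diff @ np.linalg.inv(cov) @ diff
        if d2 < best_d2:
            best, best_d2 = i, d2
    return best
```

Using the learned covariance (rather than plain Euclidean distance) lets points with large, anisotropic error patterns still be matched correctly.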
Closeness and Distance: Using Close Reading as a Method of Educational Enquiry in English Studies
Brookman, Helen; Horn, Julia
2016-01-01
This article draws on a pedagogical case study in order to reflect on the value of using a Humanities disciplinary practice (the "close reading" of literary studies) as a method of educational enquiry and to provide a worked example of this approach. We explore the introduction of a pedagogic strategy--students writing abstracts for…
Maston, Heidi L.
2011-01-01
The 21st century ushered in change with the increased use of technology in educational delivery methods and opened doors for a new generation of students. While the debate over pedagogy, content design and overall effectiveness of this delivery format continues, scholars have not attended to the lessons of earlier theorists. This study examined a…
Birch, Joanne L; Walsh, Neville G; Cantrill, David J; Holmes, Gareth D; Murphy, Daniel J
2017-01-01
In Australia, Poaceae tribe Poeae are represented by 19 genera and 99 species, including economically and environmentally important native and introduced pasture grasses [e.g. Poa (Tussock-grasses) and Lolium (Ryegrasses)]. We used this tribe, which is well characterised with regard to morphological diversity and evolutionary relationships, to test the efficacy of DNA barcoding methods. A reference library was generated that included 93.9% of species in Australia (408 individuals, mean = 3.7 individuals per species). Molecular data were generated for the official plant barcoding markers (rbcL, matK) and the nuclear ribosomal internal transcribed spacer (ITS) region. We investigated the accuracy of specimen identifications using distance-based (nearest neighbour, best-close match, and threshold identification) and tree-based (maximum likelihood, Bayesian inference) methods, and applied species discovery methods (automatic barcode gap discovery, Poisson tree processes) based on molecular data to assess congruence with recognised species. Across all methods, the success rate for specimen identification was high for genera (87.5-99.5%) and low for species (25.6-44.6%). Distance- and tree-based methods were equally ineffective in providing accurate identifications of specimens to species rank (26.1-44.6% and 25.6-31.3%, respectively). The ITS marker achieved the highest success rate for specimen identification at both generic and species ranks across the majority of methods. For distance-based analyses, the best-close match method provided the greatest accuracy for identification of individuals, with a high percentage of "correct" (97.6%) and a low percentage of "incorrect" (0.3%) generic identifications based on the ITS marker. For tribe Poeae, and likely for other grass lineages, sequence data in the standard DNA barcode markers are not variable enough for accurate identification of specimens to species rank. For recently diverged grass species similar challenges are
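The distance-based best-close match decision rule can be sketched as follows; this is an illustrative sketch of the rule, not the implementation used in the study.

```python
def best_close_match(query_dists, labels, threshold):
    """'Best-close match' identification for a DNA barcode query.

    query_dists : distances from the query to each reference sequence
    labels      : species label of each reference sequence
    threshold   : maximum distance for an acceptable match

    Assigns the nearest reference's label only if it lies within the
    threshold; returns None if no reference is close enough, and
    "ambiguous" if references of several species tie within it.
    """
    pairs = sorted(zip(query_dists, labels))
    nearest_d, nearest_label = pairs[0]
    if nearest_d > threshold:
        return None
    close_species = {lab for d, lab in pairs if d <= threshold}
    return nearest_label if len(close_species) == 1 else "ambiguous"
```

The threshold is typically calibrated from the distribution of intraspecific distances in the reference library, which is one reason low-variability markers perform poorly at species rank.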
Sathyanarayana, Sheela; Grady, Richard; Redmon, J B; Ivicek, Kristy; Barrett, Emily; Janssen, Sarah; Nguyen, Ruby; Swan, Shanna H
2015-04-01
Anogenital distance (AGD) is an androgen-responsive anatomic measurement that may have significant utility in clinical and epidemiological research studies. We describe the development of standardized measurement methods and predictors of AGD outcomes. We examined infants born to 758 participants in The Infant Development and the Environment Study (TIDES cohort) in four clinical centers in 2011-2013. We developed and implemented a detailed training protocol that incorporated multiple quality control (QC) measures. In males, we measured anoscrotal distance (AGDAS), anopenile distance (AGDAP), and penile width (PW), and in females, anofourchette distance (AGDAF) and anoclitoral distance (AGDAC). A single examiner obtained three repetitions of all measurements, and a second examiner obtained independent measurements for 14% of infants. We used the intra-rater ICC to assess within-examiner variability and the inter-rater ICC to assess between-examiner variability. We used multivariable linear regression to examine predictors of AGD outcomes including gestational age at birth, birth weight, several measures of body size, race, maternal age, and study center. In the full TIDES cohort, including 758 mothers and children, significant predictors of AGD and PW included age at exam, gestational age at birth, weight-for-length Z-score, maternal age, and study center. In 371 males, the mean (SD) AGDAS, AGDAP, and PW were 24.7 (4.5), 49.6 (5.9), and 10.8 (1.3) mm, respectively. In 387 females, the mean (SD) AGDAF and AGDAC were 16.0 (3.2) mm and 36.7 (3.8) mm, respectively. The intra-examiner ICC and inter-examiner ICC averaged over all subjects and examiners were 0.89-0.92 and 0.69-0.84, respectively. Our study confirms that with appropriate training and quality control measures, AGD and PW measurements can be performed reliably and accurately in male and female infants. For reliable interpretation, these measurements should be adjusted for
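The intra-rater ICC used here can be computed from a one-way random-effects ANOVA. Below is a minimal sketch of the standard ICC(1,1) formula; the repeated measurements are invented, not TIDES data.

```python
import numpy as np

def icc_oneway(x):
    """ICC(1,1) from an (n subjects x k repeated measurements) array:
    (MSB - MSW) / (MSB + (k - 1) * MSW)."""
    x = np.asarray(x, dtype=float)
    n, k = x.shape
    grand = x.mean()
    subj_means = x.mean(axis=1)
    # Between-subject and within-subject mean squares.
    msb = k * np.sum((subj_means - grand) ** 2) / (n - 1)
    msw = np.sum((x - subj_means[:, None]) ** 2) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Three repetitions per infant, as in the AGD protocol; identical
# repetitions give perfect within-examiner reliability (ICC = 1).
perfect = [[24.7] * 3, [20.1] * 3, [30.2] * 3]
icc_perfect = icc_oneway(perfect)
```

Any within-subject scatter lowers MSW's complement and pulls the ICC below 1.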
International Nuclear Information System (INIS)
Lan Yihua; Li Cunhua; Ren Haozheng; Zhang Yong; Min Zhifang
2012-01-01
A new heuristic algorithm based on a so-called geometric distance sorting technique is proposed for solving fluence map optimization with dose–volume constraints, one of the most essential tasks for inverse planning in IMRT. The framework of the proposed method is basically an iterative process which begins with a simple linearly constrained quadratic optimization model without any dose–volume constraints; dose constraints for the voxels violating the dose–volume constraints are then gradually added into the quadratic optimization model step by step until all the dose–volume constraints are satisfied. In each iteration step, an interior point method is adopted to solve each new linearly constrained quadratic program. To choose proper candidate voxels for the current round of constraint adding, a so-called geometric distance, defined in the transformed standard quadratic form of the fluence map optimization model, is used to guide the selection of the voxels. The new geometric distance sorting technique largely reduces the unexpected increase of the objective function value caused inevitably by the constraint adding, and can be regarded as an upgrade of the traditional dose sorting technique. A geometric explanation for the proposed method is also given, and a proposition is proved to support our heuristic idea. In addition, a smart constraint adding/deleting strategy is designed to ensure stable iteration convergence. The new algorithm is tested on four cases (head–neck, prostate, lung, and oropharyngeal) and compared with the algorithm based on the traditional dose sorting technique. Experimental results showed that the proposed method is more suitable for guiding the selection of new constraints than the traditional dose sorting method, especially for cases whose target regions have non-convex shapes. It is a more efficient optimization technique to some extent for choosing constraints than
Liu, Yang; Yang, Linghui; Guo, Yin; Lin, Jiarui; Cui, Pengfei; Zhu, Jigui
2018-02-01
An interferometer technique based on the temporal coherence function of femtosecond pulses is demonstrated for practical distance measurement. Here, pulse-to-pulse alignment is analyzed for large-delay distance measurement. Firstly, a temporal coherence function model between two femtosecond pulses is developed in the time domain for the dispersive unbalanced Michelson interferometer. Then, according to this model, the fringe analysis and the envelope extraction process are discussed. Meanwhile, optimization methods of pulse-to-pulse alignment for practical long distance measurement are presented. The order of the curve fitting and the selection of points for envelope extraction are analyzed. Furthermore, an averaging method based on the symmetry of the coherence function is demonstrated. Finally, the performance of the proposed methods is evaluated in an absolute distance measurement of 20 μm with a path length difference of 9 m. The improvement of the standard deviation in the experimental results shows that these approaches have potential for practical distance measurement.
Lifetime measurement in {sup 168}Yb using the recoil distance Doppler shift (RDDS) method
Energy Technology Data Exchange (ETDEWEB)
Reese, Michael; Moeller, Oliver; Pietralla, Norbert [TU Darmstadt (Germany); Dewald, Alfred; Pissulla, Thomas [Universitaet Koeln (Germany); Petkov, Pavel [Universitaet Koeln (Germany); INRNE, Bulgarian Academy of Sciences, Sofia (Bulgaria)
2009-07-01
In the analysis of coincidence RDDS experiments one uses the Differential Decay Curve (DDC) Method to determine lifetimes of excited states. Experiments with small recoil velocities, thus small Doppler shifts, enforce the use of narrow coincidence gates to determine peak intensities. This results in a loss of statistics. As an alternative to the application of gates, we present the fit of 2-dimensional functions to the {gamma}{gamma} coincidence data. This approach has been studied on data taken in a RDDS measurement for the ground state band of {sup 168}Yb. The {sup 18}O({sup 154}Sm,4n){sup 168}Yb{sup *} fusion evaporation reaction was induced by an 80 MeV ion beam of the tandem accelerator facility in Cologne. The target was mounted in the Cologne coincidence plunger device. Lifetimes from the 4{sub 1}{sup +} to the 10{sub 1}{sup +} states have been extracted. The method is discussed and the results are compared to the CBS rotor model in the context of centrifugal stretching.
Directory of Open Access Journals (Sweden)
E. V. Guseva
2016-01-01
Full Text Available Currently, information technologies have penetrated all spheres of human activity, including education. The main objective of the article is to show the advantages of the developed complex and to describe its structure. The article argues that the use of distance learning tools has a significant impact on Russian education; this approach provides the conditions for the development of innovative teaching methods. The article also describes the capabilities offered by the virtual education center built on the Moodle distance learning environment. Moodle is attractive not only for its openness but also because it contains a large set of libraries, classes, and functions in the PHP programming language, which makes it a convenient tool for developing various online information systems. It is shown that the effectiveness of distance learning depends on the organization of the educational material. The basic modules of the course are outlined; this section provides a comprehensive understanding of the material. A testing system was developed for the verification and control of students' knowledge. In addition, a training package has been developed which contains information helping to assess the level of students' knowledge. The testing system includes a list of tests divided into sections and consists of a set of questions of different complexity. The questions are stored in a single database ("the bank of questions") and can be reused in one or more courses or sections. After a test is completed, the correct answers to the test questions can be made available to the student. In addition, this module includes tools for grading by the teacher. The article concludes that the virtual educational complex enables students to be taught effectively and has a friendly interface that stimulates them to continue the work through to its successful completion.
Directory of Open Access Journals (Sweden)
Andrew Leaver-Fay
2015-12-01
Full Text Available Membrane proteins make up approximately one third of all proteins, and they play key roles in a plethora of physiological processes. However, membrane proteins make up less than 2% of experimentally determined structures, despite significant advances in structure determination methods, such as X-ray crystallography, nuclear magnetic resonance spectroscopy, and cryo-electron microscopy. One potential alternative means of structure elucidation is to combine computational methods with experimental EPR data. In 2011, Hirst and others introduced RosettaEPR and demonstrated that this approach could be successfully applied to fold soluble proteins. However, few computational methods for de novo folding of integral membrane proteins have been presented. In this work, we present RosettaTMH, a novel algorithm for structure prediction of helical membrane proteins. A benchmark set of 34 proteins, ranging in size from 91 to 565 residues, was used to compare RosettaTMH to Rosetta's two existing membrane protein folding protocols: the published RosettaMembrane folding protocol ("MembraneAbinitio") and folding from an extended chain ("ExtendedChain"). When EPR distance restraints are used, RosettaTMH+EPR outperforms ExtendedChain+EPR for 11 proteins, including the largest six proteins tested. RosettaTMH+EPR is capable of achieving native-like folds for 30 of 34 proteins tested, including receptors and transporters. For example, the average RMSD100SSE relative to the crystal structure was 6.1 ± 0.4 Å for rhodopsin and 6.5 ± 0.6 Å for the 449-residue nitric oxide reductase subunit B, where the standard deviation reflects variance in RMSD100SSE values across ten different EPR distance restraint sets. The addition of RosettaTMH and RosettaTMH+EPR to the Rosetta family of de novo folding methods broadens the scope of helical membrane proteins that can be accurately modeled with this software suite.
Shen, Tonghao; Su, Neil Qiang; Wu, Anan; Xu, Xin
2014-03-05
In this work, we first review the perturbative treatment of an oscillator with cubic anharmonicity. It is shown that there is a quantum-classical correspondence in terms of mean displacement, mean-squared displacement, and the corresponding variance in first-order perturbation theory, provided that the amplitude of the classical oscillator is fixed at the zeroth-order quantum-mechanical energy E_QM^(0). This correspondence condition is realized by proposing the extended Langevin dynamics (XLD), where the key is to construct a proper driving force. It is assumed that the driving force adopts a simple harmonic form with its amplitude chosen according to E_QM^(0), while the driving frequency is chosen as the harmonic frequency. The latter can be improved by using the natural frequency of the system in response to the potential if its anharmonicity is strong. By comparing to accurate numeric results from discrete variable representation calculations for a set of diatomic species, it is shown that the present method is able to capture a large part of the anharmonicity, being competitive with the wave function-based vibrational second-order perturbation theory, over the whole frequency range from ∼4400 cm^-1 (H2) to ∼160 cm^-1 (Na2). XLD shows a substantial improvement over classical molecular dynamics, which ceases to work for hard modes when zero-point energy effects are significant. Copyright © 2013 Wiley Periodicals, Inc.
Tamura, Koichiro; Peterson, Daniel; Peterson, Nicholas; Stecher, Glen; Nei, Masatoshi; Kumar, Sudhir
2011-01-01
Comparative analysis of molecular sequence data is essential for reconstructing the evolutionary histories of species and inferring the nature and extent of selective forces shaping the evolution of genes and species. Here, we announce the release of Molecular Evolutionary Genetics Analysis version 5 (MEGA5), which is a user-friendly software for mining online databases, building sequence alignments and phylogenetic trees, and using methods of evolutionary bioinformatics in basic biology, biomedicine, and evolution. The newest addition in MEGA5 is a collection of maximum likelihood (ML) analyses for inferring evolutionary trees, selecting best-fit substitution models (nucleotide or amino acid), inferring ancestral states and sequences (along with probabilities), and estimating evolutionary rates site-by-site. In computer simulation analyses, ML tree inference algorithms in MEGA5 compared favorably with other software packages in terms of computational efficiency and the accuracy of the estimates of phylogenetic trees, substitution parameters, and rate variation among sites. The MEGA user interface has now been enhanced to be activity driven to make it easier for the use of both beginners and experienced scientists. This version of MEGA is intended for the Windows platform, and it has been configured for effective use on Mac OS X and Linux desktops. It is available free of charge from http://www.megasoftware.net. PMID:21546353
Butz, Andre; Solvejg Dinger, Anna; Bobrowski, Nicole; Kostinek, Julian; Fieber, Lukas; Fischerkeller, Constanze; Giuffrida, Giovanni Bruno; Hase, Frank; Klappenbach, Friedrich; Kuhn, Jonas; Lübcke, Peter; Tirpitz, Lukas; Tu, Qiansi
2017-04-01
Remote sensing of CO2 enhancements in volcanic plumes can be a tool to estimate volcanic CO2 emissions and thereby to gain insight into the geological carbon cycle and into volcano interior processes. However, remote sensing of volcanic CO2 is challenged by the large atmospheric background concentrations masking the minute volcanic signal. Here, we report on a demonstrator study conducted in September 2015 at Mt. Etna on Sicily, where we deployed an EM27/SUN Fourier Transform Spectrometer together with a UV spectrometer on a mobile remote sensing platform. The spectrometers were operated in direct-sun viewing geometry, collecting cross-sectional scans of solar absorption spectra through the volcanic plume by operating the platform in stop-and-go patterns at a distance of 5 to 10 kilometers from the crater region. We successfully detected correlated intra-plume enhancements of CO2 and volcanic SO2, HF, HCl, and BrO. The path-integrated volcanic CO2 enhancements amounted to about 0.5 ppm (on top of the ˜400 ppm background). Key to the successful detection of volcanic CO2 was (A) the simultaneous observation of the O2 total column, which allowed for correcting changes in the CO2 column caused by changes in observer altitude, and (B) the simultaneous measurement of volcanic species co-emitted with CO2, which allowed for discriminating intra-plume and extra-plume observations. The latter were used for subtracting the atmospheric CO2 background. The field study suggests that our remote sensing observatory is a candidate technique for volcano monitoring at a safe distance from the crater region.
Gottschlich, Carsten; Schuhmacher, Dominic
2014-01-01
Finding solutions to the classical transportation problem is of great importance, since this optimization problem arises in many engineering and computer science applications. Especially the Earth Mover's Distance is used in a plethora of applications ranging from content-based image retrieval, shape matching, fingerprint recognition, object tracking and phishing web page detection to computing color differences in linguistics and biology. Our starting point is the well-known revised simplex algorithm, which iteratively improves a feasible solution to optimality. The Shortlist Method that we propose substantially reduces the number of candidates inspected for improving the solution, while at the same time balancing the number of pivots required. Tests on simulated benchmarks demonstrate a considerable reduction in computation time for the new method as compared to the usual revised simplex algorithm implemented with state-of-the-art initialization and pivot strategies. As a consequence, the Shortlist Method facilitates the computation of large scale transportation problems in viable time. In addition we describe a novel method for finding an initial feasible solution which we coin Modified Russell's Method.
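For readers who want a baseline to compare against, a transportation problem can be posed directly as a linear program over the flow variables, with one equality constraint per supply and per demand. The sketch below uses SciPy's general-purpose `linprog` solver on an invented tiny instance; it is not the Shortlist Method or the revised simplex specialization described in the paper.

```python
import numpy as np
from scipy.optimize import linprog

# Tiny transportation instance: 2 sources, 2 sinks, unit supplies/demands.
cost = np.array([[1.0, 2.0],
                 [2.0, 1.0]])
supply = [1.0, 1.0]
demand = [1.0, 1.0]

m, n = cost.shape
A_eq = []
for i in range(m):                       # each source ships out its supply
    row = np.zeros(m * n); row[i * n:(i + 1) * n] = 1.0; A_eq.append(row)
for j in range(n):                       # each sink receives its demand
    row = np.zeros(m * n); row[j::n] = 1.0; A_eq.append(row)

res = linprog(cost.ravel(), A_eq=np.array(A_eq), b_eq=supply + demand,
              bounds=(0, None), method="highs")
plan = res.x.reshape(m, n)               # optimal: ship along the cheap diagonal
```

For this instance the optimum ships one unit along each cost-1 edge, for a total cost of 2; a specialized simplex such as the Shortlist Method matters when m and n reach the thousands.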
Win, Khin Yadanar; Choomchuay, Somsak; Hamamoto, Kazuhiko
2017-06-01
The automated segmentation of cell nuclei is an essential stage in the quantitative image analysis of cell nuclei extracted from smear cytology images of pleural fluid. Cell nuclei can indicate cancer, as their characteristics are associated with cell proliferation and malignancy in terms of size, shape, and stain color. Nevertheless, automatic nuclei segmentation has remained challenging due to the artifacts caused by slide preparation and nuclei heterogeneity, such as poor contrast, inconsistent stain color, cell variation, and overlapping cells. In this paper, we propose a watershed-based method that is capable of segmenting the nuclei of a variety of cells from cytology pleural fluid smear images. Firstly, the original image is preprocessed by converting it to grayscale and enhancing it by adjusting and equalizing the intensity using histogram equalization. Next, the cell nuclei are segmented as a binary image using Otsu thresholding. Undesirable artifacts are eliminated using morphological operations. Finally, the distance-transform-based watershed method is applied to isolate touching and overlapping cell nuclei. The proposed method is tested on 25 Papanicolaou (Pap) stained pleural fluid images. The accuracy of our proposed method is 92%. The method is relatively simple, and the results are very promising.
Studies in the method of correlated basis functions. Pt. 3
International Nuclear Information System (INIS)
Krotscheck, E.; Clark, J.W.
1980-01-01
A variational theory of pairing phenomena is presented for systems like neutron matter and liquid {sup 3}He. The strong short-range correlations among the particles in these systems are incorporated into the trial states describing normal and pair-condensed phases via a correlation operator F. The resulting theory has the same basic structure as that ordinarily applied for weak two-body interactions; in place of the pairing matrix elements of the bare interaction one finds certain effective pairing matrix elements Psub(kl), and modified single-particle energies epsilon(k) appear. Detailed prescriptions are given for the construction of the Psub(kl) and epsilon(k) in terms of off-diagonal and diagonal matrix elements of the Hamiltonian and unit operators in a correlated basis of normal states. An exact criterion for instability of the assumed normal phase with respect to pair condensation is derived for general F. This criterion is investigated numerically for the special case of Jastrow correlations, the required normal-state quantities being evaluated by integral equation techniques which extend the Fermi hypernetted-chain scheme. In neutron matter, an instability with respect to {sup 1}S{sub 0} pairing is found in the low-density region, in concert with the predictions of Yang and Clark. In liquid {sup 3}He, there is some indication of a {sup 3}P{sub 0} pairing instability in the vicinity of the experimental equilibrium density. (orig.)
Application of digital image correlation method for analysing crack ...
Indian Academy of Sciences (India)
centrated strain by imitating the treatment of micro-cracks using the finite element ... water and moisture to penetrate the concrete leading to serious rust of the ... The correlations among various grey values of digital images are analysed for ...
Directory of Open Access Journals (Sweden)
Nakanishi Koichi
2012-06-01
Full Text Available Abstract Background Despite the availability of conventional devices for making single-cell manipulations, determining the hardness of a single cell remains difficult. Here, we consider the cell to be a linear elastic body and apply Young's modulus (modulus of elasticity), which is defined as the ratio of the repulsive force (stress) to the applied strain. In this new method, a scanning probe microscope (SPM) is operated with a cantilever in the "contact-and-push" mode, and the cantilever is applied to the cell surface over a set distance (applied strain). Results We determined the hardness of the following bacterial cells: Escherichia coli, Staphylococcus aureus, Pseudomonas aeruginosa, and five Bacillus spp. In log phase, these strains had a similar Young's modulus, but Bacillus spp. spores were significantly harder than the corresponding vegetative cells. There was a positive, linear correlation between the hardness of bacterial spores and heat or ultraviolet (UV) resistance. Conclusions Using this technique, the hardness of a single vegetative bacterial cell or spore could be determined based on Young's modulus. As an application of this technique, we demonstrated that the hardness of individual bacterial spores was directly proportional to heat and UV resistance, which are the conventional measures of physical durability. This technique allows the rapid and direct determination of spore durability and provides a valuable and innovative method for the evaluation of physical properties in the field of microbiology.
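The arithmetic behind the method is just the definition of Young's modulus, E = stress / strain with stress = force / contact area. A minimal sketch with invented, order-of-magnitude numbers (not values from the paper):

```python
def youngs_modulus(force_n, contact_area_m2, strain):
    """E = stress / strain, with stress = repulsive force / contact area."""
    stress_pa = force_n / contact_area_m2
    return stress_pa / strain

# Hypothetical cantilever reading: 2 nN repulsive force over a 0.01 um^2
# contact patch at 10% applied strain.
E = youngs_modulus(2e-9, 1e-14, 0.10)
```

With these numbers the stress is 2e5 Pa and E comes out at 2 MPa, in the broad range reported for soft biological material.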
Tarmizi, S. N. M.; Asmat, A.; Sumari, S. M.
2014-02-01
PM10 is one of the air contaminants that can be harmful to human health. Meteorological factors and changes of monsoon season may affect the distribution of these particles. The objective of this study is to determine the temporal and spatial particulate matter (PM10) concentration distribution in Klang Valley, Malaysia by using the Inverse Distance Weighted (IDW) method under different monsoon seasons and meteorological conditions. PM10 and meteorological data were obtained from the Malaysian Department of Environment (DOE). Particle distribution data were added to the geographic database on a seasonal basis. Temporal and spatial patterns of PM10 concentration distribution were determined by using ArcGIS 9.3. Higher PM10 concentrations are observed during the Southwest monsoon season; the values are lower during the Northeast monsoon season. Different monsoon seasons show different meteorological conditions that affect PM10 distribution.
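The IDW interpolation behind such maps has a compact closed form: each unsampled location receives a weighted average of station values, with weights falling off as an inverse power of distance (power 2 is the common default). A minimal sketch with invented station coordinates and values, not the DOE data:

```python
import numpy as np

def idw(xy_known, values, xy_query, power=2.0, eps=1e-12):
    """Inverse Distance Weighted interpolation (Shepard's method)."""
    d = np.linalg.norm(xy_known - xy_query, axis=1)
    if d.min() < eps:                    # query coincides with a station
        return values[np.argmin(d)]
    w = 1.0 / d ** power
    return np.sum(w * values) / np.sum(w)

# Two hypothetical monitoring stations with PM10 readings (ug/m^3).
stations = np.array([[0.0, 0.0], [1.0, 0.0]])
readings = np.array([10.0, 20.0])
midpoint = idw(stations, readings, np.array([0.5, 0.0]))
```

At a station the interpolant reproduces the reading exactly, and at the midpoint of two equally distant stations it returns their plain average.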
Directory of Open Access Journals (Sweden)
Tais de Andrade
2018-01-01
Full Text Available This study takes into account the factors that interfere with career decisions arising from the internal demands of individuals, represented by work values. It investigated university students' perspectives on work values and career anchors. To this end, we surveyed 958 undergraduate students of face-to-face and distance higher education institutions in the interior of Rio Grande do Sul. The data collection instrument was developed from the revised Work Values Scale of Porto and Pilati (2010) and the Inventory of Career Anchors proposed by Schein (1993; 1996). As for the results, there were significant differences between the perceptions of the two teaching modalities, but the hierarchy assigned to the values and anchors was similar in both, which shows a certain pattern of agreement as to the perceived importance attached to each dimension studied.
International Nuclear Information System (INIS)
Lee, Ming-Wei; Chen, Yi-Chun
2014-01-01
In pinhole SPECT applied to small-animal studies, it is essential to have an accurate imaging system matrix, called the H matrix, for high-spatial-resolution image reconstructions. Generally, an H matrix can be obtained by various methods, such as measurements, simulations, or combinations of both. In this study, a distance-weighted Gaussian interpolation method combined with geometric parameter estimation (DW-GIMGPE) is proposed. It utilizes a simplified grid-scan experiment on selected voxels and parameterizes the measured point response functions (PRFs) into 2D Gaussians. The PRFs of missing voxels are interpolated from the relations between the Gaussian coefficients and the geometric parameters of the imaging system with distance-weighting factors. The weighting factors are related to the projected centroids of voxels on the detector plane. A full H matrix is constructed by combining the measured and interpolated PRFs of all voxels. The PRFs estimated by DW-GIMGPE showed similar profiles to the measured PRFs. OSEM-reconstructed images of a hot-rod phantom and normal rat myocardium demonstrated the effectiveness of the proposed method. The detectability of an SKE/BKE task on a synthetic spherical test object verified that the constructed H matrix provided detectability comparable to that of the H matrix acquired by a full 3D grid-scan experiment. The reduction in the acquisition time of a full 1.0-mm grid H matrix was about 15.2 and 62.2 times with the simplified grid pattern on 2.0-mm and 4.0-mm grids, respectively. A finer-grid H matrix down to 0.5-mm spacing interpolated by the proposed method would shorten the acquisition time by an additional factor of 8. -- Highlights: • A rapid interpolation method of system matrices (H) is proposed, named DW-GIMGPE. • Reduce H acquisition time by 15.2× with simplified grid scan and 2× interpolation. • Reconstructions of a hot-rod phantom with measured and DW-GIMGPE H were similar. • The imaging study of normal
Hao, Huadong; Shi, Haolei; Yi, Pengju; Liu, Ying; Li, Cunjun; Li, Shuguang
2018-01-01
A volume metrology method based on the internal electro-optical distance-ranging method is established for large vertical energy storage tanks. After analyzing the mathematical model for vertical tank volume calculation, the key processing algorithms for the point cloud data, such as gross error elimination, filtering, streamlining, and radius calculation, are studied. The corresponding volume values at different liquid levels are automatically calculated by computing the cross-sectional area along the horizontal direction and integrating along the vertical direction. To design the comparison system, a vertical tank with a nominal capacity of 20,000 m3 was selected as the research object; the results show that the method has good repeatability and reproducibility. Using the conventional capacity measurement method as a reference, the relative deviation of the calculated volume is less than 0.1%, meeting the measurement requirements, and the feasibility and effectiveness of the method are demonstrated.
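The volume computation described above reduces to integrating the horizontal cross-sectional area A(h) = pi * r(h)^2 over height. A minimal numeric sketch, using an ideal cylinder so the result can be checked in closed form (the paper's point-cloud processing is not reproduced here):

```python
import numpy as np

# Height grid and radius profile r(h); an ideal cylinder (radius 2 m,
# height 10 m) stands in for the laser-scanned tank wall.
h = np.linspace(0.0, 10.0, 101)
r = np.full_like(h, 2.0)
A = np.pi * r ** 2                                 # cross-sectional area A(h)
V = np.sum(0.5 * (A[1:] + A[:-1]) * np.diff(h))    # trapezoidal integration
```

For the cylinder the trapezoidal rule is exact, so V matches pi * 2^2 * 10; with a measured r(h) profile the same integration yields the tank's gauge table.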
A new quantum statistical evaluation method for time correlation functions
International Nuclear Information System (INIS)
Loss, D.; Schoeller, H.
1989-01-01
Considering a system of N identical interacting particles which obey Fermi-Dirac or Bose-Einstein statistics, the authors derive new formulas for correlation functions of the type C(t) = ⟨Σ_{i=1}^{N} A_i(t) Σ_{j=1}^{N} B_j⟩ (where B_j is diagonal in the free-particle states) in the thermodynamic limit. Thereby they apply and extend a superoperator formalism recently developed for the derivation of long-time tails in semiclassical systems. As an illustrative application, the Boltzmann-equation value of the time-integrated correlation function C(t) is derived in a straightforward manner. Due to exchange effects, the obtained t-matrix and the resulting scattering cross section, which occurs in the Boltzmann collision operator, are now functionals of the Fermi-Dirac or Bose-Einstein distribution.
Pair Correlation Function Integrals
DEFF Research Database (Denmark)
Wedberg, Nils Hejle Rasmus Ingemar; O'Connell, John P.; Peters, Günther H.J.
2011-01-01
We describe a method for extending radial distribution functions obtained from molecular simulations of pure and mixed molecular fluids to arbitrary distances. The method allows total correlation function integrals to be reliably calculated from simulations of relatively small systems. The long-distance behavior of radial distribution functions is determined by requiring that the corresponding direct correlation functions follow certain approximations at long distances. We have briefly described the method and tested its performance in previous communications [R. Wedberg, J. P. O'Connell, G. H. Peters, and J. Abildskov, Mol. Simul. 36, 1243 (2010); Fluid Phase Equilib. 302, 32 (2011)], but describe here its theoretical basis more thoroughly and derive long-distance approximations for the direct correlation functions. We describe the numerical implementation of the method in detail, and report
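The total correlation function integrals targeted by the method are Kirkwood-Buff-type integrals, G = 4π ∫₀^∞ [g(r) − 1] r² dr. A minimal numeric sketch with a model total correlation function h(r) = g(r) − 1 = −e^(−r), chosen because the integral has the closed form G = −8π (since ∫₀^∞ r² e^(−r) dr = 2); this is an illustration, not the authors' simulated RDF data:

```python
import numpy as np

# Model total correlation function h(r) = g(r) - 1 = -exp(-r).
r = np.linspace(0.0, 40.0, 40001)          # integrate far past the decay length
h = -np.exp(-r)
integrand = h * r ** 2

# Trapezoidal rule for G = 4*pi * int h(r) r^2 dr.
G = 4.0 * np.pi * np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r))
```

In practice the integrand comes from a simulated g(r) that is only known out to half the box length, which is exactly why the long-distance extension described above is needed before the integral converges.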
Flow velocity measurement by using zero-crossing polarity cross correlation method
International Nuclear Information System (INIS)
Xu Chengji; Lu Jinming; Xia Hong
1993-01-01
Using the designed correlation metering system and a high-accuracy hot-wire anemometer as a calibration device, an experimental study of the correlation method was carried out in a tunnel. Velocity measurement of gas flow using the zero-crossing polarity cross-correlation method was realized, and the experimental results have been analysed.
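The idea behind the zero-crossing polarity method is to reduce each sensor signal to its sign and take the transit time as the lag that maximizes the cross-correlation of the two quantized signals; velocity is then the sensor spacing divided by that delay. A minimal sketch (the function name and the white-noise stand-in for turbulence signals are ours, not from the paper):

```python
import numpy as np

def polarity_transit_velocity(upstream, downstream, fs, spacing):
    """Estimate flow velocity from two sensor signals.

    Signals are quantized to their polarity (+1/-1 about the mean), then
    cross-correlated; the positive lag with maximum correlation is taken
    as the transit time between the two sensors.
    """
    u = np.sign(upstream - np.mean(upstream))
    d = np.sign(downstream - np.mean(downstream))
    n = len(u)
    corr = np.correlate(d, u, mode="full")   # lags run -n+1 .. n-1
    lags = np.arange(-n + 1, n)
    pos = lags > 0                           # downstream must lag upstream
    delay = lags[pos][np.argmax(corr[pos])] / fs
    return spacing / delay

# synthetic check: the downstream sensor sees the same signal 50 ms later
rng = np.random.default_rng(0)
fs, spacing = 1000.0, 0.5                    # Hz, metres
z = rng.standard_normal(5000)
shift = 50                                   # 50 ms at 1 kHz
up, down = z[shift:], z[:-shift]             # down[n] = up[n - shift]
v = polarity_transit_velocity(up, down, fs, spacing)
```

With a 0.5 m spacing and a 50 ms delay the recovered velocity is 10 m/s; the polarity quantization discards amplitude information but preserves the correlation peak location.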
Shobugawa, Yugo; Wiafe, Seth A; Saito, Reiko; Suzuki, Tsubasa; Inaida, Shinako; Taniguchi, Kiyosu; Suzuki, Hiroshi
2012-06-19
Annual influenza epidemics occur worldwide, resulting in considerable morbidity and mortality. The spreading pattern of influenza is not well understood, because analysis is often hampered by the quality of surveillance data. In Japan, influenza is reported on a weekly basis from 5,000 hospitals and clinics nationwide under the scheme of the National Infectious Disease Surveillance. The collected data are available to the public as weekly reports, summarized as the number of patient visits per hospital or clinic in each of the 47 prefectures. From these surveillance data, we analyzed the spatial spreading patterns of influenza epidemics using the weekly weighted standard distance (WSD) from the 1999/2000 through 2008/2009 influenza seasons in Japan. The WSD is a single numerical value representing the spatial compactness of an influenza outbreak; it is small for a clustered distribution and large for a dispersed distribution. We demonstrated that the weekly WSD value, i.e. the spatial compactness of the distribution of reported influenza cases, decreased to its lowest value before each epidemic peak in nine of the ten seasons analyzed. The duration between the lowest-WSD week and the peak week of influenza cases ranged from minus one week to twenty weeks. This duration showed a significant negative association with the proportion of influenza A/H3N2 cases in the early phase of each outbreak (correlation coefficient -0.75, P = 0.012) and a significant positive association with the proportion of influenza B cases in the early phase (correlation coefficient 0.64, P = 0.045); it was positively but not significantly correlated with the proportion of influenza A/H1N1 strain cases. It is assumed that the lowest WSD values just before influenza peaks are due to local outbreaks, which result in small standard distance values. As influenza cases disperse nationwide and an epidemic reaches its peak, the WSD value changed to be a
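The weighted standard distance used here is a standard point-pattern statistic: the square root of the weighted mean squared distance of cases from their weighted mean center. A small illustration (weighting location centroids by case counts is our assumed usage; the function name is ours):

```python
import math

def weighted_standard_distance(xs, ys, weights):
    """Root of the weighted mean squared distance from the weighted mean
    center: small for clustered cases, large for dispersed ones."""
    total = sum(weights)
    xc = sum(w * x for w, x in zip(weights, xs)) / total
    yc = sum(w * y for w, y in zip(weights, ys)) / total
    msd = sum(w * ((x - xc) ** 2 + (y - yc) ** 2)
              for w, x, y in zip(weights, xs, ys)) / total
    return math.sqrt(msd)

# four locations; piling the case counts onto one point shrinks the WSD
xs, ys = [0.0, 2.0, 0.0, 2.0], [0.0, 0.0, 2.0, 2.0]
dispersed = weighted_standard_distance(xs, ys, [1, 1, 1, 1])
clustered = weighted_standard_distance(xs, ys, [97, 1, 1, 1])
```

With equal weights the center is (1, 1) and the WSD is sqrt(2); concentrating weight on one location pulls the value down, mirroring the pre-peak dip the study reports.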
Mackeen, Mukram; Almond, Andrew; Cumpstey, Ian; Enis, Seth C; Kupce, Eriks; Butters, Terry D; Fairbanks, Antony J; Dwek, Raymond A; Wormald, Mark R
2006-06-07
The experimental determination of oligosaccharide conformations has traditionally used cross-linkage 1H-1H NOE/ROEs. As relatively few NOEs are observed, to provide sufficient conformational constraints this method relies on: accurate quantification of NOE intensities (positive constraints); analysis of absent NOEs (negative constraints); and hence calculation of inter-proton distances using the two-spin approximation. We have compared the results obtained by using 1H 2D NOESY, ROESY and T-ROESY experiments at 500 and 700 MHz to determine the conformation of the terminal Glc alpha1-2Glc alpha linkage in a dodecasaccharide and a related tetrasaccharide. For the tetrasaccharide, the NOESY and ROESY spectra produced the same qualitative pattern of linkage cross-peaks but the quantitative pattern, the relative peak intensities, was different. For the dodecasaccharide, the NOESY and ROESY spectra at 500 MHz produced a different qualitative pattern of linkage cross-peaks, with fewer peaks in the NOESY spectrum. At 700 MHz, the NOESY and ROESY spectra of the dodecasaccharide produced the same qualitative pattern of peaks, but again the relative peak intensities were different. These differences are due to very significant differences in the local correlation times for different proton pairs across this glycosidic linkage. The local correlation time for each proton pair was measured using the ratio of the NOESY and T-ROESY cross-relaxation rates, leaving the NOESY and ROESY as independent data sets for calculating the inter-proton distances. The inter-proton distances calculated including the effects of differences in local correlation times give much more consistent results.
Hafezalkotob, Arian; Hafezalkotob, Ashkan
2017-06-01
A target-based MADM method covers beneficial and non-beneficial attributes as well as target values for some attributes. Such techniques can be considered comprehensive forms of MADM approaches. Target-based MADM methods can also be used in traditional decision-making problems in which only beneficial and non-beneficial attributes exist. In many practical selection problems, some attributes have given target values, and the values of the decision matrix and the target-based attributes can be provided as intervals. Some target-based decision-making methods have recently been developed; however, a research gap exists for MADM techniques with target-based attributes under uncertainty of information. We extend the MULTIMOORA method for solving practical material selection problems in which material properties and their target values are given as interval numbers. We employ various concepts of interval computations to reduce degeneration of uncertain data. In this regard, we use interval arithmetic and introduce an innovative formula for the interval distance of interval numbers to create an interval target-based normalization technique. Furthermore, we use a pairwise preference matrix based on the concept of degree of preference of interval numbers to calculate the maximum, minimum, and ranking of these numbers. Two decision-making problems regarding biomaterials selection of hip and knee prostheses are discussed. Preference degree-based ranking lists for subordinate parts of the extended MULTIMOORA method are generated by calculating the relative degrees of preference for the arranged assessment values of the biomaterials. The resultant rankings for the problem are compared with the outcomes of other target-based models in the literature.
Wall, Phillip D. H.; Carver, Robert L.; Fontenot, Jonas D.
2018-01-01
The overlap volume histogram (OVH) is an anatomical metric commonly used to quantify the geometric relationship between an organ at risk (OAR) and target volume when predicting expected dose-volumes in knowledge-based planning (KBP). This work investigated the influence of additional variables contributing to variations in the assumed linear DVH-OVH correlation for the bladder and rectum in VMAT plans of prostate patients, with the goal of increasing prediction accuracy and achievability of knowledge-based planning methods. VMAT plans were retrospectively generated for 124 prostate patients using multi-criteria optimization. DVHs quantified patient dosimetric data while OVHs quantified patient anatomical information. The DVH-OVH correlations were calculated for fractional bladder and rectum volumes of 30, 50, 65, and 80%. Correlations between potential influencing factors and dose were quantified using the Pearson product-moment correlation coefficient (R). Factors analyzed included the derivative of the OVH, prescribed dose, PTV volume, bladder volume, rectum volume, and in-field OAR volume. Out of the selected factors, only the in-field bladder volume (mean R = 0.86) showed a strong correlation with bladder doses. Similarly, only the in-field rectal volume (mean R = 0.76) showed a strong correlation with rectal doses. Therefore, an OVH formalism accounting for in-field OAR volumes was developed to determine the extent to which it improved the DVH-OVH correlation. Including the in-field factor improved the DVH-OVH correlation, with the mean R values over the fractional volumes studied improving from -0.79 to -0.85 and -0.82 to -0.86 for the bladder and rectum, respectively. A re-planning study was performed on 31 randomly selected database patients to verify the increased accuracy of KBP dose predictions by accounting for bladder and rectum volume within treatment fields. The in-field OVH led to significantly more precise
Directory of Open Access Journals (Sweden)
Eric Ariel L. Salas
2013-12-01
Full Text Available We present the Moment Distance (MD) method to advance spectral analysis in vegetation studies. It was developed to take advantage of the information latent in the shape of the reflectance curve that is not available from other spectral indices. Being mathematically simple but powerful, the approach does not require any curve transformation, such as smoothing or derivatives. Here, we show the formulation of the MD index (MDI) and demonstrate its potential for vegetation studies. We simulated leaf and canopy reflectance samples derived from the combination of the PROSPECT and SAIL models to understand the sensitivity of the new method to leaf and canopy parameters. We observed reasonable agreements between vegetation parameters and the MDI when using the 600 to 750 nm wavelength range, and we saw stronger agreements in the narrow red-edge region (720 to 730 nm). Results suggest that the MDI is more sensitive to the Chl content, especially at higher amounts (Chl > 40 mg/cm2), compared to other indices such as NDVI, EVI, and WDRVI. Finally, we found an indirect relationship of MDI against changes in the magnitude of the reflectance around the red trough with differing values of LAI.
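One geometric reading of the moment distance (our interpretation of the related moment-distance literature, not the authors' code, so treat the exact formula as an assumption) sums, for each band, the hypotenuse from a pivot wavelength on the baseline to the corresponding point on the reflectance curve; the index is the difference between the sums taken from the right and left pivots:

```python
import math

def moment_distance_index(wavelengths, reflectance, lp, rp):
    """Difference of summed pivot-to-curve hypotenuse lengths taken from
    the right pivot (rp) and the left pivot (lp); zero for a curve that
    is symmetric between the pivots."""
    md_lp = sum(math.hypot(wl - lp, r)
                for wl, r in zip(wavelengths, reflectance) if lp <= wl <= rp)
    md_rp = sum(math.hypot(rp - wl, r)
                for wl, r in zip(wavelengths, reflectance) if lp <= wl <= rp)
    return md_rp - md_lp

# a flat curve between the pivots gives a zero index; a rising,
# red-edge-like curve produces a positive one
wl = list(range(600, 751))
flat = [0.2] * len(wl)
rising = [0.01 + 0.5 * (w - 600) / 150.0 for w in wl]
mdi_flat = moment_distance_index(wl, flat, 600, 750)
mdi_rising = moment_distance_index(wl, rising, 600, 750)
```

The point of the construction is that the index responds to curve shape directly, with no smoothing or derivative step.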
Directory of Open Access Journals (Sweden)
Igor A. Valdman
2017-01-01
Full Text Available In Russia and abroad, teamwork has gained popularity in the labor market as a form of collective interaction between multiprofessional groups of specialists implementing business projects, carrying out research and development, designing technological solutions, and creating innovative products. At the same time, in educational practice, especially with distance educational technologies, the team method of instruction is quite rare. The reason is that teamwork in the implementation of educational programs requires recording the individual educational outcomes of each trainee and their contribution to the group task, which complicates the organization of the educational process. As a result, educational organizations rarely use this form because of the difficulty of applying it in intermediate and final attestation. Research goal: the search for and validation of a solution to the contradiction between the need to perform group homework assignments in distance learning and the necessity to record the individual educational results of each trainee for the purposes of intermediate and final attestation. The authors offer basic methodological principles that allow a balance to be found between the requirements of legislation and preservation of the team approach in the group work of trainees. Materials and methods: the initial materials of the research are an overview of existing publications on organizing trainee teamwork, including training in a distance format, the legislation of the Russian Federation regarding interim and final certification of trainees, and practical experience in implementing training programs at ANO “E-learning for Nanoindustry” (“eNano”). Based on these materials, the authors offer basic methodological principles, obtained empirically and
International Nuclear Information System (INIS)
Haftel, M.I.; Mandelzweig, V.B.
1990-01-01
The local convergence and accuracy of wave functions obtained by direct solution of the Schroedinger equation with the help of the correlation-function hyperspherical-harmonic method are analyzed for ground and excited states of the helium atom and for the ground state of the positronium negative ion. The inclusion of the cusp conditions in the correlation function is shown to be of crucial importance, not only near the coalescence points but also away from them. The proper inclusion of all cusps yields, for the ground state of the helium atom, a local wave-function accuracy of about 10^-7 for different interparticle distances. The omission of one of the cusps in the excited helium atom reduces the wave-function precision to 10^-2 near the corresponding coalescence point and to 10^-4 to 10^-5 away from it.
Xie, Qimiao; Wang, Jinhui; Lu, Shouxiang; Hensen, J.L.M.
2016-01-01
The distance between exits is an important design parameter in fire safety design of buildings. In order to find the optimal distance between exits under uncertainties with a low computational cost, the surrogate model (i.e. approximation model) of evacuation time is constructed by the arbitrary
Multi-level Correlates of Safer Conception Methods Awareness and ...
African Journals Online (AJOL)
Many people living with HIV desire childbearing, but low cost safer conception methods (SCM) such as timed unprotected intercourse (TUI) and manual ... including perceived willingness to use SCM, knowledge of respondent's HIV status, HIV-seropositivity, marriage and equality in decision making within the relationship.
Fast methods for spatially correlated multilevel functional data
Staicu, A.-M.; Crainiceanu, C. M.; Carroll, R. J.
2010-01-01
-one-out analyses, and nonparametric bootstrap sampling. Our methods are inspired by and applied to data obtained from a state-of-the-art colon carcinogenesis scientific experiment. However, our models are general and will be relevant to many new data sets where
Correlates of the Rosenberg Self-Esteem Scale Method Effects
Quilty, Lena C.; Oakman, Jonathan M.; Risko, Evan
2006-01-01
Investigators of personality assessment are becoming aware that using positively and negatively worded items in questionnaires to prevent acquiescence may negatively impact construct validity. The Rosenberg Self-Esteem Scale (RSES) has demonstrated a bifactorial structure typically proposed to result from these method effects. Recent work suggests…
Method for numerical simulation of two-term exponentially correlated colored noise
International Nuclear Information System (INIS)
Yilmaz, B.; Ayik, S.; Abe, Y.; Gokalp, A.; Yilmaz, O.
2006-01-01
A method for numerical simulation of two-term exponentially correlated colored noise is proposed. The method is an extension of the traditional method for one-term exponentially correlated colored noise. The validity of the algorithm is tested by comparing numerical simulations with analytical results in two physical applications.
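If the target correlation function is a sum of two exponentials, C(s) = D1·exp(-|s|/tau1) + D2·exp(-|s|/tau2), one natural construction (ours, not necessarily the paper's algorithm) adds two independent Ornstein-Uhlenbeck processes, each generated with the exact one-step transition:

```python
import numpy as np

def ou_path(tau, amp, dt, n, rng):
    """Stationary Ornstein-Uhlenbeck path with autocovariance
    amp * exp(-|s| / tau), using the exact one-step discretization."""
    a = np.exp(-dt / tau)
    x = np.empty(n)
    x[0] = np.sqrt(amp) * rng.standard_normal()
    kicks = np.sqrt(amp * (1.0 - a * a)) * rng.standard_normal(n - 1)
    for i in range(1, n):
        x[i] = a * x[i - 1] + kicks[i - 1]
    return x

def two_term_noise(tau1, amp1, tau2, amp2, dt, n, rng):
    # independent terms add, so their exponential covariances add too
    return (ou_path(tau1, amp1, dt, n, rng)
            + ou_path(tau2, amp2, dt, n, rng))

rng = np.random.default_rng(1)
dt, n = 0.01, 200_000
eta = two_term_noise(1.0, 2.0, 5.0, 0.5, dt, n, rng)
```

The sampled path has variance D1 + D2 (here 2.5) and an autocovariance that decays on the two chosen time scales; the exact update avoids the discretization error of a naive Euler scheme.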
International Nuclear Information System (INIS)
Hassenstein, A.; Richard, G.; Inhoffen, W.; Scholz, F.
2007-01-01
The new integration method (DIM) provides for the first time the anatomically precise integration of the OCT scan position into the angiogram (fluorescein angiography, FLA), using reference markers at corresponding vessel crossings. An exact correlation of angiographic and morphological pathological findings is therefore possible and leads to a better understanding of OCT and FLA. Patients with occult findings in FLA profited most: for occult leakages, DIM could add information such as serous detachment of the retinal pigment epithelium (RPE) in a topography. So far it was unclear whether the same localization in the lesion was examined by FLA and OCT, especially when different staff performed and interpreted the examinations. Using DIM, this problem could be solved with objective markers. This technique is a requirement for follow-up examinations by OCT: with DIM providing an objective, reliable and precise correlation of OCT and FLA findings, it is now possible to image the identical scan position at follow-up. For follow-up in clinical studies it is therefore mandatory to use DIM to improve the evidence-based statement of OCT and the quality of the study. (author) [de]
International Nuclear Information System (INIS)
Teixeira, Sara R.; Dalto, Vitor F.; Maranho, Daniel A.; Zoghbi-Neto, Orlando S.; Volpon, José B.; Nogueira-Barbosa, Marcello H.
2015-01-01
Highlights: • The article adds information about the pubo-femoral distance (PFD) used as a simple tool to detect dysplastic hips in neonates. This article shows that the PFD is comparable with the “gold standard” Graf method for the diagnosis of developmental dysplasia of the hip and that it can be used as a screening tool for its diagnosis, regardless of the radiologist's experience, with high accuracy. - Abstract: Purposes: To evaluate whether the pubo-femoral distance (PFD) can be used as an accurate screening test to diagnose developmental dysplasia of the hip (DDH) in an at-risk population compared with the Graf method. Second, to determine whether PFD assessment is feasible and reproducible regardless of the observer's experience. Materials and methods: The IRB approved this retrospective single-institution study. Written informed consent was waived. Between January 2010 and March 2012, 116 neonates at risk for DDH were included. Infants' hips were distributed into two groups according to recommendation for treatment: non-dysplastic (ND; Graf I/IIA; 211 hips; 69 females/37 males) and dysplastic hip (DH; Graf IIB/IIC/III/D/IV; 21 hips; 8 females/3 males). One resident and one experienced radiologist reviewed ultrasonography images performed in the fourth week. To compare the groups, Student's t and Mann–Whitney tests were performed for normally and non-normally distributed covariates, respectively. The accuracy of PFD to diagnose DDH was calculated. The intraclass correlation coefficient (ICC) was calculated to assess inter-observer agreement. Results: Mean PFDs of the ND group were 3.09 mm at neutral position and 3.64 mm with the hip flexed. Mean PFDs of the DH group were 6.29 mm and 7.59 mm, respectively. Sensitivity, specificity, and accuracy of PFD were 94.4%, 93.4%, and 97.2% (cut-off = 4.6 mm) at neutral position and 94.4%, 89.0%, and 95.5% (cut-off = 4.9 mm) with the hip flexed. ICCs were 0.852 and 0.864, respectively. Conclusions: PFD is comparable with the Graf method, enabling
Directory of Open Access Journals (Sweden)
Robert F. Love
2001-01-01
Full Text Available Distance predicting functions may be used in a variety of applications for estimating travel distances between points. To evaluate the accuracy of a distance predicting function and to determine its parameters, a goodness-of-fit criterion is employed. AD (Absolute Deviations), SD (Squared Deviations) and NAD (Normalized Absolute Deviations) are the three criteria most often employed in practice. In the literature, some assumptions have been made about the properties of each criterion. In this paper, we present statistical analyses performed to compare the three criteria from different perspectives. For this purpose, we employ the ℓkpθ-norm as the distance predicting function, and statistically compare the three criteria by using normalized absolute prediction error distributions in seventeen geographical regions. We find that there exist no significant differences between the criteria. However, since the criterion SD has desirable properties in terms of distance modelling procedures, we suggest its use in practice.
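With SD as the criterion, fitting a distance predicting function amounts to choosing its parameters to minimize the sum of squared deviations between predicted and actual distances. A toy grid-search fit of a weighted ℓp function (our simplification of the norms studied there; names and data are illustrative):

```python
def lp_predict(a, b, k, p):
    """Weighted lp distance predicting function: k * (|dx|^p + |dy|^p)^(1/p)."""
    return k * (abs(a[0] - b[0]) ** p + abs(a[1] - b[1]) ** p) ** (1.0 / p)

def fit_by_sd(pairs, actual):
    """Choose (k, p) on a coarse grid minimizing the SD criterion."""
    best = (float("inf"), None, None)
    for ik in range(41):                  # k in [0.9, 1.3]
        k = 0.9 + 0.01 * ik
        for ip in range(21):              # p in [1.0, 2.0]
            p = 1.0 + 0.05 * ip
            sd = sum((lp_predict(a, b, k, p) - d) ** 2
                     for (a, b), d in zip(pairs, actual))
            if sd < best[0]:
                best = (sd, k, p)
    return best

# synthetic "road distances" generated with k = 1.2, p = 1.5 are recovered
pairs = [((0, 0), (3, 4)), ((1, 2), (6, 3)), ((0, 5), (4, 1)), ((2, 2), (7, 9))]
actual = [lp_predict(a, b, 1.2, 1.5) for a, b in pairs]
sd, k_hat, p_hat = fit_by_sd(pairs, actual)
```

AD and NAD fits follow the same pattern with the squared residual replaced by |residual| or |residual|/actual.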
Oczeretko, Edward; Swiatecka, Jolanta; Kitlas, Agnieszka; Laudanski, Tadeusz; Pierzynski, Piotr
2006-01-01
In physiological research, we often study multivariate data sets, containing two or more simultaneously recorded time series. The aim of this paper is to present the cross-correlation and the wavelet cross-correlation methods to assess synchronization between contractions in different topographic regions of the uterus. From a medical point of view, it is important to identify time delays between contractions, which may be of potential diagnostic significance in various pathologies. The cross-correlation was computed in a moving window with a width corresponding to approximately two or three contractions. As a result, the running cross-correlation function was obtained. The propagation% parameter assessed from this function allows quantitative description of synchronization in bivariate time series. In general, the uterine contraction signals are very complicated. Wavelet transforms provide insight into the structure of the time series at various frequencies (scales). To show the changes of the propagation% parameter along scales, a wavelet running cross-correlation was used. At first, the continuous wavelet transforms as the uterine contraction signals were received and afterwards, a running cross-correlation analysis was conducted for each pair of transformed time series. The findings show that running functions are very useful in the analysis of uterine contractions.
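The running cross-correlation described above can be sketched as a normalized cross-correlation computed in a sliding window, keeping for each window the lag with the largest correlation (the inter-region time delay). Function and signals below are illustrative, not the authors' implementation:

```python
import numpy as np

def running_cross_correlation(x, y, window, step, max_lag):
    """Normalized cross-correlation in sliding windows.  For each window
    returns (start, best_lag, peak_r); a positive best_lag means x is
    delayed relative to y by that many samples."""
    results = []
    for start in range(0, len(x) - window + 1, step):
        xs = x[start:start + window] - x[start:start + window].mean()
        ys = y[start:start + window] - y[start:start + window].mean()
        denom = np.sqrt((xs ** 2).sum() * (ys ** 2).sum())
        best_lag, peak = 0, -np.inf
        for lag in range(-max_lag, max_lag + 1):
            if lag >= 0:
                num = np.dot(xs[lag:], ys[:window - lag])
            else:
                num = np.dot(xs[:window + lag], ys[-lag:])
            if num / denom > peak:
                best_lag, peak = lag, num / denom
        results.append((start, best_lag, peak))
    return results

# synthetic stand-in signals: the second trace leads the first by 5 samples
rng = np.random.default_rng(3)
z = rng.standard_normal(505)
y = z[5:]            # length 500
x = z[:500]          # x[n] = y[n - 5]
res = running_cross_correlation(x, y, window=100, step=100, max_lag=20)
```

Each window reports the same 5-sample delay; on real uterine recordings the per-window delay becomes the propagation measure tracked over time.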
A new method of spatio-temporal topographic mapping by correlation coefficient of K-means cluster.
Li, Ling; Yao, Dezhong
2007-01-01
It would be of the utmost interest to map correlated sources in the working human brain by Event-Related Potentials (ERPs). This work develops a new method to map correlated neural sources based on the time courses of the scalp ERP waveforms. The ERP data are classified first by k-means cluster analysis, and then the Correlation Coefficients (CC) between the original data of each electrode channel and the time course of each cluster centroid are calculated and utilized as the mapping variable on the scalp surface. With a normalized 4-concentric-sphere head model with radius 1, the performance of the method is evaluated with simulated data. The CC between the four simulated sources (s(1)-s(4)) and the estimated cluster centroids (c(1)-c(4)), and the distances (Ds) between the scalp projection points of s(1)-s(4) and those of c(1)-c(4), are utilized as evaluation indexes. Applied to four sources with two of them partially correlated (with maximum mutual CC = 0.4892), the CC (Ds) between s(1)-s(4) and c(1)-c(4) are larger (smaller) than 0.893 (0.108) for the noise levels (NSR) tested, with clusters located at the left and right occipital and frontal areas. The estimated vectors of the contra-occipital area demonstrate that attention to the stimulus location produces increased amplitude of the P1 and N1 components over the contra-occipital scalp. The estimated vector in the frontal area displays two large processing negativity waves around 100 ms and 250 ms when subjects are attentive, and a small negative wave around 140 ms and a P300 when subjects are inattentive. The results of simulations and real Visual Evoked Potentials (VEPs) data demonstrate the validity of the method in mapping correlated sources. This method may be an objective, heuristic and important tool for studying the properties of cerebral neural networks in cognitive and clinical neurosciences.
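The mapping variable here is simply the Pearson correlation between each channel's waveform and the time course of the cluster centroid it belongs to. A compact illustration with a hand-rolled two-cluster k-means on synthetic waveforms (not ERP data; all names are ours):

```python
import numpy as np

def two_means(data, iters=20):
    """Minimal k-means for k = 2 with a deterministic start: the first
    waveform and the waveform farthest from it."""
    c0 = data[0]
    c1 = data[np.argmax(((data - c0) ** 2).sum(axis=1))]
    centroids = np.stack([c0, c1])
    for _ in range(iters):
        dist = ((data[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
        labels = dist.argmin(axis=1)
        for j in (0, 1):
            if np.any(labels == j):
                centroids[j] = data[labels == j].mean(axis=0)
    return centroids, labels

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 200)
s1 = np.sin(2 * np.pi * 5 * t)
s2 = np.sin(2 * np.pi * 9 * t)
# six "channels": three dominated by each source, plus small sensor noise
channels = np.stack([g * s + 0.05 * rng.standard_normal(t.size)
                     for s, gains in ((s1, (1.0, 0.8, 1.2)),
                                      (s2, (1.0, 0.9, 1.1)))
                     for g in gains])
centroids, labels = two_means(channels)
# mapping variable: correlation of each channel with its cluster centroid
cc = np.array([np.corrcoef(ch, centroids[lab])[0, 1]
               for ch, lab in zip(channels, labels)])
```

In the real method the per-channel cc values are interpolated over the scalp surface to form the topographic map.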
Distance covariance for stochastic processes
DEFF Research Database (Denmark)
Matsui, Muneya; Mikosch, Thomas Valentin; Samorodnitsky, Gennady
2017-01-01
The distance covariance of two random vectors is a measure of their dependence. The empirical distance covariance and correlation can be used as statistical tools for testing whether two random vectors are independent. We propose an analog of the distance covariance for two stochastic processes...
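The empirical distance correlation behind such independence tests double-centers each sample's pairwise distance matrix and correlates the two. A minimal V-statistic version (function names ours; works for 1-D or vector samples):

```python
import numpy as np

def _centered_distances(z):
    """Pairwise Euclidean distance matrix, double-centered."""
    z = np.asarray(z, dtype=float)
    if z.ndim == 1:
        z = z[:, None]
    D = np.sqrt(((z[:, None, :] - z[None, :, :]) ** 2).sum(axis=-1))
    return D - D.mean(axis=0) - D.mean(axis=1)[:, None] + D.mean()

def distance_correlation(x, y):
    """Empirical distance correlation: zero in the limit only for
    independent samples; 1 for, e.g., an exact linear relation."""
    A, B = _centered_distances(x), _centered_distances(y)
    dcov2 = (A * B).mean()
    denom = np.sqrt((A * A).mean() * (B * B).mean())
    return np.sqrt(max(dcov2, 0.0) / denom) if denom > 0 else 0.0

x = np.linspace(-1.0, 1.0, 101)
lin = distance_correlation(x, 3.0 * x - 2.0)    # exact linear relation
quad = distance_correlation(x, x ** 2)          # nonlinear dependence
pearson_quad = np.corrcoef(x, x ** 2)[0, 1]     # blind to the parabola
```

The parabola example shows the contrast drawn in the astrophysics record above: the Pearson coefficient vanishes while the distance correlation stays well away from zero.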
TOPSIS with statistical distances: A new approach to MADM
Directory of Open Access Journals (Sweden)
Vijaya Babu Vommi
2017-01-01
Full Text Available Multiple attribute decision making (MADM) methods are very useful in choosing the best alternative among available finite but conflicting alternatives. TOPSIS is one of the MADM methods, and is simple in its methodology and logic. In TOPSIS, the Euclidean distances of each alternative from the positive and negative ideal solutions are utilized to find the best alternative. In the literature, apart from Euclidean distances, city block distances have also been tried as the separation measures. In general, attribute data are distributed with unequal ranges and also possess moderate to high correlations. Hence, in the present paper, the use of statistical distances is proposed in place of Euclidean distances. Procedures to find the best alternatives are developed using statistical and weighted statistical distances, respectively. The proposed methods are illustrated with some industrial problems taken from the literature. Results show that the proposed methods can be used as new alternatives in MADM for choosing the best solutions.
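Replacing the Euclidean separation in TOPSIS with a statistical distance lets the separation account for unequal attribute ranges and correlations. A sketch that uses the covariance-weighted (Mahalanobis-type) distance as the statistical distance (our choice of that particular form is an assumption, not the paper's exact definition):

```python
import numpy as np

def topsis_statistical(X, benefit):
    """TOPSIS relative closeness where separations from the ideal and
    anti-ideal solutions use d(a, b) = sqrt((a-b)^T S^-1 (a-b))."""
    X = np.asarray(X, dtype=float)
    V = X / np.sqrt((X ** 2).sum(axis=0))           # vector normalization
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    S_inv = np.linalg.pinv(np.cov(V, rowvar=False))  # pinv guards degeneracy

    def stat_dist(a, b):
        diff = a - b
        return np.sqrt(diff @ S_inv @ diff)

    d_pos = np.array([stat_dist(v, ideal) for v in V])
    d_neg = np.array([stat_dist(v, anti) for v in V])
    return d_neg / (d_pos + d_neg)                  # higher is better

# three alternatives, two benefit attributes; the first dominates
X = [[5.0, 5.0], [3.0, 4.0], [1.0, 2.0]]
cc = topsis_statistical(X, benefit=[True, True])
```

The dominating alternative coincides with the ideal solution and receives closeness 1; the dominated one coincides with the anti-ideal and receives 0.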
Crosson, E.; Rella, C.
2012-12-01
measurement of the emission rate. One example of this method is shown in Fig. 1. The method is simple to deploy and does not require an accurate model of atmospheric transport or knowledge of the distance to the emission source or its spatial distribution. Accurate measurements of the emissions can be made with just a few minutes of data collection. Results of controlled-release methane experiments are presented, and the strengths and limitations of the methodology are discussed. REFERENCES: R. Howarth, R. Santoro, and A. Ingraffea (2011): "Methane and the greenhouse-gas footprint of natural gas from shale formations," Climatic Change 106, 679-690. Fig. 1: Spatial correlation analysis for two measurement points (or pixels) distributed vertically (A and B) or horizontally (A and C), for measurements at a distance of 21 meters from a methane point source of 650 sccm. The emission rate recovered from this analysis was 496 ± 160 sccm of CH4. The total measurement time was 30 minutes.
Wang, Zheng-Xin; Li, Dan-Dan; Zheng, Hong-Hao
2018-01-30
In China's industrialization process, effective regulation of energy and the environment can promote the positive externalities of energy consumption while reducing the negative ones, an important means of realizing sustainable socioeconomic development. This study puts forward an improved technique for order preference by similarity to an ideal solution based on entropy weights and the Mahalanobis distance (briefly referred to as E-M-TOPSIS), whose performance was verified to be satisfactory. Using the traditional and improved TOPSIS methods separately, the study carried out empirical appraisals of the external performance of China's energy regulation during 1999-2015. The results show that correlation between the performance indexes causes a significant difference between the appraisal results of E-M-TOPSIS and traditional TOPSIS. E-M-TOPSIS takes the correlation between indexes into account and generally softens the closeness degree compared with traditional TOPSIS; moreover, it makes the relative closeness degree fluctuate within a small amplitude. The results conform to the practical condition of China's energy regulation, and E-M-TOPSIS is therefore well suited to the external performance appraisal of energy regulation. Additionally, the external economic performance and the social responsibility performance (including environmental and energy-safety performance) based on E-M-TOPSIS exhibit significantly different fluctuation trends: the external economic performance fluctuates dramatically with a larger amplitude, while the social responsibility performance exhibits a relatively stable interval fluctuation. This indicates that, compared with the social responsibility performance, the external economic performance is more sensitive to energy regulation.
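The entropy-weighting step of E-M-TOPSIS (the "E") admits a compact sketch; the Mahalanobis step is analogous to replacing TOPSIS's Euclidean separations with covariance-scaled ones. This is a generic Shannon-entropy weighting, assumed rather than taken from the paper's formulas.

```python
import numpy as np

def entropy_weights(X):
    """Objective attribute weights from Shannon entropy: attributes whose
    values vary more across alternatives have lower entropy and therefore
    receive larger weights."""
    X = np.asarray(X, float)
    P = X / X.sum(0)                      # column-wise proportions
    logs = np.where(P > 0, np.log(np.where(P > 0, P, 1.0)), 0.0)
    ent = -(P * logs).sum(0) / np.log(len(X))
    d = 1.0 - ent                         # degree of diversification
    return d / d.sum()
```

A constant attribute has maximal entropy and receives zero weight, which is the intended behaviour: it cannot discriminate between alternatives.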
Ochoa Gutierrez, L. H.; Vargas Jimenez, C. A.; Niño Vasquez, L. F.
2011-12-01
The "Sabana de Bogota" (Bogota Savannah) is the most important social and economic center of Colombia. Almost a third of the population is concentrated in this region, which generates about 40% of Colombia's Gross Domestic Product (GDP). Accordingly, the zone presents an elevated vulnerability should a highly destructive seismic event occur. Historical evidence shows that high-magnitude events took place in the past, causing huge damage to the city, and indicates that such events will probably occur again in the coming years. This is the reason why we are working on an early-warning generation system using the first few seconds of a seismic signal registered by three-component, broadband seismometers. Such a system can be implemented using computational intelligence tools designed and calibrated for the particular geological, structural and environmental conditions of the region. The methods developed are expected to work in real time, so suitable software and electronic tools need to be developed. We used Support Vector Machine Regression (SVMR) methods trained and tested with historic seismic events registered by the "EL ROSAL" station, located near Bogotá, calculating descriptors or attributes from the first 6 seconds of signal as the input of the model. With this algorithm, we obtained less than 10% mean absolute error and correlation coefficients greater than 85% in hypocentral distance and magnitude estimation. With these results we consider that we can improve the method, aiming at better accuracy with less signal time, and that this can be a very useful model to implement directly in seismological stations to generate a fast characterization of an event, broadcasting not only the raw signal but also pre-processed information that can be very useful for accurate early-warning generation.
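The descriptor-then-regress pipeline of the record above can be sketched as follows. The descriptor set (peak amplitude, energy, dominant frequency) is an assumption, since the study does not list its attributes here, and regularised least squares stands in for the SVMR (scikit-learn's SVR would slot into the same place).

```python
import numpy as np

def descriptors(sig, fs):
    """Three illustrative attributes from the first seconds of a record:
    peak amplitude, signal energy, and dominant frequency (assumed set)."""
    spec = np.abs(np.fft.rfft(sig))
    freqs = np.fft.rfftfreq(sig.size, 1.0 / fs)
    return np.array([np.abs(sig).max(),
                     (sig ** 2).sum() / fs,
                     freqs[spec.argmax()]])

def fit_linear(X, y, lam=1e-6):
    """Regularised least squares as a stand-in for the study's SVMR."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w = np.linalg.solve(Xb.T @ Xb + lam * np.eye(Xb.shape[1]), Xb.T @ y)
    return lambda Xn: np.hstack([Xn, np.ones((len(Xn), 1))]) @ w
```

On synthetic records whose amplitude scales with "magnitude", the fit easily clears the abstract's reported benchmarks (correlation above 85%), though that is only a sanity check, not a reproduction of the study.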
Dynamical correlations in finite nuclei: A simple method to study tensor effects
International Nuclear Information System (INIS)
Dellagiacoma, F.; Orlandini, G.; Traini, M.
1983-01-01
Dynamical correlations are introduced in finite nuclei by changing the two-body density through a phenomenological method. The role of tensor and short-range correlations in the nuclear momentum distribution, electric form factor and two-body density of ⁴He is investigated. The importance of induced tensor correlations in the total photonuclear cross section is reinvestigated, providing a successful test of the method proposed here. (orig.)
Some new results on correlation-preserving factor scores prediction methods
Ten Berge, J.M.F.; Krijnen, W.P.; Wansbeek, T.J.; Shapiro, A.
1999-01-01
Anderson and Rubin and McDonald have proposed a correlation-preserving method of factor scores prediction which minimizes the trace of a residual covariance matrix for variables. Green has proposed a correlation-preserving method which minimizes the trace of a residual covariance matrix for factors.
International Nuclear Information System (INIS)
Santamaria, L.; Siller, H. R.; Garcia-Ortiz, C. E.; Cortes, R.; Coello, V.
2016-01-01
In this work, we present an alternative optical method to determine the probe-sample separation distance in a scanning near-field optical microscope. The experimental method is based on a Lloyd's mirror interferometer and offers a measurement precision deviation of ∼100 nm using digital image processing and numerical analysis. The technique can also be strategically combined with the characterization of piezoelectric actuators and stability evaluation of the optical system. It also opens the possibility of developing an automatic approach-control system valid for probe-sample distances from 5 to 500 μm.
International Nuclear Information System (INIS)
Chang, J; Gu, X; Lu, W; Jiang, S; Song, T
2016-01-01
Purpose: A novel distance-dose weighting method for label fusion was developed to increase segmentation accuracy in dosimetrically important regions for prostate radiation therapy. Methods: Label fusion as implemented in the original SIMPLE (OS) for multi-atlas segmentation relies iteratively on a majority vote to generate an estimated ground truth and on the DICE similarity measure to screen candidates. The proposed distance-dose weighting puts more weight on dosimetrically important regions when calculating the similarity measure. Specifically, we introduced the distance-to-dose error (DDE), which converts distance to dosimetric importance, into the performance evaluation. The DDE calculates an estimated DE error derived from surface distance differences between the candidate and the estimated ground-truth label by multiplying by a regression coefficient. To determine the coefficient at each simulation point on the rectum, we fitted the DE error with respect to a simulated voxel shift. The DEs were calculated by the multi-OAR geometry-dosimetry training model previously developed in our research group. Results: For both the OS and the distance-dose weighted SIMPLE (WS) results, the evaluation metrics for twenty patients were calculated using the ground-truth segmentation. The mean differences in DICE, Hausdorff distance, and mean absolute distance (MAD) between OS and WS were 0, 0.10, and 0.11, respectively. In the partial MAD of WS, which calculates MAD within a certain PTV-expansion voxel distance, lower MADs than those of OS were observed at the closer distances from 1 to 8. The DE results showed that segmentation with WS produced more accurate results than OS; the mean DE errors of V75, V70, V65, and V60 decreased by 1.16%, 1.17%, 1.14%, and 1.12%, respectively. Conclusion: We have demonstrated that the method can increase segmentation accuracy in rectum regions adjacent to the PTV. As a result, segmentation using WS has shown improved dosimetric accuracy over OS. The WS will
Feng, Dai; Svetnik, Vladimir; Coimbra, Alexandre; Baumgartner, Richard
2014-01-01
The intraclass correlation coefficient (ICC) with fixed raters or, equivalently, the concordance correlation coefficient (CCC) for continuous outcomes is a widely accepted aggregate index of agreement in settings with a small number of raters. Quantifying the precision of the CCC by constructing its confidence interval (CI) is important in early drug development applications, in particular in the qualification of biomarker platforms. In recent years, several new methods have been proposed for construction of CIs for the CCC, but a comprehensive comparison of them has not been attempted. The methods consist of the delta method and jackknifing, each with and without Fisher's Z-transformation, and Bayesian methods with vague priors. In this study, we carried out a simulation study, with data simulated from a multivariate normal as well as a heavier-tailed distribution (t-distribution with 5 degrees of freedom), to compare the state-of-the-art methods for assigning a CI to the CCC. When the data are normally distributed, jackknifing with Fisher's Z-transformation (JZ) tended to provide superior coverage, and the difference between it and the closest competitor, the Bayesian method with the Jeffreys prior, was in general minimal. For the nonnormal data, the jackknife methods, especially the JZ method, provided coverage probabilities closest to the nominal, in contrast to the others, which yielded overly liberal coverage. Approaches based on the delta method and the Bayesian method with conjugate prior generally provided slightly narrower intervals and larger lower bounds than the others, though this was offset by their poor coverage. Finally, we illustrate the utility of the CIs for the CCC with an example of a wake after sleep onset (WASO) biomarker, which is frequently used in clinical sleep studies of drugs for the treatment of insomnia.
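The JZ method favoured in this study (jackknife pseudo-values on Fisher's Z scale, normal interval, back-transform) can be sketched for two raters. This is a generic textbook-style implementation, not the authors' code, and the two-rater CCC formula is Lin's standard one.

```python
import numpy as np
from statistics import NormalDist

def ccc(x, y):
    """Lin's concordance correlation coefficient (two fixed raters)."""
    sxy = np.cov(x, y, bias=True)[0, 1]
    return 2 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

def ccc_ci_jz(x, y, alpha=0.05):
    """Jackknife CI for the CCC on Fisher's Z scale (the 'JZ' method):
    leave-one-out pseudo-values of arctanh(CCC), then back-transform."""
    n = len(x)
    z_full = np.arctanh(ccc(x, y))
    loo = np.array([np.arctanh(ccc(np.delete(x, i), np.delete(y, i)))
                    for i in range(n)])
    pseudo = n * z_full - (n - 1) * loo
    zm = pseudo.mean()
    se = pseudo.std(ddof=1) / np.sqrt(n)
    crit = NormalDist().inv_cdf(1 - alpha / 2)
    return np.tanh(zm - crit * se), np.tanh(zm + crit * se)
```

Back-transforming through tanh keeps the interval inside (-1, 1), one reason the Z-scale variants behave better near CCC values close to 1.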
Directory of Open Access Journals (Sweden)
O. Gnedkova
2014-06-01
The problem of constructing a new model of the training process for future, highly competitive professionals in higher education arises from the global informatization of society and the involvement of ICT in all spheres of human activity, including the educational process of higher education. As Ukraine integrates into the European educational space, significant changes are occurring in professional training curricula: the number of classroom hours is being reduced while the number of hours of self-study is increasing. However, self-study learning raises many difficulties for students and teachers, for example the lack of guidance on tasks for independent work, the lack of teacher consultation, insufficiently formed self-education skills among students, and so on. These problems adversely influence the quality of future professionals in higher education; thus, there is a need to implement blended learning in the educational process to solve them. Accordingly, a model of the educational process using blended learning is suggested, based on an analysis of the scientific literature on training future professionals and on the results of international research papers and trainers. The interactions between elements of the model are established and their importance in the educational process as a whole is emphasized. The proposed model was tested during the teaching of the course "Methods and Technologies of Distance Learning" for Master's degree students in the specialty "Computer Science" at the Faculty of Physics, Mathematics and Computer Science, Kherson State University.
Correlation based method for comparing and reconstructing quasi-identical two-dimensional structures
International Nuclear Information System (INIS)
Mejia-Barbosa, Y.
2000-03-01
We show a method for comparing and reconstructing two similar amplitude-only structures composed of the same number of identical apertures. The structures are two-dimensional and differ only in the location of one of the apertures. The method is based on a subtraction algorithm involving the auto-correlation and cross-correlation functions of the compared structures. Experimental results illustrate the feasibility of the method. (author)
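A loose numerical sketch of the subtraction idea above: for binary amplitude masks, the zero-lag value of the auto-correlation minus the cross-correlation counts the transmitting cells of one structure not shared by the other, i.e. it isolates the displaced aperture. This is only an illustration of the correlation arithmetic, not the paper's optical algorithm.

```python
import numpy as np

def corr2(a, b):
    """Circular cross-correlation of two 2-D arrays via the FFT;
    index (0, 0) of the result is the zero-lag correlation sum(a*b)."""
    return np.real(np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))))

def mismatch_area(a, b):
    """Zero-lag value of C_aa - C_ab: for binary masks this equals the
    area of `a` not overlapped by `b` (the moved aperture)."""
    residual = corr2(a, a) - corr2(a, b)
    return residual[0, 0]
```

With three shared 3x3 apertures and a fourth one displaced, the residual at zero lag equals the 9-cell area of the moved aperture.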
The perturbed angular correlation method - a modern technique in studying solids
International Nuclear Information System (INIS)
Unterricker, S.; Hunger, H.J.
1979-01-01
Starting from theoretical fundamentals, the differential perturbed angular correlation method is explained. Using the probe nucleus ¹¹¹Cd, the magnetic dipole interaction in FeₓAl₁₋ₓ alloys and the electric quadrupole interaction in Cd have been measured. The perturbed angular correlation method is a modern nuclear measuring technique and can be applied to studying ordering processes, phase transformations and radiation damage in metals, semiconductors and insulators
Cross-Correlation-Function-Based Multipath Mitigation Method for Sine-BOC Signals
Directory of Open Access Journals (Sweden)
H. H. Chen
2012-06-01
Global Navigation Satellite System (GNSS) positioning accuracy in indoor and urban-canyon environments is greatly affected by multipath, owing to distortions of the autocorrelation function. In this paper, the cross-correlation function between a received sine-phased Binary Offset Carrier (sine-BOC) modulated signal and the local signal is first studied, and a new multipath mitigation method based on this cross-correlation function is proposed. The method creates the cross-correlation function by designing the modulated symbols of the local signal. Theoretical analysis and simulation results indicate that the proposed method exhibits better multipath mitigation performance than traditional Double Delta Correlator (DDC) techniques, especially for medium/long-delay multipath signals, and it is also convenient and flexible to implement using only one correlator, which suits low-cost mass-market receivers.
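The correlation-function distortion that motivates this record can be reproduced for a sine-BOC(1,1) baseband signal: each ±1 spreading chip is multiplied by one period of a square-wave subcarrier, which splits the plain code's single triangular correlation peak into a sharp main peak with negative side lobes. The code length and sampling below are illustrative assumptions.

```python
import numpy as np

def sine_boc_autocorr(chips, spc=10):
    """Autocorrelation of a sine-phased BOC(1,1) baseband signal:
    `chips` is a ±1 spreading sequence, `spc` samples per chip, with one
    square-wave subcarrier period per chip."""
    sub = np.where(np.arange(spc) < spc // 2, 1.0, -1.0)  # square subcarrier
    s = np.repeat(chips, spc) * np.tile(sub, len(chips))
    return np.correlate(s, s, mode='full') / s.size
```

The negative lobe at half a subcarrier period of lag is exactly the structure that makes BOC tracking ambiguous under multipath, and hence what multipath-mitigating correlator designs work around.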
Fu, Shaotong
2018-01-01
In order to dilute the harmful smoke and dust at the working face during roadway driving in a mine shaft, a combination of local mine ventilation is often adopted when conditions permit. However, there is no definite method to determine the distance between the fan and the working face. Considering the smoke concentration, the size of the working face, the ventilation time, the wind speed at the working face and the principle of suction volume, this paper analyzes the long-drawn, short-pressure ventilation scheme and presents an optimal formula for the distance between the exhaust ventilator and the working face, $L_o = \dfrac{(v_f t)^{Z}\, S_o^{\,\exists/(z\beta)}}{3.398\, S_W^{\,1/(zA)}}$. The paper then presents a reference distance for different wind-speed requirements of a project.
International Nuclear Information System (INIS)
Yoneda, Kazuhiro; Tonouchi, Shigemasa
1992-01-01
In a survey of the natural radiation distribution, carried out to examine which measuring method is useful, the γ-ray dose rates obtained by the survey-meter method, the in-situ measuring method and the soil-sampling method were compared. Between the in-situ measuring method and the survey-meter method, the correlation Y=0.986X+5.73 (r=0.903, n=18, P<0.01) was obtained, a high correlation with a slope of nearly 1. Between the survey-meter method and the soil-sampling method, the correlation Y=1.297X-10.30 (r=0.966, n=20, P<0.01) was obtained, also a high correlation, but for the dose-rate contribution, disparities of 36% in the U series, 6% in the Th series and 20% in K-40 were observed. For surveys of the natural radiation distribution, the survey-meter method used in combination with the in-situ measuring method or the soil-sampling method is suitable. (author)
Energy Technology Data Exchange (ETDEWEB)
Buta, A. [Caen Univ., 14 (France). Lab. de Physique Corpusculaire; Institute of Atomic Physics, Bucharest (Romania)]; Angelique, J.C.; Bizard, G.; Brou, R.; Cussol, D. [Caen Univ., 14 (France). Lab. de Physique Corpusculaire]; Auger, G.; Cabot, C. [Grand Accelerateur National d'Ions Lourds (GANIL), 14 - Caen (France)]; Cassagnou, Y. [CEA Centre d'Etudes de Saclay, 91 - Gif-sur-Yvette (France). Dept. d'Astrophysique, de la Physique des Particules, de la Physique Nucleaire et de l'Instrumentation Associee]; Crema, E. [Caen Univ., 14 (France). Lab. de Physique Corpusculaire; Sao Paulo Univ., SP (Brazil). Inst. de Fisica]; El Masri, Y. [Louvain Univ., Louvain-la-Neuve (Belgium). Unite de Physique Nucleaire]; and others
1996-09-01
Measuring the in-plane flow parameter appears to be a promising method to gain information on the equation of state of nuclear matter. A new method, based on particle-particle azimuthal correlations is proposed. This method does not require the knowledge of the reaction plane. The collisions Zn+Ni and Ar+Al are presented as an example. (K.A.).
DISTANCES TO DARK CLOUDS: COMPARING EXTINCTION DISTANCES TO MASER PARALLAX DISTANCES
International Nuclear Information System (INIS)
Foster, Jonathan B.; Jackson, James M.; Stead, Joseph J.; Hoare, Melvin G.; Benjamin, Robert A.
2012-01-01
We test two different methods of using near-infrared extinction to estimate distances to dark clouds in the first quadrant of the Galaxy using large near-infrared (Two Micron All Sky Survey and UKIRT Infrared Deep Sky Survey) surveys. Very long baseline interferometry parallax measurements of masers around massive young stars provide the most direct and bias-free measurement of the distance to these dark clouds. We compare the extinction distance estimates to these maser parallax distances. We also compare these distances to kinematic distances, including recent re-calibrations of the Galactic rotation curve. The extinction distance methods agree with the maser parallax distances (within the errors) between 66% and 100% of the time (depending on method and input survey) and between 85% and 100% of the time outside of the crowded Galactic center. Although the sample size is small, extinction distance methods reproduce maser parallax distances better than kinematic distances; furthermore, extinction distance methods do not suffer from the kinematic distance ambiguity. This validation gives us confidence that these extinction methods may be extended to additional dark clouds where maser parallaxes are not available.
The Moulded Site Data (MSD) wind correlation method: description and assessment
Energy Technology Data Exchange (ETDEWEB)
King, C.; Hurley, B.
2004-12-01
The long-term wind resource at a potential windfarm site may be estimated by correlating short-term on-site wind measurements with data from a regional meteorological station. A correlation method developed at Airtricity is described in sufficient detail to be reproduced. An assessment of its performance is also described; the results may serve as a guide to expected accuracy when using the method as part of an annual electricity production estimate for a proposed windfarm. (Author)
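The correlate-short-term-with-reference workflow above is the classic measure-correlate-predict (MCP) pattern; a generic linear sketch follows. This is not the proprietary MSD recipe itself, only the standard regression baseline it builds on.

```python
import numpy as np

def mcp_linear(site_short, ref_short, ref_long):
    """Generic measure-correlate-predict: regress short-term on-site wind
    speeds on the concurrent reference record, then apply the fit to the
    long-term reference series to estimate the long-term site resource."""
    slope, intercept = np.polyfit(ref_short, site_short, 1)
    r = np.corrcoef(ref_short, site_short)[0, 1]
    return slope * ref_long + intercept, r
```

The correlation coefficient `r` of the concurrent period is the usual diagnostic of how much trust to place in the extrapolated long-term estimate.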
Mahalanobis Distance Based Iterative Closest Point
DEFF Research Database (Denmark)
Hansen, Mads Fogtmann; Blas, Morten Rufus; Larsen, Rasmus
2007-01-01
the notion of a Mahalanobis distance map upon a point set with associated covariance matrices which, in addition to providing correlation-weighted distance, implicitly provides a method for assigning correspondence during alignment. This distance map provides an easy formulation of the ICP problem that permits a fast optimization. Initially, the covariance matrices are set to the identity matrix, and all shapes are aligned to a randomly selected shape (equivalent to standard ICP). From this point the algorithm iterates between the steps: (a) obtain the mean shape and new estimates of the covariance matrices from the aligned shapes, (b) align shapes to the mean shape. Three different methods for estimating the mean shape with associated covariance matrices are explored in the paper. The proposed methods are validated experimentally on two separate datasets (IMM face dataset and femur bones). The superiority of ICP
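The correspondence step implied by a Mahalanobis distance map can be sketched in a few lines: each point is matched to the target point minimising the covariance-weighted squared distance, so anisotropic uncertainty can override the plain Euclidean nearest neighbour. The geometry below is a toy assumption, not data from the paper.

```python
import numpy as np

def mahalanobis_correspondences(P, Q, covs):
    """For each point p in P, return the index of the point q in Q that
    minimises (p - q)^T C_q^{-1} (p - q), with C_q the covariance
    associated with q (the correspondence step of a Mahalanobis ICP)."""
    matches = []
    for p in P:
        d = [float((p - q) @ np.linalg.inv(C) @ (p - q))
             for q, C in zip(Q, covs)]
        matches.append(int(np.argmin(d)))
    return matches
```

With identity covariances this reduces to standard (Euclidean) ICP correspondence, matching the paper's initialisation; shrinking a covariance along one axis penalises deviations along that axis and can flip the match.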
ORDERED WEIGHTED DISTANCE MEASURE
Institute of Scientific and Technical Information of China (English)
Zeshui XU; Jian CHEN
2008-01-01
The aim of this paper is to develop an ordered weighted distance (OWD) measure, which is the generalization of some widely used distance measures, including the normalized Hamming distance, the normalized Euclidean distance, the normalized geometric distance, the max distance, the median distance and the min distance, etc. Moreover, the ordered weighted averaging operator, the generalized ordered weighted aggregation operator, the ordered weighted geometric operator, the averaging operator, the geometric mean operator, the ordered weighted square root operator, the square root operator, the max operator, the median operator and the min operator are also special cases of the OWD measure. Some methods depending on the input arguments are given to determine the weights associated with the OWD measure. The prominent characteristic of the OWD measure is that it can relieve (or intensify) the influence of unduly large or unduly small deviations on the aggregation results by assigning them low (or high) weights. This desirable characteristic makes the OWD measure very suitable for use in many practical fields, including group decision making, medical diagnosis, data mining, and pattern recognition. Finally, based on the OWD measure, we develop a group decision making approach and illustrate it with a numerical example.
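The OWD construction described above has a compact form: sort the absolute deviations in descending order, then take a weighted power mean. The function below is a plausible rendering of that definition (the exact parameterisation in the paper may differ), and the special cases in the test follow directly from the weight choices the abstract lists.

```python
import numpy as np

def owd(x, y, w, lam=1.0):
    """Ordered weighted distance: absolute deviations |x_i - y_i| are
    sorted in descending order before weighting, so the weight vector w
    acts on ranked deviations rather than on fixed coordinates."""
    d = np.sort(np.abs(np.asarray(x, float) - np.asarray(y, float)))[::-1]
    return float(np.sum(np.asarray(w, float) * d ** lam) ** (1.0 / lam))
```

Putting all weight on the first (largest) deviation gives the max distance, all weight on the last gives the min distance, and uniform weights recover the normalized Hamming (lam = 1) or normalized Euclidean (lam = 2) distance.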
Experimental study on reactivity measurement in thermal reactor by polarity correlation method
International Nuclear Information System (INIS)
Yasuda, Hideshi
1977-11-01
An experimental study of the polarity correlation method for measuring the reactivity of a thermal reactor, especially one with a long prompt-neutron lifetime such as a graphite- or heavy-water-moderated core, is reported. The techniques of reactor kinetics experiments are briefly reviewed; they fall into two groups, one characterized by artificial disturbance of the reactor and the other by the natural fluctuation inherent in it. The fluctuation of the neutron count rate is explained using F. de Hoffmann's stochastic method, and correlation functions for the count-rate fluctuation are shown. Experimental results of the polarity correlation method applied to β/l measurements in the graphite-moderated SHE core and the light-water-moderated JMTRC and JRR-4 cores, and to the measurement of the SHE shutdown reactivity margin, are presented. The measured values were in good agreement with those from a pulsed neutron method over the reactivity range from critical to -12 dollars. Conditional polarity correlation experiments in SHE at -20 cents and -100 cents are demonstrated; the prompt-neutron decay constants agreed with those obtained from the polarity correlation experiments. Results of experiments measuring a large negative reactivity of -52 dollars in SHE by the pulsed neutron, rod drop and source multiplication methods are given. It is concluded that the polarity and conditional polarity correlation methods are fully applicable to noise analysis of a low-power thermal reactor with a long prompt-neutron lifetime. (Nakai, Y.)
Wimmers, Paul F; Fung, Cha-Chi
2008-06-01
The finding of case or content specificity in medical problem solving moved the focus of research away from generalisable skills towards the importance of content knowledge. However, controversy about the content dependency of clinical performance and the generalisability of skills remains. This study aimed to explore the relative impact of both perspectives (case specificity and generalisable skills) on different components (history taking, physical examination, communication) of clinical performance within and across cases. Data from a clinical performance examination (CPX) taken by 350 Year 3 students were used in a correlated traits-correlated methods (CTCM) approach using confirmatory factor analysis, whereby 'traits' refers to generalisable skills and 'methods' to individual cases. The baseline CTCM model was analysed and compared with four nested models using structural equation modelling techniques. The CPX consisted of three skills components and five cases. Comparison of the four different models with the least-restricted baseline CTCM model revealed that a model with uncorrelated generalisable-skills factors and correlated case-specific knowledge factors represented the data best. The generalisable processes found in history taking, physical examination and communication were responsible for half of the explained variance, in comparison with the variance related to case specificity. Conclusions: Pure knowledge-based and pure skill-based perspectives on clinical performance both seem too one-dimensional, and new evidence supports the idea that a substantial amount of variance contributes to both aspects of performance. It can be concluded that generalisable skills and specialised knowledge go hand in hand: both are essential aspects of clinical performance.
A general method dealing with correlations in uncertainty propagation in fault trees
International Nuclear Information System (INIS)
Qin Zhang
1989-01-01
This paper deals with the correlations among the failure probabilities (frequencies) of not only identical basic events but also other basic events in a fault tree. It presents a general and simple method to include these correlations in uncertainty propagation. Two examples illustrate the method and show that neglecting these correlations results in a large underestimation of the top-event failure probability (frequency): one is the failure of the primary pump in a chemical reactor cooling system; the other is an accident to a road transport truck carrying toxic waste. (author)
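The underestimation effect reported above is easy to demonstrate by Monte Carlo on the smallest possible case: an AND gate of two components whose failure probabilities share the same epistemic uncertainty. The lognormal parameters below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def and_gate_means(n=100_000, seed=0):
    """Monte Carlo sketch: mean top-event probability of an AND gate of two
    components with uncertain failure probability p. Sampling the two
    inputs independently drops the Var(p) term, underestimating the mean."""
    rng = np.random.default_rng(seed)
    p = rng.lognormal(np.log(1e-3), 1.0, n)       # uncertain probability
    independent = float((p * rng.permutation(p)).mean())  # correlation ignored
    correlated = float((p * p).mean())                    # identical events
    return independent, correlated
```

For a lognormal with log-scale sigma = 1, the exact ratio E[p²]/E[p]² is e^{sigma²} = e, so ignoring the correlation understates the mean top-event probability by nearly a factor of three in this toy case.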
Directory of Open Access Journals (Sweden)
Katarina Pucelj
2006-12-01
I would like to underline the role and importance of knowledge, which individuals acquire as a result of a learning process and experience. I have established that a form of learning such as distance learning definitely contributes to higher learning quality and leads to an innovative, dynamic and knowledge-based society. Knowledge and skills enable individuals to cope with and manage changes, solve problems and also create new knowledge. Traditional learning practices face new circumstances; new and modern technologies appear, which enable quick and quality-oriented knowledge implementation. The centre of the learning process in distance learning is to increase the quality of life of citizens and their competitiveness on the labour market, and to ensure higher economic growth. Intellectual capital represents the biggest capital of each society, and knowledge is the key factor of success for everybody who is fully aware of this. The flexibility, openness and willingness of people to follow new IT solutions form a suitable environment for developing distance learning and for deciding to take it up.
Matrix elements and few-body calculations within the unitary correlation operator method
International Nuclear Information System (INIS)
Roth, R.; Hergert, H.; Papakonstantinou, P.
2005-01-01
We employ the unitary correlation operator method (UCOM) to construct correlated, low-momentum matrix elements of realistic nucleon-nucleon interactions. The dominant short-range central and tensor correlations induced by the interaction are included explicitly by a unitary transformation. Using correlated momentum-space matrix elements of the Argonne V18 potential, we show that the unitary transformation eliminates the strong off-diagonal contributions caused by the short-range repulsion and the tensor interaction and leaves a correlated interaction dominated by low-momentum contributions. We use correlated harmonic oscillator matrix elements as input for no-core shell model calculations for few-nucleon systems. Compared to the bare interaction, the convergence properties are dramatically improved. The bulk of the binding energy can already be obtained in very small model spaces or even with a single Slater determinant. Residual long-range correlations, not treated explicitly by the unitary transformation, can easily be described in model spaces of moderate size, allowing for fast convergence. By varying the range of the tensor correlator we are able to map out the Tjon line and can in turn constrain the optimal correlator ranges. (orig.)
Matrix elements and few-body calculations within the unitary correlation operator method
International Nuclear Information System (INIS)
Roth, R.; Hergert, H.; Papakonstantinou, P.; Neff, T.; Feldmeier, H.
2005-01-01
We employ the unitary correlation operator method (UCOM) to construct correlated, low-momentum matrix elements of realistic nucleon-nucleon interactions. The dominant short-range central and tensor correlations induced by the interaction are included explicitly by an unitary transformation. Using correlated momentum-space matrix elements of the Argonne V18 potential, we show that the unitary transformation eliminates the strong off-diagonal contributions caused by the short-range repulsion and the tensor interaction and leaves a correlated interaction dominated by low-momentum contributions. We use correlated harmonic oscillator matrix elements as input for no-core shell model calculations for few-nucleon systems. Compared to the bare interaction, the convergence properties are dramatically improved. The bulk of the binding energy can already be obtained in very small model spaces or even with a single Slater determinant. Residual long-range correlations, not treated explicitly by the unitary transformation, can easily be described in model spaces of moderate size allowing for fast convergence. By varying the range of the tensor correlator we are able to map out the Tjon line and can in turn constrain the optimal correlator ranges
Ghanbarzadeh, Mitra; Aminghafari, Mina
2015-05-01
This article studies the prediction of periodically correlated process using wavelet transform and multivariate methods with applications to climatological data. Periodically correlated processes can be reformulated as multivariate stationary processes. Considering this fact, two new prediction methods are proposed. In the first method, we use stepwise regression between the principal components of the multivariate stationary process and past wavelet coefficients of the process to get a prediction. In the second method, we propose its multivariate version without principal component analysis a priori. Also, we study a generalization of the prediction methods dealing with a deterministic trend using exponential smoothing. Finally, we illustrate the performance of the proposed methods on simulated and real climatological data (ozone amounts, flows of a river, solar radiation, and sea levels) compared with the multivariate autoregressive model. The proposed methods give good results as we expected.
Giant HII regions as distance indicators
International Nuclear Information System (INIS)
Melnick, Jorge; Terlevich, Robert; Moles, Mariano
1987-01-01
The correlations between the integrated Hβ luminosities, the velocity widths of the nebular lines and the metallicities of giant HII regions and HII galaxies are demonstrated to provide powerful distance indicators. They are calibrated on a homogeneous sample of giant HII regions with well determined distances and applied to distant HII galaxies to obtain a value of H₀ = 95 ± 10 for the Hubble parameter, consistent with the value obtained by the Tully-Fisher technique. The effect of Malmquist bias and other systematic effects on the HII region method are discussed in detail. (Author)
International Nuclear Information System (INIS)
Faerman, V A; Cheremnov, A G; Avramchuk, V V; Luneva, E E
2014-01-01
In the current work, the relevance of developing nondestructive testing methods for pipeline leak detection is considered. It is shown that acoustic emission testing is currently one of the most widespread leak detection methods. The main disadvantage of this method is that it cannot be applied to monitoring long pipeline sections, which in turn complicates and slows down the inspection of the line pipe sections of main pipelines. The prospects of developing alternative techniques and methods based on the spectral analysis of signals are considered, and their possible application to leak detection on the basis of the correlation method is outlined. As an alternative, the calculation of the time-frequency correlation function is proposed. This function represents the correlation between the spectral components of the analyzed signals. In this work, the technique of time-frequency correlation function calculation is described. Experimental data are presented that demonstrate the obvious advantage of the time-frequency correlation function over the simple correlation function: it is more effective at suppressing the noise components in the frequency range of the useful signal, which makes the maximum of the function more pronounced. The main drawback of applying time-frequency correlation function analysis to leak detection problems is the great number of calculations, which may further increase pipeline inspection time. However, this drawback can be partially mitigated by the development and implementation of efficient (including parallel) algorithms for computing the fast Fourier transform using the computer's central processing unit and graphics processing unit.
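The abstract does not reproduce the authors' formulation; as an illustrative sketch (all function names and parameters are hypothetical), a time-frequency correlation between two sensor signals can be formed by computing a magnitude spectrogram for each and then correlating the two spectrograms frequency bin by frequency bin:

```python
import cmath
import math

def stft(x, win, hop):
    # magnitude spectrogram via a naive DFT of Hann-windowed frames
    frames = []
    for start in range(0, len(x) - win + 1, hop):
        seg = [x[start + n] * (0.5 - 0.5 * math.cos(2 * math.pi * n / win))
               for n in range(win)]
        frames.append([abs(sum(seg[n] * cmath.exp(-2j * math.pi * k * n / win)
                               for n in range(win))) for k in range(win // 2)])
    return frames

def tf_correlation(a, b, win=16, hop=8):
    # average, over frequency bins, of the Pearson correlation between the
    # two spectrogram time series in that bin (bins with no variance count 0)
    sa, sb = stft(a, win, hop), stft(b, win, hop)
    n = len(sa)
    score = 0.0
    for k in range(win // 2):
        ta = [f[k] for f in sa]
        tb = [f[k] for f in sb]
        ma, mb = sum(ta) / n, sum(tb) / n
        cov = sum((u - ma) * (v - mb) for u, v in zip(ta, tb))
        va = sum((u - ma) ** 2 for u in ta)
        vb = sum((v - mb) ** 2 for v in tb)
        if va > 0 and vb > 0:
            score += cov / math.sqrt(va * vb)
    return score / (win // 2)
```

For correlated leak signals the per-bin correlations are high across the useful band, while broadband ambient noise averages toward zero, which is the noise-suppression property the abstract describes.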
Libraries for spectrum identification: Method of normalized coordinates versus linear correlation
International Nuclear Information System (INIS)
Ferrero, A.; Lucena, P.; Herrera, R.G.; Dona, A.; Fernandez-Reyes, R.; Laserna, J.J.
2008-01-01
In this work, an easy solution based directly on linear algebra is proposed for obtaining the relation between a spectrum and a spectrum base. The solution is based on the algebraic determination of the coordinates of an unknown spectrum with respect to a spectral library base. The identification capacity of this algebraic method and of the linear correlation method is compared using experimental spectra of polymers. Unlike linear correlation (where the presence of impurities may decrease the discrimination capacity), this method allows the existence of a mixture of several substances in a sample to be detected quantitatively and, consequently, impurities to be taken into account for improving the identification
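The paper's exact procedure is not reproduced in the abstract; a minimal sketch of the underlying algebra (all names hypothetical) determines the coordinates of an unknown spectrum with respect to a spectral-library base by solving the least-squares normal equations:

```python
def coordinates(spectrum, library):
    # solve the normal equations G c = b, where G[i][j] = <L_i, L_j> and
    # b[i] = <L_i, s>, giving the least-squares coordinates of s in the base
    m = len(library)
    G = [[sum(u * v for u, v in zip(library[i], library[j])) for j in range(m)]
         for i in range(m)]
    b = [sum(u * v for u, v in zip(library[i], spectrum)) for i in range(m)]
    # Gaussian elimination with partial pivoting
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(G[r][col]))
        G[col], G[piv] = G[piv], G[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, m):
            f = G[r][col] / G[col][col]
            for c in range(col, m):
                G[r][c] -= f * G[col][c]
            b[r] -= f * b[col]
    coords = [0.0] * m
    for r in range(m - 1, -1, -1):
        coords[r] = (b[r] - sum(G[r][c] * coords[c]
                                for c in range(r + 1, m))) / G[r][r]
    return coords
```

A pure substance yields one dominant coordinate, while a mixture yields several non-negligible coordinates, which is how such a method can detect impurities quantitatively.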
Xu, Lianyun; Hou, Zhende; Qin, Yuwen
2002-05-01
Because some composite materials, thin-film materials, and biomaterials are very thin, and some of them are flexible, the classical methods for measuring their Young's moduli, by mounting extensometers on specimens, are not applicable. A bi-image method based on image correlation for measuring Young's moduli is developed in this paper. The measuring precision achieved is one order of magnitude higher than that of general digital image correlation (the single-image method). In this way, the Young's modulus of an SS301 stainless steel thin tape with a thickness of 0.067 mm is measured, and the moduli of polyester fiber films, a kind of flexible sheet with a thickness of 0.25 mm, are also measured.
A cross-correlation method to search for gravitational wave bursts with AURIGA and Virgo
Bignotto, M.; Bonaldi, M.; Camarda, M.; Cerdonio, M.; Conti, L.; Drago, M.; Falferi, P.; Liguori, N.; Longo, S.; Mezzena, R.; Mion, A.; Ortolan, A.; Prodi, G. A.; Re, V.; Salemi, F.; Taffarello, L.; Vedovato, G.; Vinante, A.; Vitale, S.; Zendri, J. -P.; Acernese, F.; Alshourbagy, Mohamed; Amico, Paolo; Antonucci, Federica; Aoudia, S.; Astone, P.; Avino, Saverio; Baggio, L.; Ballardin, G.; Barone, F.; Barsotti, L.; Barsuglia, M.; Bauer, Th. S.; Bigotta, Stefano; Birindelli, Simona; Boccara, Albert-Claude; Bondu, F.; Bosi, Leone; Braccini, Stefano; Bradaschia, C.; Brillet, A.; Brisson, V.; Buskulic, D.; Cagnoli, G.; Calloni, E.; Campagna, Enrico; Carbognani, F.; Cavalier, F.; Cavalieri, R.; Cella, G.; Cesarini, E.; Chassande-Mottin, E.; Clapson, A-C; Cleva, F.; Coccia, E.; Corda, C.; Corsi, A.; Cottone, F.; Coulon, J. -P.; Cuoco, E.; D'Antonio, S.; Dari, A.; Dattilo, V.; Davier, M.; Rosa, R.; Del Prete, M.; Di Fiore, L.; Di Lieto, A.; Emilio, M. Di Paolo; Di Virgilio, A.; Evans, M.; Fafone, V.; Ferrante, I.; Fidecaro, F.; Fiori, I.; Flaminio, R.; Fournier, J. -D.; Frasca, S.; Frasconi, F.; Gammaitoni, L.; Garufi, F.; Genin, E.; Gennai, A.; Giazotto, A.; Giordano, L.; Granata, V.; Greverie, C.; Grosjean, D.; Guidi, G.; Hamdani, S.U.; Hebri, S.; Heitmann, H.; Hello, P.; Huet, D.; Kreckelbergh, S.; La Penna, P.; Laval, M.; Leroy, N.; Letendre, N.; Lopez, B.; Lorenzini, M.; Loriette, V.; Losurdo, G.; Mackowski, J. -M.; Majorana, E.; Man, C. 
N.; Mantovani, M.; Marchesoni, F.; Marion, F.; Marque, J.; Martelli, F.; Masserot, A.; Menzinger, F.; Milano, L.; Minenkov, Y.; Moins, C.; Moreau, J.; Morgado, N.; Mosca, S.; Mours, B.; Neri, I.; Nocera, F.; Pagliaroli, G.; Palomba, C.; Paoletti, F.; Pardi, S.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Piergiovanni, F.; Pinard, L.; Poggiani, R.; Punturo, M.; Puppo, P.; Rapagnani, P.; Regimbau, T.; Remillieux, A.; Ricci, F.; Ricciardi, I.; Rocchi, A.; Rolland, L.; Romano, R.; Ruggi, P.; Russo, G.; Solimeno, S.; Spallicci, A.; Swinkels, B. L.; Tarallo, M.; Terenzi, R.; Toncelli, A.; Tonelli, M.; Tournefier, E.; Travasso, F.; Vajente, G.; van den Brand, J. F. J.; van der Putten, S.; Verkindt, D.; Vetrano, F.; Vicere, A.; Vinet, J. -Y.; Vocca, H.; Yvert, M.
2008-01-01
We present a method to search for transient gravitational waves using a network of detectors with different spectral and directional sensitivities: the interferometer Virgo and the bar detector AURIGA. The data analysis method is based on the measurements of the correlated energy in the network by
The systematic error of temperature noise correlation measurement method and self-calibration
International Nuclear Information System (INIS)
Tian Hong; Tong Yunxian
1993-04-01
The turbulent transport behavior of fluid noise and the effect of noise on the velocity measurement system have been studied. The systematic error of the velocity measurement system is analyzed. A theoretical calibration method is proposed, which makes time-correlation velocity measurement an absolute measurement method. The theoretical results are in good agreement with experiments
Starosta, K.; Dewald, A.; Dunomes, A.; Adrich, P.; Amthor, A. M.; Baumann, T.; Bazin, D.; Bowen, M.; Brown, B. A.; Chester, A.; Gade, A.; Galaviz, D.; Glasmacher, T.; Ginter, T.; Hausmann, M.
2007-01-01
Transition rate measurements are reported for the first and the second 2+ states in N=Z 64Ge. The experimental results are in excellent agreement with large-scale Shell Model calculations applying the recently developed GXPF1A interactions. Theoretical analysis suggests that 64Ge is a collective gamma-soft anharmonic vibrator. The measurement was done using the Recoil Distance Method (RDM) and a unique combination of state-of-the-art instruments at the National Superconducting Cyclotron Labor...
Joh, C.H.; Arentze, T.A.; Timmermans, H.J.P.
2001-01-01
The application of a multidimensional sequence alignment method for classifying activity travel patterns is reported. The method was developed as an alternative to the existing classification methods suggested in the transportation literature. The relevance of the multidimensional sequence alignment
Valassi, A
2014-01-01
We discuss the effect of large positive correlations in the combinations of several measurements of a single physical quantity using the Best Linear Unbiased Estimate (BLUE) method. We suggest a new approach for comparing the relative weights of the different measurements in their contributions to the combined knowledge about the unknown parameter, using the well-established concept of Fisher information. We argue, in particular, that one contribution to information comes from the collective interplay of the measurements through their correlations and that this contribution cannot be attributed to any of the individual measurements alone. We show that negative coefficients in the BLUE weighted average invariably indicate the presence of a regime of high correlations, where the effect of further increasing some of these correlations is that of reducing the error on the combined estimate. In these regimes, we stress that the correlations provided as input to BLUE combinations need to be assessed with extreme ca...
Two-Way Gene Interaction From Microarray Data Based on Correlation Methods.
Alavi Majd, Hamid; Talebi, Atefeh; Gilany, Kambiz; Khayyer, Nasibeh
2016-06-01
Gene networks have generated a massive explosion in the development of high-throughput techniques for monitoring various aspects of gene activity. Networks offer a natural way to model interactions between genes, and extracting gene network information from high-throughput genomic data is an important and difficult task. The purpose of this study is to construct a two-way gene network based on parametric and nonparametric correlation coefficients. The first step in constructing a gene co-expression network is to score all pairs of gene vectors. The second step is to select a score threshold and connect all gene pairs whose scores exceed this value. In this foundation-application study, we constructed two-way gene networks using nonparametric methods, such as Spearman's rank correlation coefficient and Blomqvist's measure, and compared them with Pearson's correlation coefficient. We surveyed six genes of venous thrombosis disease, made a matrix entry representing the score for the corresponding gene pair, and obtained two-way interactions using Pearson's correlation, Spearman's rank correlation, and Blomqvist's coefficient. Finally, these methods were compared with Cytoscape, based on BIND, and with Gene Ontology, based on molecular function visual methods; R software version 3.2 and Bioconductor were used to perform these methods. Based on the Pearson and Spearman correlations, the results were the same and were confirmed by the Cytoscape and GO visual methods; however, Blomqvist's coefficient was not confirmed by the visual methods. Some of the correlation-coefficient results do not agree with the visualization; the reason may be the small amount of data.
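As a hedged illustration of the two-step construction described above (score all gene pairs, then connect pairs above a threshold), the following sketch builds the edge list of a co-expression network from Pearson or Spearman scores; the gene names and threshold are hypothetical:

```python
import math
from itertools import combinations

def pearson(x, y):
    # Pearson product-moment correlation of two equal-length vectors
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))
    return num / den

def spearman(x, y):
    # Spearman rank correlation = Pearson correlation of the ranks
    rank = lambda v: [sorted(v).index(e) + 1 for e in v]  # ties ignored for brevity
    return pearson(rank(x), rank(y))

def coexpression_edges(expr, score, threshold=0.8):
    # expr: {gene: expression vector}; connect pairs whose |score| >= threshold
    return [(g, h) for g, h in combinations(sorted(expr), 2)
            if abs(score(expr[g], expr[h])) >= threshold]
```

Swapping the `score` argument between `pearson` and `spearman` reproduces the kind of method comparison the study performs.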
An improved correlated sampling method for calculating correction factor of detector
International Nuclear Information System (INIS)
Wu Zhen; Li Junli; Cheng Jianping
2006-01-01
In the case of a small detector lying inside a bulk medium, there are two problems in calculating the correction factors of the detector. One is that the detector is too small for particles to arrive at and collide in; the other is that the ratio of two quantities is not accurate enough. The method discussed in this paper, which combines correlated sampling with modified particle-collision auto-importance sampling and has been implemented on the MCNP-4C platform, can solve both problems. In addition, three other variance reduction techniques are each combined with correlated sampling to calculate a simple model of the correction factors of detectors. The results prove that, although all the variance reduction techniques combined with correlated sampling improve the calculating efficiency, the method combining modified particle-collision auto-importance sampling with correlated sampling is the most efficient one. (authors)
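The MCNP-specific combination of techniques is not reproduced here; a minimal sketch of the core correlated-sampling idea, on a toy ratio-of-integrals problem (all names hypothetical), uses the same random numbers for both quantities so that their statistical errors largely cancel in the ratio:

```python
import random

def correlated_sampling_ratio(f, g, n=20000, seed=1):
    # estimate E[f(X)] / E[g(X)] for X ~ U(0, 1) using the SAME samples
    # for both integrands; the shared noise cancels in the ratio, which is
    # the essence of correlated sampling for correction-factor estimation
    rng = random.Random(seed)
    xs = [rng.random() for _ in range(n)]
    return sum(map(f, xs)) / sum(map(g, xs))
```

Estimating the numerator and denominator with independent sample sets would leave their errors uncorrelated and make the ratio noticeably noisier at the same cost.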
International Nuclear Information System (INIS)
Giraud, B.G.; Heumann, J.M.; Lapedes, A.S.
1999-01-01
The fact that correlation does not imply causation is well known. Correlation between variables at two sites does not imply that the two sites directly interact, because, e.g., correlation between distant sites may be induced by chaining of correlation between a set of intervening, directly interacting sites. Such 'noncausal correlation' is well understood in statistical physics: an example is long-range order in spin systems, where spins which have only short-range direct interactions, e.g., the Ising model, display correlation at a distance. It is less well recognized that such long-range 'noncausal' correlations can in fact be stronger than the magnitude of any causal correlation induced by direct interactions. We call this phenomenon superadditive correlation (SAC). We demonstrate this counterintuitive phenomenon by explicit examples in (i) a model spin system and (ii) a model continuous variable system, where both models are such that two variables have multiple intervening pathways of indirect interaction. We apply the technique known as decimation to explain SAC as an additive, constructive interference phenomenon between the multiple pathways of indirect interaction. We also explain the effect using a definition of the collective mode describing the intervening spin variables. Finally, we show that the SAC effect is mirrored in information theory, and is true for mutual information measures in addition to correlation measures. Generic complex systems typically exhibit multiple pathways of indirect interaction, making SAC a potentially widespread phenomenon. This affects, e.g., attempts to deduce interactions by examination of correlations, as well as, e.g., hierarchical approximation methods for multivariate probability distributions, which introduce parameters based on successive orders of correlation. copyright 1999 The American Physical Society
Du, Lei; Huang, Heng; Yan, Jingwen; Kim, Sungeun; Risacher, Shannon L; Inlow, Mark; Moore, Jason H; Saykin, Andrew J; Shen, Li
2016-05-15
Structured sparse canonical correlation analysis (SCCA) models have been used to identify imaging genetic associations. These models use either group lasso or graph-guided fused lasso to conduct feature selection and feature grouping simultaneously. The group lasso based methods require prior knowledge to define the groups, which limits their capability when prior knowledge is incomplete or unavailable. The graph-guided methods overcome this drawback by using the sample correlation to define the constraint. However, they are sensitive to the sign of the sample correlation, which could introduce undesirable bias if the sign is wrongly estimated. We introduce a novel SCCA model with a new penalty, and develop an efficient optimization algorithm. Our method has a strong upper bound for the grouping effect for both positively and negatively correlated features. We show that our method performs as well as or better than three competing SCCA models on both synthetic and real data. In particular, our method identifies stronger canonical correlations and better canonical loading patterns, showing its promise for revealing interesting imaging genetic associations. The Matlab code and sample data are freely available at http://www.iu.edu/∼shenlab/tools/angscca/ (contact: shenli@iu.edu). Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Directory of Open Access Journals (Sweden)
Mohamed Mehana
2016-06-01
The development of shale reservoirs has brought a paradigm shift in the worldwide energy equation. This entails developing robust techniques to properly evaluate and unlock the potential of those reservoirs. The application of Nuclear Magnetic Resonance techniques to fluid typing and property estimation is well developed in conventional reservoirs. However, shale reservoir characteristics such as pore size, organic matter, clay content, wettability, adsorption, and mineralogy limit the applicability of the interpretation methods and correlations used there. Some of these limitations include the inapplicability of the controlling equations, which were derived assuming a fast relaxation regime, the overlap of the peaks of different fluids, and the lack of robust correlations for estimating fluid properties in shale. This study presents a state-of-the-art review of the main contributions on fluid typing methods and correlations on both the experimental and theoretical sides. The study covers Dual Tw, Dual Te, and doping agent applications, and the T1-T2, D-T2 and T2sec vs. T1/T2 methods. In addition, the estimation of fluid properties such as density, viscosity, and the gas-oil ratio is discussed. This study investigates the applicability of these methods along with the current fluid-property correlations and their limitations, and recommends the appropriate method and correlation capable of tackling shale heterogeneity.
Method of correlation operators in the theory of a system of particles with strong interactions
International Nuclear Information System (INIS)
Kuz'min, Y.M.
1985-01-01
A similarity transformation of the density matrix is performed with the help of the correlation operator. This does not change the value of the partition function. A method of calculating the transformed partition function with the help of a finite translation operator is given. A general system of coupled equations is obtained from which the matrix elements of correlation operators of increasing order can be found
Application of the spectral-correlation method for diagnostics of cellulose paper
Kiesewetter, D.; Malyugin, V.; Reznik, A.; Yudin, A.; Zhuravleva, N.
2017-11-01
The spectral-correlation method was described for diagnostics of optically inhomogeneous biological objects and materials of natural origin. The interrelation between parameters of the studied objects and parameters of the cross correlation function of speckle patterns produced by scattering of coherent light at different wavelengths is shown for thickness, optical density and internal structure of the material. A detailed study was performed for cellulose electric insulating paper with different parameters.
Gupta, Tulika; Rajeshkumar, Thayalan; Rajaraman, Gopalan
2014-07-28
Density functional studies have been performed on ten different {Gd(III)-radical} complexes exhibiting both ferro- and antiferromagnetic exchange interactions with an aim to assess a suitable exchange-correlation functional within the DFT formalism. This study has also been extended to probe the mechanism of magnetic coupling and to develop suitable magneto-structural correlations for this pair. Our method assessments reveal the following order of increasing accuracy for the evaluation of J values compared to experimental coupling constants: B(40HF)LYP < X3LYP < B3LYP < B2PLYP. Grimme's double-hybrid functional is found to be superior to the other functionals tested, followed very closely by the conventional hybrid B3LYP functional. On the basis set front, our calculations reveal that the incorporation of relativistic effects is important in these calculations, and the relativistically corrected effective core potential (ECP) basis set is found to yield better Js compared to other methods. The supposedly empty 5d/6s/6p orbitals of Gd(III) are found to play an important role in the mechanism of magnetic coupling, and the different contributions to the exchange terms are probed using Molecular Orbital (MO) and Natural Bond Orbital (NBO) analysis. Magneto-structural correlations for Gd-O distances, Gd-O-N angles and Gd-O-N-C dihedral angles are developed, where the bond angle and dihedral angle parameters are found to dictate the sign and strength of the magnetic coupling in this series.
Motivation in Distance Learning
Directory of Open Access Journals (Sweden)
Daniela Brečko
1996-12-01
It is estimated that motivation is one of the most important psychological functions, making it possible for people to learn even in conditions that do not meet their needs. In distance learning, a form of autonomous learning, motivation is of utmost importance. When adopting this method of learning, individuals have to stimulate themselves and take learning decisions on their own. These specific characteristics of distance learning should be taken into account, and thus all the different factors maintaining the motivation of participants in distance learning are to be included. Moreover, motivation in distance learning can be stimulated with specific learning materials, clear instructions and guidelines, efficient feedback, personal contact between tutors and participants, stimulating learning letters, telephone calls, encouraging letters, and a positive relationship between tutor and participant.
International Nuclear Information System (INIS)
Honda, Kazuya; Uehara, Tamotsu; Arai, Yoshinori; Kashima, Masahiro; Tsukimura, Naoki; Honda, Masahiko; Iwai, Kazuo; Terakado, Masaaki; Shinoda, Koji
2002-01-01
A study was conducted to evaluate the efficacy of pumping manipulation treatment for closed lock of the temporomandibular joint (TMJ) disorder using limited cone beam X-ray CT for dental use. The subjects were 20 patients with TMJ closed lock. Arthrography and pumping manipulation treatment were performed, and the correlation between maximal mouth opening and arthrographic findings was examined. Arthrography showed 16 cases of anterior disk displacement, and 4 cases of sideways displacement. Disk configuration showed 15 abnormal cases and 3 cases of disk perforation. Before treatment, mouth opening distance was 24.2 mm and 1 week after treatment it was 34.4 mm. After 3 months this had improved significantly to 41.0 mm. Comparison of mouth opening distance with arthrographic findings showed that disk perforation was significantly different after 3 months. These results suggest that pumping manipulation treatment might be useful in patients with TMJ closed lock without internal derangement or disk perforation. (author)
Yu, Yifei; Luo, Linqing; Li, Bo; Guo, Linfeng; Yan, Jize; Soga, Kenichi
2015-10-01
The measured distance error caused by double peaks in the BOTDRs (Brillouin optical time domain reflectometers) system is a kind of Brillouin scattering spectrum (BSS) deformation, discussed and simulated for the first time in the paper, to the best of the authors' knowledge. Double peak, as a kind of Brillouin spectrum deformation, is important in the enhancement of spatial resolution, measurement accuracy, and crack detection. Due to the variances of the peak powers of the BSS along the fiber, the measured starting point of a step-shape frequency transition region is shifted and results in distance errors. Zero-padded short-time-Fourier-transform (STFT) can restore the transition-induced double peaks in the asymmetric and deformed BSS, thus offering more accurate and quicker measurements than the conventional Lorentz-fitting method. The recovering method based on the double-peak detection and corresponding BSS deformation can be applied to calculate the real starting point, which can improve the distance accuracy of the STFT-based BOTDR system.
A New Wavelet Threshold Determination Method Considering Interscale Correlation in Signal Denoising
Directory of Open Access Journals (Sweden)
Can He
2015-01-01
Due to its simple calculation and good denoising effect, the wavelet threshold denoising method has been widely used in signal denoising. In this method, the threshold is an important parameter that affects the denoising effect. In order to improve the denoising effect of the existing methods, a new threshold considering interscale correlation is presented. Firstly, a new correlation index is proposed based on the propagation characteristics of the wavelet coefficients. Then, a threshold determination strategy is obtained using the new index. At the end of the paper, a simulation experiment is given to verify the effectiveness of the proposed method. In the experiment, four benchmark signals are used as test signals. Simulation results show that the proposed method achieves a good denoising effect under various signal types, noise intensities, and thresholding functions.
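The paper's interscale correlation index is not given in the abstract; as a minimal sketch of the wavelet threshold denoising it builds on (all names hypothetical), here is a one-level Haar transform with soft thresholding at the universal threshold:

```python
import math

def haar_forward(x):
    # one level of the orthonormal Haar transform; len(x) must be even
    s = [(a + b) / math.sqrt(2) for a, b in zip(x[0::2], x[1::2])]
    d = [(a - b) / math.sqrt(2) for a, b in zip(x[0::2], x[1::2])]
    return s, d

def haar_inverse(s, d):
    x = []
    for a, b in zip(s, d):
        x += [(a + b) / math.sqrt(2), (a - b) / math.sqrt(2)]
    return x

def soft(w, t):
    # soft thresholding: shrink toward zero by t
    return math.copysign(max(abs(w) - t, 0.0), w)

def denoise(x):
    s, d = haar_forward(x)
    # noise scale from the median absolute detail coefficient,
    # then the universal threshold sigma * sqrt(2 log N)
    sigma = sorted(abs(w) for w in d)[len(d) // 2] / 0.6745
    t = sigma * math.sqrt(2 * math.log(len(x)))
    return haar_inverse(s, [soft(w, t) for w in d])
```

An interscale-aware threshold, as the paper proposes, would additionally modulate `t` per coefficient using the correlation of detail coefficients across scales; that refinement is omitted here.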
The multiphonon method as a dynamical approach to octupole correlations in deformed nuclei
International Nuclear Information System (INIS)
Piepenbring, R.
1986-09-01
The octupole correlations in nuclei are studied within the framework of the multiphonon method which is mainly the exact diagonalization of the total Hamiltonian in the space spanned by collective phonons. This treatment takes properly into account the Pauli principle. It is a microscopic approach based on a reflection symmetry of the potential. The spectroscopic properties of double even and odd-mass nuclei are nicely reproduced. The multiphonon method appears as a dynamical approach to octupole correlations in nuclei which can be compared to other models based on stable octupole deformation. 66 refs
Waller, Niels G
2016-01-01
For a fixed set of standardized regression coefficients and a fixed coefficient of determination (R-squared), an infinite number of predictor correlation matrices will satisfy the implied quadratic form. I call such matrices fungible correlation matrices. In this article, I describe an algorithm for generating positive definite (PD), positive semidefinite (PSD), or indefinite (ID) fungible correlation matrices that have a random or fixed smallest eigenvalue. The underlying equations of this algorithm are reviewed from both algebraic and geometric perspectives. Two simulation studies illustrate that fungible correlation matrices can be profitably used in Monte Carlo research. The first study uses PD fungible correlation matrices to compare penalized regression algorithms. The second study uses ID fungible correlation matrices to compare matrix-smoothing algorithms. R code for generating fungible correlation matrices is presented in the supplemental materials.
A Bayes linear Bayes method for estimation of correlated event rates.
Quigley, John; Wilson, Kevin J; Walls, Lesley; Bedford, Tim
2013-12-01
Typically, full Bayesian estimation of correlated event rates can be computationally challenging since estimators are intractable. When estimation of event rates represents one activity within a larger modeling process, there is an incentive to develop more efficient inference than provided by a full Bayesian model. We develop a new subjective inference method for correlated event rates based on a Bayes linear Bayes model under the assumption that events are generated from a homogeneous Poisson process. To reduce the elicitation burden we introduce homogenization factors to the model and, as an alternative to a subjective prior, an empirical method using the method of moments is developed. Inference under the new method is compared against estimates obtained under a full Bayesian model, which takes a multivariate gamma prior, where the predictive and posterior distributions are derived in terms of well-known functions. The mathematical properties of both models are presented. A simulation study shows that the Bayes linear Bayes inference method and the full Bayesian model provide equally reliable estimates. An illustrative example, motivated by a problem of estimating correlated event rates across different users in a simple supply chain, shows how ignoring the correlation leads to biased estimation of event rates. © 2013 Society for Risk Analysis.
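The Bayes linear Bayes machinery and the multivariate gamma prior are not reproduced here; as a minimal sketch of the conjugate gamma-Poisson update that underlies event-rate estimation in such models (prior parameters hypothetical):

```python
def gamma_poisson_update(alpha, beta, events, exposure):
    # conjugate update: rate ~ Gamma(alpha, beta) prior,
    # events ~ Poisson(rate * exposure) likelihood
    return alpha + events, beta + exposure

def posterior_mean(alpha, beta):
    # mean of a Gamma(alpha, beta) distribution (rate parameterization)
    return alpha / beta
```

The paper's contribution lies in propagating information *between* correlated rates; ignoring that correlation and updating each rate independently, as above, is exactly the biased shortcut the illustrative supply-chain example warns against.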
Restoring method for missing data of spatial structural stress monitoring based on correlation
Zhang, Zeyu; Luo, Yaozhi
2017-07-01
Long-term monitoring of spatial structures is of great importance for a full understanding of their performance and safety. Missing stretches in the monitoring data will affect the data analysis and safety assessment of the structure. Based on the long-term monitoring data of the steel structure of the Hangzhou Olympic Center Stadium, the correlation between the stress changes of the measuring points is studied, and an interpolation method for the missing stress data is proposed. Stress data of correlated measuring points over the 3 months of the season in which data are missing are selected for fitting the correlation. Daytime and nighttime data are fitted separately for interpolation. For simple linear regression, when a single point's correlation coefficient is 0.9 or more, the average interpolation error is about 5%. For multiple linear regression, the interpolation accuracy does not increase significantly once the number of correlated points exceeds 6. The stress baseline value of the construction step should be calculated before interpolating missing data in the construction stage, and the average error is then within 10%. The interpolation error for continuous missing data is slightly larger than that for discrete missing data. The data missing rate for this method should preferably not exceed 30%. Finally, a measuring point's missing monitoring data is restored to verify the validity of the method.
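As a hedged illustration of the simple-linear-regression case described above (one target measuring point regressed on one correlated point; all names and data hypothetical):

```python
def fit_line(x, y):
    # least-squares slope and intercept for simple linear regression
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

def restore_missing(target, reference):
    # fit on epochs where the target stress is present (None marks a gap),
    # then predict the gaps from the correlated reference point
    pairs = [(r, t) for r, t in zip(reference, target) if t is not None]
    slope, intercept = fit_line([p[0] for p in pairs], [p[1] for p in pairs])
    return [t if t is not None else slope * r + intercept
            for r, t in zip(reference, target)]
```

The multiple-regression variant simply replaces the single reference series with several correlated ones; per the abstract, adding more than about six brings little further accuracy.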
Reliability analysis based on a novel density estimation method for structures with correlations
Directory of Open Access Journals (Sweden)
Baoyu LI
2017-06-01
Full Text Available Estimating the Probability Density Function (PDF) of the performance function is a direct way to perform structural reliability analysis, since the failure probability can be easily obtained by integration over the failure domain. However, efficiently estimating the PDF remains an open problem. The existing fractional-moment-based maximum entropy approach provides an advanced method for PDF estimation, but its main shortcoming is that it restricts the reliability analysis to structures with independent inputs. In practice, structures with correlated inputs are common in engineering. This paper therefore improves the maximum entropy method and applies the Unscented Transformation (UT) technique to compute the fractional moments of the performance function for structures with correlations; the UT is an efficient moment-estimation method for models with arbitrary inputs. The proposed method can precisely estimate the probability distributions of performance functions for structures with correlations. Moreover, the number of function evaluations required in the reliability analysis, which is determined by the UT, is small. Several examples are employed to illustrate the accuracy and advantages of the proposed method.
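The unscented transformation step can be sketched as a sigma-point estimate of a (possibly fractional) moment of the performance function. This is a generic symmetric-sigma-set UT under assumed Gaussian inputs, not necessarily the exact variant used in the paper:

```python
import numpy as np

def ut_fractional_moment(mean, cov, g, q, kappa=0.0):
    """Sigma-point (unscented) estimate of E[g(X)**q] for X ~ N(mean, cov).

    Illustrative sketch: g should be positive at the sigma points when q
    is fractional.
    """
    mean = np.asarray(mean, float)
    cov = np.asarray(cov, float)
    n = len(mean)
    L = np.linalg.cholesky((n + kappa) * cov)   # scaled matrix square root
    pts = [mean] + [mean + L[:, i] for i in range(n)] \
                 + [mean - L[:, i] for i in range(n)]
    w = [kappa / (n + kappa)] + [1.0 / (2 * (n + kappa))] * (2 * n)
    return sum(wk * g(x) ** q for wk, x in zip(w, pts))
```

Because the symmetric sigma set reproduces the input mean and covariance exactly, the estimate is exact for performance functions that are linear (q = 1) or for second moments of linear functions (q = 2), which makes it easy to sanity-check.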
A DATA FIELD METHOD FOR URBAN REMOTELY SENSED IMAGERY CLASSIFICATION CONSIDERING SPATIAL CORRELATION
Directory of Open Access Journals (Sweden)
Y. Zhang
2016-06-01
Full Text Available Spatial correlation between pixels is important information for remotely sensed imagery classification. Data field methods and spatial autocorrelation statistics have been utilized to describe and model the spatial information of local pixels. The original data field method can represent the spatial interactions of neighbourhood pixels effectively. However, its focus on measuring the grey-level change between the central pixel and the neighbourhood pixels exaggerates the contribution of the central pixel to the whole local window. Geary's C has also been proven to characterise and quantify the spatial correlation between each pixel and its neighbourhood pixels well, but the extracted objects are poorly delineated, with a distracting salt-and-pepper effect of isolated misclassified pixels. To correct this defect, we introduce the data field method for filtering and noise limitation. Moreover, the original data field method is enhanced by considering each pixel in the window as the central pixel and computing statistical characteristics between it and its neighbourhood pixels. The last step employs a support vector machine (SVM) for the classification of multiple features (e.g. the spectral feature and the spatial correlation feature). In order to validate the effectiveness of the developed method, experiments are conducted on different remotely sensed images containing multiple complex object classes. The results show that the developed method outperforms the traditional method in terms of classification accuracy.
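Geary's C, the spatial autocorrelation statistic mentioned above, can be computed for a raster with rook contiguity as in this sketch. This is the global form on a whole array; the paper applies a local, per-window variant, so the function name and weighting scheme here are illustrative:

```python
import numpy as np

def gearys_c(img):
    """Global Geary's C for a 2-D array with rook (4-neighbour) weights.

    C > 1 indicates negative spatial autocorrelation, C < 1 positive.
    """
    x = np.asarray(img, float)
    n = x.size
    denom = ((x - x.mean()) ** 2).sum()
    # each horizontal/vertical neighbour pair is counted twice (w_ij symmetric)
    num = 2 * ((x[:, 1:] - x[:, :-1]) ** 2).sum() \
        + 2 * ((x[1:, :] - x[:-1, :]) ** 2).sum()
    W = 2 * x[:, 1:].size + 2 * x[1:, :].size
    return (n - 1) * num / (2 * W * denom)
```

A checkerboard (maximal local contrast) gives C above 1, while a smooth gradient gives C well below 1.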
Iritani, Takumi
2018-03-01
Both direct and HAL QCD methods are currently used to study hadron interactions in lattice QCD. In the direct method, the eigen-energy of the two-particle system is measured from the temporal correlation. Due to the contamination of excited states, however, the direct method suffers from a fake eigen-energy problem, which we call the "mirage problem," while the HAL QCD method can extract information from all elastic states by using the spatial correlation. In this work, we further investigate systematic uncertainties of the HAL QCD method, such as the quark source operator dependence, the convergence of the derivative expansion of the non-local interaction kernel, and the single-baryon saturation, which are found to be well controlled. We also confirm the consistency between the HAL QCD method and Lüscher's finite-volume formula. Based on the HAL QCD potential, we quantitatively confirm that the mirage plateau in the direct method is indeed caused by the contamination of excited states.
International Nuclear Information System (INIS)
Lissillour, R.; Guerillot, C.R.
1975-01-01
The self-correlated field method is based on inserting, into the group product wave function, pair functions built upon a set of correlated "local" functions and of "nonlocal" functions. This work is an application to three-electron systems. The effects of the outer electron on the inner pair are studied. The total electronic energy and some intermediate results, such as pair energies and Coulomb and exchange "correlated" integrals, are given. The results are always better than those given by conventional SCF computations and reach the same level of accuracy as those given by more laborious methods used in correlation studies. (auth)
Research on criticality analysis method of CNC machine tools components under fault rate correlation
Gui-xiang, Shen; Xian-zhuo, Zhao; Zhang, Ying-zhi; Chen-yu, Han
2018-02-01
In order to determine the key components of CNC machine tools under fault rate correlation, a system component criticality analysis method is proposed. Based on fault mechanism analysis, the component fault relations are determined, and an adjacency matrix is introduced to describe them. Then, the fault structure relations are organized hierarchically using the interpretive structural model (ISM). Assuming that the impact of a fault obeys a Markov process, the fault association matrix is described and transformed, and the PageRank algorithm is used to determine the relative influence values; combined with the component fault rates under time correlation, a comprehensive fault rate can be obtained. Based on the fault mode frequency and fault influence, the criticality of the components under fault rate correlation is determined, and the key components are identified to provide a sound basis for formulating reliability assurance measures. Finally, taking machining centers as an example, the effectiveness of the method is verified.
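The PageRank step on a fault adjacency matrix might be sketched as below. The propagation convention, damping value, and handling of components with no outgoing influence are assumptions, and the combination with time-correlated fault rates is omitted:

```python
import numpy as np

def pagerank_influence(adj, d=0.85, tol=1e-12):
    """Relative influence of components from a fault-propagation adjacency matrix.

    adj[i, j] = 1 means a fault in component j propagates to component i.
    Simple sketch: columns with no outgoing edges keep their mass undistributed.
    """
    A = np.asarray(adj, float)
    col = A.sum(axis=0)
    M = A / np.where(col == 0, 1.0, col)   # column-normalize where possible
    n = A.shape[0]
    r = np.full(n, 1.0 / n)
    while True:
        r_new = (1 - d) / n + d * (M @ r)
        if np.abs(r_new - r).sum() < tol:
            return r_new
        r = r_new
```

A component that many other components' faults propagate to receives a higher influence value, which is the ordering the criticality analysis needs.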
Determination of velocity vector angles using the directional cross-correlation method
DEFF Research Database (Denmark)
Kortbek, Jacob; Jensen, Jørgen Arendt
2005-01-01
A method for determining both velocity magnitude and angle in any direction is suggested. The method uses focusing along the velocity direction and cross-correlation for finding the correct velocity magnitude. The angle is found from beamforming directional signals in a number of directions and then selecting the angle with the highest normalized correlation between directional signals. The approach is investigated using Field II simulations and data from the experimental ultrasound scanner RASMUS, with a parabolic flow having a peak velocity of 0.3 m/s. A 7 MHz linear array transducer is used... The estimation depends on the time (k tprf) between signals to correlate, and a proper choice varies with flow angle and flow velocity. One performance example is given with a fixed value of k tprf for all flow angles. The angle estimation on measured data for flow at 60° to 90° yields a probability of valid estimates between 68% and 98...
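The angle-selection rule (pick the beam direction whose successive directional signals correlate best) can be sketched as follows. The function names and the synthetic signal pairs are assumptions; real directional signals would come from beamforming along each candidate angle:

```python
import numpy as np

def norm_xcorr_peak(a, b):
    """Peak of the normalized cross-correlation between two 1-D signals."""
    a = (a - a.mean()) / (a.std() * len(a))
    b = (b - b.mean()) / b.std()
    return np.correlate(a, b, mode="full").max()

def estimate_flow_angle(directional_signals):
    """directional_signals: {angle_deg: (signal_emission_1, signal_emission_2)}."""
    return max(directional_signals,
               key=lambda ang: norm_xcorr_peak(*directional_signals[ang]))
```

Along the true flow direction the second emission is (nearly) a shifted copy of the first, so its normalized correlation peak approaches 1, while misaligned directions decorrelate.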
Dual linear structured support vector machine tracking method via scale correlation filter
Li, Weisheng; Chen, Yanquan; Xiao, Bin; Feng, Chen
2018-01-01
Adaptive tracking-by-detection methods based on the structured support vector machine (SVM) have performed well on recent visual tracking benchmarks. However, these methods do not adopt an effective strategy for object scale estimation, which limits their overall tracking performance. We present a tracking method based on a dual linear structured support vector machine (DLSSVM) with a discriminative scale correlation filter. The collaborative tracker, composed of a DLSSVM model and a scale correlation filter, obtains good results in tracking target position and scale estimation. The fast Fourier transform is applied for detection. Extensive experiments show that our tracking approach outperforms many popular top-ranking trackers. On a benchmark of 100 challenging video sequences, the average precision of the proposed method is 82.8%.
International Nuclear Information System (INIS)
Tan, Cheng-Yang; Fermilab
2006-01-01
One common way to measure the emittance of an electron beam is the slits method. The usual approach for analyzing the data is to calculate an emittance that is a subset of the parent emittance. This paper shows an alternative: the method of correlations, which ties the parameters derived from the beamlets to the actual parameters of the parent emittance. For parent distributions that are Gaussian, this method yields exact results. For non-Gaussian beam distributions, it yields an effective emittance that can serve as a yardstick for emittance comparisons.
Directory of Open Access Journals (Sweden)
Lei Zeng
2016-01-01
Full Text Available Cone beam computed tomography (CBCT) is a new detection method for 3D nondestructive testing of printed circuit boards (PCBs). However, the obtained 3D images of PCBs exhibit low contrast because of several factors, such as metal artifacts and beam hardening, that arise during CBCT imaging. Histogram equalization (HE) algorithms cannot effectively extend the gray difference between a substrate and a metal in 3D CT images of PCBs, and their reinforcing effects are insignificant. To address this shortcoming, this study proposes an image enhancement algorithm based on gray and gray-distance double-weighted HE. Considering the characteristics of 3D CT images of PCBs, the proposed algorithm uses a gray and gray-distance double-weighting strategy to change the shape of the original image histogram: it suppresses the grayscale of the nonmetallic substrate and expands the grayscale of wires and other metals, thereby enhancing the gray difference between substrate and metal and highlighting metallic materials. The flexibility and advantages of the proposed algorithm are confirmed by analyses and experimental results.
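A generic weighted histogram equalization, of which the paper's gray-and-distance double weighting is a more elaborate instance, might look like the sketch below. The weighting vector here is a plain per-gray-level factor, not the paper's exact scheme:

```python
import numpy as np

def weighted_hist_equalize(img, level_weights):
    """Histogram equalization with per-gray-level weights (8-bit image).

    level_weights: array of 256 factors that suppress or emphasize
    chosen gray ranges before the CDF is built.
    """
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    hist *= level_weights
    cdf = hist.cumsum()
    cdf /= cdf[-1]
    lut = np.round(255 * cdf).astype(np.uint8)   # monotone gray-level mapping
    return lut[img]
```

For instance, `level_weights = np.linspace(0.2, 2.0, 256)` would emphasize bright (metal) levels at the expense of the darker substrate, in the spirit of the suppression/expansion described above.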
System reliability with correlated components: Accuracy of the Equivalent Planes method
Roscoe, K.; Diermanse, F.; Vrouwenvelder, A.C.W.M.
2015-01-01
Computing system reliability when system components are correlated presents a challenge because it usually requires solving multi-fold integrals numerically, which is generally infeasible due to the computational cost. In Dutch flood defense reliability modeling, an efficient method for computing
System reliability with correlated components : Accuracy of the Equivalent Planes method
Roscoe, K.; Diermanse, F.; Vrouwenvelder, T.
2015-01-01
Computing system reliability when system components are correlated presents a challenge because it usually requires solving multi-fold integrals numerically, which is generally infeasible due to the computational cost. In Dutch flood defense reliability modeling, an efficient method for computing
On the boundary conditions and optimization methods in integrated digital image correlation
Kleinendorst, S.M.; Verhaegh, B.J.; Hoefnagels, J.P.M.; Ruybalid, A.; van der Sluis, O.; Geers, M.G.D.; Lamberti, L.; Lin, M.-T.; Furlong, C.; Sciammarella, C.
2018-01-01
In integrated digital image correlation (IDIC) methods, attention must be paid not only to the use of a correct geometric and material model, but also to making the boundary conditions in the FE simulation match the real experiment. Another issue is the robustness and convergence of the IDIC
Yarlagadda, Anuradha; Murthy, J.V.R.; Krishna Prasad, M.H.M.
2015-01-01
In the computer vision community, categorization of a person's facial image into various age groups is rarely precise and has not been pursued effectively. To address this problem, which is an important area of research, the present paper proposes an innovative age group classification method based on the Correlation Fractal Dimension of the complex facial image. Wrinkles appear on the face with aging, thereby changing the facial edges of the image. The proposed method is rotation an...
Improvement of the accuracy of noise measurements by the two-amplifier correlation method.
Pellegrini, B; Basso, G; Fiori, G; Macucci, M; Maione, I A; Marconcini, P
2013-10-01
We present a novel method for device noise measurement, based on a two-channel cross-correlation technique and a direct "in situ" measurement of the transimpedance of the device under test (DUT), which allows improved accuracy with respect to what is available in the literature, in particular when the DUT is a nonlinear device. Detailed analytical expressions for the total residual noise are derived, and an experimental investigation of the increased accuracy provided by the method is performed.
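The core of a two-channel cross-correlation noise measurement is that uncorrelated per-channel amplifier noise averages out of the cross spectrum, while the common DUT noise survives. A minimal sketch with Welch-style segment averaging (no windowing or transimpedance calibration, which the actual method includes):

```python
import numpy as np

def cross_psd(x, y, nfft):
    """Segment-averaged magnitude of the cross power spectrum of x and y."""
    nseg = len(x) // nfft
    acc = np.zeros(nfft // 2 + 1, dtype=complex)
    for k in range(nseg):
        X = np.fft.rfft(x[k * nfft:(k + 1) * nfft])
        Y = np.fft.rfft(y[k * nfft:(k + 1) * nfft])
        acc += X * np.conj(Y)   # uncorrelated channel noise cancels on average
    return np.abs(acc / nseg) / nfft
```

With two channels that share the DUT noise but have independent amplifier noise, the cross estimate converges to the common (DUT) spectrum, below the single-channel auto spectrum.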
Directory of Open Access Journals (Sweden)
Shumanova M.V.
2015-03-01
Full Text Available The fish-salting process has been studied by the method of photon correlation spectroscopy. The distribution of salt concentration in the solution and in herring flesh with skin has been found, and the diffusion coefficients and salt concentrations used for creating a mathematical model of the salting technology have been worked out. The possibility of determining by this method the coefficient of dynamic viscosity of solutions and of different media (minced meat, etc.) has also been considered.
Clinical correlative evaluation of an iterative method for reconstruction of brain SPECT images
International Nuclear Information System (INIS)
Nobili, Flavio; Vitali, Paolo; Calvini, Piero; Bollati, Francesca; Girtler, Nicola; Delmonte, Marta; Mariani, Giuliano; Rodriguez, Guido
2001-01-01
Background: Brain SPECT and PET investigations have shown discrepancies in Alzheimer's disease (AD) when considering data deriving from deeply located structures, such as the mesial temporal lobe. These discrepancies could be due to a variety of factors, including substantial differences in gamma-cameras and underlying technology. Mesial temporal structures are deeply located within the brain, and the commonly used Filtered Back-Projection (FBP) technique does not fully take into account either the physical parameters of gamma-cameras or the geometry of collimators. In order to overcome these limitations, alternative reconstruction methods have been proposed, such as the iterative method of the Conjugate Gradients with modified matrix (CG). However, the clinical applications of these methods have so far been only anecdotal. The present study was planned to compare perfusional SPECT data as derived from the conventional FBP method and from the iterative CG method, which takes into account the geometrical and physical characteristics of the gamma-camera, by a correlative approach with neuropsychology. Methods: Correlations were compared between perfusion of the hippocampal region, as achieved by both the FBP and the CG reconstruction methods, and a short-memory test (Selective Reminding Test, SRT), specifically addressing one of its functions. A brain-dedicated camera (CERASPECT) was used for SPECT studies with 99m Tc-hexamethylpropylene-amine-oxime in 23 consecutive patients (mean age: 74.2±6.5) with mild (Mini-Mental Status Examination score ≥15, mean 20.3±3), probable AD. Counts from a hippocampal region in each hemisphere were referred to the average thalamic counts. Results: Hippocampal perfusion significantly correlated with the MMSE score with similar statistical significance (p<0.01) between the two reconstruction methods. Correlation between hippocampal perfusion and the SRT score was better with the CG method (r=0.50 for both hemispheres, p<0.01) than with
Clinical correlative evaluation of an iterative method for reconstruction of brain SPECT images
Energy Technology Data Exchange (ETDEWEB)
Nobili, Flavio E-mail: fnobili@smartino.ge.it; Vitali, Paolo; Calvini, Piero; Bollati, Francesca; Girtler, Nicola; Delmonte, Marta; Mariani, Giuliano; Rodriguez, Guido
2001-08-01
Background: Brain SPECT and PET investigations have shown discrepancies in Alzheimer's disease (AD) when considering data deriving from deeply located structures, such as the mesial temporal lobe. These discrepancies could be due to a variety of factors, including substantial differences in gamma-cameras and underlying technology. Mesial temporal structures are deeply located within the brain, and the commonly used Filtered Back-Projection (FBP) technique does not fully take into account either the physical parameters of gamma-cameras or the geometry of collimators. In order to overcome these limitations, alternative reconstruction methods have been proposed, such as the iterative method of the Conjugate Gradients with modified matrix (CG). However, the clinical applications of these methods have so far been only anecdotal. The present study was planned to compare perfusional SPECT data as derived from the conventional FBP method and from the iterative CG method, which takes into account the geometrical and physical characteristics of the gamma-camera, by a correlative approach with neuropsychology. Methods: Correlations were compared between perfusion of the hippocampal region, as achieved by both the FBP and the CG reconstruction methods, and a short-memory test (Selective Reminding Test, SRT), specifically addressing one of its functions. A brain-dedicated camera (CERASPECT) was used for SPECT studies with {sup 99m}Tc-hexamethylpropylene-amine-oxime in 23 consecutive patients (mean age: 74.2{+-}6.5) with mild (Mini-Mental Status Examination score {>=}15, mean 20.3{+-}3), probable AD. Counts from a hippocampal region in each hemisphere were referred to the average thalamic counts. Results: Hippocampal perfusion significantly correlated with the MMSE score with similar statistical significance (p<0.01) between the two reconstruction methods. Correlation between hippocampal perfusion and the SRT score was better with the CG method (r=0.50 for both hemispheres, p<0
Wang Hao; Gao Wen; Huang Qingming; Zhao Feng
2010-01-01
Similarity measures based on correlation have been used extensively for matching tasks. However, traditional correlation-based image matching methods are sensitive to rotation and scale changes. This paper presents a fast correlation-based method for matching two images with large rotation and significant scale changes. Multiscale oriented corner correlation (MOCC) is used to evaluate the degree of similarity between the feature points. The method is rotation invariant and capable of matchin...
International Nuclear Information System (INIS)
Pastore, J; Moler, E; Ballarin, V
2007-01-01
To quantify the efficiency of a segmentation method, it is necessary to perform validation experiments, which generally consist of comparing the result obtained against the expected result. The most direct method of validation is a simple visual comparison between the automatic segmentation and a segmentation obtained manually by a specialist, but this method does not guarantee robustness. This work presents a new similarity parameter between a segmented object and a control object that combines a measure of spatial similarity through the Hausdorff metric with the difference in the contour areas based on the symmetric difference between sets.
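The two ingredients of the proposed similarity parameter (the Hausdorff distance and the symmetric-difference area) can each be sketched directly. How the paper combines them into a single parameter is not specified in this abstract, so only the pieces are shown, with illustrative names:

```python
import numpy as np

def hausdorff(points_a, points_b):
    """Symmetric Hausdorff distance between two 2-D point sets ((n, 2) arrays)."""
    d = np.linalg.norm(points_a[:, None, :] - points_b[None, :, :], axis=2)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

def symmetric_difference_ratio(mask_a, mask_b):
    """Area of the symmetric difference relative to the union, for binary masks."""
    xor = np.logical_xor(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return xor / union
```

Identical segmentations give 0 for both quantities; the ratio grows toward 1 as the automatic and manual regions stop overlapping.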
International Nuclear Information System (INIS)
Wu Shengxing; Chen Xudong; Zhou Jikai
2012-01-01
Highlights: ► Tensile strength of concrete increases with increasing strain rate. ► Strain rate sensitivity of the tensile strength of concrete depends on the test method. ► The high stressed volume method can correlate results from various test methods. - Abstract: This paper presents a comparative experiment and analysis of three different methods (direct tension, splitting tension and four-point loading flexural tests) for determining the tensile strength of concrete under low and intermediate strain rates. A further objective of this investigation is to analyze the suitability of the high stressed volume approach and the Weibull effective volume method for correlating the results of the different tensile tests of concrete. The test results show that the strain rate sensitivity of tensile strength depends on the type of test: the splitting tensile strength of concrete is more sensitive to an increase in strain rate than the flexural and direct tensile strengths. The high stressed volume method can be used to obtain a tensile strength value of concrete free from the influence of the characteristics of tests and specimens. However, the Weibull effective volume method is inadequate for describing the failure of concrete specimens determined by different testing methods.
Akdenur, B; Okkesum, S; Kara, S; Günes, S
2009-11-01
In this study, electromyography signals sampled from children undergoing orthodontic treatment were used to estimate the effect of an orthodontic trainer on the anterior temporal muscle. A novel data normalization method, called the correlation- and covariance-supported normalization method (CCSNM), based on the correlation and covariance between features in a data set, is proposed to provide predictive guidance for the orthodontic technique. The method was tested in two stages: first, data normalization using the CCSNM; second, prediction of the normalized values of the anterior temporal muscles using an artificial neural network (ANN) with a Levenberg-Marquardt learning algorithm. The data set consists of electromyography signals from right anterior temporal muscles, recorded from 20 children aged 8-13 years with class II malocclusion. The signals were recorded at the start and end of a 6-month treatment. In order to train and test the ANN, two-fold cross-validation was used. The CCSNM was compared with four normalization methods: minimum-maximum normalization, z-score, decimal scaling, and line-base normalization. To assess the performance of the proposed method, prevalent performance measures were examined: the mean square error and mean absolute error as mathematical measures, and the statistical relation factor R2 and the average deviation. The results show that the CCSNM was the best of the normalization methods for estimating the effect of the trainer.
Chen, Zhiwen
2017-01-01
Zhiwen Chen aims to develop advanced fault detection (FD) methods for the monitoring of industrial processes. With the ever-increasing demands on reliability and safety in industrial processes, fault detection has become an important issue. Although model-based fault detection theory has been well studied in past decades, its applications to large-scale industrial processes are limited because it is difficult to build accurate models. Furthermore, motivated by the limitations of existing data-driven FD methods, novel canonical correlation analysis (CCA) and projection-based methods are proposed from the perspectives of process input and output data, less engineering effort and wide application scope. For performance evaluation of FD methods, a new index is also developed. Contents: A New Index for Performance Evaluation of FD Methods; CCA-based FD Method for the Monitoring of Stationary Processes; Projection-based FD Method for the Monitoring of Dynamic Processes; Benchmark Study and Real-Time Implementat...
DEFF Research Database (Denmark)
Ohm-Laursen, Line; Barington, Torben
2007-01-01
...-23*01) from blood B lymphocytes enriched for CD27-positive memory cells. Analyses of 6,912 unique, unselected substitutions showed that in vivo hot and cold spots for the SHM of C and G residues corresponded closely to the target preferences reported for AID in vitro. A detailed analysis of all possible four-nucleotide motifs present on both strands of the V(H) gene showed significant correlations between the substitution frequencies in reverse complementary motifs, suggesting that the SHM machinery targets both strands equally well. An analysis of individual J(H) and D gene segments showed that the substitution rates in G and T residues correlated inversely with the distance to the nearest 3' WRC AID hot spot motif on both the nontranscribed and transcribed strands. This suggests that phase II SHM takes place 5' of the initial AID deamination target and primarily targets T and G residues or, alternatively...
International Nuclear Information System (INIS)
Fukuda, Yoshiyuki; Schrod, Nikolas; Schaffer, Miroslava; Feng, Li Rebekah; Baumeister, Wolfgang; Lucic, Vladan
2014-01-01
Correlative microscopy allows imaging of the same feature over multiple length scales, combining light microscopy with high resolution information provided by electron microscopy. We demonstrate two procedures for coordinate transformation based correlative microscopy of vitrified biological samples applicable to different imaging modes. The first procedure aims at navigating cryo-electron tomography to cellular regions identified by fluorescent labels. The second procedure, allowing navigation of focused ion beam milling to fluorescently labeled molecules, is based on the introduction of an intermediate scanning electron microscopy imaging step to overcome the large difference between cryo-light microscopy and focused ion beam imaging modes. These methods make it possible to image fluorescently labeled macromolecular complexes in their natural environments by cryo-electron tomography, while minimizing exposure to the electron beam during the search for features of interest. - Highlights: • Correlative light microscopy and focused ion beam milling of vitrified samples. • Coordinate transformation based cryo-correlative method. • Improved correlative light microscopy and cryo-electron tomography
1991-03-21
discussion of spectral factorability and motivations for broadband analysis, the report is subdivided into four main sections. In Section 1.0, we...estimates. The motivation for developing our multi-channel deconvolution method was to gain information about seismic sources, most notably, nuclear...with complex constraints for estimating the rupture history. Such methods (applied mostly to data sets that also include strong motion data), were
Cortesi, Nicola; Peña-Angulo, Dhais; Simolo, Claudia; Stepanek, Peter; Brunetti, Michele; Gonzalez-Hidalgo, José Carlos
2014-05-01
One of the key points in the development of the MOTEDAS dataset (see Poster 1 MOTEDAS) in the framework of the HIDROCAES Project (Impactos Hidrológicos del Calentamiento Global en España, Spanish Ministry of Research CGL2011-27574-C02-01) is the set of reference series, for which no generalized metadata exist. In this poster we present an analysis of the spatial variability of monthly minimum and maximum temperatures in the conterminous land of Spain (Iberian Peninsula, IP) using the Correlation Decay Distance function (CDD), with the aim of evaluating, at sub-regional level, the optimal threshold distance between neighbouring stations for producing the set of reference series used in the quality control (see MOTEDAS Poster 1) and the reconstruction (see MOREDAS Poster 3). The CDD analysis for Tmax and Tmin was performed by calculating a monthly-scale correlation matrix over 1981-2010 among the monthly mean values of the maximum (Tmax) and minimum (Tmin) temperature series (with at least 90% of data), free of anomalous data and homogenized (see MOTEDAS Poster 1), obtained from the AEMET archives (National Spanish Meteorological Agency). Monthly anomalies (differences between the data and the 1981-2010 mean) were used to prevent the annual cycle from dominating the annual CDD estimation. For each station and time scale, the common variance r2 (the square of Pearson's correlation coefficient) was calculated between all neighbouring temperature series, and the relation between r2 and distance was modelled according to the following equation (1): Log(r2ij) = b · dij (1), where Log(r2ij) is the common variance between the target series (i) and a neighbouring series (j), dij is the distance between them, and b is the slope of the ordinary least-squares linear regression model, fitted using only the surrounding stations within a starting radius of 50 km and with a minimum of 5 stations required. Finally, monthly, seasonal and annual CDD values were interpolated using the Ordinary Kriging with a
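The no-intercept log-linear fit of equation (1), and the threshold distance it implies for a chosen minimum common variance, can be sketched as follows. The function names are illustrative, and the 50 km selection radius and minimum-station rules are omitted:

```python
import numpy as np

def cdd_slope(r2, dist):
    """Least-squares fit of log(r2) = b * d through the origin (CDD model)."""
    r2 = np.asarray(r2, float)
    dist = np.asarray(dist, float)
    return (dist * np.log(r2)).sum() / (dist ** 2).sum()

def cdd_threshold(b, r2_min):
    """Distance at which the common variance decays to r2_min."""
    return np.log(r2_min) / b
```

With b estimated per station and month, the threshold distance for, say, r2_min = 0.5 gives the radius within which neighbours remain useful for building reference series.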
Starosta, K.; Dewald, A.; Dunomes, A.; Adrich, P.; Amthor, A. M.; Baumann, T.; Bazin, D.; Bowen, M.; Brown, B. A.; Chester, A.; Gade, A.; Galaviz, D.; Glasmacher, T.; Ginter, T.; Hausmann, M.; Horoi, M.; Jolie, J.; Melon, B.; Miller, D.; Moeller, V.; Norris, R. P.; Pissulla, T.; Portillo, M.; Rother, W.; Shimbara, Y.; Stolz, A.; Vaman, C.; Voss, P.; Weisshaar, D.; Zelevinsky, V.
2007-07-01
Transition rate measurements are reported for the 2₁⁺ and 2₂⁺ states in the N=Z nucleus ⁶⁴Ge. The experimental results are in excellent agreement with large-scale shell-model calculations applying the recently developed GXPF1A interactions. The measurement was done using the recoil distance method (RDM) and a unique combination of state-of-the-art instruments at the National Superconducting Cyclotron Laboratory (NSCL). States of interest were populated via an intermediate-energy single-neutron knockout reaction. RDM studies of knockout and fragmentation reaction products hold the promise of reaching far from stability and providing lifetime information for excited states in a wide range of nuclei.
International Nuclear Information System (INIS)
Chun, Moon Hyun; Oh, Jae Guen
1989-01-01
Ten methods of total two-phase pressure drop prediction, based on five existing models and correlations, have been examined for their accuracy and applicability to pressurized water reactor conditions. These methods were tested against 209 experimental data points covering local and bulk boiling conditions. Each correlation was evaluated over different ranges of pressure, mass velocity and quality, and the best-performing models were identified for each data subset. A computer code entitled 'K-TWOPD' has been developed to calculate the total two-phase pressure drop using the best-performing existing correlation for a specific property range, together with a correction factor to compensate for the prediction error of the selected correlation. Assessment of this code shows that the present method fits all the available data within ±11% at a 95% confidence level, compared with ±25% for the existing correlations. (Author)
Shojaeefard, Mohammad Hasan; Khalkhali, Abolfazl; Yarmohammadisatri, Sadegh
2017-06-01
The main purpose of this paper is to propose a new method for designing a Macpherson suspension, based on Sobol indices expressed in terms of the Pearson correlation, which determines the importance of each member to the behaviour of the vehicle suspension. The formulation of the dynamic analysis of the Macpherson suspension system is developed using the suspension members as modified links in order to achieve the desired kinematic behaviour. The mechanical system is replaced with equivalent constrained links, and kinematic laws are then utilised to obtain a new modified geometry of the Macpherson suspension. The equivalent mechanism increases the speed of analysis and reduces its complexity. The ADAMS/CAR software is utilised to simulate a full vehicle, a Renault Logan car, in order to analyse the accuracy of the modified geometry model. An experimental 4-poster test rig is used to validate both the ADAMS/CAR simulation and the analytical geometry model. The Pearson correlation coefficient is applied to analyse the sensitivity of each suspension member with respect to vehicle objective functions such as sprung mass acceleration; the estimation of the Pearson correlation coefficient between variables is also analysed. The Pearson correlation coefficient proves an efficient tool for analysing the vehicle suspension, leading to a better design of the Macpherson suspension system.
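A Pearson-correlation sensitivity ranking of suspension members against an objective (e.g. sprung-mass acceleration) might be sketched as below. The member names and sampling setup are hypothetical, and this simple ranking ignores the Sobol-index machinery of the paper:

```python
import numpy as np

def pearson(x, y):
    """Pearson correlation coefficient between two 1-D samples."""
    xc = x - x.mean()
    yc = y - y.mean()
    return (xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc))

def rank_by_sensitivity(member_samples, objective):
    """Order members by |Pearson correlation| with the objective samples."""
    return sorted(member_samples,
                  key=lambda name: -abs(pearson(member_samples[name], objective)))
```

Members whose sampled parameter variations correlate most strongly (in absolute value) with the objective come first, which is the ordering a designer would inspect.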
International Nuclear Information System (INIS)
Fiebig, H. Rudolf
2002-01-01
We study various aspects of extracting spectral information from time correlation functions of lattice QCD by means of Bayesian inference with an entropic prior, the maximum entropy method (MEM). Correlator functions of a heavy-light meson-meson system serve as a repository for lattice data with diverse statistical quality. Attention is given to spectral mass density functions, inferred from the data, and their dependence on the parameters of the MEM. We propose to employ simulated annealing, or cooling, to solve the Bayesian inference problem, and discuss the practical issues of the approach.
Zheng, W.; Gao, J. M.; Wang, R. X.; Chen, K.; Jiang, Y.
2017-12-01
This paper puts forward a new method of technical characteristics deployment based on Reliability Function Deployment (RFD), developed by analysing the advantages and shortcomings of related research on mechanical reliability design. The matrix decomposition structure of RFD was used to describe the correlation between failure mechanisms, soft failures and hard failures. By considering the correlation of multiple failure modes, the reliability loss of one failure mode with respect to the whole part was defined, and a calculation and analysis model for reliability loss was presented. According to the reliability loss, the reliability index value of the whole part was allocated to each failure mode. On the basis of the deployment of the reliability index value, the inverse reliability method was employed to acquire the values of the technical characteristics. The feasibility and validity of the proposed method were illustrated by a development case of a machining centre's transmission system.
International Nuclear Information System (INIS)
Basovets, S.K.; Krupyanskij, Yu.F.; Kurinov, I.V.; Suzdalev, I.P.; Goldanskij, V.I.; Uporov, I.V.; Shaitan, K.V.; Rubin, A.B.
1988-01-01
A method of Moessbauer Fourier spectroscopy is developed to determine the correlation function of coordinates of a macromolecular system. The method does not require the use of an a priori dynamic model. The application of the method to the analysis of RSMR data for human serum albumin has demonstrated considerable changes in the dynamic behavior of the protein globule when the temperature is changed from 270 to 310 K. The main conclusion of the present work is the simultaneous observation of low-frequency (τ ≥ 10^-9 sec) and high-frequency (τ < 10^-9 sec) large-scale motions, that is, a two-humped distribution of correlation times of protein motions. (orig.)
Determining average yarding distance.
Roger H. Twito; Charles N. Mann
1979-01-01
Emphasis on environmental and esthetic quality in timber harvesting has brought about increased use of complex boundaries of cutting units and a consequent need for a rapid and accurate method of determining the average yarding distance and area of these units. These values, needed for evaluation of road and landing locations in planning timber harvests, are easily and...
DEFF Research Database (Denmark)
Pedersen, Knud Ole Helgesen
1999-01-01
A method for implementing a digital distance relay in the power system is described. Instructions are given on how to program this relay on an 80537-based microcomputer system. The problem is used as a practical case study in the course 53113: Microcomputer applications in the power system. The relay...
Mathematical correlation of modal-parameter-identification methods via system-realization theory
Juang, Jer-Nan
1987-01-01
A unified approach is introduced using system-realization theory to derive and correlate modal-parameter-identification methods for flexible structures. Several different time-domain methods are analyzed and treated. A basic mathematical foundation is presented which provides insight into the field of modal-parameter identification for comparison and evaluation. The relation among various existing methods is established and discussed. This report serves as a starting point to stimulate additional research toward the unification of the many possible approaches for modal-parameter identification.
Mathematical correlation of modal parameter identification methods via system realization theory
Juang, J. N.
1986-01-01
A unified approach is introduced using system realization theory to derive and correlate modal parameter identification methods for flexible structures. Several different time-domain and frequency-domain methods are analyzed and treated. A basic mathematical foundation is presented which provides insight into the field of modal parameter identification for comparison and evaluation. The relation among various existing methods is established and discussed. This report serves as a starting point to stimulate additional research towards the unification of the many possible approaches for modal parameter identification.
Nuclear material enrichment identification method based on cross-correlation and high order spectra
International Nuclear Information System (INIS)
Yang Fan; Wei Biao; Feng Peng; Mi Deling; Ren Yong
2013-01-01
In order to enhance the sensitivity of the nuclear material identification system (NMIS) to changes in nuclear material enrichment, the principle of high-order statistical features is introduced and applied to the traditional NMIS. We present a new enrichment identification method based on cross-correlation and a high-order spectrum algorithm. By applying the identification method to NMIS, 3D graphs characterizing the nuclear material are obtained and can be used as new signatures to identify the enrichment of nuclear materials. The simulation results show that the identification method can suppress background noise and electronic system noise, and improve the sensitivity to enrichment change to exponential order with no modification of the system structure. (authors)
Energy Technology Data Exchange (ETDEWEB)
Wu, Xiaokun; Han, Min; Ming, Dengming, E-mail: dming@fudan.edu.cn [Department of Physiology and Biophysics, School of Life Sciences, Fudan University, Shanghai (China)
2015-10-07
Membrane proteins play critically important roles in many cellular activities such as ion and small-molecule transport, signal recognition, and transduction. In order to fulfill their functions, these proteins must be placed in different membrane environments, and a variety of protein-lipid interactions may affect their behavior. One of the key effects of protein-lipid interactions is their ability to change the dynamic status of membrane proteins, thus adjusting their functions. Here, we present a multi-scaled normal mode analysis (mNMA) method to study the dynamics perturbation imposed on membrane proteins by lipid bilayer membrane fluctuations. In mNMA, channel proteins are simulated at the all-atom level while the membrane is described with a coarse-grained model. mNMA calculations clearly show that channel gating motion can couple tightly with a variety of membrane deformations, including bending and twisting. We then examined bi-channel systems where two channels were separated by different distances. From mNMA calculations, we observed both positive and negative gating correlations between two neighboring channels, and the correlation has a maximum when the channel center-to-center distance is close to 2.5 times their diameter. This distance is larger than the recently found maximum attraction distance between two proteins embedded in a membrane, which is 1.5 times the protein size, indicating that membrane fluctuations might impose collective motions among proteins within a larger area. The hybrid-resolution feature of mNMA provides atomic dynamics information for key components of the system without requiring significant computational resources. We expect it to become a conventional simulation tool for ordinary laboratories to study the dynamics of very complicated biological assemblies. The source code is available upon request to the authors.
Energy Technology Data Exchange (ETDEWEB)
Carvalho, Priscilla R.; Munita, Casimiro S.; Lapolli, André L., E-mail: prii.ramos@gmail.com, E-mail: camunita@ipen.br, E-mail: alapolli@ipen.br [Instituto de Pesquisas Energéticas e Nucleares (IPEN/CNEN-SP), São Paulo, SP (Brazil)
2017-07-01
The literature presents many methods for partitioning a data base, and it is difficult to choose which is the most suitable, since the various combinations of methods based on different measures of dissimilarity can lead to different grouping patterns and false interpretations. Nevertheless, little effort has been expended in evaluating these methods empirically using an archaeological data base. The objective of this work is therefore to make a comparative study of the different cluster analysis methods and identify which is the most appropriate. The study was carried out using a data base of the Archaeometric Studies Group at IPEN-CNEN/SP, in which 45 samples of ceramic fragments from three archaeological sites were analyzed by instrumental neutron activation analysis (INAA) to determine the mass fractions of 13 elements (As, Ce, Cr, Eu, Fe, Hf, La, Na, Nd, Sc, Sm, Th, U). The methods studied were: single linkage, complete linkage, average linkage, centroid and Ward. The validation was done using the cophenetic correlation coefficient; comparing these values, the average linkage method obtained the best results. A script with some functions was created in the statistical program R to obtain the cophenetic correlation. By means of these values it was possible to choose the most appropriate method for the data base. (author)
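The validation step described above (cophenetic correlation across linkage methods) can be sketched with SciPy. The random matrix below is a stand-in for the 45 x 13 INAA concentration data, so the winning method here need not be average linkage:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, cophenet
from scipy.spatial.distance import pdist

# Stand-in for the 45 samples x 13 element mass fractions (hypothetical data).
rng = np.random.default_rng(0)
X = rng.normal(size=(45, 13))

d = pdist(X)  # condensed matrix of pairwise Euclidean distances

# Cophenetic correlation for each of the five linkage methods compared in the paper.
scores = {}
for method in ["single", "complete", "average", "centroid", "ward"]:
    Z = linkage(X, method=method)
    c, _ = cophenet(Z, d)   # correlation between cophenetic and original distances
    scores[method] = c

best = max(scores, key=scores.get)  # method whose dendrogram best preserves distances
```

The method with the highest cophenetic correlation is the one whose dendrogram distorts the original dissimilarities least, which is the selection criterion used in the study.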
International Nuclear Information System (INIS)
Carvalho, Priscilla R.; Munita, Casimiro S.; Lapolli, André L.
2017-01-01
The literature presents many methods for partitioning a data base, and it is difficult to choose which is the most suitable, since the various combinations of methods based on different measures of dissimilarity can lead to different grouping patterns and false interpretations. Nevertheless, little effort has been expended in evaluating these methods empirically using an archaeological data base. The objective of this work is therefore to make a comparative study of the different cluster analysis methods and identify which is the most appropriate. The study was carried out using a data base of the Archaeometric Studies Group at IPEN-CNEN/SP, in which 45 samples of ceramic fragments from three archaeological sites were analyzed by instrumental neutron activation analysis (INAA) to determine the mass fractions of 13 elements (As, Ce, Cr, Eu, Fe, Hf, La, Na, Nd, Sc, Sm, Th, U). The methods studied were: single linkage, complete linkage, average linkage, centroid and Ward. The validation was done using the cophenetic correlation coefficient; comparing these values, the average linkage method obtained the best results. A script with some functions was created in the statistical program R to obtain the cophenetic correlation. By means of these values it was possible to choose the most appropriate method for the data base. (author)
A Method for Correlation of Gravestone Weathering and Air Quality (SO2), West Midlands, UK
Carlson, Michael John
From the beginning of the Industrial Revolution through the environmental revolution of the 1970s, Britain suffered the effects of poor air quality, primarily from particulate matter and acid in the form of NOx and SOx compounds. Air quality stations across the region recorded SO2 beginning in the 1960s; however, direct measurement of air quality prior to 1960 is lacking and only anecdotal notations exist. Proxy records including lung tissue samples, particulates in sediment cores, lake acidification studies and gravestone weathering have all been used to reconstruct the history of air quality. A 120-year record of acid deposition reconstructed from lead-lettered marble gravestone weathering, combined with SO2 measurements from the air monitoring network across the West Midlands, UK region beginning in the 1960s, forms the framework for this study. The study seeks to establish a spatial and temporal correlation between gravestone weathering and measured SO2. Successful correlation of the dataset from the 1960s to the 2000s would allow a paleo-air-quality record to be generated from the 120-year record of gravestone weathering. Decadal gravestone weathering rates can be estimated by non-linear regression analysis of stone loss at individual cemeteries. Gravestone weathering rates are interpolated across the region through Empirical Bayesian Kriging (EBK) methods performed in ArcGIS and through a land-use-based approach built on digitized maps of land use. Both methods of interpolation allow a direct correlation of gravestone weathering and measured SO2 to be made. Decadal-scale correlations of gravestone weathering rates and measured SO2 are very weak to non-existent for both EBK and the land-use-based approach. Decadal results combined together on a larger scale for each respective method display a better visual correlation. However, the relative clustering of data at lower SO2 concentrations and the lack of data at higher SO2 concentrations make the
Tibi, R.; Young, C. J.; Gonzales, A.; Ballard, S.; Encarnacao, A. V.
2016-12-01
The matched filtering technique, involving the cross-correlation of a waveform of interest with archived signals from a template library, has proven to be a powerful tool for detecting events in regions with repeating seismicity. However, waveform correlation is computationally expensive, and therefore impractical for large template sets unless dedicated distributed computing hardware and software are used. In this study, we introduce an Approximate Nearest Neighbor (ANN) approach that enables the use of very large template libraries for waveform correlation without requiring a complex distributed computing system. Our method begins with a projection into a reduced-dimensionality space based on correlation with a randomized subset of the full template archive. Searching for a specified number of nearest neighbors is accomplished by using randomized K-dimensional trees. We used the approach to search for matches to each of 2700 analyst-reviewed signal detections reported for May 2010 for the IMS station MKAR. The template library in this case consists of a dataset of more than 200,000 analyst-reviewed signal detections for the same station from 2002-2014 (excluding May 2010). Of these signal detections, 60% are teleseismic first P, and 15% are regional phases (Pn, Pg, Sn, and Lg). The analyses performed on a standard desktop computer show that the proposed approach performs the search of the large template libraries about 20 times faster than the standard full linear search, while achieving recall rates greater than 80%, with the recall rate increasing for higher correlation values. To decide whether to confirm a match, we use a hybrid method involving a cluster approach for queries with two or more matches, and the correlation score for single matches. Of the signal detections that passed our confirmation process, 52% were teleseismic first P, and 30% were regional phases.
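A minimal sketch of the ANN idea, using synthetic Gaussian traces in place of real MKAR waveforms: project every archived trace onto its correlations with a small random template subset, index the projections with a k-d tree, and query with a noisy trace. Archive size, feature dimension, and noise level are all illustrative assumptions:

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)

# Hypothetical archive of 1000 unit-norm "waveforms" of 200 samples each.
archive = rng.normal(size=(1000, 200))
archive /= np.linalg.norm(archive, axis=1, keepdims=True)

# Reduced space: correlation of every trace with a random subset of 16 templates
# (normalized cross-correlation at zero lag is just a dot product here).
basis = archive[rng.choice(len(archive), 16, replace=False)]
features = archive @ basis.T            # shape (1000, 16)
tree = cKDTree(features)                # randomized K-d tree stand-in

# Query: a slightly noisy copy of archive trace 42 should retrieve trace 42.
query = archive[42] + 0.02 * rng.normal(size=200)
query /= np.linalg.norm(query)
_, idx = tree.query(query @ basis.T, k=5)
```

Only 16 dot products plus a tree lookup are needed per query, instead of a full correlation against all 1000 templates, which is the source of the roughly 20x speedup reported above.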
International Nuclear Information System (INIS)
Andreucci, N.
1985-04-01
Deep penetration transport problems in complex systems, together with heterogeneous source (Q) sampling, give rise to some difficulties in evaluating leakage and fluxes at a detector point. To overcome these difficulties we have solved both the adjoint Boltzmann flux (φ*) equation and the following scalar-dual equation: ∫ Q φ* dP − ∫ Q* φ dP = ∫ φ φ* Ω·n dΣ dΩ dE dt + ∫ [φ φ*]₀^T / v dr dΩ dE, where the integrals in P run over the phase space D. With a suitable choice for the domain D, for Q* and for the boundary conditions, an adjoint flux calculation allows us to obtain simultaneously the Q-source contribution and the detection (or leakage) spectrum. Compared to direct methods with importance sampling, the adjoint methods give very low-cost and faithful results
Czech Academy of Sciences Publication Activity Database
Řezáč, Jan; Riley, Kevin Eugene; Hobza, Pavel
2012-01-01
Roč. 33, č. 6 (2012), s. 691-694 ISSN 0192-8651 R&D Projects: GA MŠk LC512 Grant - others:European Social Fund(XE) CZ.1.05/2.1.00/03.0058 Institutional research plan: CEZ:AV0Z40550506 Keywords : post-HF methods * molecular geometry * benchmark calculations Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 3.835, year: 2012
Energy Technology Data Exchange (ETDEWEB)
Slorenskiy, P V; Kopylov, V Ye
1981-01-01
The use of remote methods marks a basically new stage in the study of the taiga regions of East Siberia. In interpreting the multizonal space photographs of the middle course of the Viliyu River, a number of different systems of lineaments are identified. They reflect the common features of the deep structure of the region. Their trend and position control the location of local structures and the distribution of fractured reservoirs.
Directory of Open Access Journals (Sweden)
Kolgotin Alexei
2016-01-01
Full Text Available Correlation relationships between aerosol microphysical parameters and optical data are investigated. The results show that surface-area concentrations and extinction coefficients are linearly correlated with a correlation coefficient above 0.99 for arbitrary particle size distribution. The correlation relationships that we obtained can be used as constraints in our inversion of optical lidar data. Simulation studies demonstrate a significant stabilization of aerosol microphysical data products if we apply the gradient correlation method in our traditional regularization technique.
International Nuclear Information System (INIS)
Van Limbergen, E.; Briot, E.; Drijkoningen, M.
1990-01-01
Inappropriate positioning of interstitial iridium-192 implants, used as a booster dose in the breast-conserving treatment of mammary cancer, may cause disturbing telangiectasia of the breast skin when high radiation doses are delivered to the dermal blood vessels. Based on the localization of the vascular plexuses in human breast skin, and on the dose distribution around different types of interstitial implants, a method is described to avoid overlap between the high-dose area of the implant and the blood vessels in the skin. The latter are shown to run within the first 5 mm under the epidermis. For source lengths varying from 5 to 8 cm, simple mathematical relations exist between the maximal security margin (MSM) and the intersource distance (E) for single-plane implants (MSM = 0.4 (E + 1)), double-plane square implants (MSM = 0.4 E) and double-plane triangular implants (MSM = 0.4 (E - 1)). We developed a device to measure precisely the distance between the radioactive wires and the overlying skin along the whole source trajectory. Using this method, the occurrence of telangiectasia in the breast skin after interstitial implants with Ir-192 may be significantly reduced
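The three MSM relations quoted above can be wrapped in a small helper; the implant-type labels are our own, not the authors':

```python
def max_security_margin(spacing_cm, implant="single"):
    """Maximal security margin MSM (cm) versus intersource distance E (cm),
    per the linear relations quoted for 5-8 cm Ir-192 source lengths.
    Implant-type keys are illustrative labels."""
    offsets = {
        "single": 1.0,             # single plane:             MSM = 0.4 (E + 1)
        "double_square": 0.0,      # double plane, square:     MSM = 0.4 E
        "double_triangular": -1.0, # double plane, triangular: MSM = 0.4 (E - 1)
    }
    return 0.4 * (spacing_cm + offsets[implant])
```

For a 2 cm intersource distance this gives margins of 1.2, 0.8 and 0.4 cm for the three geometries, reflecting how the tighter double-plane arrangements push the high-dose region closer to the skin.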
Energy Technology Data Exchange (ETDEWEB)
Shandiz, Mahdi Heravian; Khalilzadeh, Mohammadmahdi; Anvari, Kazem [Mashhad Branch, Islamic Azad University, Mashhad (Iran, Islamic Republic of); Layen, Ghorban Safaeian [Mashhad University of Medical Science, Mashhad (Iran, Islamic Republic of)
2015-03-15
In order to keep radiation oncology linear accelerators at an acceptable performance level, it is necessary to apply a reliable quality assurance (QA) program. The QA protocols published by authoritative organizations, such as the American Association of Physicists in Medicine (AAPM), determine the quality control (QC) tests which should be performed on medical linear accelerators and the threshold levels for each test. The purpose of this study is to increase the accuracy and precision of selected QC tests in order to increase the quality of treatment, and also to increase the speed of the tests to persuade busy centers to adopt a reliable QA program. A new method has been developed for two of the QC tests: the optical distance indicator (ODI) QC test as a daily test and the gantry angle QC test as a monthly test. This method uses an image processing approach utilizing snapshots taken by a CCD camera to measure the source-to-surface distance (SSD) and the gantry angle. The new method for the ODI QC test has an accuracy of 99.95% with a standard deviation of 0.061 cm, and the new method for the gantry angle QC test has a precision of 0.43 degrees. The automated method proposed for both the ODI and gantry angle QC tests yields highly accurate and precise results which are objective, and human-caused errors have no effect on the results. The results show that both QC tests are within the acceptable range according to AAPM Task Group 142.
International Nuclear Information System (INIS)
Shandiz, Mahdi Heravian; Khalilzadeh, Mohammadmahdi; Anvari, Kazem; Layen, Ghorban Safaeian
2015-01-01
In order to keep radiation oncology linear accelerators at an acceptable performance level, it is necessary to apply a reliable quality assurance (QA) program. The QA protocols published by authoritative organizations, such as the American Association of Physicists in Medicine (AAPM), determine the quality control (QC) tests which should be performed on medical linear accelerators and the threshold levels for each test. The purpose of this study is to increase the accuracy and precision of selected QC tests in order to increase the quality of treatment, and also to increase the speed of the tests to persuade busy centers to adopt a reliable QA program. A new method has been developed for two of the QC tests: the optical distance indicator (ODI) QC test as a daily test and the gantry angle QC test as a monthly test. This method uses an image processing approach utilizing snapshots taken by a CCD camera to measure the source-to-surface distance (SSD) and the gantry angle. The new method for the ODI QC test has an accuracy of 99.95% with a standard deviation of 0.061 cm, and the new method for the gantry angle QC test has a precision of 0.43 degrees. The automated method proposed for both the ODI and gantry angle QC tests yields highly accurate and precise results which are objective, and human-caused errors have no effect on the results. The results show that both QC tests are within the acceptable range according to AAPM Task Group 142.
An improved method for bivariate meta-analysis when within-study correlations are unknown.
Hong, Chuan; D Riley, Richard; Chen, Yong
2018-03-01
Multivariate meta-analysis, which jointly analyzes multiple and possibly correlated outcomes in a single analysis, has become increasingly popular in recent years. An attractive feature of multivariate meta-analysis is its ability to account for the dependence between multiple estimates from the same study. However, standard inference procedures for multivariate meta-analysis require knowledge of within-study correlations, which are usually unavailable. This limits standard inference approaches in practice. Riley et al. proposed a working model and an overall synthesis correlation parameter to account for the marginal correlation between outcomes, where the only data needed are those required for a separate univariate random-effects meta-analysis. As within-study correlations are not required, the Riley method is applicable to a wide variety of evidence synthesis situations. However, the standard variance estimator of the Riley method is not entirely correct under many important settings. As a consequence, the coverage of a function of pooled estimates may not reach the nominal level even when the number of studies in the multivariate meta-analysis is large. In this paper, we improve the Riley method by proposing a robust variance estimator, which is asymptotically correct even when the model is misspecified (i.e., when the likelihood function is incorrect). Simulation studies of a bivariate meta-analysis, in a variety of settings, show that a function of pooled estimates has improved performance when using the proposed robust variance estimator. In terms of the individual pooled estimates themselves, the standard variance estimator and the robust variance estimator give similar results to the original method, with appropriate coverage. The proposed robust variance estimator performs well when the number of studies is relatively large. Therefore, we recommend the use of the robust method for meta-analyses with a relatively large number of studies (e.g., m ≥ 50). When the
A three-dimensional correlation method for registration of medical images in radiology
International Nuclear Information System (INIS)
Georgiou, Michalakis; Sfakianakis, George N.; Nagel, Joachim H.
1998-01-01
The availability of methods to register multi-modality images in order to 'fuse' them and correlate their information is increasingly becoming an important requirement for various diagnostic and therapeutic procedures. A variety of image registration methods have been developed, but they remain limited to specific clinical applications. Assuming rigid-body transformation, two images can be registered if their differences are calculated in terms of translation, rotation and scaling. This paper describes the development and testing of a new correlation-based approach for three-dimensional image registration. First, the scaling factors introduced by the imaging devices are calculated and compensated for. Then, the two images become translation invariant by computing their three-dimensional Fourier magnitude spectra. Subsequently, spherical coordinate transformation is performed and the three-dimensional rotation is then computed using a novel approach referred to as 'polar shells'. The method of polar shells maps the three angles of rotation into one rotation and two translations of a two-dimensional function and then proceeds to calculate them using appropriate transformations based on the Fourier invariance properties. A basic assumption in the method is that the three-dimensional rotation is constrained to one large and two relatively small angles. This assumption is generally satisfied in normal clinical settings. The new three-dimensional image registration method was tested with simulations using computer-generated phantom data as well as actual clinical data. Performance analysis and accuracy evaluation of the method using computer simulations yielded errors in the sub-pixel range. (authors)
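The translation-invariance step can be demonstrated directly: a circular shift leaves the Fourier magnitude spectrum unchanged, and the shift itself is recoverable by phase correlation. This is shown here in 2D on synthetic data; it is the standard Fourier property the paper builds on, not the polar-shells rotation step itself:

```python
import numpy as np

rng = np.random.default_rng(3)
img = rng.normal(size=(32, 32))

# Circularly shift the image by (5, -3). The Fourier magnitude spectrum is
# unchanged, which is the translation-invariance property used before
# estimating rotation from the magnitude spectra.
shifted = np.roll(np.roll(img, 5, axis=0), -3, axis=1)

mag = np.abs(np.fft.fft2(img))
mag_shifted = np.abs(np.fft.fft2(shifted))
assert np.allclose(mag, mag_shifted)

# The translation itself can then be recovered by phase correlation:
# the normalized cross-power spectrum inverts to a delta at (-shift) mod N.
cross_power = np.fft.fft2(img) * np.conj(np.fft.fft2(shifted))
cross_power /= np.abs(cross_power)
peak = np.unravel_index(np.argmax(np.fft.ifft2(cross_power).real), img.shape)
```

For the (5, -3) shift above, the correlation peak lands at ((-5) mod 32, (3) mod 32) = (27, 3), from which the translation is read off directly.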
High-order Path Integral Monte Carlo methods for solving strongly correlated fermion problems
Chin, Siu A.
2015-03-01
In solving for the ground state of a strongly correlated many-fermion system, the conventional second-order Path Integral Monte Carlo method is plagued by the sign problem. This is due to the large number of anti-symmetric free-fermion propagators that are needed to extract the square of the ground state wave function at large imaginary time. In this work, I show that optimized fourth-order Path Integral Monte Carlo methods, which use no more than 5 free-fermion propagators, in conjunction with the Hamiltonian energy estimator, can yield accurate ground state energies for quantum dots with up to 20 polarized electrons. The correlations are directly built in and no explicit wave functions are needed. This work is supported by the Qatar National Research Fund NPRP GRANT #5-674-1-114.
Directory of Open Access Journals (Sweden)
E Ghasemikhah
2012-03-01
Full Text Available This study investigated the electronic properties of the antiferromagnetic UBi2 metal by using ab initio calculations based on density functional theory (DFT), employing the augmented plane waves plus local orbitals method. We used the exact exchange for correlated electrons (EECE) method to calculate the exchange-correlation energy under a variety of hybrid functionals. Electric field gradients (EFGs) at the uranium site in the UBi2 compound were calculated and compared with experiment. The EFGs at the U site were predicted experimentally to be very small in this compound. The EFGs calculated with the EECE functional are in agreement with experiment. The densities of states (DOSs) show that the U 5f orbital is hybridized with the other orbitals. The plotted Fermi surfaces show that there are two kinds of charges on the Fermi surface of this compound.
Giger, Maryellen L.; Chen, Chin-Tu; Armato, Samuel; Doi, Kunio
1999-10-26
A method and system for the computerized registration of radionuclide images with radiographic images, including generating image data from radiographic and radionuclide images of the thorax. Techniques include contouring the lung regions in each type of chest image, scaling and registration of the contours based on location of lung apices, and superimposition after appropriate shifting of the images. Specific applications are given for the automated registration of radionuclide lungs scans with chest radiographs. The method in the example given yields a system that spatially registers and correlates digitized chest radiographs with V/Q scans in order to correlate V/Q functional information with the greater structural detail of chest radiographs. Final output could be the computer-determined contours from each type of image superimposed on any of the original images, or superimposition of the radionuclide image data, which contains high activity, onto the radiographic chest image.
Estimation of velocity vector angles using the directional cross-correlation method
DEFF Research Database (Denmark)
Kortbek, Jacob; Jensen, Jørgen Arendt
2006-01-01
A method for determining both velocity magnitude and angle in any direction is suggested. The method uses focusing along the velocity direction and cross-correlation for finding the correct velocity magnitude. The angle is found by beamforming directional signals in a number of directions and then selecting the angle with the highest normalized correlation between directional signals. The approach is investigated using Field II simulations and data from the experimental ultrasound scanner RASMUS and a circulating flow rig with a parabolic flow having a peak velocity of 0.3 m/s. A 7 MHz linear array transducer is used with a normal transmission of a focused ultrasound field. In the simulations the relative standard deviation of the velocity magnitude is between 0.7% and 7.7% for flow angles between 45 deg and 90 deg. The study showed that the angle can be estimated by directional beamforming...
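The core magnitude-estimation step, cross-correlating two directional signals and converting the peak lag into a velocity, can be sketched as follows; the spatial sampling and pulse repetition frequency are illustrative assumptions, not RASMUS settings:

```python
import numpy as np

dx = 0.1e-3    # spatial sample spacing along the beamformed line (m), assumed
prf = 1000.0   # pulse repetition frequency (Hz), assumed

rng = np.random.default_rng(4)
sig = rng.normal(size=256)  # stand-in for a beamformed directional signal

# Second emission: the scatterers moved 7 spatial samples along the line.
lag_true = 7
sig2 = np.roll(sig, lag_true)

# Cross-correlate the two directional signals and take the lag of the maximum.
corr = np.correlate(sig2, sig, mode="full")
lag = int(np.argmax(corr)) - (len(sig) - 1)

# Velocity magnitude: displacement per pulse repetition interval.
velocity = lag * dx * prf   # m/s
```

With these assumed parameters, a 7-sample lag corresponds to 0.7 m/s; in the actual method this estimate is repeated for each beamformed direction and the direction with the highest normalized correlation supplies the angle.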
Alves, E O S; Cerqueira-Silva, C B M; Souza, A M; Santos, C A F; Lima Neto, F P; Corrêa, R X
2012-03-14
We investigated seven distance measures in a set of observations of physicochemical variables of mango (Mangifera indica) submitted to multivariate analyses (distance, projection and grouping). To estimate the distance measures, five mango progenies (25 genotypes in total) were analyzed using six fruit physicochemical descriptors (fruit weight, equatorial diameter, longitudinal diameter, total soluble solids in °Brix, total titratable acidity, and pH). The distance measures were compared by the Spearman correlation test, projection in two-dimensional space, and grouping efficiency. The Spearman correlation coefficients between the seven distance measures were, except for Mahalanobis' generalized distance (0.41 ≤ rs ≤ 0.63), high and significant (rs ≥ 0.91; P < 0.001). Regardless of the origin of the distance matrix, the unweighted pair group method with arithmetic mean (UPGMA) proved to be the most adequate grouping method. The various distance measures and grouping methods gave different values for distortion (-116.5 ≤ D ≤ 74.5), cophenetic correlation (0.26 ≤ rc ≤ 0.76) and stress (-1.9 ≤ S ≤ 58.9). The choice of distance measure and analysis method influences the...
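Comparing distance measures by rank correlation, as done here with the Spearman test, can be sketched with SciPy on stand-in data (random values and four common metrics, not the mango descriptors or the full set of seven measures):

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

# Stand-in for 25 genotypes x 6 fruit physicochemical descriptors.
rng = np.random.default_rng(5)
X = rng.normal(size=(25, 6))

# Condensed pairwise-distance vectors for several distance measures.
measures = ["euclidean", "cityblock", "chebyshev", "cosine"]
vecs = {m: pdist(X, metric=m) for m in measures}

# Spearman rank correlation between every pair of distance vectors:
# high values mean the two measures order the genotype pairs similarly.
rs = {(a, b): spearmanr(vecs[a], vecs[b]).correlation
      for i, a in enumerate(measures) for b in measures[i + 1:]}
```

Measures whose distance vectors are strongly rank-correlated will tend to yield the same groupings downstream, which is why the paper reports the Spearman coefficients before comparing clustering results.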
Goodpaster, Jason D; Barnes, Taylor A; Manby, Frederick R; Miller, Thomas F
2012-12-14
Density functional theory (DFT) embedding provides a formally exact framework for interfacing correlated wave-function theory (WFT) methods with lower-level descriptions of electronic structure. Here, we report techniques to improve the accuracy and stability of WFT-in-DFT embedding calculations. In particular, we develop spin-dependent embedding potentials in both restricted and unrestricted orbital formulations to enable WFT-in-DFT embedding for open-shell systems, and develop an orbital-occupation-freezing technique to improve the convergence of optimized effective potential calculations that arise in the evaluation of the embedding potential. The new techniques are demonstrated in applications to the van-der-Waals-bound ethylene-propylene dimer and to the hexa-aquairon(II) transition-metal cation. Calculation of the dissociation curve for the ethylene-propylene dimer reveals that WFT-in-DFT embedding reproduces full CCSD(T) energies to within 0.1 kcal/mol at all distances, eliminating errors in the dispersion interactions due to conventional exchange-correlation (XC) functionals while simultaneously avoiding errors due to subsystem partitioning across covalent bonds. Application of WFT-in-DFT embedding to the calculation of the low-spin/high-spin splitting energy in the hexaaquairon(II) cation reveals that the majority of the dependence on the DFT XC functional can be eliminated by treating only the single transition-metal atom at the WFT level; furthermore, these calculations demonstrate the substantial effects of open-shell contributions to the embedding potential, and they suggest that restricted open-shell WFT-in-DFT embedding provides better accuracy than unrestricted open-shell WFT-in-DFT embedding due to the removal of spin contamination.
International Nuclear Information System (INIS)
Morales, J.J.; Nuevo, J.M.; Rull, L.F.
1987-01-01
The new isothermal-isobaric MD(T,p,N) method of Nosé and Hoover is applied in molecular dynamics simulations to both the liquid and the solid near the phase transition. We tested for an appropriate value of the isobaric friction coefficient before calculating the correlation length in the liquid and the disclinations per particle in the solid, on a large system of 2304 particles. The results are compared with those obtained by traditional MD(E,V,N) simulation. (author)
Analytic methods for the Percus-Yevick hard sphere correlation functions
Directory of Open Access Journals (Sweden)
D. Henderson
2009-01-01
Full Text Available The Percus-Yevick theory for hard spheres provides simple, accurate expressions for the correlation functions that have proven exceptionally useful. A summary of the author's lecture notes concerning three methods of obtaining these functions is presented. These notes are original only in part; however, they contain some helpful steps and simplifications. The purpose of this paper is to make these notes more widely available.
Sulaimon, Shodiya; Nasution, Henry; Aziz, Azhar Abdul; Abdul-Rahman, Abdul-Halim; Darus, Amer N
2014-01-01
The capillary tube is an important control device used in small vapor compression refrigeration systems such as window air-conditioners, household refrigerators and freezers. This paper develops a non-dimensional correlation based on the test results of the adiabatic capillary tube for the mass flow rate through the tube using a hydrocarbon refrigerant mixture of 89.3% propane and 10.7% butane (HCM). The Taguchi method, a statistical experimental design approach, was employed. This approach e...
Linear-scaling explicitly correlated treatment of solids: Periodic local MP2-F12 method
Energy Technology Data Exchange (ETDEWEB)
Usvyat, Denis, E-mail: denis.usvyat@chemie.uni-regensburg.de [Institute of Physical and Theoretical Chemistry, University of Regensburg, Universitätsstraße 31, D-93040 Regensburg (Germany)
2013-11-21
Theory and implementation of the periodic local MP2-F12 method in the 3*A fixed-amplitude ansatz is presented. The method is formulated in the direct space, employing local representation for the occupied, virtual, and auxiliary orbitals in the form of Wannier functions (WFs), projected atomic orbitals (PAOs), and atom-centered Gaussian-type orbitals, respectively. Local approximations are introduced, restricting the list of the explicitly correlated pairs, as well as occupied, virtual, and auxiliary spaces in the strong orthogonality projector to the pair-specific domains on the basis of spatial proximity of respective orbitals. The 4-index two-electron integrals appearing in the formalism are approximated via the direct-space density fitting technique. In this procedure, the fitting orbital spaces are also restricted to local fit-domains surrounding the fitted densities. The formulation of the method and its implementation exploits the translational symmetry and the site-group symmetries of the WFs. Test calculations are performed on LiH crystal. The results show that the periodic LMP2-F12 method substantially accelerates basis set convergence of the total correlation energy, and even more so the correlation energy differences. The resulting energies are quite insensitive to the resolution-of-the-identity domain sizes and the quality of the auxiliary basis sets. The convergence with the orbital domain size is somewhat slower, but still acceptable. Moreover, inclusion of slightly more diffuse functions, than those usually used in the periodic calculations, improves the convergence of the LMP2-F12 correlation energy with respect to both the size of the PAO-domains and the quality of the orbital basis set. At the same time, the essentially diffuse atomic orbitals from standard molecular basis sets, commonly utilized in molecular MP2-F12 calculations, but problematic in the periodic context, are not necessary for LMP2-F12 treatment of crystals.
Feasibility of the correlation curves method in calorimeters of different types
Grushevskaya, E. A.; Lebedev, I. A.; Fedosimova, A. I.
2014-01-01
The development of cascade processes in calorimeters of different types is simulated to assess energy measurement by the correlation curves method. A heterogeneous calorimeter shows significant transient effects, associated with the difference in critical energy between the absorber and the detector. The best option is a mixed calorimeter, which has a target block, leading to the rapid development of the cascade, and a homogeneous measuring unit. Uncertainties of e...
DEFF Research Database (Denmark)
Cheng, Hongyuan; Kontogeorgis, Georgios; Stenby, Erling Halfdan
2005-01-01
), the bioconcentration factor (BCF), and the toxicity. Kow values of alcohol ethoxylates are difficult to measure. Existing methods such as those in commercial software like ACD, ClogP and KowWin have not been applied to surfactants, and they fail for heavy alcohol ethoxylates (alkyl carbon numbers above 12). Thus ... and toxicity of alcohol ethoxylates are correlated with their Kow. The proposed approach can be extended to other families of nonionic surfactants...
Lu, Bin; Yang, Yi; Sharma, Santosh K; Zambare, Prachi; Madane, Mayura A
2014-12-23
A method identifies electric load types of a plurality of different electric loads. The method includes providing a load feature database of a plurality of different electric load types, each of the different electric load types including a first load feature vector having at least four different load features; sensing a voltage signal and a current signal for each of the different electric loads; determining a second load feature vector comprising at least four different load features from the sensed voltage signal and the sensed current signal for a corresponding one of the different electric loads; and identifying by a processor one of the different electric load types by determining a minimum distance of the second load feature vector to the first load feature vector of the different electric load types of the load feature database.
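The patent's identification step — match a measured feature vector to the stored vector at minimum distance — is a nearest-neighbor lookup. A minimal sketch follows; the load names and feature values are invented placeholders, and Euclidean distance stands in for whatever metric the actual method specifies.

```python
import numpy as np

# Hypothetical load-feature database: four features per load type (values invented).
load_db = {
    "motor":  np.array([0.9, 0.1, 0.3, 0.7]),
    "heater": np.array([0.1, 0.9, 0.0, 0.2]),
    "lamp":   np.array([0.5, 0.5, 0.8, 0.1]),
}

def identify_load(feature_vec, db):
    """Return the load type whose stored feature vector is nearest
    (minimum Euclidean distance) to the measured one."""
    return min(db, key=lambda k: np.linalg.norm(db[k] - feature_vec))

# Feature vector as it might be derived from sensed voltage/current signals.
measured = np.array([0.85, 0.15, 0.25, 0.65])
print(identify_load(measured, load_db))  # -> motor
```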
Efficiency of cleaning and disinfection of surfaces: correlation between assessment methods
Frota, Oleci Pereira; Ferreira, Adriano Menis; Guerra, Odanir Garcia; Rigotti, Marcelo Alessandro; Andrade, Denise de; Borges, Najla Moreira Amaral; Almeida, Margarete Teresa Gottardo de
2017-01-01
ABSTRACT Objective: to assess the correlation among the ATP-bioluminescence assay, visual inspection and microbiological culture in monitoring the efficiency of cleaning and disinfection (C&D) of high-touch clinical surfaces (HTCS) in a walk-in emergency care unit. Method: a prospective and comparative study was carried out from March to June 2015, in which five HTCS were sampled before and after C&D by means of the three methods. The HTCS were considered dirty when dust, waste, humidity an...
The Diagnosis of Internal Leakage of Control Valve Based on the Grey Correlation Analysis Method
Directory of Open Access Journals (Sweden)
Zheng DING
2014-07-01
Full Text Available The valve plays an important part in industrial automation systems. Whether it operates normally directly affects product quality, yet valve faults are relatively common because of harsh working conditions, internal leakage being one of the most frequent. Consequently, this paper sets up an experimental platform to put the valve into different working conditions and collect the relevant data online, and then diagnoses internal leakage of the valve using the grey correlation analysis method. The results show that this method can not only diagnose internal valve leakage accurately, but also quantitatively distinguish the degree of the fault.
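Grey correlation (grey relational) analysis scores how closely each observed data series tracks a reference series; higher grades indicate closer resemblance, which is how fault signatures can be ranked against known conditions. The sketch below is a generic grey relational grade computation on invented series, not the paper's specific valve features; the distinguishing coefficient rho = 0.5 is the conventional default.

```python
import numpy as np

def grey_relational_grades(reference, comparisons, rho=0.5):
    """Grey relational grade of each comparison series w.r.t. the reference."""
    ref = np.asarray(reference, float)
    cmp_ = np.atleast_2d(np.asarray(comparisons, float))
    # Min-max normalize each column across all series (common preprocessing).
    all_ = np.vstack([ref, cmp_])
    lo, hi = all_.min(axis=0), all_.max(axis=0)
    norm = (all_ - lo) / np.where(hi > lo, hi - lo, 1.0)
    ref_n, cmp_n = norm[0], norm[1:]
    delta = np.abs(cmp_n - ref_n)
    dmin, dmax = delta.min(), delta.max()
    coeff = (dmin + rho * dmax) / (delta + rho * dmax)
    return coeff.mean(axis=1)   # grade = mean relational coefficient

ref = [1.0, 2.0, 3.0, 4.0]
series = [[1.1, 2.1, 2.9, 4.2],   # tracks the reference -> higher grade
          [4.0, 3.0, 2.0, 1.0]]   # opposite trend -> lower grade
g = grey_relational_grades(ref, series)
print(g[0] > g[1])  # -> True
```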
International Nuclear Information System (INIS)
Pott, R.A.; Koch, W.; Leitner, L.
1986-01-01
The orientation of the easy magnetization axis of magnetic particles is a key parameter of the recording performance of magnetic recording media. Usually the orientation is measured by magnetic methods, but the applicability of Moessbauer spectroscopy has also been shown in the past. The authors show and discuss the correlations between the results obtained by magnetic and Moessbauer measurements for the example of several magnetic tapes. They demonstrate that by combining both methods one is even able to estimate the mean canting-angle distribution width of the easy axis of magnetization. (Auth.)
Buehring, B; Siglinsky, E; Krueger, D; Evans, W; Hellerstein, M; Yamada, Y; Binkley, N
2018-03-01
DXA-measured lean mass is often used to assess muscle mass but has limitations. Thus, we compared DXA lean mass with two novel methods, bioelectric impedance spectroscopy and creatine (methyl-d3) dilution. The examined methodologies did not measure lean mass similarly, and the correlation with muscle biomarkers/function varied. Muscle function tests predict adverse health outcomes better than lean mass measurement. This may reflect limitations of current mass measurement methods. Newer approaches, e.g., bioelectric impedance spectroscopy (BIS) and creatine (methyl-d3) dilution (D3-C), may more accurately assess muscle mass. We hypothesized that BIS- and D3-C-measured muscle mass would better correlate with function and bone/muscle biomarkers than DXA-measured lean mass. Evaluations of muscle/lean mass, function, and serum biomarkers were obtained in older community-dwelling adults. Mass was assessed by DXA, BIS, and orally administered D3-C. Grip strength, timed up and go, and jump power were examined. Potential muscle/bone serum biomarkers were measured. Mass measurements were compared with functional and serum data using regression analyses; differences between techniques were determined by paired t tests. Mean (SD) age of the 112 (89F/23M) participants was 80.6 (6.0) years. The lean/muscle mass assessments were correlated (r = .57-.88) but differed significantly (p < ...). Lean mass measures were unrelated to the serum biomarkers measured. These three methodologies do not similarly measure muscle/lean mass and should not be viewed as equivalent. Functional tests assessing maximal muscle strength/power (grip strength and jump power) correlated with all mass measures, whereas gait speed did not. None of the selected serum measures correlated with mass. Efforts to optimize muscle mass assessment and identify their relationships with health outcomes are needed.
Increasing the computational efficiency of digital cross correlation by a vectorization method
Chang, Ching-Yuan; Ma, Chien-Ching
2017-08-01
This study presents a vectorization method for use in MATLAB programming aimed at increasing the computational efficiency of digital cross correlation in sound and images, resulting in a speedup of 6.387 and 36.044 times compared with performance values obtained from looped expression. This work bridges the gap between matrix operations and loop iteration, preserving flexibility and efficiency in program testing. This paper uses numerical simulation to verify the speedup of the proposed vectorization method as well as experiments to measure the quantitative transient displacement response subjected to dynamic impact loading. The experiment involved the use of a high speed camera as well as a fiber optic system to measure the transient displacement in a cantilever beam under impact from a steel ball. Experimental measurement data obtained from the two methods are in excellent agreement in both the time and frequency domain, with discrepancies of only 0.68%. Numerical and experiment results demonstrate the efficacy of the proposed vectorization method with regard to computational speed in signal processing and high precision in the correlation algorithm. We also present the source code with which to build MATLAB-executable functions on Windows as well as Linux platforms, and provide a series of examples to demonstrate the application of the proposed vectorization method.
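The speedup the paper reports comes from replacing an explicit loop over lags with a single vectorized (matrix/FFT) expression. A minimal sketch of that contrast in NumPy, assuming 1-D signals and non-negative lags only (the paper's MATLAB implementation and exact formulation may differ):

```python
import numpy as np

rng = np.random.default_rng(1)
a, b = rng.normal(size=1024), rng.normal(size=1024)

def xcorr_loop(a, b):
    """Cross-correlation at non-negative lags via an explicit loop over lags."""
    n = len(a)
    return np.array([np.sum(a[k:] * b[:n - k]) for k in range(n)])

def xcorr_vec(a, b):
    """Same result via one vectorized FFT expression (no Python-level loop):
    correlation is IFFT(FFT(a) * conj(FFT(b))) with zero padding."""
    n = len(a)
    fa = np.fft.rfft(a, 2 * n)
    fb = np.fft.rfft(b, 2 * n)
    return np.fft.irfft(fa * np.conj(fb), 2 * n)[:n]

print(np.allclose(xcorr_loop(a, b), xcorr_vec(a, b)))  # -> True
```

The FFT form also lowers the asymptotic cost from O(n^2) to O(n log n), which is where most of the practical speedup for long signals comes from.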
Irreducible Green's Functions method in the theory of highly correlated systems
International Nuclear Information System (INIS)
Kuzemsky, A.L.
1994-09-01
The self-consistent theory of correlation effects in Highly Correlated Systems (HCS) is presented. The novel Irreducible Green's Function (IGF) method is discussed in detail for the Hubbard model and the random Hubbard model. An interpolation solution for the quasiparticle spectrum, valid in both the atomic and band limits, is obtained. The IGF method permits calculation of the quasiparticle spectra of many-particle systems with complicated spectra and strong interaction in a very natural and compact way. The essence of the method is deeply related to the notion of Generalized Mean Fields (GMF), which determine the elastic scattering corrections. The inelastic scattering corrections lead to damping of the quasiparticles and are the main topic of the present consideration. The damping has been calculated in a self-consistent way for both limits. For the random Hubbard model the weak-coupling case has been considered, and the self-energy operator has been calculated using a combination of the IGF method and the Coherent Potential Approximation (CPA). Other applications of the method, to the s-f model, the Anderson model, the Heisenberg antiferromagnet, electron-phonon interaction models, and quasiparticle tunneling, are discussed briefly. (author). 79 refs
Efficiency of cleaning and disinfection of surfaces: correlation between assessment methods
Directory of Open Access Journals (Sweden)
Oleci Pereira Frota
Full Text Available ABSTRACT Objective: to assess the correlation among the ATP-bioluminescence assay, visual inspection and microbiological culture in monitoring the efficiency of cleaning and disinfection (C&D) of high-touch clinical surfaces (HTCS) in a walk-in emergency care unit. Method: a prospective and comparative study was carried out from March to June 2015, in which five HTCS were sampled before and after C&D by means of the three methods. The HTCS were considered dirty when dust, waste, humidity or stains were detected by visual inspection; when ≥2.5 colony-forming units per cm2 were found in culture; and when ≥5 relative light units per cm2 were found by the ATP-bioluminescence assay. Results: 720 analyses were performed, 240 per method. The overall rates of clean surfaces per visual inspection, culture and ATP-bioluminescence assay were 8.3%, 20.8% and 44.2% before C&D, and 92.5%, 50% and 84.2% after C&D, respectively (p<0.001). There were only occasional statistically significant relationships between methods. Conclusion: the methods did not present a good correlation, either quantitatively or qualitatively.
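The study's three cleanliness criteria are simple thresholds, so the per-method verdict for one surface can be expressed directly. A sketch applying the stated cut-offs (≥2.5 CFU/cm2 dirty by culture, ≥5 RLU/cm2 dirty by ATP); the function name and input layout are illustrative, not from the paper:

```python
def surface_clean(visually_dirty, cfu_per_cm2, rlu_per_cm2):
    """Apply the study's three criteria: a surface counts as dirty visually
    when dust/waste/humidity/stains are seen, by culture at >= 2.5 CFU/cm2,
    and by ATP-bioluminescence at >= 5 RLU/cm2."""
    return {
        "visual": not visually_dirty,
        "culture": cfu_per_cm2 < 2.5,
        "atp": rlu_per_cm2 < 5.0,
    }

# A surface that passes visual inspection and culture but fails the ATP assay,
# the kind of disagreement behind the study's poor inter-method correlation.
print(surface_clean(False, 1.0, 7.2))
```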
Residual stresses measurement by using ring-core method and 3D digital image correlation technique
International Nuclear Information System (INIS)
Hu, Zhenxing; Xie, Huimin; Zhu, Jianguo; Wang, Huaixi; Lu, Jian
2013-01-01
A residual stress measurement technique combining the ring-core method with three-dimensional digital image correlation (3D DIC) is proposed. Ring-core cutting is a mechanical stress-relief method, and combined with a 3D DIC system the deformation of the specimen surface can be measured. An optimization iteration method is proposed to obtain the residual stress and the rigid-body motion. The method is able to cut an annular trench at a location outside the field of view. A compression test is carried out to demonstrate how residual stress is determined using the 3D DIC system and out-of-field measurement. The results determined by the approach are in good agreement with the theoretical values. Ring-core/3D DIC has shown its robustness in determining residual stress and can be extended to applications in the engineering field. (paper)
Directory of Open Access Journals (Sweden)
Taghi Baghdadi
2017-05-01
Full Text Available Background: The aim of this study was to evaluate idiopathic congenital clubfoot deformity treated by the Ponseti method to determine the different factors, such as radiological investigations, that may relate to the risk of failure and recurrence in mid-term follow-up of the patients. Methods: From 2006 to 2011, 226 feet from 149 patients with idiopathic congenital clubfoot were treated with weekly castings by the Ponseti method. Anteroposterior and lateral foot radiographies were performed at the final follow-up visit, and the data from clinical and radiological outcomes were analysed. Results: In our patients, 191 (84.9%) feet required percutaneous tenotomy. The successful correction rate was 92%, indicating no need for further surgical correction. No significant correlation was found between the remaining deformity rate and the severity of the deformity or compliance with brace use (P=0.108 and 0.207, respectively). The remaining deformity rate had an inverse association with the age at the beginning of treatment (P=0.049). No significant correlation was found between percutaneous tenotomy and passive dorsiflexion range (P=0.356). Conclusion: According to our results, radiological findings after treatment with the Ponseti method showed poor or no correlation with clinical outcomes. The diagnosis of clubfoot is a clinical judgment; therefore, the outcome of the treatment must only be clinically evaluated. Although the Ponseti method can retrieve the normal shape of the foot, it fails to treat the bone deformities and eventually leads to residual radiologic deformity. Further studies are suggested to define a different modification that can address the abnormal angles between the foot and ankle bones to minimize the risk of recurrence.
Directory of Open Access Journals (Sweden)
Shodiya Sulaimon
2014-07-01
Full Text Available The capillary tube is an important control device used in small vapor compression refrigeration systems such as window air-conditioners, household refrigerators and freezers. This paper develops a non-dimensional correlation based on the test results of the adiabatic capillary tube for the mass flow rate through the tube using a hydrocarbon refrigerant mixture of 89.3% propane and 10.7% butane (HCM). The Taguchi method, a statistical experimental design approach, was employed. This approach explores the economic benefit that lies in studies of this nature, where only a small number of experiments are required and yet valid results are obtained. Considering the effects of the capillary tube geometry and the inlet condition of the tube, dimensionless parameters were chosen. The new correlation was also based on the Buckingham Pi theorem. This correlation predicts 86.67% of the present experimental data within a relative deviation of -10% to +10%. The predictions by this correlation were also compared with results in published literature.
Prediction of shear wave velocity using empirical correlations and artificial intelligence methods
Maleki, Shahoo; Moradzadeh, Ali; Riabi, Reza Ghavami; Gholami, Raoof; Sadeghzadeh, Farhad
2014-06-01
Good understanding of the mechanical properties of rock formations is essential during the development and production phases of a hydrocarbon reservoir. Conventionally, these properties are estimated from petrophysical logs, with compression and shear sonic data being the main input to the correlations. However, in many cases the shear sonic data are not acquired during well logging, often for cost-saving purposes. In this case, shear wave velocity is estimated using available empirical correlations or the artificial intelligence methods proposed during the last few decades. In this paper, petrophysical logs corresponding to a well drilled in the southern part of Iran were used to estimate the shear wave velocity using empirical correlations as well as two robust artificial intelligence methods known as Support Vector Regression (SVR) and Back-Propagation Neural Network (BPNN). Although the results obtained by SVR seem to be reliable, the estimated values are not very precise, and considering the importance of shear sonic data as the input into different models, this study suggests acquiring shear sonic data during well logging. It is important to note that the benefits of having reliable shear sonic data for the estimation of rock formation mechanical properties will compensate for the possible additional costs of acquiring a shear log.
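The baseline against which such machine-learning estimators are judged is an empirical Vp-Vs correlation fitted to log data. A sketch using synthetic logs and the well-known Castagna "mudrock line" form (Vs = a*Vp + b, velocities in km/s) as the target relationship; the paper's SVR/BPNN models would replace this linear fit with nonlinear learners on real petrophysical inputs:

```python
import numpy as np

rng = np.random.default_rng(2)
vp = rng.uniform(2.0, 5.0, size=300)                   # km/s, synthetic log
vs = 0.8621 * vp - 1.1724 + rng.normal(0, 0.05, 300)   # mudrock line + noise

a, b = np.polyfit(vp, vs, 1)                           # fitted empirical correlation
pred = a * vp + b
r2 = 1 - np.sum((vs - pred) ** 2) / np.sum((vs - vs.mean()) ** 2)
print(r2 > 0.9)  # -> True
```

With real data, the residual scatter of such a fit is exactly what motivates the SVR/BPNN alternatives discussed in the record.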
A New Method to Measure Crack Extension in Nuclear Graphite Based on Digital Image Correlation
Directory of Open Access Journals (Sweden)
Shigang Lai
2017-01-01
Full Text Available Graphite components, used as moderators, reflectors, and core-support structures in a High-Temperature Gas-Cooled Reactor, play an important role in the safety of the reactor. Specifically, they provide channels for the fuel elements, control rods, and coolant flow. Fracture is the main failure mode for graphite, and breaching of the above channels by crack extension will seriously threaten the safety of a reactor. In this paper, a new method based on digital image correlation (DIC is introduced for measuring crack extension in brittle materials. Cross-correlation of the displacements measured by DIC with a step function was employed to identify the advancing crack tip in a graphite beam specimen under three-point bending. The load-crack extension curve, which is required for analyzing the R-curve and tension softening behaviors, was obtained for this material. Furthermore, a sensitivity analysis of the threshold value employed for the cross-correlation parameter in the crack identification process was conducted. Finally, the results were verified using the finite element method.
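The crack-tip identification step — cross-correlate a DIC displacement profile with a step function and take the location of maximum correlation as the discontinuity — can be sketched in one dimension. This is a simplified illustration on a synthetic displacement row, not the paper's full 2-D implementation:

```python
import numpy as np

def crack_tip_index(displacement_row):
    """Locate a displacement jump by cross-correlating with a step template.

    The split index k maximizing correlation with a [-1,...,-1,+1,...,+1]
    template marks the discontinuity (crack) in the DIC displacement row.
    """
    u = np.asarray(displacement_row, float)
    u = u - u.mean()
    n = len(u)
    best, best_k = -np.inf, None
    for k in range(1, n):
        step = np.r_[-np.ones(k), np.ones(n - k)]
        score = np.dot(u, step) / np.linalg.norm(step)
        if score > best:
            best, best_k = score, k
    return best_k

# Synthetic displacement row with a jump (crack opening) at index 30.
u = np.r_[np.zeros(30), 0.2 * np.ones(20)]
print(crack_tip_index(u))  # -> 30
```

The jump magnitude at the detected index also gives the crack opening displacement the abstract mentions.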
Prediction of shear wave velocity using empirical correlations and artificial intelligence methods
Directory of Open Access Journals (Sweden)
Shahoo Maleki
2014-06-01
Full Text Available Good understanding of mechanical properties of rock formations is essential during the development and production phases of a hydrocarbon reservoir. Conventionally, these properties are estimated from the petrophysical logs with compression and shear sonic data being the main input to the correlations. This is while in many cases the shear sonic data are not acquired during well logging, which may be for cost saving purposes. In this case, shear wave velocity is estimated using available empirical correlations or artificial intelligent methods proposed during the last few decades. In this paper, petrophysical logs corresponding to a well drilled in southern part of Iran were used to estimate the shear wave velocity using empirical correlations as well as two robust artificial intelligence methods knows as Support Vector Regression (SVR and Back-Propagation Neural Network (BPNN. Although the results obtained by SVR seem to be reliable, the estimated values are not very precise and considering the importance of shear sonic data as the input into different models, this study suggests acquiring shear sonic data during well logging. It is important to note that the benefits of having reliable shear sonic data for estimation of rock formation mechanical properties will compensate the possible additional costs for acquiring a shear log.
Yao, X. F.; Xiong, T. C.; Xu, H. M.; Wan, J. P.; Long, G. R.
2008-11-01
The residual stresses of PMMA (polymethyl methacrylate) specimens after being drilled, reamed and polished, respectively, are investigated using the digital speckle correlation experimental method. From the displacement fields around the correlated calculation region, the polynomial curve-fitting method is used to obtain continuous displacement fields, and the strain fields can be obtained from the derivatives of the displacement fields. Considering the constitutive equation of the material, an expression for the residual stress can be presented. During data processing, the calculation region of the correlated speckles and the degree of the polynomial fitting curve are decided according to the fitting quality of the data. The results show that the maximum stress is at the hole wall of the drilled specimen, and as the diameter of the drilled hole increases, the residual stress resulting from hole drilling increases, whereas the processes of reaming and polishing the hole can reduce the residual stress. The relatively large scatter of the residual stress is due to the chip-removal ability of the drill bit, the cutting feed of the drill, and various other reasons.
Cinar, A. F.; Barhli, S. M.; Hollis, D.; Flansbjer, M.; Tomlinson, R. A.; Marrow, T. J.; Mostafavi, M.
2017-09-01
Digital image correlation has been routinely used to measure full-field displacements in many areas of solid mechanics, including fracture mechanics. Accurate segmentation of the crack path is needed to study its interaction with the microstructure and stress fields, and studies of crack behaviour, such as the effect of closure or residual stress in fatigue, require data on its opening displacement. Such information can be obtained from any digital image correlation analysis of cracked components, but its collection by manual methods is quite onerous, particularly for massive amounts of data. We introduce the novel application of Phase Congruency to detect and quantify cracks and their opening. Unlike other crack detection techniques, Phase Congruency does not rely on adjustable threshold values that require user interaction, and so allows large datasets to be treated autonomously. The accuracy of the Phase Congruency based algorithm in detecting cracks is evaluated and compared with conventional methods such as Heaviside function fitting. As Phase Congruency is a displacement-based method, it does not suffer from the noise intensification to which gradient-based methods (e.g. strain thresholding) are susceptible. Its application is demonstrated on experimental data for cracks in quasi-brittle (granitic rock) and ductile (aluminium alloy) materials.
Li, Xuxu; Li, Xinyang; wang, Caixia
2018-03-01
This paper proposes an efficient approach to decrease the computational costs of correlation-based centroiding methods used for point source Shack-Hartmann wavefront sensors. Four typical similarity functions have been compared, i.e. the absolute difference function (ADF), ADF square (ADF2), square difference function (SDF), and cross-correlation function (CCF) using the Gaussian spot model. By combining them with fast search algorithms, such as three-step search (TSS), two-dimensional logarithmic search (TDL), cross search (CS), and orthogonal search (OS), computational costs can be reduced drastically without affecting the accuracy of centroid detection. Specifically, OS reduces calculation consumption by 90%. A comprehensive simulation indicates that CCF exhibits a better performance than other functions under various light-level conditions. Besides, the effectiveness of fast search algorithms has been verified.
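The similarity functions compared in the paper (ADF, SDF, CCF) and the idea of searching shifts for the best match can be sketched compactly. The example below does an exhaustive search over integer shifts of a synthetic Gaussian spot; the fast searches (TSS, TDL, CS, OS) would visit only a subset of these candidates, which is where the cost reduction comes from. The spot model and search window are invented for illustration.

```python
import numpy as np

def gaussian_spot(n, cx, cy, sigma=2.0):
    y, x = np.mgrid[:n, :n]
    return np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * sigma ** 2))

ref = gaussian_spot(16, 8.0, 8.0)                   # reference spot template

def adf(a, b):  return np.abs(a - b).sum()          # absolute difference function
def sdf(a, b):  return ((a - b) ** 2).sum()         # square difference function
def ccf(a, b):  return (a * b).sum()                # cross-correlation function

def best_shift(img, template, sim, minimize=True):
    """Exhaustive search over integer shifts in a +/-3 window; fast search
    algorithms (TSS/TDL/CS/OS) would probe only a subset of these shifts."""
    shifts = [(dx, dy) for dx in range(-3, 4) for dy in range(-3, 4)]
    key = lambda s: sim(np.roll(img, s, axis=(1, 0)), template)
    return (min if minimize else max)(shifts, key=key)

img = gaussian_spot(16, 10.0, 9.0)                  # spot displaced by (+2, +1)
print(best_shift(img, ref, sdf))                    # -> (-2, -1)
```

CCF is maximized rather than minimized at alignment (`best_shift(img, ref, ccf, minimize=False)`), matching the paper's observation that it behaves differently from the difference-based functions under varying light levels.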
Edmunds, Kyle; Gíslason, Magnús; Sigurðsson, Sigurður; Guðnason, Vilmundur; Harris, Tamara; Carraro, Ugo; Gargiulo, Paolo
2018-01-01
Sarcopenic muscular degeneration has been consistently identified as an independent risk factor for mortality in aging populations. Recent investigations have realized the quantitative potential of computed tomography (CT) image analysis to describe skeletal muscle volume and composition; however, the optimum approach to assessing these data remains debated. Current literature reports average Hounsfield unit (HU) values and/or segmented soft tissue cross-sectional areas to investigate muscle quality. However, standardized methods for CT analyses and their utility as a comorbidity index remain undefined, and no existing studies compare these methods to the assessment of entire radiodensitometric distributions. The primary aim of this study was to present a comparison of nonlinear trimodal regression analysis (NTRA) parameters of entire radiodensitometric muscle distributions against extant CT metrics and their correlation with lower extremity function (LEF) biometrics (normal/fast gait speed, timed up-and-go, and isometric leg strength) and biochemical and nutritional parameters, such as total solubilized cholesterol (SCHOL) and body mass index (BMI). Data were obtained from 3,162 subjects, aged 66-96 years, from the population-based AGES-Reykjavik Study. 1-D k-means clustering was employed to discretize each biometric and comorbidity dataset into twelve subpopulations, in accordance with Sturges' Formula for Class Selection. Dataset linear regressions were performed against eleven NTRA distribution parameters and standard CT analyses (fat/muscle cross-sectional area and average HU value). Parameters from NTRA and CT standards were analogously assembled by age and sex. Analysis of specific NTRA parameters with standard CT results showed linear correlation coefficients greater than 0.85, but multiple regression analysis of correlative NTRA parameters yielded a correlation coefficient of 0.99 (P < ...) ... biometrics, SCHOL, and BMI, and particularly highlight the value of the...
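The discretization step the record describes — Sturges' formula to pick the number of classes, then 1-D k-means to form the subpopulations — is straightforward to sketch. Synthetic gait-speed values stand in for the AGES-Reykjavik biometrics, and a plain Lloyd's-algorithm loop replaces whatever clustering library the authors used:

```python
import numpy as np
from math import log2

rng = np.random.default_rng(3)
gait_speed = rng.normal(1.0, 0.2, size=3162)        # synthetic biometric, n = 3162

# Sturges' formula: k = 1 + log2(n), truncated -> 12 classes for n = 3162.
k = int(1 + log2(len(gait_speed)))

# Simple 1-D k-means (Lloyd's algorithm) to discretize the biometric.
centers = np.quantile(gait_speed, np.linspace(0, 1, k))   # quantile initialization
for _ in range(50):
    labels = np.argmin(np.abs(gait_speed[:, None] - centers[None, :]), axis=1)
    for j in range(k):
        if np.any(labels == j):
            centers[j] = gait_speed[labels == j].mean()

print(k)  # -> 12
```

Each of the twelve resulting subpopulations would then be regressed against the NTRA distribution parameters, as in the study.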
Directory of Open Access Journals (Sweden)
Daniela Vieira Roehe
2008-04-01
Full Text Available PURPOSE: There are several methods to measure the interpupillary distance (IPD), but the one used in most offices is the millimeter ruler. The autorefractor is now routinely used as the starting point for the subjective evaluation of the refractive error, but the IPD it reports is often not compared with the traditional method of measurement. The objective of this study is to obtain IPD measurements with the two methods and compare the results. METHODS: One hundred and thirty-five patients underwent IPD assessment. Each patient was examined by the same examiner with two methods: millimeter ruler and autorefractor. RESULTS: There was good agreement between the measurements obtained by the two methods, with no statistically significant difference in means or variability. CONCLUSION: The inaccuracy inherent in the millimeter-ruler method may account for the cases with clinically significant differences; however, these proportionally few cases do not invalidate the reproducibility of the autorefractor.
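Method-agreement comparisons like this one are commonly summarized with a Bland-Altman analysis: the mean difference (bias) and 95% limits of agreement between the two instruments. A sketch on synthetic paired IPD values standing in for the 135 real measurements (all numbers invented):

```python
import numpy as np

rng = np.random.default_rng(4)
ruler = rng.normal(62.0, 3.0, size=135)            # IPD in mm, synthetic
autoref = ruler + rng.normal(0.0, 0.5, size=135)   # small random disagreement

diff = autoref - ruler
bias = diff.mean()                                  # systematic difference
loa = 1.96 * diff.std(ddof=1)                       # half-width of 95% limits of agreement

# Good agreement: the bias is small relative to the limits of agreement.
print(abs(bias) < loa)  # -> True
```

With real data, a paired t-test on `diff` would give the significance statement reported in the abstract.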
Lee, Tsung-Han
Strongly correlated materials are a class of materials that cannot be properly described by Density Functional Theory (DFT), which is a single-particle approximation to the original many-body electronic Hamiltonian. These systems contain d or f orbital electrons, i.e., transition metal, actinide, and lanthanide compounds, for which the electron-electron interaction (correlation) effects are too strong to be described by the single-particle approximation of DFT. Therefore, complementary many-body methods have been developed, at the model-Hamiltonian level, to describe these strong correlation effects. Dynamical Mean Field Theory (DMFT) and Rotationally Invariant Slave-Boson (RISB) approaches are two successful methods that can capture the correlation effects over a broad range of interaction strengths. However, these many-body methods, as applied to model Hamiltonians, treat the electronic structure of realistic materials in a phenomenological fashion, which only allows their properties to be described qualitatively. Consequently, combinations of DFT and many-body methods, e.g., the Local Density Approximation augmented by RISB and DMFT (LDA+RISB and LDA+DMFT), have been recently proposed to merge the advantages of both methods into a quantitative tool for analyzing strongly correlated systems. In this dissertation, we studied possible improvements of these approaches and tested their accuracy on realistic materials. This dissertation is separated into two parts. In the first part, we studied the extension of DMFT and RISB in three directions. First, we extended the DMFT framework to investigate the behavior of the domain-wall structure in the metal-Mott insulator coexistence regime by studying the unstable solution describing the domain wall. We found that this solution, differing qualitatively from both the metallic and the insulating solutions, displays insulating-like behavior in resistivity while carrying a weak metallic character in its electronic structure. Second, we...
Directory of Open Access Journals (Sweden)
Fredrik Nilsson
2018-03-01
Substantial progress has been achieved in the last couple of decades in computing the electronic structure of correlated materials from first principles. This progress has been driven by parallel developments in theory and numerical algorithms. Theoretical development in combining ab initio approaches and many-body methods is particularly promising. A crucial role is also played by a systematic method for deriving a low-energy model, which bridges the gap between real and model systems. In this article, an overview is given tracing the development from LDA+U to the latest progress in combining the GW method and (extended) dynamical mean-field theory (GW+EDMFT). The emphasis is on conceptual and theoretical aspects rather than technical ones.
Muravsky, Leonid I.; Kmet', Arkady B.; Stasyshyn, Ihor V.; Voronyak, Taras I.; Bobitski, Yaroslav V.
2018-06-01
A new three-step interferometric method with blind phase shifts to retrieve phase maps (PMs) of smooth and low-roughness engineering surfaces is proposed. The two unknown phase shifts are evaluated using the interframe correlation between interferograms. The method consists of two stages. The first stage records three interferograms of a test object and processes them, calculating the unknown phase shifts and retrieving a coarse PM. The second stage first separates the high-frequency and low-frequency PMs and then produces a fine PM consisting of areal surface roughness and waviness PMs. Extraction of the areal surface roughness and waviness PMs is performed with a linear low-pass filter. Computer simulations and experiments carried out to retrieve a gauge block surface area and its areal surface roughness and waviness have confirmed the reliability of the proposed three-step method.
Starosta, K.; Dewald, A.
2007-04-01
Transition rate measurements are reported for the 2_1^+ and 2_2^+ states in the N=Z nucleus ^64Ge. The measurement was done utilizing the Recoil Distance Method (RDM) and a unique combination of state-of-the-art instruments at the National Superconducting Cyclotron Laboratory (NSCL). States of interest were populated via an intermediate-energy single-neutron knockout reaction. RDM studies of knockout and fragmentation reaction products hold the promise of reaching far from stability and providing lifetime information for intermediate-spin excited states in a wide range of exotic nuclei. Large-scale shell-model calculations applying the recently developed GXPF1A interaction are in excellent agreement with the above results. Theoretical analysis suggests that ^64Ge is a collective γ-soft anharmonic vibrator.
Starosta, K; Dewald, A; Dunomes, A; Adrich, P; Amthor, A M; Baumann, T; Bazin, D; Bowen, M; Brown, B A; Chester, A; Gade, A; Galaviz, D; Glasmacher, T; Ginter, T; Hausmann, M; Horoi, M; Jolie, J; Melon, B; Miller, D; Moeller, V; Norris, R P; Pissulla, T; Portillo, M; Rother, W; Shimbara, Y; Stolz, A; Vaman, C; Voss, P; Weisshaar, D; Zelevinsky, V
2007-07-27
Transition rate measurements are reported for the 2_1^+ and 2_2^+ states in N=Z 64Ge. The experimental results are in excellent agreement with large-scale shell-model calculations applying the recently developed GXPF1A interactions. The measurement was done using the recoil distance method (RDM) and a unique combination of state-of-the-art instruments at the National Superconducting Cyclotron Laboratory (NSCL). States of interest were populated via an intermediate-energy single-neutron knockout reaction. RDM studies of knockout and fragmentation reaction products hold the promise of reaching far from stability and providing lifetime information for excited states in a wide range of nuclei.
A three-dimensional correlation method for registration of medical images in radiology
Energy Technology Data Exchange (ETDEWEB)
Georgiou, Michalakis; Sfakianakis, George N [Department of Radiology, University of Miami, Jackson Memorial Hospital, Miami, FL 33136 (United States); Nagel, Joachim H [Institute of Biomedical Engineering, University of Stuttgart, Stuttgart 70174 (Germany)
1999-12-31
The availability of methods to register multi-modality images in order to 'fuse' them and correlate their information is increasingly becoming an important requirement for various diagnostic and therapeutic procedures. A variety of image registration methods have been developed, but they remain limited to specific clinical applications. Assuming a rigid-body transformation, two images can be registered if their differences are calculated in terms of translation, rotation, and scaling. This paper describes the development and testing of a new correlation-based approach for three-dimensional image registration. First, the scaling factors introduced by the imaging devices are calculated and compensated for. Then, the two images are made translation invariant by computing their three-dimensional Fourier magnitude spectra. Subsequently, a spherical coordinate transformation is performed and the three-dimensional rotation is computed using a novel approach referred to as "Polar Shells". The method of polar shells maps the three angles of rotation into one rotation and two translations of a two-dimensional function and then proceeds to calculate them using appropriate transformations based on the Fourier invariance properties. A basic assumption in the method is that the three-dimensional rotation is constrained to one large and two relatively small angles. This assumption is generally satisfied in normal clinical settings. The new three-dimensional image registration method was tested with simulations using computer-generated phantom data as well as actual clinical data. Performance analysis and accuracy evaluation of the method using computer simulations yielded errors in the sub-pixel range. (authors)
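The translation-invariance step described above rests on a standard Fourier property: a (circular) shift changes only the phase of the spectrum, never its magnitude. A minimal NumPy sketch, shown in 2-D with illustrative arrays rather than the authors' data, makes this concrete:

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((64, 64))
shifted = np.roll(image, shift=(5, 9), axis=(0, 1))  # circular translation

# A shift multiplies the spectrum by a unit-modulus phase factor,
# so the magnitude spectra of the two images are identical.
mag_original = np.abs(np.fft.fft2(image))
mag_shifted = np.abs(np.fft.fft2(shifted))
print(np.allclose(mag_original, mag_shifted))  # True
```

The same identity holds for `fft.fftn` in three dimensions, which is what allows the rotation and translation estimation problems to be decoupled.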
A new method to detect event-related potentials based on Pearson's correlation.
Giroldini, William; Pederzoli, Luciano; Bilucaglia, Marco; Melloni, Simone; Tressoldi, Patrizio
2016-12-01
Event-related potentials (ERPs) are widely used in brain-computer interface applications and in neuroscience. Normal EEG activity is rich in background noise, and therefore, in order to detect ERPs, it is usually necessary to average over multiple trials to reduce the effects of this noise. The noise produced by EEG activity itself is not correlated with the ERP waveform, so by averaging, the noise is decreased by a factor inversely proportional to the square root of N, where N is the number of averaged epochs. This is the simplest strategy currently used to detect ERPs: it is based on averaging all the ERP waveforms, these waveforms being time- and phase-locked. In this paper, a new method called GW6 is proposed, which calculates the ERP using a mathematical method based only on Pearson's correlation. The result is a graph with the same time resolution as the classical ERP, showing only positive peaks that represent the increase, in consonance with the stimuli, of the EEG signal correlation over all channels. This new method is also useful for selectively identifying and highlighting some hidden components of the ERP response that are not phase-locked and that are usually hidden in the standard, simple method based on averaging all the epochs. These hidden components seem to be caused by variations (between each successive stimulus) of the ERP's inherent phase latency period (jitter), although the same stimulus produces a reasonably constant phase across all EEG channels. For this reason, this new method could be very helpful for investigating these hidden components of the ERP response and for developing applications for scientific and medical purposes. Moreover, this new method is more resistant to EEG artifacts than the standard average calculation and could be very useful in research and neurology. The method we are proposing can be directly used in the form of a process written in the well
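The 1/sqrt(N) noise reduction from epoch averaging can be verified numerically. A hedged sketch with synthetic data (the waveform, noise level, and epoch count are illustrative assumptions, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n_epochs, n_samples = 400, 256
erp = np.sin(2 * np.pi * np.arange(n_samples) / 64)  # phase-locked ERP component
noise_sd = 2.0
# Each epoch = the same ERP plus uncorrelated background "EEG" noise.
epochs = erp + rng.normal(0.0, noise_sd, (n_epochs, n_samples))

average = epochs.mean(axis=0)
residual_sd = (average - erp).std()
# Expected residual noise: noise_sd / sqrt(n_epochs) = 2.0 / 20 = 0.1
print(residual_sd)
```

With 400 epochs the residual noise comes out close to 0.1, i.e., the raw noise level divided by sqrt(400), which is exactly the scaling the abstract invokes.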
Directory of Open Access Journals (Sweden)
Bahman Navidshad
2012-02-01
The applications of conventional culture-dependent assays to quantify bacterial populations are limited by their dependence on the inconsistent success of the different culture steps involved. In addition, some bacteria can be pathogenic or a source of endotoxins and pose a health risk to the researchers. Bacterial quantification based on the real-time PCR method can overcome the above-mentioned problems. However, the quantification of bacteria using this approach is commonly expressed in absolute quantities even though the composition of samples (like those of digesta) can vary widely; thus, the final results may be affected if the samples are not properly homogenized, especially when multiple samples are to be pooled together before DNA extraction. The objective of this study was to determine the correlation coefficients between four different methods of expressing the output data of real-time PCR-based bacterial quantification. The four methods were: (i) the common absolute method, expressed as the cell number of specific bacteria per gram of digesta; (ii) the Livak and Schmittgen ΔΔCt method; (iii) the Pfaffl equation; and (iv) a simple relative method based on the ratio of the cell number of specific bacteria to the total bacterial cells. Because of the effect of the total bacteria population on the results obtained using ΔCt-based methods (ΔΔCt and Pfaffl), these methods lack the consistency needed to serve as valid and reliable methods in real-time PCR-based bacterial quantification studies. On the other hand, because of the variable compositions of digesta samples, a simple ratio of the cell number of specific bacteria to the corresponding total bacterial cells of the same sample can be a more accurate method to quantify the population.
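For concreteness, the Livak and Schmittgen 2^(-ΔΔCt) calculation listed as method (ii) can be sketched as follows; the Ct values below are hypothetical, chosen only to illustrate the arithmetic:

```python
def fold_change_ddct(ct_target_sample, ct_ref_sample,
                     ct_target_control, ct_ref_control):
    """Relative quantification by the Livak and Schmittgen 2^(-ddCt) method."""
    d_ct_sample = ct_target_sample - ct_ref_sample    # normalize to reference gene
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_sample - d_ct_control                # calibrate to control sample
    return 2.0 ** -dd_ct

# The target crosses threshold one cycle earlier in the sample than in the
# control (after normalization): roughly a twofold higher starting quantity.
fold = fold_change_ddct(24.0, 18.0, 25.0, 18.0)
print(fold)  # 2.0
```

Note this assumes perfect (twofold-per-cycle) amplification efficiency; the Pfaffl equation of method (iii) generalizes the same idea to measured efficiencies.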
Dai, Huanping; Micheyl, Christophe
2012-11-01
Psychophysical "reverse-correlation" methods allow researchers to gain insight into the perceptual representations and decision weighting strategies of individual subjects in perceptual tasks. Although these methods have gained momentum, until recently their development was limited to experiments involving only two response categories. Recently, two approaches for estimating decision weights in m-alternative experiments have been put forward. One approach extends the two-category correlation method to m > 2 alternatives; the second uses multinomial logistic regression (MLR). In this article, the relative merits of the two methods are discussed, and the issues of convergence and statistical efficiency of the methods are evaluated quantitatively using Monte Carlo simulations. The results indicate that, for a range of values of the number of trials, the estimated weighting patterns are closer to their asymptotic values for the correlation method than for the MLR method. Moreover, for the MLR method, weight estimates for different stimulus components can exhibit strong correlations, making the analysis and interpretation of measured weighting patterns less straightforward than for the correlation method. These and other advantages of the correlation method, which include computational simplicity and a close relationship to other well-established psychophysical reverse-correlation methods, make it an attractive tool to uncover decision strategies in m-alternative experiments.
Correlation and agreement of a digital and conventional method to measure arch parameters.
Nawi, Nes; Mohamed, Alizae Marny; Marizan Nor, Murshida; Ashar, Nor Atika
2018-01-01
The aim of the present study was to determine the overall reliability and validity of arch parameters measured digitally compared to conventional measurement. A sample of 111 plaster study models of Down syndrome (DS) patients was digitized using a blue-light three-dimensional (3D) scanner. Digital and manual measurements of defined parameters were performed using Geomagic analysis software (Geomagic Studio 2014, 3D Systems, Rock Hill, SC, USA) on digital models and with a digital calliper (Tuten, Germany) on plaster study models. Both measurements were repeated twice to validate the intraexaminer reliability based on intraclass correlation coefficients (ICCs), using the independent t test and Pearson's correlation, respectively. The Bland-Altman method of analysis was used to evaluate the agreement of the measurements between the digital and plaster models. No statistically significant differences (p > 0.05) were found between the manual and digital methods when measuring arch width, arch length, and space analysis. In addition, all parameters showed a significant correlation coefficient (r ≥ 0.972) between the digital and manual measurements. Furthermore, a positive agreement between digital and manual measurements of arch width (90-96%) and of arch length and space analysis (95-99%) was also shown using the Bland-Altman method. These results demonstrate that 3D blue-light scanning and measurement software are able to precisely produce a 3D digital model and measure arch width, arch length, and space analysis. The 3D digital model is valid for use in various clinical applications.
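The Bland-Altman analysis used above reduces to the bias (mean difference) and 95% limits of agreement of paired differences between the two methods. A sketch with made-up measurements (not the study's data):

```python
import numpy as np

def bland_altman(a, b):
    """Bias and 95% limits of agreement between two measurement methods."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired arch-width measurements (mm): digital vs. calliper.
digital = [35.1, 36.4, 33.8, 34.9, 35.6]
manual = [35.0, 36.6, 33.5, 35.1, 35.4]
bias, (lo, hi) = bland_altman(digital, manual)
print(bias, lo, hi)
```

Agreement is judged by whether the limits of agreement are narrow enough to be clinically acceptable, not by the correlation coefficient alone; that is why the study reports both.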
Energy Technology Data Exchange (ETDEWEB)
McGrath, Deirdre M., E-mail: d.mcgrath@sheffield.ac.uk; Lee, Jenny; Foltz, Warren D. [Radiation Medicine Program, Princess Margaret Hospital, University Health Network, Toronto, Ontario M5G 2M9 (Canada); Samavati, Navid [Institute of Biomaterials and Biomedical Engineering, University of Toronto, Toronto, Ontario M5S 3G9 (Canada); Jewett, Michael A. S. [Departments of Surgery (Urology) and Surgical Oncology, Princess Margaret Cancer Centre, University Health Network and University of Toronto, Toronto, Ontario M5G 2M9 (Canada); Kwast, Theo van der [Pathology Department, University Health Network, Toronto, Ontario M5G 2C4 (Canada); Chung, Peter [Radiation Medicine Program, Princess Margaret Hospital, University Health Network and the University of Toronto, Toronto, Ontario M5G 2M9 (Canada); Ménard, Cynthia [Radiation Medicine Program, Princess Margaret Hospital, University Health Network and the University of Toronto, Toronto, Ontario M5G 2M9, Canada and Centre Hospitalier de l’Université de Montréal, 1058 Rue Saint-Denis, Montréal, Québec H2X 3J4 (Canada); Brock, Kristy K. [Department of Radiation Oncology, University of Michigan, Ann Arbor, Michigan 48108 (United States)
2016-03-15
Purpose: Validation of MRI-guided tumor boundary delineation for targeted prostate cancer therapy is achieved via correlation with gold-standard histopathology of radical prostatectomy specimens. Challenges to accurate correlation include matching the pathology sectioning plane with the in vivo imaging slice plane and correction for the deformation that occurs between in vivo imaging and histology. A methodology is presented for matching of the histological sectioning angle and position to the in vivo imaging slices. Methods: Patients (n = 4) with biochemical failure following external beam radiotherapy underwent diagnostic MRI to confirm localized recurrence of prostate cancer, followed by salvage radical prostatectomy. High-resolution 3-D MRI of the ex vivo specimens was acquired to determine the pathology sectioning angle that best matched the in vivo imaging slice plane, using matching anatomical features and implanted fiducials. A novel sectioning device was developed to guide sectioning at the correct angle, and to assist the insertion of reference dye marks to aid in histopathology reconstruction. Results: The percentage difference in the positioning of the urethra in the ex vivo pathology sections compared to the positioning in in vivo images was reduced from 34% to 7% through slicing at the best match angle. Reference dye marks were generated, which were visible in ex vivo imaging, in the tissue sections before and after processing, and in histology sections. Conclusions: The method achieved an almost fivefold reduction in the slice-matching error and is readily implementable in combination with standard MRI technology. The technique will be employed to generate datasets for correlation of whole-specimen prostate histopathology with in vivo diagnostic MRI using 3-D deformable registration, allowing assessment of the sensitivity and specificity of MRI parameters for prostate cancer. Although developed specifically for prostate, the method is readily
Finite-Temperature Variational Monte Carlo Method for Strongly Correlated Electron Systems
Takai, Kensaku; Ido, Kota; Misawa, Takahiro; Yamaji, Youhei; Imada, Masatoshi
2016-03-01
A new computational method for finite-temperature properties of strongly correlated electrons is proposed by extending the variational Monte Carlo method originally developed for the ground state. The method is based on the path integral in the imaginary-time formulation, starting from the infinite-temperature state that is well approximated by a small number of certain random initial states. Lower temperatures are progressively reached by the imaginary-time evolution. The algorithm follows the framework of the quantum transfer matrix and finite-temperature Lanczos methods, but we extend them to treat much larger system sizes without the negative sign problem by optimizing the truncated Hilbert space on the basis of the time-dependent variational principle (TDVP). This optimization algorithm is equivalent to the stochastic reconfiguration (SR) method that has been frequently used for the ground state to optimally truncate the Hilbert space. The obtained finite-temperature states allow an interpretation based on the thermal pure quantum (TPQ) state instead of the conventional canonical-ensemble average. Our method is tested for the one- and two-dimensional Hubbard models and its accuracy and efficiency are demonstrated.
Energy Technology Data Exchange (ETDEWEB)
McQuinn, Kristen B. W. [University of Texas at Austin, McDonald Observatory, 2515 Speedway, Stop C1400 Austin, TX 78712 (United States); Skillman, Evan D. [Minnesota Institute for Astrophysics, School of Physics and Astronomy, 116 Church Street, SE, University of Minnesota, Minneapolis, MN 55455 (United States); Dolphin, Andrew E. [Raytheon Company, 1151 E. Hermans Road, Tucson, AZ 85756 (United States); Berg, Danielle [Center for Gravitation, Cosmology and Astrophysics, Department of Physics, University of Wisconsin Milwaukee, 1900 East Kenwood Boulevard, Milwaukee, WI 53211 (United States); Kennicutt, Robert, E-mail: kmcquinn@astro.as.utexas.edu [Institute for Astronomy, University of Cambridge, Madingley Road, Cambridge CB3 0HA (United Kingdom)
2016-11-01
M104 (NGC 4594; the Sombrero galaxy) is a nearby, well-studied elliptical galaxy included in scores of surveys focused on understanding the details of galaxy evolution. Despite the importance of observations of M104, a consensus distance has not yet been established. Here, we use newly obtained Hubble Space Telescope optical imaging to measure the distance to M104 based on the tip of the red giant branch (TRGB) method. Our measurement yields a distance to M104 of 9.55 ± 0.13 ± 0.31 Mpc, equivalent to a distance modulus of 29.90 ± 0.03 ± 0.07 mag. Our distance is an improvement over previous results, as we use a well-calibrated, stable distance indicator, precision photometry in an optimally selected field of view, and a Bayesian maximum likelihood technique that reduces measurement uncertainties. The most discrepant previous results are due to Tully-Fisher method distances, which are likely inappropriate for M104 given its peculiar morphology and structure. Our results are part of a larger program to measure accurate distances to a sample of well-known spiral galaxies (including M51, M74, and M63) using the TRGB method.
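The quoted equivalence between the distance and the distance modulus follows directly from the standard definition mu = 5 log10(d / 10 pc); this is textbook astronomy, not the authors' pipeline:

```python
import math

def distance_modulus(d_mpc):
    """Distance modulus mu = 5 * log10(d / 10 pc) for a distance given in Mpc."""
    d_pc = d_mpc * 1.0e6
    return 5.0 * math.log10(d_pc / 10.0)

# 9.55 Mpc corresponds to the quoted modulus of 29.90 mag.
mu = distance_modulus(9.55)
print(round(mu, 2))  # 29.9
```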
Directory of Open Access Journals (Sweden)
Shuai Wang
Individual genes or regions are still commonly used to estimate the phylogenetic relationships among viral isolates. Genomic regions that can faithfully provide assessments consistent with those predicted from full-length genome sequences would be preferable candidates for phylogenetic markers in molecular epidemiological studies of many viruses. Here we employed a statistical method to evaluate the evolutionary relationships between individual viral genes and full-length genomes without tree construction, as a way to determine which gene can match the genome well in phylogenetic analyses. The method is based on calculating linear correlations between the genetic distance matrices of aligned individual gene sequences and aligned genome sequences. We applied this method to the phylogenetic analyses of porcine circovirus 2 (PCV2), measles virus (MV), hepatitis E virus (HEV), and Japanese encephalitis virus (JEV). Phylogenetic trees were constructed for comparison, and the possible factors affecting the method's accuracy were also discussed. The results revealed that this method could produce results consistent with those of previous studies about the proper consensus sequences that can be successfully used as phylogenetic markers. Our results also suggest that these evolutionary correlations could provide useful information for identifying genes that can be used effectively to infer genetic relationships.
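The core computation described, a linear correlation between two aligned genetic-distance matrices with no tree building, can be sketched as follows. The matrices here are toy values, and significance testing (e.g., a Mantel-style permutation test) is omitted:

```python
import numpy as np

def distance_matrix_correlation(d_gene, d_genome):
    """Pearson correlation between the upper triangles of two distance matrices."""
    iu = np.triu_indices_from(d_gene, k=1)  # each unordered pair counted once
    return np.corrcoef(d_gene[iu], d_genome[iu])[0, 1]

# Toy pairwise-distance matrices for four isolates (symmetric, zero diagonal).
d_genome = np.array([[0.0, 0.1, 0.4, 0.5],
                     [0.1, 0.0, 0.3, 0.4],
                     [0.4, 0.3, 0.0, 0.2],
                     [0.5, 0.4, 0.2, 0.0]])
d_gene = 2.0 * d_genome  # a gene evolving twice as fast, same relative pattern
r = distance_matrix_correlation(d_gene, d_genome)
print(r)  # ≈ 1.0
```

A gene whose distance matrix correlates strongly with the genome-wide matrix preserves the same relative relationships among isolates, which is the property that makes it a good phylogenetic marker.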
Using Response Surface Methods to Correlate the Modal Test of an Inflatable Test Article
Gupta, Anju
2013-01-01
This paper presents a practical application of response surface methods (RSM) to correlate a finite element model of a structural modal test. The test article is a quasi-cylindrical inflatable structure which primarily consists of a fabric weave, with an internal bladder and metallic bulkheads on either end. To mitigate model size, the fabric weave was simplified by representing it with shell elements. The task at hand is to represent the material behavior of the weave. The success of the model correlation is measured by comparing the four major modal frequencies of the analysis model to the four major modal frequencies of the test article. Given that only individual strap material properties were provided and material properties of the overall weave were not available, defining the material properties of the finite element model became very complex. First it was necessary to determine which material properties (modulus of elasticity in the hoop and longitudinal directions, shear modulus, Poisson's ratio, etc.) affected the modal frequencies. Then a Latin Hypercube of the parameter space was created to form an efficiently distributed finite case set. Each case was then analyzed with the results input into RSM. In the resulting response surface it was possible to see how each material parameter affected the modal frequencies of the analysis model. If the modal frequencies of the analysis model and its corresponding parameters match the test with acceptable accuracy, it can be said that the model correlation is successful.
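The Latin hypercube step can be sketched in a few lines: each of d parameters is divided into n equal-probability strata, and every stratum is sampled exactly once per parameter. This is a generic sketch on the unit cube, not the study's sampling code:

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, rng):
    """Latin hypercube sample on [0, 1)^d: one point per stratum per dimension."""
    u = rng.random((n_samples, n_dims))  # jitter within each stratum
    perms = np.column_stack(
        [rng.permutation(n_samples) for _ in range(n_dims)])  # stratum order
    return (perms + u) / n_samples

samples = latin_hypercube(10, 4, np.random.default_rng(3))
# Every column hits each of the 10 strata exactly once.
print(sorted((samples[:, 0] * 10).astype(int).tolist()))  # [0, 1, ..., 9]
```

The resulting points would then be mapped to the physical ranges of the material parameters (moduli, shear modulus, Poisson's ratio), one finite element case per row.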
3D spatially-adaptive canonical correlation analysis: Local and global methods.
Yang, Zhengshi; Zhuang, Xiaowei; Sreenivasan, Karthik; Mishra, Virendra; Curran, Tim; Byrd, Richard; Nandy, Rajesh; Cordes, Dietmar
2018-04-01
Local spatially-adaptive canonical correlation analysis (local CCA) with spatial constraints has been introduced to fMRI multivariate analysis for improved modeling of activation patterns. However, current algorithms require complicated spatial constraints that have only been applied to 2D local neighborhoods because the computational time would be exponentially increased if the same method is applied to 3D spatial neighborhoods. In this study, an efficient and accurate line search sequential quadratic programming (SQP) algorithm has been developed to efficiently solve the 3D local CCA problem with spatial constraints. In addition, a spatially-adaptive kernel CCA (KCCA) method is proposed to increase accuracy of fMRI activation maps. With oriented 3D spatial filters anisotropic shapes can be estimated during the KCCA analysis of fMRI time courses. These filters are orientation-adaptive leading to rotational invariance to better match arbitrary oriented fMRI activation patterns, resulting in improved sensitivity of activation detection while significantly reducing spatial blurring artifacts. The kernel method in its basic form does not require any spatial constraints and analyzes the whole-brain fMRI time series to construct an activation map. Finally, we have developed a penalized kernel CCA model that involves spatial low-pass filter constraints to increase the specificity of the method. The kernel CCA methods are compared with the standard univariate method and with two different local CCA methods that were solved by the SQP algorithm. Results show that SQP is the most efficient algorithm to solve the local constrained CCA problem, and the proposed kernel CCA methods outperformed univariate and local CCA methods in detecting activations for both simulated and real fMRI episodic memory data.
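At its core, CCA finds maximally correlated linear combinations of two variable blocks; numerically, the canonical correlations are the singular values of Qx^T Qy, where Qx and Qy are orthonormal bases (QR factors) of the centered blocks. The sketch below is generic, unconstrained CCA, not the paper's spatially constrained 3D algorithm:

```python
import numpy as np

def canonical_correlations(X, Y):
    """Canonical correlations between two data blocks (rows = observations)."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    qx, _ = np.linalg.qr(X)
    qy, _ = np.linalg.qr(Y)
    # Singular values are cosines of the principal angles between subspaces.
    return np.linalg.svd(qx.T @ qy, compute_uv=False)

rng = np.random.default_rng(4)
X = rng.normal(size=(200, 3))                       # e.g., design block
Y = X @ rng.normal(size=(3, 2)) + 0.1 * rng.normal(size=(200, 2))  # driven by X
print(np.round(canonical_correlations(X, Y), 3))    # first value near 1
```

In the fMRI setting, one block would hold the modeled time courses and the other the time series of a voxel neighborhood; the spatial constraints of the paper restrict the admissible combining weights for the neighborhood block.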
Empirical source strength correlations for RANS-based acoustic analogy methods
Kube-McDowell, Matthew Tyndall
JeNo is a jet noise prediction code based on an acoustic analogy method developed by Mani, Gliebe, Balsa, and Khavaran. Using the flow predictions from a standard Reynolds-averaged Navier-Stokes computational fluid dynamics solver, JeNo predicts the overall sound pressure level and angular spectra for high-speed hot jets over a range of observer angles, with a processing time suitable for rapid design purposes. JeNo models the noise from hot jets as a combination of two types of noise sources; quadrupole sources dependent on velocity fluctuations, which represent the major noise of turbulent mixing, and dipole sources dependent on enthalpy fluctuations, which represent the effects of thermal variation. These two sources are modeled by JeNo as propagating independently into the far-field, with no cross-correlation at the observer location. However, high-fidelity computational fluid dynamics solutions demonstrate that this assumption is false. In this thesis, the theory, assumptions, and limitations of the JeNo code are briefly discussed, and a modification to the acoustic analogy method is proposed in which the cross-correlation of the two primary noise sources is allowed to vary with the speed of the jet and the observer location. As a proof-of-concept implementation, an empirical correlation correction function is derived from comparisons between JeNo's noise predictions and a set of experimental measurements taken for the Air Force Aero-Propulsion Laboratory. The empirical correlation correction is then applied to JeNo's predictions of a separate data set of hot jets tested at NASA's Glenn Research Center. Metrics are derived to measure the qualitative and quantitative performance of JeNo's acoustic predictions, and the empirical correction is shown to provide a quantitative improvement in the noise prediction at low observer angles with no freestream flow, and a qualitative improvement in the presence of freestream flow. However, the results also demonstrate
Zlokazov, E. Yu.; Starikov, R. S.; Odinokov, S. B.; Tsyganov, I. K.; Talalaev, V. E.; Koluchkin, V. V.
Automatic inspection of security hologram (SH) identity is in high demand, owing to the wide worldwide distribution of SHs to protect documents such as passports, driving licenses, banknotes, etc. While most known approaches inspect SH design features, none of them inspects the features of the surface relief, which is a direct imprint of the original master matrix used to produce these holograms. In our previous works we presented a device developed to provide SH identification by processing the coherent responses of its surface elements. Most of the algorithms used in this device are based on correlation pattern recognition methods. The main subject of the present article is a description of the specifics of applying these methods.
Energy Technology Data Exchange (ETDEWEB)
Cruz, Maria das Gracas de Almeida; Penas, Maria Exposito; Fonseca, Lea Mirian Barbosa [Universidade Federal, Rio de Janeiro, RJ (Brazil). Faculdade de Medicina. Dept. de Radiologia-Medicina Nuclear; Lemme, Eponina Maria O. [Universidade Federal, Rio de Janeiro, RJ (Brazil). Faculdade de Medicina. Dept. de Clinia Medica-Gastroenterologia; Martinho, Maria Jose Ribeiro [Hospital Universitario Clementino Fraga Filho, Rio de Janeiro, RJ (Brazil). Servico de Medicina Nuclear
1999-02-01
A group of 97 individuals with typical symptoms of gastroesophageal reflux disease (GERD) was submitted to gastroesophageal reflux scintigraphy (GERS), and the results were compared to those obtained from endoscopy, histopathology, and 24-hour pHmetry. Twenty-four healthy individuals were used as a control group and underwent only the GERS. The results obtained showed that: (a) the difference in the reflux index (RI) between the control group and the sick individuals was statistically significant (p < 0.0001); (b) the comparison of GERS against the other methods showed the following results: sensitivity, 84%; specificity, 95%; positive predictive value, 98%; negative predictive value, 67%; accuracy, 87%. We conclude that the scintigraphic method should be used to confirm the diagnosis of GERD and is also recommended as an initial investigative procedure. (author)
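The reported performance measures all derive from a 2x2 table against the reference standard. A generic sketch (the counts below are hypothetical, chosen only to mirror the quoted sensitivity and specificity, not reconstructed from the study):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard diagnostic accuracy measures from a 2x2 confusion table."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# Hypothetical counts: 100 reflux-positive and 100 reflux-negative patients.
m = diagnostic_metrics(tp=84, fp=5, fn=16, tn=95)
print(m["sensitivity"], m["specificity"])  # 0.84 0.95
```

Note that, unlike sensitivity and specificity, the predictive values depend on the prevalence of disease in the studied group, which is why they differ between study populations.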
Kogut, Janusz P.; Tekieli, Marcin
2018-04-01
Non-contact video measurement methods are used to extend the capabilities of standard measurement systems based on strain gauges or accelerometers. In most cases, they are able to provide more accurate information about the material or construction being tested than traditional sensors, while maintaining high resolution and measurement stability. With optical methods, it is possible to obtain a full displacement field on the surface of the test sample. Displacement is the basic (primary) quantity determined with optical methods, and from it derived quantities such as the sample deformation can be computed. This paper presents the application of a non-contact optical method to investigate the deformation of coarse soil material. For this type of soil, it is particularly difficult to obtain basic strength parameters. The use of a non-contact optical method, followed by a digital image correlation (DIC) study of the sample images obtained during the tests, effectively completes the description of the behaviour of this type of material.
Explicit hydration of ammonium ion by correlated methods employing molecular tailoring approach
Singh, Gurmeet; Verma, Rahul; Wagle, Swapnil; Gadre, Shridhar R.
2017-11-01
Explicit hydration studies of ions require accurate estimation of interaction energies. This work explores the explicit hydration of the ammonium ion (NH4+) employing Møller-Plesset second order (MP2) perturbation theory, an accurate yet relatively less expensive correlated method. Several initial geometries of NH4+(H2O)n (n = 4 to 13) clusters are subjected to MP2 level geometry optimisation with the correlation-consistent aug-cc-pVDZ (aVDZ) basis set. For large clusters (viz. n > 8), the molecular tailoring approach (MTA) is used for single point energy evaluation at the MP2/aVTZ level for the estimation of MP2 level binding energies (BEs) at the complete basis set (CBS) limit. The minimal nature of the clusters for n ≤ 8 is confirmed by performing vibrational frequency calculations at the MP2/aVDZ level of theory, whereas for larger clusters (9 ≤ n ≤ 13) such calculations are effected via the grafted MTA (GMTA) method. Zero point energy (ZPE) corrections are applied to all the isomers lying within 1 kcal/mol of the lowest energy one. The resulting frequencies in the N-H stretching region (2900-3500 cm-1) and in the O-H stretching region (3300-3900 cm-1) are found to be in excellent agreement with the available experimental findings for 4 ≤ n ≤ 13. Furthermore, GMTA is also applied for calculating the BEs of these clusters at the coupled cluster singles and doubles with perturbative triples (CCSD(T)) level of theory with the aVDZ basis set. This work thus represents the art of the possible on contemporary multi-core computers for studying explicit molecular hydration at correlated levels of theory.
Yang, Yang; DeGruttola, Victor
2012-06-22
Traditional resampling-based tests for homogeneity in covariance matrices across multiple groups resample residuals, that is, data centered by group means. These residuals do not share the same second moments when the null hypothesis is false, which makes them difficult to use in the setting of multiple testing. An alternative approach is to resample standardized residuals, data centered by group sample means and standardized by group sample covariance matrices. This approach, however, has been observed to inflate type I error when sample size is small or data are generated from heavy-tailed distributions. We propose to improve this approach by using robust estimation for the first and second moments. We discuss two statistics: the Bartlett statistic and a statistic based on eigen-decomposition of sample covariance matrices. Both statistics can be expressed in terms of standardized errors under the null hypothesis. These methods are extended to test homogeneity in correlation matrices. Using simulation studies, we demonstrate that the robust resampling approach provides comparable or superior performance, relative to traditional approaches, for single testing and reasonable performance for multiple testing. The proposed methods are applied to data collected in an HIV vaccine trial to investigate possible determinants, including vaccine status, vaccine-induced immune response level and viral genotype, of unusual correlation pattern between HIV viral load and CD4 count in newly infected patients.
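The resampling scheme described above can be sketched as follows. This is a simplified illustration, not the authors' implementation: it uses a summed Frobenius-distance statistic as a stand-in for the Bartlett statistic, and plain (non-robust) whitening by the group sample covariance:

```python
import numpy as np

def standardized_residuals(x):
    """Center by the group sample mean and whiten by the group sample covariance."""
    xc = x - x.mean(axis=0)
    w, v = np.linalg.eigh(np.cov(xc, rowvar=False))
    return xc @ (v @ np.diag(w ** -0.5) @ v.T)  # inverse square root of covariance

def covariance_homogeneity_test(groups, n_boot=200, seed=0):
    """Bootstrap test of equal covariance matrices across groups.

    Statistic: summed squared Frobenius distance of each group covariance from
    the covariance of the pooled data. Resampling standardized residuals makes
    the null hypothesis hold by construction, as discussed in the abstract."""
    rng = np.random.default_rng(seed)
    def stat(gs):
        pooled = np.cov(np.vstack(gs), rowvar=False)
        return sum(np.linalg.norm(np.cov(g, rowvar=False) - pooled) ** 2 for g in gs)
    observed = stat(groups)
    resid = [standardized_residuals(g) for g in groups]
    boot = [stat([r[rng.integers(0, len(r), len(r))] for r in resid])
            for _ in range(n_boot)]
    return observed, float(np.mean([b >= observed for b in boot]))
```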
Energy Technology Data Exchange (ETDEWEB)
McQuinn, Kristen B. W. [University of Texas at Austin, McDonald Observatory, 2515 Speedway, Stop C1400 Austin, TX 78712 (United States); Skillman, Evan D. [Minnesota Institute for Astrophysics, School of Physics and Astronomy, 116 Church Street, S.E., University of Minnesota, Minneapolis, MN 55455 (United States); Dolphin, Andrew E. [Raytheon Company, 1151 E. Hermans Road, Tucson, AZ 85756 (United States); Berg, Danielle [Center for Gravitation, Cosmology and Astrophysics, Department of Physics, University of Wisconsin Milwaukee, 1900 East Kenwood Boulevard, Milwaukee, WI 53211 (United States); Kennicutt, Robert, E-mail: kmcquinn@astro.as.utexas.edu [Institute for Astronomy, University of Cambridge, Madingley Road, Cambridge CB3 0HA (United Kingdom)
2016-07-20
Great investments of observing time have been dedicated to the study of nearby spiral galaxies with diverse goals ranging from understanding the star formation process to characterizing their dark matter distributions. Accurate distances are fundamental to interpreting observations of these galaxies, yet many of the best studied nearby galaxies have distances based on methods with relatively large uncertainties. We have started a program to derive accurate distances to these galaxies. Here we measure the distance to M51—the Whirlpool galaxy—from newly obtained Hubble Space Telescope optical imaging using the tip of the red giant branch method. We measure the distance to be 8.58 ± 0.10 Mpc (statistical), corresponding to a distance modulus of 29.67 ± 0.02 mag. Our distance is an improvement over previous results as we use a well-calibrated, stable distance indicator, precision photometry in an optimally selected field of view, and a Bayesian Maximum Likelihood technique that reduces measurement uncertainties.
McQuinn, Kristen. B. W.; Skillman, Evan D.; Dolphin, Andrew E.; Berg, Danielle; Kennicutt, Robert
2016-07-01
Great investments of observing time have been dedicated to the study of nearby spiral galaxies with diverse goals ranging from understanding the star formation process to characterizing their dark matter distributions. Accurate distances are fundamental to interpreting observations of these galaxies, yet many of the best studied nearby galaxies have distances based on methods with relatively large uncertainties. We have started a program to derive accurate distances to these galaxies. Here we measure the distance to M51—the Whirlpool galaxy—from newly obtained Hubble Space Telescope optical imaging using the tip of the red giant branch method. We measure the distance to be 8.58 ± 0.10 Mpc (statistical), corresponding to a distance modulus of 29.67 ± 0.02 mag. Our distance is an improvement over previous results as we use a well-calibrated, stable distance indicator, precision photometry in an optimally selected field of view, and a Bayesian Maximum Likelihood technique that reduces measurement uncertainties. Based on observations made with the NASA/ESA Hubble Space Telescope, obtained from the Data Archive at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555.
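The quoted distance and distance modulus are related by mu = 5 log10(d / 10 pc); a quick consistency check in Python:

```python
import math

def distance_modulus(d_mpc):
    """Distance modulus mu = 5 * log10(d / 10 pc) for a distance d given in Mpc."""
    return 5.0 * math.log10(d_mpc * 1.0e6 / 10.0)

mu = distance_modulus(8.58)   # ≈ 29.67 mag, consistent with the quoted values
```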
Analytic processing of distance.
Dopkins, Stephen; Galyer, Darin
2018-01-01
How does a human observer extract from the distance between two frontal points the component corresponding to an axis of a rectangular reference frame? To find out, we had participants classify pairs of small circles, varying on the horizontal and vertical axes of a computer screen, in terms of the horizontal distance between them. A response signal controlled response time. The error rate depended on the irrelevant vertical as well as the relevant horizontal distance between the test circles, with the relevant distance effect being larger than the irrelevant distance effect. The results implied that the horizontal distance between the test circles was imperfectly extracted from the overall distance between them. The results supported an account, derived from the Exemplar-Based Random Walk model (Nosofsky & Palmeri, 1997), under which distance classification is based on the overall distance between the test circles, with relevant distance being extracted from overall distance to the extent that the relevant and irrelevant axes are differentially weighted so as to reduce the contribution of irrelevant distance to overall distance. The results did not support an account, derived from the General Recognition Theory (Ashby & Maddox, 1994), under which distance classification is based on the relevant distance between the test circles, with the irrelevant distance effect arising because a test circle's perceived location on the relevant axis depends on its location on the irrelevant axis, and with relevant distance being extracted from overall distance to the extent that this dependency is absent. Copyright © 2017 Elsevier B.V. All rights reserved.
Directory of Open Access Journals (Sweden)
Long Jiao
2015-05-01
The quantitative structure-property relationship (QSPR) for the boiling point (Tb) of polychlorinated dibenzo-p-dioxins and polychlorinated dibenzofurans (PCDD/Fs) was investigated. The molecular distance-edge vector (MDEV) index was used as the structural descriptor. The quantitative relationship between the MDEV index and Tb was modeled by using multivariate linear regression (MLR) and an artificial neural network (ANN), respectively. Leave-one-out cross validation and external validation were carried out to assess the prediction performance of the models developed. For the MLR method, the prediction root mean square relative error (RMSRE) of leave-one-out cross validation and external validation was 1.77 and 1.23, respectively. For the ANN method, the prediction RMSRE of leave-one-out cross validation and external validation was 1.65 and 1.16, respectively. A quantitative relationship between the MDEV index and Tb of PCDD/Fs was demonstrated. Both MLR and ANN are practicable for modeling this relationship. The MLR model and ANN model developed can be used to predict the Tb of PCDD/Fs. Thus, the Tb of each PCDD/F was predicted by the developed models.
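Leave-one-out cross-validated RMSRE, as reported above, can be sketched as follows. This is a univariate illustration with a single descriptor standing in for the MDEV index; it is not the authors' code:

```python
import numpy as np

def loo_rmsre(x, y):
    """Leave-one-out cross-validated root-mean-square relative error (in %)
    of a univariate linear model y ~ a*x + b."""
    n = len(y)
    sq_rel_err = []
    for i in range(n):
        mask = np.arange(n) != i
        # fit on all samples except the i-th, then predict the held-out one
        coef, *_ = np.linalg.lstsq(
            np.column_stack([x[mask], np.ones(n - 1)]), y[mask], rcond=None)
        pred = coef[0] * x[i] + coef[1]
        sq_rel_err.append(((pred - y[i]) / y[i]) ** 2)
    return 100.0 * float(np.sqrt(np.mean(sq_rel_err)))
```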
A novel acoustic method for gas flow measurement using correlation techniques
Energy Technology Data Exchange (ETDEWEB)
Knuuttila, M. [VTT Chemical Technology, Espoo (Finland). Industrial Physics
1997-12-31
The study demonstrates a new kind of acoustic method for gas flow measurement. The method uses upstream- and downstream-propagating low frequency plane waves and correlation techniques for volume flow rate determination. The theory of low frequency plane waves propagating in the pipe is introduced and is proved empirically to be applicable to flow measurement. The flow-profile dependence of the method is verified and found to be negligible, at least in the region of moderate perturbations. The physical principles of the method were applied in practice in the form of a flowmeter with new design concepts. The developed prototype meters were verified against the reference standard of NMI (Nederlands Meetinstituut), which showed that a wide dynamic range of 1:80 is achievable with total expanded uncertainty below 0.3%. The linearity, weighted mean error and stability requirements used for turbine meters were also shown to be well fulfilled. A brief comparison with other flowmeter types shows the new flowmeter to be competitive. The advantages it offers are a small pressure drop over the meter, no blockage of flow in case of malfunction, no pulsation of the flow, essentially no moving parts, and the possibility of bidirectional measurement. The introduced flowmeter is also capable of using the telephone network or a radio-modem to report the gas consumption and its operation to the user. (orig.) 51 refs.
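The core of such correlation techniques is transit-time estimation from the peak of a cross-correlation. A minimal sketch with synthetic signals, not the flowmeter's actual signal processing:

```python
import numpy as np

def delay_by_correlation(sig_a, sig_b, fs):
    """Estimate the delay of sig_b relative to sig_a (in seconds) from the
    location of the peak of their cross-correlation."""
    corr = np.correlate(sig_b, sig_a, mode="full")
    lag = int(np.argmax(corr)) - (len(sig_a) - 1)
    return lag / fs

# Synthetic check: a noise signal and a copy delayed by 25 samples
rng = np.random.default_rng(0)
fs = 1000.0
a = rng.normal(size=500)
b = np.roll(a, 25)                     # circularly shifted stand-in for a delayed copy
tau = delay_by_correlation(a, b, fs)   # expected ≈ 25 / fs = 0.025 s
```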
International Nuclear Information System (INIS)
Oszwaldowski, R; Vazquez, H; Pou, P; Ortega, J; Perez, R; Flores, F
2003-01-01
A new DF-LCAO (density functional with linear combination of atomic orbitals) method is used to calculate the electronic properties of 3,4,9,10-perylenetetracarboxylic dianhydride (PTCDA), C6H6, CH4, and CO. The method, called the OO (orbital occupancy) method, is a DF-based theory which uses the OOs instead of ρ(r) to calculate the exchange and correlation energies. In our calculations, we compare the OO method with the conventional local density approximation approach. Our results show that, using a minimal basis set, we obtain equilibrium bond lengths and binding energies for PTCDA, C6H6, and CH4 which are respectively within 6% and 10-15% of the experimental values. We have also calculated the affinity and ionization levels, as well as the optical gap, for benzene and PTCDA, and have found that a variant of Koopmans' theorem works well for these molecules. Using this theorem we calculate the Koopmans relaxation energies of the σ- and π-orbitals for PTCDA and have obtained this molecule's density of states, which compares well with experimental evidence.
In-die photomask registration and overlay metrology with PROVE using 2D correlation methods
Seidel, D.; Arnz, M.; Beyer, D.
2011-11-01
According to the ITRS roadmap, the semiconductor industry is driving 193 nm lithography to its limits, using techniques like double exposure, double patterning, mask-source optimization and inverse lithography. For photomask metrology this translates to full in-die measurement capability for registration and critical dimension, together with challenging specifications for repeatability and accuracy. In particular, overlay becomes more and more critical and must be ensured on every die. For this, Carl Zeiss SMS has developed the next generation photomask registration and overlay metrology tool PROVE®, which serves the 32 nm node and below and which is already well established in the market. PROVE® features highly stable hardware components for the stage and environmental control. To ensure in-die measurement capability, sophisticated image analysis methods based on 2D correlations have been developed. In this paper we demonstrate the in-die capability of PROVE® and present corresponding results for short-term and long-term measurements, as well as the attainable accuracy for feature sizes down to 85 nm using different illumination modes and mask types. Standard measurement methods based on threshold criteria are compared with the new 2D correlation methods to demonstrate the performance gain of the latter. In addition, mask-to-mask overlay results of typical box-in-frame structures down to 200 nm feature size are presented. It is shown that from overlay measurements a reproducibility budget can be derived that takes into account stage, image analysis and global effects like mask loading and environmental control. The parts of the budget are quantified from measurement results to identify critical error contributions and to focus on the corresponding improvement strategies.
Correlation methods in optical metrology with state-of-the-art x-ray mirrors
Yashchuk, Valeriy V.; Centers, Gary; Gevorkyan, Gevork S.; Lacey, Ian; Smith, Brian V.
2018-01-01
The development of fully coherent free electron lasers and diffraction-limited storage ring x-ray sources has brought into focus the need for higher performing x-ray optics with unprecedented tolerances for surface slope and height errors and roughness. For example, the proposed beamlines for the future upgraded Advanced Light Source, ALS-U, require optical elements with extremely tight residual slope error tolerances for optics with a length of up to one meter. However, the current performance of x-ray optical fabrication and metrology generally falls short of these requirements. The major limitation comes from the lack of reliable and efficient surface metrology with the required accuracy and a reasonably high measurement rate, suitable for integration into modern deterministic surface figuring processes. The major problems of current surface metrology relate to inherent instrumental temporal drifts, systematic errors, and/or an unacceptably high cost, as in the case of interferometry with computer-generated holograms as a reference. In this paper, we discuss experimental methods and approaches, based on correlation analysis, for the acquisition and processing of metrology data, developed at the ALS X-Ray Optical Laboratory (XROL). Using an example of surface topography measurements of a state-of-the-art x-ray mirror performed at the XROL, we demonstrate the efficiency of combining the developed experimental correlation methods with the advanced optimal scanning strategy (AOSS) technique. This allows a significant improvement in the accuracy and capacity of the measurements via suppression of the instrumental low frequency noise, temporal drift, and systematic error in a single measurement run. Practically speaking, implementation of the AOSS technique increases the measurement accuracy, as well as the capacity of ex situ metrology, by a factor of about four. The developed method is general and applicable to a broad spectrum of high accuracy measurements.
Power Distance and Verbal Index in Kazakh Business Discourse
Directory of Open Access Journals (Sweden)
Buadat Karibayeva
2017-01-01
Kazakh business discourse is a relatively new area for research, and hence many of its cultural preferences are yet to be explored. This paper focuses on measuring Hofstede’s power distance index for Kazakh culture. A novel technique is proposed, in which a verbal index is calculated from analysis of publicly available texts delivered by representatives of different cultures. In particular, we analyzed public speeches delivered by leaders of New Zealand, the UK, Germany, Australia, the USA, Greece, China, India, and Kazakhstan. From these texts we derived a verbal index, which closely correlated with Hofstede’s power distance data. As a result, we were able to obtain a power distance index of 58 for Kazakhstan, which was previously unavailable in the literature. Furthermore, this method can be used as a cheaper alternative to conducting surveys for estimating Hofstede’s power distance indexes for different cultures.
Deza, Michel Marie
2016-01-01
This 4th edition of the leading reference volume on distance metrics is characterized by updated and rewritten sections on some items suggested by experts and readers, as well as a general streamlining of content and the addition of essential new topics. Though the structure remains unchanged, the new edition also explores recent advances in the use of distances and metrics for e.g. generalized distances, probability theory, graph theory, coding theory, data analysis. New topics in the purely mathematical sections include e.g. the Vitanyi multiset-metric, algebraic point-conic distance, triangular ratio metric, Rossi-Hamming metric, Taneja distance, spectral semimetric between graphs, channel metrization, and Maryland bridge distance. The multidisciplinary sections have also been supplemented with new topics, including: dynamic time warping distance, memory distance, allometry, atmospheric depth, elliptic orbit distance, VLBI distance measurements, the astronomical system of units, and walkability distance. Lea...
Wu, Lin; Wang, Yang; Pan, Shirui
2017-12-01
It is now well established that sparse representation models work effectively for many visual recognition tasks, and have pushed forward the success of dictionary learning therein. Recent studies of dictionary learning focus on learning discriminative atoms instead of purely reconstructive ones. However, the existence of intraclass diversities (i.e., data objects within the same category that exhibit large visual dissimilarities) and interclass similarities (i.e., data objects from distinct classes that share much visual similarity) makes it challenging to learn effective recognition models. A large number of labeled data objects is required to learn models that can effectively characterize these subtle differences. However, labeled data objects are often limited in availability, making it difficult to learn a monolithic dictionary that is discriminative enough. To address the above limitations, in this paper we propose a weakly-supervised dictionary learning method to automatically learn a discriminative dictionary by fully exploiting visual attribute correlations rather than label priors. In particular, the intrinsic attribute correlations are deployed as a critical cue to guide the process of object categorization, and then a set of subdictionaries is jointly learned with respect to each category. The resulting dictionary is highly discriminative and leads to intraclass-diversity-aware sparse representations. Extensive experiments on image classification and object recognition are conducted to show the effectiveness of our approach.
International Nuclear Information System (INIS)
Faerman, V A; Avramchuk, V S; Luneva, E E
2014-01-01
This paper presents an overview of methods for detecting a useful signal against a background of intense noise and for determining the frequency limits of the useful signal. The following features are considered: the peculiarities of using correlation analysis, the cross-amplitude spectrum, the coherence function, the cross-phase spectrum, and the time-frequency correlation function for determining frequency limits as well as for detecting leaks in pipelines. The possibility of using the time-frequency correlation function for solving the above issues is described. The time-frequency correlation function provides information about the correlation of the signals in each of the investigated frequency bands. Data about the location of peaks on the surface plot of a time-frequency correlation function allow an assumption to be made about the spectral composition of the useful signal and its frequency boundaries.
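A time-frequency correlation function of the kind described can be sketched by band-passing both signals and cross-correlating per band. This is an illustrative construction, not the authors' exact definition:

```python
import numpy as np

def time_frequency_correlation(x, y, fs, bands):
    """Cross-correlation of x and y evaluated separately in each frequency band
    (FFT-domain masking as a crude band-pass filter). Each row of the result is
    a lag profile for one band, normalized by the total signal energy so the
    surface shows where in frequency the correlated energy lives."""
    n = len(x)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    norm = np.linalg.norm(x) * np.linalg.norm(y)
    rows = []
    for lo, hi in bands:
        mask = (freqs >= lo) & (freqs < hi)
        xf = np.fft.irfft(np.fft.rfft(x) * mask, n)   # band-limited copy of x
        yf = np.fft.irfft(np.fft.rfft(y) * mask, n)   # band-limited copy of y
        rows.append(np.correlate(xf, yf, mode="full") / norm)
    return np.array(rows)
```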
Xu, Xiankun; Li, Peiwen
2017-11-01
Fixman's work in 1974 and the follow-up studies have developed a method that factorizes the inverse of the mass matrix into an arithmetic combination of three sparse matrices, one of which is positive definite and needs to be further factorized by the Cholesky decomposition or similar methods. When the molecule under study has a serial chain structure, this method achieves O(n) time complexity. However, for molecules with long branches, the Cholesky decomposition of the corresponding positive definite matrix introduces massive fill-in due to its nonzero structure. Although several methods can be used to reduce the fill-in, none of them, according to our tests, strictly guarantees zero fill-in for all molecules, and thus O(n) time complexity cannot be obtained with these traditional methods. In this paper we present a new method that guarantees no fill-in in the Cholesky decomposition, developed from the correlations between the mass matrix and the geometrical structure of molecules. As a result, inverting the mass matrix retains O(n) time complexity whether or not the molecular structure has long branches.
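The ordering effect behind the fill-in problem can be demonstrated on a small "hub" (arrowhead) matrix: eliminating the densely connected variable first fills the factor, while eliminating it last produces no fill-in. A hedged sketch of the general phenomenon, unrelated to the authors' mass-matrix construction:

```python
import numpy as np

def cholesky_fill_in(a, tol=1e-12):
    """Count entries of the Cholesky factor that are nonzero even though the
    corresponding entry of the lower triangle of `a` is zero (the fill-in)."""
    l = np.linalg.cholesky(a)
    original = np.abs(np.tril(a)) > tol
    factor = np.abs(l) > tol
    return int(np.sum(factor & ~original))

# Arrowhead matrix: one "hub" variable coupled to all others
n = 6
a = 4.0 * np.eye(n)
a[0, :] = 1.0
a[:, 0] = 1.0
a[0, 0] = float(n)          # diagonally dominant, hence positive definite

bad = cholesky_fill_in(a)   # hub eliminated first: the factor fills in
perm = np.r_[1:n, 0]        # reorder so the hub is eliminated last
good = cholesky_fill_in(a[np.ix_(perm, perm)])  # same system, no fill-in
```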
Gatti, M.; Vielzeuf, P.; Davis, C.; Cawthon, R.; Rau, M. M.; DeRose, J.; De Vicente, J.; Alarcon, A.; Rozo, E.; Gaztanaga, E.; Hoyle, B.; Miquel, R.; Bernstein, G. M.; Bonnett, C.; Carnero Rosell, A.; Castander, F. J.; Chang, C.; da Costa, L. N.; Gruen, D.; Gschwend, J.; Hartley, W. G.; Lin, H.; MacCrann, N.; Maia, M. A. G.; Ogando, R. L. C.; Roodman, A.; Sevilla-Noarbe, I.; Troxel, M. A.; Wechsler, R. H.; Asorey, J.; Davis, T. M.; Glazebrook, K.; Hinton, S. R.; Lewis, G.; Lidman, C.; Macaulay, E.; Möller, A.; O'Neill, C. R.; Sommer, N. E.; Uddin, S. A.; Yuan, F.; Zhang, B.; Abbott, T. M. C.; Allam, S.; Annis, J.; Bechtol, K.; Brooks, D.; Burke, D. L.; Carollo, D.; Carrasco Kind, M.; Carretero, J.; Cunha, C. E.; D'Andrea, C. B.; DePoy, D. L.; Desai, S.; Eifler, T. F.; Evrard, A. E.; Flaugher, B.; Fosalba, P.; Frieman, J.; García-Bellido, J.; Gerdes, D. W.; Goldstein, D. A.; Gruendl, R. A.; Gutierrez, G.; Honscheid, K.; Hoormann, J. K.; Jain, B.; James, D. J.; Jarvis, M.; Jeltema, T.; Johnson, M. W. G.; Johnson, M. D.; Krause, E.; Kuehn, K.; Kuhlmann, S.; Kuropatkin, N.; Li, T. S.; Lima, M.; Marshall, J. L.; Melchior, P.; Menanteau, F.; Nichol, R. C.; Nord, B.; Plazas, A. A.; Reil, K.; Rykoff, E. S.; Sako, M.; Sanchez, E.; Scarpine, V.; Schubnell, M.; Sheldon, E.; Smith, M.; Smith, R. C.; Soares-Santos, M.; Sobreira, F.; Suchyta, E.; Swanson, M. E. C.; Tarle, G.; Thomas, D.; Tucker, B. E.; Tucker, D. L.; Vikram, V.; Walker, A. R.; Weller, J.; Wester, W.; Wolf, R. C.
2018-06-01
We use numerical simulations to characterize the performance of a clustering-based method to calibrate photometric redshift biases. In particular, we cross-correlate the weak lensing source galaxies from the Dark Energy Survey Year 1 sample with redMaGiC galaxies (luminous red galaxies with secure photometric redshifts) to estimate the redshift distribution of the former sample. The recovered redshift distributions are used to calibrate the photometric redshift bias of standard photo-z methods applied to the same source galaxy sample. We apply the method to two photo-z codes run in our simulated data: Bayesian Photometric Redshift and Directional Neighbourhood Fitting. We characterize the systematic uncertainties of our calibration procedure, and find that these systematic uncertainties dominate our error budget. The dominant systematics are due to our assumption of unevolving bias and clustering across each redshift bin, and to differences between the shapes of the redshift distributions derived by clustering versus photo-zs. The systematic uncertainty in the mean redshift bias of the source galaxy sample is Δz ≲ 0.02, though the precise value depends on the redshift bin under consideration. We discuss possible ways to mitigate the impact of our dominant systematics in future analyses.
A Multi-Objective Partition Method for Marine Sensor Networks Based on Degree of Event Correlation
Directory of Open Access Journals (Sweden)
Dongmei Huang
2017-09-01
Existing marine sensor networks acquire data from sea areas that are geographically divided, and store the data independently in their affiliated sea area data centers. In the case of marine events across multiple sea areas, the current network structure needs to retrieve data from multiple data centers, and thus severely affects real-time decision making. In this study, in order to provide a fast data retrieval service for a marine sensor network, we use all the marine sensors as the vertices, establish the edge based on marine events, and abstract the marine sensor network as a graph. Then, we construct a multi-objective balanced partition method to partition the abstract graph into multiple regions and store them in the cloud computing platform. This method effectively increases the correlation of the sensors and decreases the retrieval cost. On this basis, an incremental optimization strategy is designed to dynamically optimize existing partitions when new sensors are added into the network. Experimental results show that the proposed method can achieve the optimal layout for distributed storage in the process of disaster data retrieval in the China Sea area, and effectively optimize the result of partitions when new buoys are deployed, which eventually will provide efficient data access service for marine events.
A Multi-Objective Partition Method for Marine Sensor Networks Based on Degree of Event Correlation.
Huang, Dongmei; Xu, Chenyixuan; Zhao, Danfeng; Song, Wei; He, Qi
2017-09-21
Existing marine sensor networks acquire data from sea areas that are geographically divided, and store the data independently in their affiliated sea area data centers. In the case of marine events across multiple sea areas, the current network structure needs to retrieve data from multiple data centers, and thus severely affects real-time decision making. In this study, in order to provide a fast data retrieval service for a marine sensor network, we use all the marine sensors as the vertices, establish the edge based on marine events, and abstract the marine sensor network as a graph. Then, we construct a multi-objective balanced partition method to partition the abstract graph into multiple regions and store them in the cloud computing platform. This method effectively increases the correlation of the sensors and decreases the retrieval cost. On this basis, an incremental optimization strategy is designed to dynamically optimize existing partitions when new sensors are added into the network. Experimental results show that the proposed method can achieve the optimal layout for distributed storage in the process of disaster data retrieval in the China Sea area, and effectively optimize the result of partitions when new buoys are deployed, which eventually will provide efficient data access service for marine events.
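A graph bipartition of the kind described (sensors as vertices, event correlations as edge weights) can be sketched with a simple spectral cut. The authors' multi-objective balanced method is more elaborate; this is only an illustration of the underlying idea:

```python
import numpy as np

def spectral_bipartition(w):
    """Two-way partition of a weighted graph using the sign of the Fiedler
    vector (eigenvector of the second-smallest Laplacian eigenvalue).
    w: symmetric nonnegative matrix of event-correlation weights."""
    laplacian = np.diag(w.sum(axis=1)) - w
    _, vecs = np.linalg.eigh(laplacian)   # eigenvalues in ascending order
    return vecs[:, 1] >= 0                # boolean region label per sensor

# Six sensors: two groups strongly event-correlated within, weakly across
w = np.zeros((6, 6))
w[:3, :3] = 1.0
w[3:, 3:] = 1.0
np.fill_diagonal(w, 0.0)
w[2, 3] = w[3, 2] = 0.1    # weak cross-region event link
labels = spectral_bipartition(w)
```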
Correlated Random Systems Five Different Methods : CIRM Jean-Morlet Chair
Kistler, Nicola
2015-01-01
This volume presents five different methods recently developed to tackle the large scale behavior of highly correlated random systems, such as spin glasses, random polymers, local times and loop soups and random matrices. These methods, presented in a series of lectures delivered within the Jean-Morlet initiative (Spring 2013), play a fundamental role in the current development of probability theory and statistical mechanics. The lectures were: Random Polymers by E. Bolthausen, Spontaneous Replica Symmetry Breaking and Interpolation Methods by F. Guerra, Derrida's Random Energy Models by N. Kistler, Isomorphism Theorems by J. Rosen and Spectral Properties of Wigner Matrices by B. Schlein. This book is the first in a co-edition between the Jean-Morlet Chair at CIRM and the Springer Lecture Notes in Mathematics which aims to collect together courses and lectures on cutting-edge subjects given during the term of the Jean-Morlet Chair, as well as new material produced in its wake. It is targeted at researchers, i...
Hankel Matrix Correlation Function-Based Subspace Identification Method for UAV Servo System
Directory of Open Access Journals (Sweden)
Minghong She
2018-01-01
For the closed-loop subspace model identification problem, we propose a zero-space projection method based on correlation-function estimation to fill the block Hankel matrix of the identification model, combining linear algebra with geometry. By applying the same projection to related data over a set of time offsets, together with LQ decomposition, the projection multiplication is carried out and a dynamics estimate of the unknown equipment system model is obtained. Consequently, we solve the problem of biased estimation that arises when an open-loop subspace identification algorithm is applied to closed-loop identification. A simulation example is given to show the effectiveness of the proposed approach. Finally, the practicality of the identification algorithm is verified by hardware tests of a UAV servo system in a real environment.
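Filling a block Hankel matrix with correlation-function estimates, as described above, can be sketched as follows. This is a scalar-signal illustration of the matrix construction only, not the paper's algorithm:

```python
import numpy as np

def correlation_hankel(y, u, n_rows, n_cols):
    """Hankel matrix whose (a, b) entry is the estimated cross-correlation
    r_yu(a + b) between output y and input u; such matrices are the starting
    point of correlation-function-based subspace identification."""
    n = len(y)
    def r(k):  # biased cross-correlation estimate at lag k
        return float(np.dot(y[k:], u[:n - k]) / n)
    return np.array([[r(a + b) for b in range(n_cols)] for a in range(n_rows)])

# Toy data: output of a small FIR system driven by white noise
rng = np.random.default_rng(0)
u = rng.normal(size=200)
y = np.convolve(u, [1.0, 0.5])[:200]
h = correlation_hankel(y, u, 3, 3)
```

The Hankel property (constant anti-diagonals) holds by construction, since each anti-diagonal shares the same lag a + b.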
Correlation of Geophysical and Geotechnical Methods for Sediment Mapping in Sungai Batu, Kedah
Zakaria, M. T.; Taib, A.; Saidin, M. M.; Saad, R.; Muztaza, N. M.; Masnan, S. S. K.
2018-04-01
Exploration geophysics is widely used to map the subsurface characteristics of a region, to understand the underlying rock structures and the spatial distribution of rock units. 2-D resistivity and seismic refraction surveys were conducted in the Sungai Batu locality with the objective of identifying and mapping the sediment deposits, in correlation with borehole records. The 2-D resistivity data were acquired using an ABEM SAS4000 system with a pole-dipole array and 2.5 m minimum electrode spacing, while for seismic refraction an ABEM MK8 seismograph was used to record the data, with a 5 kg sledgehammer as the seismic source and geophones at 5 m intervals. The 2-D resistivity inversion model shows resistivity values of 500 Ωm corresponding to the hard layer in this study area. The seismic results indicate velocity values of 3600 m/s, interpreted as the hard layer in this locality.
Zhao, Feng; Huang, Qingming; Wang, Hao; Gao, Wen
2010-12-01
Similarity measures based on correlation have been used extensively for matching tasks. However, traditional correlation-based image matching methods are sensitive to rotation and scale changes. This paper presents a fast correlation-based method for matching two images with large rotation and significant scale changes. Multiscale oriented corner correlation (MOCC) is used to evaluate the degree of similarity between the feature points. The method is rotation invariant and capable of matching image pairs with scale changes up to a factor of 7. Moreover, MOCC is much faster in comparison with the state-of-the-art matching methods. Experimental results on real images show the robustness and effectiveness of the proposed method.
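MOCC builds on correlation-based similarity between feature points; the plain normalized cross-correlation underlying such measures can be sketched as follows (MOCC itself adds multiscale and oriented-corner handling not shown here):

```python
import numpy as np

def ncc(patch_a, patch_b):
    """Normalized cross-correlation of two equally-sized patches: 1.0 for
    patches identical up to gain and offset, near 0 for unrelated content."""
    a = patch_a.astype(float).ravel()
    b = patch_b.astype(float).ravel()
    a -= a.mean()                      # remove offset
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(np.dot(a, b) / denom) if denom > 0 else 0.0
```

Because of the mean removal and norm division, the score is invariant to brightness and contrast changes, which is why correlation scores are a natural building block for illumination-robust matching.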
Directory of Open Access Journals (Sweden)
Wang Hao
2010-01-01
Similarity measures based on correlation have been used extensively for matching tasks. However, traditional correlation-based image matching methods are sensitive to rotation and scale changes. This paper presents a fast correlation-based method for matching two images with large rotation and significant scale changes. Multiscale oriented corner correlation (MOCC) is used to evaluate the degree of similarity between the feature points. The method is rotation invariant and capable of matching image pairs with scale changes up to a factor of 7. Moreover, MOCC is much faster in comparison with the state-of-the-art matching methods. Experimental results on real images show the robustness and effectiveness of the proposed method.
Ojukwu, Chidiebele Petronilla; Anyanwu, Godson Emeka; Anekwu, Emelie Morris; Chukwu, Sylvester Caesar; Fab-Agbo, Chukwubuikem
2017-10-01
Infant carrying is an integral part of the mothering occupation. Paucity of data exists on its correlates and associated musculoskeletal injuries. In this study, factors and musculoskeletal injuries associated with infant carrying were investigated in 227 nursing mothers, using a structured questionnaire. 77.1% utilised the back infant carrying method (ICM). Maternal comfort was the major factor influencing participants' (37.4%) choices of ICMs. Infant's age (p < .001) and means of transportation (p = .045) were significantly associated with ICMs. Low back pain (82.8%) and upper back pain (74.9%) were the most reported musculoskeletal discomforts associated with ICMs, especially among women who utilised the back ICM. The back ICM is predominantly used by nursing mothers. Impact statement Infant carrying has been associated with increased energy cost and biomechanical changes. Currently, there is a paucity of data on infant carrying-related musculoskeletal injuries. In this study, investigating factors and musculoskeletal injuries associated with infant carrying, the results showed that the back infant carrying method is predominantly used by nursing mothers. Age of the infant and mothers' means of transportation were determinant factors of infant carrying methods. Among the several reported infant carrying-related musculoskeletal disorders, low back and upper back pain were the most prevalent, especially among women who utilised the back infant carrying method. There is a need for women's health specialists to introduce appropriate ergonomic training and interventions on infant carrying tasks in order to improve maternal musculoskeletal health during the childbearing years and beyond. Further experimental studies on the effects of various infant carrying methods on the musculoskeletal system are recommended.
A simple method for identifying parameter correlations in partially observed linear dynamic models.
Li, Pu; Vu, Quoc Dong
2015-12-14
Parameter estimation represents one of the most significant challenges in systems biology. This is because biological models commonly contain a large number of parameters among which there may be functional interrelationships, thus leading to the problem of non-identifiability. Although identifiability analysis has been extensively studied by analytical as well as numerical approaches, systematic methods for remedying practically non-identifiable models have rarely been investigated. We propose a simple method for identifying pairwise correlations and higher-order interrelationships of parameters in partially observed linear dynamic models. This is done by deriving the output sensitivity matrix and analysing the linear dependencies of its columns. Consequently, analytical relations between the identifiability of the model parameters and the initial conditions as well as the input functions can be obtained. In the case of structural non-identifiability, identifiable combinations can be obtained by solving the resulting homogeneous linear equations. In the case of practical non-identifiability, experiment conditions (i.e. initial conditions and constant control signals) can be provided which are necessary for remedying the non-identifiability and unique parameter estimation. It is noted that the approach does not consider noisy data. In this way, the practical non-identifiability issue, which is common for linear biological models, can be remedied. Several linear compartment models including an insulin receptor dynamics model are taken to illustrate the application of the proposed approach. Both structural and practical identifiability of partially observed linear dynamic models can be clarified by the proposed method. The result of this method provides important information for experimental design to remedy the practical non-identifiability if applicable. The derivation of the method is straightforward and thus the algorithm can be easily implemented.
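The core step described above, detecting linear dependencies among the columns of the output sensitivity matrix, can be sketched numerically. This is a minimal illustration rather than the authors' algorithm: it uses an SVD-based rank test on a small hypothetical sensitivity matrix `S`, and the tolerance is an assumed value.

```python
import numpy as np

def correlated_parameter_groups(S, tol=1e-8):
    """Detect linear dependencies among the columns of an output
    sensitivity matrix S (rows: observation times, cols: parameters).
    Returns the numerical rank and the null-space directions, whose
    nonzero entries point to functionally related parameters."""
    U, sigma, Vt = np.linalg.svd(S, full_matrices=False)
    rank = int(np.sum(sigma > tol * sigma[0]))
    null_dirs = Vt[rank:]          # combinations with ~zero output effect
    return rank, null_dirs

# Toy example: column 2 = column 0 + column 1, so only two of the
# three parameters are identifiable from the outputs.
S = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [2.0, 1.0, 3.0],
              [1.0, 3.0, 4.0]])
rank, null_dirs = correlated_parameter_groups(S)
```

A nonzero null-space direction such as (1, 1, -1) here indicates that only the sum of the first two parameters minus the third affects the outputs, i.e. an identifiable combination.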
Acute and chronic alcohol use correlated with methods of suicide in a Swiss national sample.
Pfeifer, P; Bartsch, C; Hemmer, A; Reisch, T
2017-09-01
Chronic and acute alcohol use are highly associated risk factors for suicides worldwide. Therefore, we examined suicide cases with and without alcohol use disorder (AUD) using data from the SNSF project "Suicide in Switzerland: A detailed national survey". Our investigations focus on correlations between acute and chronic alcohol use with reference to suicide and potential interactions with the methods of suicide. We used data from the SNSF project in which all cases of registered completed suicide in Switzerland reported to any of the seven Swiss institutes of legal and forensic medicine between 2000 and 2010 were collected. We extracted cases that were tested for blood alcohol to use in our analysis. We compared clinical characteristics, blood alcohol concentrations, and methods of suicide in cases with and without AUD. Out of 6497 cases, 2946 subjects were tested for acute alcohol use and included in our analysis. Of the latter, 366 (12.4%) persons had a medical history of AUD. Subjects with AUD had significantly higher blood alcohol concentrations and were more often in medical treatment before suicide. Drug intoxication as a method of suicide was more frequent in cases with AUD compared to those without AUD (NAUD). Overall, we found a high incidence of acute alcohol use at the time of death in chronic alcohol misusers (AUD). The five methods of suicide most commonly used in Switzerland differed considerably between individuals with and without AUD. Blood alcohol concentrations varied across different methods of suicide independently from the medical history in both groups. Copyright © 2017 Elsevier B.V. All rights reserved.
Pore Network Modeling: Alternative Methods to Account for Trapping and Spatial Correlation
De La Garza Martinez, Pablo
2016-05-01
Pore network models have served as a predictive tool for soil and rock properties with a broad range of applications, particularly in oil recovery, geothermal energy from underground reservoirs, and pollutant transport in soils and aquifers [39]. They rely on the representation of the void space within porous materials as a network of interconnected pores with idealised geometries. Typically, a two-phase flow simulation of a drainage (or imbibition) process is employed, and by averaging the physical properties at the pore scale, macroscopic parameters such as capillary pressure and relative permeability can be estimated. One of the most demanding tasks in these models is to account for the possibility that fluids remain trapped inside the pore space. In this work I proposed a trapping rule which uses the information of neighboring pores instead of a search algorithm. This approximation reduces the simulation time significantly without compromising the accuracy of the results. Additionally, I included spatial correlation in generating the pore sizes using a matrix decomposition method. Results show higher relative permeabilities and smaller values of irreducible saturation, which emphasizes the effects of ignoring the intrinsic correlation seen in pore sizes from actual porous media. Finally, I implemented the algorithm of Raoof et al. (2010) [38] to generate the topology of a Fontainebleau sandstone by solving an optimization problem using the steepest descent algorithm with a stochastic approximation for the gradient. A drainage simulation is performed on this representative network and the relative permeability is compared with published results. The limitations of this algorithm are discussed and other methods are suggested to create a more faithful representation of the pore space.
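The "matrix decomposition method" for generating spatially correlated pore sizes can be illustrated with a Cholesky factorization of the correlation matrix, a common choice for this task; the exponential-style correlation values, Gaussian marginals, and pore-size scale below are assumptions for the sketch, not the thesis's actual settings.

```python
import numpy as np

def correlated_pore_sizes(C, mean, std, rng):
    """Draw one realization of spatially correlated pore sizes:
    factor the correlation matrix as C = L L^T and color an
    i.i.d. standard-normal vector with L."""
    L = np.linalg.cholesky(C)
    return mean + std * (L @ rng.standard_normal(C.shape[0]))

# Hypothetical 3-pore network with pairwise correlation 0.7
C = np.array([[1.0, 0.7, 0.7],
              [0.7, 1.0, 0.7],
              [0.7, 0.7, 1.0]])
rng = np.random.default_rng(42)
draws = np.array([correlated_pore_sizes(C, 50.0, 10.0, rng)
                  for _ in range(20000)])
empirical = np.corrcoef(draws, rowvar=False)   # should approach C
```

Sampling many realizations and checking the empirical correlation against the target matrix is a quick way to validate such a generator before plugging it into a network simulation.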
Pore Network Modeling: Alternative Methods to Account for Trapping and Spatial Correlation
De La Garza Martinez, Pablo
2016-01-01
Pore network models have served as a predictive tool for soil and rock properties with a broad range of applications, particularly in oil recovery, geothermal energy from underground reservoirs, and pollutant transport in soils and aquifers [39]. They rely on the representation of the void space within porous materials as a network of interconnected pores with idealised geometries. Typically, a two-phase flow simulation of a drainage (or imbibition) process is employed, and by averaging the physical properties at the pore scale, macroscopic parameters such as capillary pressure and relative permeability can be estimated. One of the most demanding tasks in these models is to account for the possibility that fluids remain trapped inside the pore space. In this work I proposed a trapping rule which uses the information of neighboring pores instead of a search algorithm. This approximation reduces the simulation time significantly without compromising the accuracy of the results. Additionally, I included spatial correlation in generating the pore sizes using a matrix decomposition method. Results show higher relative permeabilities and smaller values of irreducible saturation, which emphasizes the effects of ignoring the intrinsic correlation seen in pore sizes from actual porous media. Finally, I implemented the algorithm of Raoof et al. (2010) [38] to generate the topology of a Fontainebleau sandstone by solving an optimization problem using the steepest descent algorithm with a stochastic approximation for the gradient. A drainage simulation is performed on this representative network and the relative permeability is compared with published results. The limitations of this algorithm are discussed and other methods are suggested to create a more faithful representation of the pore space.
Distance collaborations with industry
Energy Technology Data Exchange (ETDEWEB)
Peskin, A.; Swyler, K.
1998-06-01
The college industry relationship has been identified as a key policy issue in Engineering Education. Collaborations between academic institutions and the industrial sector have a long history and a bright future. For Engineering and Engineering Technology programs in particular, industry has played a crucial role in many areas including advisement, financial support, and practical training of both faculty and students. Among the most important and intimate interactions are collaborative projects and formal cooperative education arrangements. Most recently, such collaborations have taken on a new dimension, as advances in technology have made possible meaningful technical collaboration at a distance. There are several obvious technology areas that have contributed significantly to this trend. Foremost is the ubiquitous presence of the Internet. Perhaps almost as important are advances in computer-based imaging. Because visual images offer a compelling user experience, they afford greater knowledge-transfer efficiency than other modes of delivery. Furthermore, the quality of the image appears to have a strongly correlated effect on insight. A good visualization facility offers both a means for communication and a shared information space for the subjects, which are among the essential features of both peer collaboration and distance learning.
Directory of Open Access Journals (Sweden)
Mehmet Arif Özyazıcı
2015-11-01
The aim of this study was to determine the plant nutrient content of agricultural land in the Central and Eastern Black Sea Region, to build a soil database of these variables, and to generate maps of their distribution using a geographical information system (GIS). In this research, a total of 3400 soil samples (0-20 cm depth) were taken at 2.5 x 2.5 km grid points representing agricultural soils. Total nitrogen and extractable calcium, magnesium, sodium, boron, iron, copper, zinc and manganese contents were analysed in the collected soil samples. The analysis results were classified and evaluated for deficiency, sufficiency or excess with respect to plant nutrients. A GIS soil database and maps of the current status of the study area were then created using the inverse distance weighted (IDW) interpolation method. According to the results, the arable soils of the Central and Eastern Black Sea Region were sufficient in total nitrogen and extractable iron, copper and manganese, while extractable calcium, magnesium and sodium were at good to moderate levels in 66.88%, 81.44% and 64.56% of the soil samples, respectively. In addition, insufficient boron and zinc concentrations were found in 34.35% and 51.36% of the soil samples, respectively.
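The inverse distance weighted (IDW) interpolation used to build such maps can be sketched compactly. The coordinates, values, and power parameter below are illustrative, not the study's data.

```python
import numpy as np

def idw(points, values, query, power=2, eps=1e-12):
    """Inverse distance weighted interpolation at one query location.
    points: (n, 2) sample coordinates; values: (n,) measurements."""
    d = np.linalg.norm(points - query, axis=1)
    if np.any(d < eps):                    # query coincides with a sample
        return float(values[np.argmin(d)])
    w = 1.0 / d**power                     # closer samples weigh more
    return float(np.sum(w * values) / np.sum(w))

# Four hypothetical grid samples; the center is equidistant from all,
# so IDW reduces to the plain mean there.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
vals = np.array([10.0, 20.0, 30.0, 40.0])
center = idw(pts, vals, np.array([0.5, 0.5]))   # -> 25.0
```

The `power` parameter controls how quickly a sample's influence decays with distance; mapping packages typically default to 2, as assumed here.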
Equivalence of massive propagator distance and mathematical distance on graphs
International Nuclear Information System (INIS)
Filk, T.
1992-01-01
It is shown in this paper that the assignment of distance according to the massive propagator method and according to the mathematical definition (length of minimal path) on arbitrary graphs with a bound on the degree leads to equivalent large scale properties of the graph. Especially, the internal scaling dimension is the same for both definitions. This result holds for any fixed, non-vanishing mass, so that a really inequivalent definition of distance requires the limit m → 0.
Oliveira, S.C.; Slot, D.E.; Celeste, R.K.; Abegg, C.; Keijser, B.J.F.; Weijden, F.A. van der
2015-01-01
Aim: To evaluate the correlation between bleeding on marginal probing (BOMP) and bleeding on pocket probing (BOPP), and the correlation of both bleeding indices with plaque. Materials and Methods: This cross-sectional study screened 336 participants, from which 268 were eligible for examination and
Zhao, Yu Xi; Xie, Ping; Sang, Yan Fang; Wu, Zi Yi
2018-04-01
Hydrological process evaluation is temporally dependent. Hydrological time series that include dependence components do not meet the data-consistency assumption of hydrological computation, and both factors cause great difficulty for water research. Given the existence of hydrological dependence variability, we proposed a correlation-coefficient-based method for significance evaluation of hydrological dependence based on an auto-regression model. By calculating the correlation coefficient between the original series and its dependence component and selecting reasonable thresholds of the correlation coefficient, this method divides the significance degree of dependence into no variability, weak variability, mid variability, strong variability, and drastic variability. By deducing the relationship between the correlation coefficient and the auto-correlation coefficients of the series, we found that the correlation coefficient is mainly determined by the magnitude of the auto-correlation coefficients from order 1 to order p, which clarifies the theoretical basis of the method. With first-order and second-order auto-regression models as examples, the reasonability of the deduced formula was verified through Monte Carlo experiments classifying the relationship between the correlation coefficient and the auto-correlation coefficient. The method was used to analyze three observed hydrological time series. The results indicate the coexistence of stochastic and dependence characteristics in hydrological processes.
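The grading idea can be sketched for the simplest AR(1) case. This is a hedged illustration, not the paper's procedure: the thresholds are invented for the example, and for an AR(1) fit the correlation between the series and its dependence component reduces to the magnitude of the lag-1 autocorrelation, which mirrors the relationship the abstract describes.

```python
import numpy as np

def dependence_grade(x, thresholds=(0.2, 0.4, 0.6, 0.8)):
    """Grade serial dependence by the correlation between a series and
    its AR(1)-fitted dependence component. Thresholds are illustrative."""
    x = np.asarray(x, dtype=float)
    r1 = np.corrcoef(x[:-1], x[1:])[0, 1]        # lag-1 autocorrelation
    dep = r1 * (x[:-1] - x.mean())               # fitted dependence component
    rho = abs(np.corrcoef(dep, x[1:])[0, 1])     # equals |r1| for AR(1)
    grades = ["none", "weak", "mid", "strong", "drastic"]
    return grades[int(np.searchsorted(thresholds, rho))]

# Synthetic check: a strongly autocorrelated AR(1) series vs. white noise
rng = np.random.default_rng(7)
noise = rng.standard_normal(5000)
ar = np.empty(5000)
ar[0] = noise[0]
for t in range(1, 5000):
    ar[t] = 0.9 * ar[t - 1] + noise[t]
strong_grade = dependence_grade(ar)                        # "drastic"
weak_grade = dependence_grade(rng.standard_normal(5000))   # "none"
```

Higher-order AR(p) fits would require regressing on p lags, but the grading step against correlation thresholds stays the same.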
Language distance and tree reconstruction
International Nuclear Information System (INIS)
Petroni, Filippo; Serva, Maurizio
2008-01-01
Languages evolve over time according to a process in which reproduction, mutation and extinction are all possible. This is very similar to haploid evolution for asexual organisms and for the mitochondrial DNA of complex ones. Exploiting this similarity, it is possible, in principle, to verify hypotheses concerning the relationship among languages and to reconstruct their family tree. The key point is the definition of the distances among pairs of languages in analogy with the genetic distances among pairs of organisms. Distances can be evaluated by comparing grammar and/or vocabulary, but while it is difficult, if not impossible, to quantify grammar distance, it is possible to measure a distance from vocabulary differences. The method used by glottochronology computes distances from the percentage of shared 'cognates', which are words with a common historical origin. The weak point of this method is that subjective judgment plays a significant role. Here we define the distance of two languages by considering a renormalized edit distance among words with the same meaning and averaging over the two hundred words contained in a Swadesh list. In our approach the vocabulary of a language is the analogue of DNA for organisms. The advantage is that we avoid subjectivity and, furthermore, reproducibility of results is guaranteed. We apply our method to the Indo-European and the Austronesian groups, considering, in both cases, fifty different languages. The two trees obtained are, in many respects, similar to those found by glottochronologists, with some important differences as regards the positions of a few languages. In order to support these different results we separately analyze the structure of the distances of these languages with respect to all the others
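The renormalized edit distance averaged over a Swadesh list can be sketched directly. The three-word "lists" below are a toy stand-in for the two-hundred-item lists the paper uses, and the normalization by the longer word is one common choice.

```python
def levenshtein(a, b):
    """Classic edit distance via dynamic programming (two rows)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def language_distance(list_a, list_b):
    """Average normalized edit distance over aligned word lists
    (e.g. Swadesh lists); dividing by the longer word keeps each
    term in [0, 1], so the average is also in [0, 1]."""
    assert len(list_a) == len(list_b)
    return sum(levenshtein(a, b) / max(len(a), len(b))
               for a, b in zip(list_a, list_b)) / len(list_a)

# Toy three-word "Swadesh lists" for two hypothetical languages
d = language_distance(["madre", "notte", "tre"], ["mother", "night", "three"])
```

Identical vocabularies give distance 0 and completely disjoint words approach 1, which is what makes the measure usable as a pairwise input to tree reconstruction.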
Language distance and tree reconstruction
Petroni, Filippo; Serva, Maurizio
2008-08-01
Languages evolve over time according to a process in which reproduction, mutation and extinction are all possible. This is very similar to haploid evolution for asexual organisms and for the mitochondrial DNA of complex ones. Exploiting this similarity, it is possible, in principle, to verify hypotheses concerning the relationship among languages and to reconstruct their family tree. The key point is the definition of the distances among pairs of languages in analogy with the genetic distances among pairs of organisms. Distances can be evaluated by comparing grammar and/or vocabulary, but while it is difficult, if not impossible, to quantify grammar distance, it is possible to measure a distance from vocabulary differences. The method used by glottochronology computes distances from the percentage of shared 'cognates', which are words with a common historical origin. The weak point of this method is that subjective judgment plays a significant role. Here we define the distance of two languages by considering a renormalized edit distance among words with the same meaning and averaging over the two hundred words contained in a Swadesh list. In our approach the vocabulary of a language is the analogue of DNA for organisms. The advantage is that we avoid subjectivity and, furthermore, reproducibility of results is guaranteed. We apply our method to the Indo-European and the Austronesian groups, considering, in both cases, fifty different languages. The two trees obtained are, in many respects, similar to those found by glottochronologists, with some important differences as regards the positions of a few languages. In order to support these different results we separately analyze the structure of the distances of these languages with respect to all the others.
Training for Distance Teaching through Distance Learning.
Cadorath, Jill; Harris, Simon; Encinas, Fatima
2002-01-01
Describes a mixed-mode bachelor degree course in English language teaching at the Universidad Autonoma de Puebla (Mexico) that was designed to help practicing teachers write appropriate distance education materials by giving them the experience of being distance students. Includes a course outline and results of a course evaluation. (Author/LRW)
International Nuclear Information System (INIS)
Pevsner, A.; Davis, B.; Joshi, S.; Hertanto, A.; Mechalakos, J.; Yorke, E.; Rosenzweig, K.; Nehmeh, S.; Erdi, Y.E.; Humm, J.L.; Larson, S.; Ling, C.C.; Mageras, G.S.
2006-01-01
We have evaluated an automated registration procedure for predicting tumor and lung deformation based on CT images of the thorax obtained at different respiration phases. The method uses a viscous fluid model of tissue deformation to map voxels from one CT dataset to another. To validate the deformable matching algorithm we used a respiration-correlated CT protocol to acquire images at different phases of the respiratory cycle for six patients with non-small cell lung carcinoma. The position and shape of the deformable gross tumor volumes (GTV) at the end-inhale (EI) phase predicted by the algorithm was compared to those drawn by four observers. To minimize interobserver differences, all observers used the contours drawn by a single observer at end-exhale (EE) phase as a guideline to outline GTV contours at EI. The differences between model-predicted and observer-drawn GTV surfaces at EI, as well as differences between structures delineated by observers at EI (interobserver variations) were evaluated using a contour comparison algorithm written for this purpose, which determined the distance between the two surfaces along different directions. The mean and 90% confidence interval for model-predicted versus observer-drawn GTV surface differences over all patients and all directions were 2.6 and 5.1 mm, respectively, whereas the mean and 90% confidence interval for interobserver differences were 2.1 and 3.7 mm. We have also evaluated the algorithm's ability to predict normal tissue deformations by examining the three-dimensional (3-D) vector displacement of 41 landmarks placed by each observer at bronchial and vascular branch points in the lung between the EE and EI image sets (mean and 90% confidence interval displacements of 11.7 and 25.1 mm, respectively). The mean and 90% confidence interval discrepancy between model-predicted and observer-determined landmark displacements over all patients were 2.9 and 7.3 mm, whereas interobserver discrepancies were 2.8 and 6
Tubman, Norm; Whaley, Birgitta
The development of exponential scaling methods has seen great progress in tackling larger systems than previously thought possible. One such technique, full configuration interaction quantum Monte Carlo, allows exact diagonalization through stochastic sampling of determinants. The method derives its utility from the information in the matrix elements of the Hamiltonian, together with a stochastic projected wave function, which are used to explore the important parts of Hilbert space. However, a stochastic representation of the wave function is not required to search Hilbert space efficiently, and new deterministic approaches have recently been shown to efficiently find the important parts of determinant space. We shall discuss the technique of Adaptive Sampling Configuration Interaction (ASCI) and the related heat-bath Configuration Interaction approach for ground state and excited state simulations. We will present several applications for strongly correlated Hamiltonians. This work was supported through the Scientific Discovery through Advanced Computing (SciDAC) program funded by the U.S. Department of Energy, Office of Science, Advanced Scientific Computing Research and Basic Energy Sciences.
Directory of Open Access Journals (Sweden)
Faming Zhang
2016-11-01
The prediction of travel times is challenging because of the sparseness of real-time traffic data and the intrinsic uncertainty of travel on congested urban road networks. We propose a new gradient-boosted regression tree method to accurately predict travel times. This model accounts for spatiotemporal correlations extracted from historical and real-time traffic data for adjacent and target links. The method delivers high prediction accuracy by combining many simple regression trees that are individually weak predictors, each tree correcting the residual error of the ensemble built so far. Our spatiotemporal gradient-boosted regression tree model was verified in experiments. The training data were obtained from big data reflecting historic traffic conditions collected by probe vehicles in Wuhan from January to May 2014. Real-time data were extracted from 11 weeks of GPS records collected in Wuhan from 5 May 2014 to 20 July 2014. Based on these data, we predicted link travel time for the period from 21 July 2014 to 25 July 2014. Experiments showed that our proposed spatiotemporal gradient-boosted regression tree model obtained better results than gradient boosting, random forest, or autoregressive integrated moving average approaches. Furthermore, these results indicate the advantages of our model for urban link travel time prediction.
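The boosting mechanism itself, where each weak tree fits the current residuals and predictions accumulate with shrinkage, can be sketched with depth-1 trees (stumps) on one feature. This is a pedagogical miniature under assumed data, not the paper's spatiotemporal model.

```python
import numpy as np

def fit_stump(x, r):
    """Best single-split regression stump on 1-D feature x for target r."""
    best = (np.inf, 0.0, 0.0, 0.0)
    for t in np.unique(x)[:-1]:
        left, right = r[x <= t], r[x > t]
        sse = ((left - left.mean())**2).sum() + ((right - right.mean())**2).sum()
        if sse < best[0]:
            best = (sse, t, left.mean(), right.mean())
    return best[1:]                       # (threshold, left value, right value)

def gbrt_fit(x, y, n_trees=60, lr=0.1):
    """Squared-loss gradient boosting: each stump fits the current
    residuals; predictions accumulate with shrinkage lr."""
    pred = np.full(len(y), y.mean())
    stumps = []
    for _ in range(n_trees):
        t, lv, rv = fit_stump(x, y - pred)
        pred = pred + lr * np.where(x <= t, lv, rv)
        stumps.append((t, lv, rv))
    return y.mean(), stumps, lr

def gbrt_predict(model, x):
    base, stumps, lr = model
    pred = np.full(len(x), base)
    for t, lv, rv in stumps:
        pred = pred + lr * np.where(x <= t, lv, rv)
    return pred

# Toy "travel time" that jumps at a hypothetical congestion threshold
x = np.linspace(0.0, 1.0, 101)
y = np.where(x > 0.5, 12.0, 4.0)          # minutes, invented for the sketch
model = gbrt_fit(x, y)
fit_err = np.max(np.abs(gbrt_predict(model, x) - y))
```

Because each stump removes a fixed fraction (the learning rate) of the remaining residual, the training error shrinks geometrically on this separable toy problem.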
The generalized correlation method for estimation of time delay in power plants
International Nuclear Information System (INIS)
Kostic, Lj.
1981-01-01
A generalized correlation estimator is developed for determining the time delay between signals received at two spatially separated sensors in the presence of uncorrelated noise in a power plant. The estimator can be realized as a pair of receiver prefilters followed by a cross-correlator. The time argument at which the correlator achieves its maximum is the delay estimate. (author)
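The final correlator stage can be sketched as follows. This is the plain (unfiltered) cross-correlator only; the generalized method adds receiver prefilters before this step, and the signals below are synthetic.

```python
import numpy as np

def delay_estimate(x, y, fs=1.0):
    """Estimate the delay of y relative to x from the peak of their
    cross-correlation, in units of 1/fs."""
    n = len(x)
    corr = np.correlate(y - y.mean(), x - x.mean(), mode="full")
    lag = int(np.argmax(corr)) - (n - 1)     # "full" lags span -(n-1)..n-1
    return lag / fs

# Two "sensors" seeing the same signal, the second delayed by 7 samples
# and corrupted by uncorrelated noise
rng = np.random.default_rng(0)
x = rng.standard_normal(500)
y = np.roll(x, 7) + 0.1 * rng.standard_normal(500)
tau = delay_estimate(x, y)                   # ≈ 7 samples
```

With prefiltering (e.g. whitening, as in generalized cross-correlation variants), the correlation peak sharpens and the estimate becomes more robust to colored noise.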
Method for correlating weathering data on adsorbents used for the removal of CH3I
International Nuclear Information System (INIS)
Parish, H.C.; Muhlenhaupt, R.C.
1976-01-01
Traditionally, weathering data have been expressed in terms of removal efficiency as a function of time for a select number of bed depths. Variables include: (1) variations in the efficiency of the adsorbent when new; (2) guard bed design (bed depth, replacement schedule); (3) air quality; and (4) air velocity. This paper discusses the development of a single parameter, namely, the Effective Weathering Rate (EWR), its usefulness in correlating weathering data, and subsequent utilization during design. The effectiveness of the model was checked by analyzing several sets of data. Values of EWR were determined from the limited available experimental data for air which could be described only qualitatively as ranging from relatively clean to relatively dirty air from a heavily industrialized area. As quantitative data become available, it is expected that the EWR can be correlated with the character and concentration of air contaminants such as hydrocarbons, SO₂, NOₓ and O₃. The EWR, determined from experimental data, can be used to predict adsorber efficiencies for conditions corresponding to a specific application. Some of the changes in conditions which can be treated by this method include: (1) bed depth of adsorbent, (2) addition of guard bed or depth of guard bed, (3) effectiveness of the carbon (new), (4) operating time, and (5) air velocity. Some parametric studies were performed analytically to demonstrate some applications of the weathering model. These included calculating the adsorber efficiency as a function of time and bed depth, for carbons having various efficiencies when new, and for guard beds of variable bed depth and having different replacement schedules for the guard bed material.
Zhang, Hongqin; Tian, Xiangjun
2018-04-01
Ensemble-based data assimilation methods often use the so-called localization scheme to improve the representation of the ensemble background error covariance (Be). Extensive research has been undertaken to reduce the computational cost of these methods by using the localized ensemble samples to localize Be by means of a direct decomposition of the local correlation matrix C. However, the computational costs of the direct decomposition of the local correlation matrix C are still extremely high due to its high dimension. In this paper, we propose an efficient local correlation matrix decomposition approach based on the concept of alternating directions. This approach is intended to avoid direct decomposition of the correlation matrix. Instead, we first decompose the correlation matrix into 1-D correlation matrices in the three coordinate directions, then construct their empirical orthogonal function decomposition at low resolution. This procedure is followed by the 1-D spline interpolation process to transform the above decompositions to the high-resolution grid. Finally, an efficient correlation matrix decomposition is achieved by computing the very similar Kronecker product. We conducted a series of comparison experiments to illustrate the validity and accuracy of the proposed local correlation matrix decomposition approach. The effectiveness of the proposed correlation matrix decomposition approach and its efficient localization implementation of the nonlinear least-squares four-dimensional variational assimilation are further demonstrated by several groups of numerical experiments based on the Advanced Research Weather Research and Forecasting model.
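The payoff of the alternating-directions idea is that one never decomposes the full correlation matrix. The sketch below assumes (as an illustration, with an invented exponential correlation model) that the 3-D correlation is exactly separable into per-direction factors, so the full matrix is their Kronecker product and its spectrum follows from eigendecompositions of the small 1-D factors alone.

```python
import numpy as np

def corr1d(n, L):
    """Exponential 1-D correlation matrix with length scale L."""
    idx = np.arange(n)
    return np.exp(-np.abs(idx[:, None] - idx[None, :]) / L)

# Small per-direction factors (grid 4 x 5 x 6, length scales assumed)
Cx, Cy, Cz = corr1d(4, 2.0), corr1d(5, 3.0), corr1d(6, 1.5)

# Full matrix formed here only to check the claim; in practice it is
# never built, since eig(A ⊗ B) = eig(A) ⊗ eig(B).
C = np.kron(Cz, np.kron(Cy, Cx))             # 120 x 120

ex = np.linalg.eigvalsh(Cx)
ey = np.linalg.eigvalsh(Cy)
ez = np.linalg.eigvalsh(Cz)
full_spectrum = np.sort(np.kron(ez, np.kron(ey, ex)))
direct_spectrum = np.sort(np.linalg.eigvalsh(C))
```

For a realistic 3-D grid the savings are dramatic: three decompositions of order nx, ny, nz replace one of order nx·ny·nz. The paper's additional EOF truncation and spline interpolation steps refine this further for non-separable correlations.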
Cao, Guangxi; Han, Yan; Chen, Yuemeng; Yang, Chunxia
2014-05-01
Based on the daily price data of the Shanghai and London gold spot markets, we applied detrended cross-correlation analysis (DCCA) and detrended moving average cross-correlation analysis (DMCA) methods to quantify the power-law cross-correlation between domestic and international gold markets. Results show that the cross-correlations between the Chinese domestic and international gold spot markets are multifractal. Furthermore, forward DMCA and backward DMCA seem to outperform DCCA and centered DMCA for short-range gold series, which confirms the comparison results for short-range artificial data in L. Y. He and S. P. Chen [Physica A 390 (2011) 3806-3814]. Finally, we analyzed the local multifractal characteristics of the cross-correlation between the Chinese domestic and international gold markets. We show that the multifractal characteristics of the cross-correlation between the Chinese domestic and international gold markets are time-varying and were strengthened by the financial crisis of 2007-2008.
Deza, Michel Marie
2014-01-01
This updated and revised third edition of the leading reference volume on distance metrics includes new items from very active research areas in the use of distances and metrics such as geometry, graph theory, probability theory and analysis. Among the new topics included are, for example, polyhedral metric space, nearness matrix problems, distances between belief assignments, distance-related animal settings, diamond-cutting distances, natural units of length, Heidegger’s de-severance distance, and brain distances. The publication of this volume coincides with intensifying research efforts into metric spaces and especially distance design for applications. Accurate metrics have become a crucial goal in computational biology, image analysis, speech recognition and information retrieval. Leaving aside the practical questions that arise during the selection of a ‘good’ distance function, this work focuses on providing the research community with an invaluable comprehensive listing of the main available di...
International Nuclear Information System (INIS)
Bordbar, Mohammad Hadi; Hyppänen, Timo
2015-01-01
A set of correlations for the direct exchange area (DEA) between zones is presented. The correlations are simpler and much faster than the classical method used for DEA calculations in the zone method. Additionally, a single form of correlation supports both singular and non-singular DEA calculations, so no extra effort is needed for non-singular cases. Using the new correlations, the correlation-based zone method (CBZM) is introduced and validated by several benchmarks; the CBZM results were in excellent agreement with the benchmark solutions. As an application case, the CBZM was used to analyze gray and non-gray radiative heat transfer in a large back-pass channel of a CFB boiler for air-fired and oxygen-fired combustion scenarios. The effect of the spectral radiative behavior of the combustion gases on the predicted radiative heat fluxes on the walls is addressed, and the effect of the combustion scenario on the operation of the unit is also discussed. - Highlights: • Efficient correlations for DEA calculation are presented. • The gray and non-gray correlation-based zone method is introduced. • The model is validated against several 3D benchmarks. • The effect of non-gray radiation in a large-scale back-pass channel is addressed. • The effect of combustion scenario on radiation in the back-pass channel is reported.
Wang, Yikai; Kang, Jian; Kemmer, Phebe B; Guo, Ying
2016-01-01
Network-oriented analysis of fMRI data has become an important tool for understanding brain organization and brain networks. Among the range of network modeling methods, partial correlation has shown great promise in accurately detecting true brain network connections. However, the application of partial correlation to brain connectivity, especially in large-scale brain networks, has so far been limited by the technical challenges of its estimation. In this paper, we propose an efficient and reliable statistical method for estimating partial correlation in large-scale brain network modeling. Our method derives partial correlation from the precision matrix estimated via the Constrained L1-minimization Approach (CLIME), a recently developed statistical method that is more efficient and performs better than existing methods. To help select an appropriate tuning parameter for sparsity control in the network estimation, we propose a new Dens-based selection method that provides a more informative and flexible tool, allowing users to select the tuning parameter based on the desired sparsity level. Another appealing feature of the Dens-based method is that it is much faster than existing methods, an important advantage in neuroimaging applications. Simulation studies show that the Dens-based method performs comparably to or better than existing methods in network estimation. We applied the proposed partial correlation method to investigate resting-state functional connectivity using rs-fMRI data from the Philadelphia Neurodevelopmental Cohort (PNC) study. Our results show that partial correlation analysis removed a considerable number of between-module marginal connections identified by full correlation analysis, suggesting that these connections were likely caused by global effects or common connections to other nodes. Based on partial correlation, we find that the most significant
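The step from an estimated precision (inverse covariance) matrix to partial correlations is the standard identity r_ij = -θ_ij / sqrt(θ_ii·θ_jj). The sketch below illustrates only that identity; it inverts a sample covariance in place of a CLIME estimate, and all data and variable names are hypothetical.

```python
import numpy as np

def partial_correlation_from_precision(theta):
    """Convert a precision matrix into a partial correlation matrix
    via r_ij = -theta_ij / sqrt(theta_ii * theta_jj)."""
    d = np.sqrt(np.diag(theta))
    pcorr = -theta / np.outer(d, d)
    np.fill_diagonal(pcorr, 1.0)  # convention: unit diagonal
    return pcorr

# Toy data: node 1 depends directly on node 0; nodes 2 and 3 are independent.
rng = np.random.default_rng(0)
x = rng.standard_normal((500, 4))
x[:, 1] += 0.8 * x[:, 0]
theta = np.linalg.inv(np.cov(x, rowvar=False))  # stand-in for a CLIME estimate
pcorr = partial_correlation_from_precision(theta)
```

With a sparse CLIME-type estimate of theta, the same mapping yields the sparse partial correlation network; only the precision-matrix estimator changes.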
Directory of Open Access Journals (Sweden)
Silvia Regina Cavani Jorge Santos
2015-06-01
A bioanalytical method was developed and applied to quantify free imipenem concentrations for pharmacokinetics and PK/PD correlation studies of the dose adjustments required to maintain antimicrobial effectiveness in pediatric burn patients. A reverse-phase Supelcosil LC18 column (250 x 4.6 mm, 5 µm), a binary mobile phase consisting of 0.01 M phosphate buffer (pH 7.0) and acetonitrile (99:1, v/v), and a flow rate of 0.8 mL/min were used. The method showed good absolute recovery (above 90%), good linearity (0.25-100.0 µg/mL, r2 = 0.999), good sensitivity (LLOQ: 0.25 µg/mL; LLOD: 0.12 µg/mL) and acceptable stability. Inter/intraday precision values were 7.3/5.9%, and mean accuracy was 92.9%. The method was applied to quantify free drug concentrations in children with burns. Six pediatric burn patients (median: 7.0 years old, 27.5 kg, normal renal function, 33% total burn surface area) were prospectively investigated; inhalation injuries were present in 4/6 (67%) of the patients. Plasma monitoring and PK assessments were performed using serial blood sample collection for each set, totaling 10 sets. The PK/PD target (40% T>MIC) was attained for each minimum inhibitory concentration (MIC: 0.5, 1.0, 2.0, 4.0 mg/L) in more than 80% of the sets investigated, and in 100% after dose adjustment. In conclusion, purification of plasma samples using an ultrafiltration technique followed by quantification of imipenem by the LC method is quite simple and useful, and requires only small blood sample volumes: a small amount of plasma (0.25 mL) is needed to guarantee drug effectiveness in pediatric burn patients. There is also a low risk of neurotoxicity, which is important because pharmacokinetics are unpredictable in these critical patients with severe hospital infection. Finally, the PK/PD target was attained for imipenem in the control of sepsis in pediatric patients with burns.
International Nuclear Information System (INIS)
Zhang Songbai; Wu Jun; Zhu Jianyu; Tian Dongfeng; Xie Dong
2011-01-01
Active time-correlation coincidence measurement of neutrons is an effective verification means for authenticating uranium metal. A collimated 252Cf neutron source was used to investigate the mass and enrichment of uranium metal through neutron transport simulations for different enrichments and masses, from which time-correlation coincidence counts were obtained. By analyzing the characteristics of the time-correlation coincidence counts, monotonic relationships were found between the FWTH of the time-correlation coincidence and the multiplication factor, between the total coincidence counts within the FWTH and the mass of 235U multiplied by the multiplication factor, and between the neutron source penetration ratio and the mass of the uranium metal. A methodology to authenticate the mass and enrichment of uranium metal was thus established using time-correlation coincidence with active neutron interrogation. (authors)
Pan, Bing; Wang, Bo
2017-10-01
Digital volume correlation (DVC) is a powerful technique for quantifying interior deformation within solid opaque materials and biological tissues. In the last two decades, great efforts have been made to improve the accuracy and efficiency of DVC algorithms. However, there is still a lack of a flexible, robust and accurate version that can run efficiently on personal computers with limited RAM. This paper proposes an advanced DVC method that realizes accurate full-field internal deformation measurement for high-resolution volume images with up to billions of voxels. Specifically, a novel layer-wise reliability-guided displacement tracking strategy combined with dynamic data management is presented to guide the DVC computation from slice to slice. The displacements at specified calculation points in each layer are computed using the advanced 3D inverse-compositional Gauss-Newton algorithm, with a complete initial guess of the deformation vector accurately predicted from previously computed calculation points. Since only a limited number of slices of interest in the reference and deformed volume images, rather than the whole volumes, are required in memory, the DVC calculation can be efficiently implemented on personal computers. The flexibility, accuracy and efficiency of the presented DVC approach are demonstrated by analyzing computer-simulated and experimentally obtained high-resolution volume images.
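The matching step that such correlation-based tracking builds on can be illustrated in one dimension: a subset around a calculation point is compared against shifted candidates in the deformed signal using zero-normalized cross-correlation (ZNCC). This is a hypothetical integer-shift search only, not the paper's method, which operates on 3D subvolumes and refines to subvoxel accuracy with the inverse-compositional Gauss-Newton algorithm.

```python
import numpy as np

def zncc(a, b):
    """Zero-normalized cross-correlation of two equal-length signals."""
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def integer_displacement(ref, deformed, center, half, search):
    """Exhaustively find the integer shift that best matches the subset
    of `ref` around `center` inside `deformed`."""
    subset = ref[center - half:center + half + 1]
    best_shift, best_score = 0, -np.inf
    for s in range(-search, search + 1):
        cand = deformed[center + s - half:center + s + half + 1]
        score = zncc(subset, cand)
        if score > best_score:
            best_shift, best_score = s, score
    return best_shift

# Synthetic 1D "volume": a rigid shift of 3 samples.
rng = np.random.default_rng(1)
ref = rng.standard_normal(200)
deformed = np.roll(ref, 3)
shift = integer_displacement(ref, deformed, center=100, half=10, search=8)
```

In a reliability-guided scheme, points with the highest ZNCC scores are processed first and seed the initial guesses of their neighbors.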
Cabibi, Daniela; Calvaruso, Vincenza; Giuffrida, Letizia; Ingrao, Sabrina; Balsamo, Laura; Giannone, Antonino Giulio; Petta, Salvatore; Di Marco, Vito
2015-03-06
To compare Masson's trichrome (MT), Sirius red (SR) and orcein staining in acute hepatitis (AH) and to correlate them with transient elastography (TE), a noninvasive method of assessing hepatic fibrosis, we evaluated liver stiffness by TE in a cohort of 34 consecutive patients and assessed MT-, SR- and orcein-stained biopsies using the METAVIR scoring system and digital image analysis (DIA). MT and SR both indicated severe fibrosis (stage III-IV, DIA = 12.7%), whereas orcein indicated absent or mild fibrosis (stage 0-II, DIA = 4.4%). TE showed elevated stiffness values (>12.5 kPa), in keeping with the SR/MT but not the orcein results. Even though true elastic fibrosis is typically absent or mild in AH, TE shows elevated stiffness values in keeping with the SR/MT evaluations; if not properly evaluated in the clinical context, these results would lead to an overestimation of fibrosis. Orcein is the only staining able to demonstrate the absence of true elastic fibrosis, a typical feature of AH. This is the first study comparing different staining procedures performed on AH biopsies by DIA versus TE. © 2015 S. Karger AG, Basel.
Investigation of Rock Failure Pattern in Creep by Digital Speckle Correlation Method
Directory of Open Access Journals (Sweden)
Yunliang Tan
2013-01-01
In order to study the mechanical behavior of rock from creep deformation to failure, uniaxial compression tests and tests of pushing a steel plate anchored in rock were performed on an RLJW-2000 servo test machine synchronized with the Digital Speckle Correlation Method (DSCM). The investigations showed that for a uniaxial compressive specimen, when the load reached 0.5σc, displacement clusters formed in an orderly manner, preceding the macro creep strain that occurred in a slight jump mode when the load reached 0.7σc. When the load level reached 0.8σc, the displacement clusters gathered into a narrow band, after which the specimen abruptly fractured in a shear mode. In the creep test of pushing the steel plate, when the pushing force reached 25 kN, cracks began to occur, the horizontal displacement field and the shear strain field concentrated continuously along the interface between the steel plate and the rock, and a new narrow concentration band gathered in the upper layer. When the pushing force reached 27.5 kN, another narrow shear deformation band formed in the lower layer. The steel plate was then pushed out quickly, accompanied by strong creep deformation.
Heckman, Vanessa; Kohler, Monica; Heaton, Thomas
2011-01-01
Automated damage detection methods have application to instrumented structures that are susceptible to types of damage that are difficult or costly to detect. The presented method has application to the detection of brittle fracture of welded beam-column connections in steel moment-resisting frames (MRFs), where locations of potential structural damage are known a priori. The method makes use of a prerecorded catalog of Green’s function templates and a cross-correlation method ...
Scaled effective on-site Coulomb interaction in the DFT+U method for correlated materials
Nawa, Kenji; Akiyama, Toru; Ito, Tomonori; Nakamura, Kohji; Oguchi, Tamio; Weinert, M.
2018-01-01
The first-principles calculation of correlated materials within density functional theory remains challenging, but the inclusion of a Hubbard-type effective on-site Coulomb term (Ueff) often provides a computationally tractable and physically reasonable approach. However, the reported values of Ueff vary widely, even for the same ionic state and the same material. Since the final physical results can depend critically on the choice of parameter and the computational details, there is a need to have a consistent procedure to choose an appropriate one. We revisit this issue from constraint density functional theory, using the full-potential linearized augmented plane wave method. The calculated Ueff parameters for the prototypical transition-metal monoxides—MnO, FeO, CoO, and NiO—are found to depend significantly on the muffin-tin radius RMT, with variations of more than 2-3 eV as RMT changes from 2.0 to 2.7 aB. Despite this large variation in Ueff, the calculated valence bands differ only slightly. Moreover, we find an approximately linear relationship between Ueff(RMT) and the number of occupied localized electrons within the sphere, and give a simple scaling argument for Ueff; these results provide a rationalization for the large variation in reported values. Although our results imply that Ueff values are not directly transferable among different calculation methods (or even the same one with different input parameters such as RMT), use of this scaling relationship should help simplify the choice of Ueff.
Directory of Open Access Journals (Sweden)
Alexov Emil G
2006-11-01
Background: Predicting residue contacts from the primary amino acid sequence alone is an important task that can guide 3D structure modeling and verify the quality of predicted 3D structures. The correlated mutations (CM) method is among the most promising approaches and has been used to predict amino acid pairs that are distant in the primary sequence but form contacts in the native 3D structure of homologous proteins. Results: Here we report a new implementation of the CM method with an added set of selection rules (filters). The parameters of the algorithm were optimized against fifteen high-resolution crystal structures, with an optimization criterion that maximized the confidence of the predictions. The optimization resulted in a true positive ratio (TPR) of 0.08 for CM without filters and 0.14 for CM with filters. The protocol was further benchmarked against 65 high-resolution structures that were not included in the optimization set; this benchmarking resulted in a TPR of 0.07 for CM without filters and 0.09 for CM with filters. Conclusion: The inclusion of selection rules thus resulted in an overall improvement of 30%. In addition, a pair-wise comparison of the TPR for each protein without and with filters showed an average improvement factor of 1.7. The methodology was implemented in a web server (http://www.ces.clemson.edu/compbio/recon) that is freely available to the public. The purpose of this implementation is to provide 3D structure predictors with a tool that can help rank alternative models by the number of satisfied predicted contacts, and that can provide a confidence score for contacts in cases where the structure is known.
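The TPR metric used above is simply the fraction of predicted residue pairs that are true contacts in the native structure. A minimal sketch, with entirely hypothetical pair lists:

```python
def true_positive_ratio(predicted_pairs, native_contacts):
    """TPR = fraction of predicted residue pairs that are real contacts."""
    predicted = set(predicted_pairs)
    hits = predicted & set(native_contacts)
    return len(hits) / len(predicted)

# Hypothetical predictions and native contact set (residue index pairs).
predicted = [(3, 41), (7, 19), (12, 88), (5, 60)]
native = {(3, 41), (12, 88), (2, 55)}
tpr = true_positive_ratio(predicted, native)  # 2 of 4 predictions are contacts
```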
Lanki, Timo; Alm, Sari; Ruuskanen, Juhani; Janssen, Nicole A H; Jantunen, Matti; Pekkanen, Juha
2002-05-01
There is evidence that hourly variations in exposure to airborne particulate matter (PM) may be associated with adverse health effects, yet there are only a few published data on short-term levels of personal exposure to PM in community settings. The objectives of the study were to assess hourly and shorter-term variations in personal PM(2.5) exposure in Helsinki, Finland, and to compare results from portable photometers with simultaneously measured gravimetric concentrations. The effect of relative humidity on the photometric results was also evaluated. Personal PM(2.5) exposures of elderly persons were assessed for 24 h every second week, resulting in 308 successful measurements from 47 subjects. Large changes in concentrations were seen within minutes of cooking or changing microenvironment. The median of the daily 1-h maxima was over twice the median of the 24-h averages, and there was a strong, significant but nonlinear association between the two. The median (95th percentile) of the photometric 24-h concentrations was 12.1 (37.7) microg/m3 and of the 24-h gravimetric concentrations 9.2 (21.3) microg/m3. The correlation between the photometric and the gravimetric method was quite good (R2 = 0.86). Participants spent 94.1% of their time indoors or in a vehicle, where relative humidity is usually low and thus unlikely to significantly affect photometric results; even outdoors, relative humidity had only a modest effect on concentrations. Photometers are a promising method for exploring the health effects of short-term variation in personal PM(2.5) exposure.
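The R2 comparison between the two methods is an ordinary coefficient of determination for a linear fit of one set of 24-h concentrations against the other. A sketch with synthetic paired data (hypothetical values, not the study's measurements):

```python
import numpy as np

def r_squared(x, y):
    """Coefficient of determination for a simple linear fit y ~ a + b*x."""
    b, a = np.polyfit(x, y, 1)        # slope, intercept
    resid = y - (a + b * x)
    ss_res = float(resid @ resid)
    ss_tot = float(((y - y.mean()) ** 2).sum())
    return 1.0 - ss_res / ss_tot

# Synthetic paired 24-h concentrations in µg/m³: the photometer reads
# systematically high, as in the study, but tracks the gravimetric values.
rng = np.random.default_rng(2)
grav = rng.uniform(5.0, 40.0, 50)
photo = 1.3 * grav + rng.normal(0.0, 1.5, 50)
r2 = r_squared(photo, grav)
```

A high R2 with a slope away from 1 is exactly the study's situation: good correlation despite a systematic offset between the photometric and gravimetric scales.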
Miao, Yonghao; Zhao, Ming; Lin, Jing; Lei, Yaguo
2017-08-01
The extraction of periodic impulses, which are important indicators of rolling bearing faults, from vibration signals is of considerable significance for fault diagnosis. Maximum correlated kurtosis deconvolution (MCKD), developed from minimum entropy deconvolution (MED), has proven to be an efficient tool for enhancing periodic impulses in the diagnosis of rolling element bearings and gearboxes. However, challenges remain when MCKD is applied to bearings operating under harsh working conditions, mainly because of the rigorous requirements on the multiple input parameters and the complicated resampling process. To overcome these limitations, an improved MCKD (IMCKD) is presented in this paper. The new method estimates the iterative period by calculating the autocorrelation of the envelope signal rather than relying on a provided prior period. Moreover, the iterative period gradually approaches the true fault period as it is updated after every iterative step. Since IMCKD is unaffected by impulse signals with high kurtosis values, the new method selects the maximum-kurtosis filtered signal as the final choice from all candidates within the assigned number of iterations. Compared with MCKD, IMCKD has three advantages. First, by dispensing with the prior period and the choice of the shift order, IMCKD is more efficient and more robust. Second, the resampling process is not necessary for IMCKD, which greatly simplifies the subsequent frequency spectrum and envelope spectrum analyses because the sampling rate need not be reset. Third, IMCKD has a significant performance advantage in diagnosing bearing compound faults, which expands its range of application. Finally, the effectiveness and superiority of IMCKD are validated on a number of simulated bearing fault signals and by application to compound-fault and single-fault diagnosis of a locomotive bearing.
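The period-estimation idea, finding the fault period from the autocorrelation of the envelope rather than requiring it as an input, can be sketched as follows. This is an assumption-laden illustration, not the IMCKD algorithm: the envelope is approximated by simple rectification instead of a Hilbert transform, and the test signal is a synthetic impulse train.

```python
import numpy as np

def estimate_fault_period(signal, fs, min_period):
    """Estimate the dominant impulse repetition period (s) from the
    autocorrelation of the rectified, mean-removed envelope."""
    env = np.abs(signal) - np.abs(signal).mean()
    ac = np.correlate(env, env, mode="full")[len(env) - 1:]
    start = int(min_period * fs)          # skip the zero-lag peak
    lag = start + int(np.argmax(ac[start:]))
    return lag / fs

# Synthetic train of decaying oscillatory bursts repeating every 0.05 s.
fs = 10_000
t = np.arange(0.0, 0.5, 1.0 / fs)
impulses = np.zeros_like(t)
impulses[::500] = 1.0                     # one impulse every 500 samples
burst = np.exp(-400.0 * t[:200]) * np.sin(2 * np.pi * 2000.0 * t[:200])
sig = np.convolve(impulses, burst, mode="same")
est = estimate_fault_period(sig, fs, min_period=0.01)
```

In an MCKD-style loop, this estimate would seed the first iteration and then be re-estimated from the filtered signal at every step.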
Radioguided breast surgery for occult lesion localization – correlation between two methods
Directory of Open Access Journals (Sweden)
Gutfilen Bianca
2008-08-01
Background: The detection of subclinical breast lesions has increased with screening mammography. Biopsy techniques can offer precision and agility in execution, as well as patient comfort. This trial compares radioguided occult lesion localization (ROLL) and wire-guided localization (WL) of breast lesions. We investigated whether a procedure performed at the ambulatory level (ROLL) could lead to a better aesthetic result and less postoperative pain. In addition, we intended to demonstrate the efficacy of radioguided localization and removal of occult breast lesions using radiopharmaceuticals injected directly into the lesions, and to correlate radiological and histopathological findings. Methods: One hundred and twenty patients were randomized into two groups (59 WL and 61 ROLL). The patients were asked to score the cosmetic appearance of their breast after surgery, and a numerical rating scale was used to measure pain on the first postoperative day. Margins were considered clear at ≥ 10 mm for invasive cancer, ≥ 5 mm for ductal carcinoma in situ, and ≥ 1 mm for benign disease. Patients were subsequently treated according to the definitive histological result. When appropriate, different statistical tests were used to assess the significance of differences between the two groups. Results: WL and ROLL located all the occult breast lesions successfully. In the ROLL group, the specimen volume was smaller and there were more cases with clear margins. Conclusion: ROLL is an effective method for the excision of non-palpable breast lesions. It enables more careful planning of the cutaneous incision, leading to better aesthetic results, fewer postoperative symptoms, and smaller volumes of excised tissue.
Kärhä, Petri; Vaskuri, Anna; Mäntynen, Henrik; Mikkonen, Nikke; Ikonen, Erkki
2017-08-01
Spectral irradiance data are often used to calculate colorimetric properties, such as color coordinates and color temperatures of light sources, by integration. The spectral data may contain unknown correlations that should be accounted for in the uncertainty estimation. We propose a new method for estimating uncertainties in such cases. The method goes through all possible scenarios of deviations using Monte Carlo analysis: varying spectral error functions are produced by combining spectral base functions, and the distorted spectra are used to calculate the colorimetric quantities. Standard deviations of the colorimetric quantities over the different scenarios give uncertainties assuming no correlations, uncertainties assuming full correlation, and uncertainties for the unfavorable case of unknown correlations, which turn out to be a significant source of uncertainty. With 1% standard uncertainty in spectral irradiance, the expanded uncertainty of the correlated color temperature of a source corresponding to the CIE Standard Illuminant A may reach 37.2 K in unfavorable conditions, whereas calculations assuming full correlation give zero uncertainty and calculations assuming no correlations yield expanded uncertainties of 5.6 K and 12.1 K for wavelength steps of 1 nm and 5 nm, respectively, in the spectral integrations. We also show that there is an absolute limit of 60.2 K on the error of the correlated color temperature for Standard Illuminant A when assuming 1% standard uncertainty in the spectral irradiance. A comparison of our uncorrelated uncertainties with those obtained using analytical methods by other research groups shows good agreement. We re-estimated the uncertainties for the colorimetric properties of our 1 kW photometric standard lamps using the new method; the revised uncertainty of color temperature is a factor of 2.5 higher than the uncertainty assuming no correlations.
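The contrast between the no-correlation and full-correlation scenarios can be sketched with a Monte Carlo comparison on a generic weighted spectral integral. This is a simplified stand-in, with hypothetical spectra and a linear integrated quantity rather than the correlated color temperature; only the two extreme error models of the paper are illustrated.

```python
import numpy as np

rng = np.random.default_rng(3)
wl = np.arange(380.0, 781.0, 5.0)                  # 5 nm wavelength grid
step = 5.0
spectrum = np.exp(-0.5 * ((wl - 560.0) / 80.0) ** 2)   # hypothetical smooth spectrum
weight = np.exp(-0.5 * ((wl - 555.0) / 40.0) ** 2)     # hypothetical weighting function

def integrate(s):
    """Rectangular-rule integral of the spectrum against the weighting function."""
    return float((s * weight).sum() * step)

u = 0.01          # 1 % standard uncertainty in spectral irradiance
trials = 2000
q0 = integrate(spectrum)

# Scenario 1: fully uncorrelated errors (independent at each wavelength).
q_unc = [integrate(spectrum * (1.0 + u * rng.standard_normal(wl.size)))
         for _ in range(trials)]
# Scenario 2: fully correlated errors (one common scale factor).
q_cor = [integrate(spectrum * (1.0 + u * rng.standard_normal()))
         for _ in range(trials)]

rel_u_unc = float(np.std(q_unc)) / q0
rel_u_cor = float(np.std(q_cor)) / q0
```

For this linear quantity, full correlation passes the 1% scale error straight through, while independent per-wavelength errors average out. For ratio-type quantities such as chromaticity coordinates the common scale factor cancels instead, which is why the paper finds zero color-temperature uncertainty under full correlation.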
Quantum Monte Carlo methods and strongly correlated electrons on honeycomb structures
Energy Technology Data Exchange (ETDEWEB)
Lang, Thomas C.
2010-12-16
In this thesis we apply recently developed as well as sophisticated quantum Monte Carlo methods to numerically investigate models of strongly correlated electron systems on honeycomb structures. The latter are of particular interest owing to their unique properties when simulating electrons on them, such as the relativistic dispersion, strong quantum fluctuations and their resistance to instabilities. This work covers several projects, including the advancement of the weak-coupling continuous-time quantum Monte Carlo method and its application to zero temperature and phonons, quantum phase transitions of valence bond solids in spin-1/2 Heisenberg systems using projector quantum Monte Carlo in the valence bond basis, and the magnetic-field-induced transition to a canted antiferromagnet in the Hubbard model on the honeycomb lattice. The emphasis lies on two projects investigating the phase diagram of the SU(2)- and the SU(N)-symmetric Hubbard model on the hexagonal lattice. At sufficiently low temperatures, condensed-matter systems tend to develop order. An exception is quantum spin liquids, where fluctuations prevent a transition to an ordered state down to the lowest temperatures. Such a state has previously been elusive in experimentally relevant microscopic two-dimensional models; here we show, by means of large-scale quantum Monte Carlo simulations of the SU(2) Hubbard model on the honeycomb lattice, that a quantum spin liquid emerges between the state described by massless Dirac fermions and an antiferromagnetically ordered Mott insulator. This unexpected quantum-disordered state is found to be a short-range resonating valence bond liquid, akin to the one proposed for high-temperature superconductors. Inspired by the rich phase diagrams of SU(N) models, we study the SU(N)-symmetric Hubbard-Heisenberg quantum antiferromagnet on the honeycomb lattice to investigate the reliability of 1/N corrections to large-N results by means of numerically exact QMC simulations. We study the melting of phases
International Nuclear Information System (INIS)
Zhu Hongxun; Pan Hongping; Jian Xingxiang
2008-01-01
Prognosis and evaluation of uranium resources in the Jiangzha region, Sichuan province, are carried out through the multiple correlation analysis method. By combining the characteristics of the method with the geological circumstances of the areas to be predicted, the uranium source, rock types, structure, terrain, hot springs and red basins are selected as estimation variables (factors). The original data of the reference and prediction units are listed first, t