WorldWideScience

Sample records for centroid finding method

  1. Centroid finding method for position-sensitive detectors

    International Nuclear Information System (INIS)

    Radeka, V.; Boie, R.A.

    1979-10-01

    A new centroid finding method for all detectors where the signal charge is collected or induced on strips or wires, or on subdivided resistive electrodes, is presented. The centroid of charge is determined by convolution of the sequentially switched outputs from these subdivisions or from the strips with a linear centroid finding filter. The position line width is inversely proportional to N^(3/2), where N is the number of subdivisions.
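
    As an illustration of the idea (not of the authors' switched-filter hardware), the sketch below computes a charge centroid from hypothetical strip signals as a position-weighted sum, which is what convolving the sequentially read-out subdivisions with a linear (ramp) weighting filter amounts to; all numbers are made up.

    ```python
    import numpy as np

    def charge_centroid(strip_charges):
        """Centroid (in strip-pitch units) of the charge seen by the strips.

        Numerically equivalent to convolving the sequentially switched strip
        outputs with a linear (ramp) weighting filter and normalising by the
        total collected charge.
        """
        q = np.asarray(strip_charges, dtype=float)
        idx = np.arange(len(q))               # strip positions, unit pitch assumed
        return np.sum(idx * q) / np.sum(q)

    # toy readout: charge induced on a few neighbouring strips
    print(charge_centroid([0.0, 0.1, 0.8, 2.0, 2.4, 1.1, 0.2]))  # ~3.64
    ```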

  2. Centroid finding method for position-sensitive detectors

    International Nuclear Information System (INIS)

    Radeka, V.; Boie, R.A.

    1980-01-01

    A new centroid finding method for all detectors where the signal charge is collected or induced on strips or wires, or on subdivided resistive electrodes, is presented. The centroid of charge is determined by convolution of the sequentially switched outputs from these subdivisions or from the strips with a linear centroid finding filter. The position line width is inversely proportional to N^(3/2), where N is the number of subdivisions. (orig.)

  3. Modification of backgammon shape cathode and graded charge division readout method for a novel triple charge division centroid finding method

    International Nuclear Information System (INIS)

    Javanmardi, F.; Matoba, M.; Sakae, T.

    1996-01-01

    A Triple Charge Division (TCD) centroid finding method that uses a modified pattern of the Backgammon Shape Cathode (MBSC) is introduced for medium-length position-sensitive detectors with an optimum number of cathode segments. The MBSC pattern has three separate areas delimited by sawtooth-like insulator gaps: the two side areas are separated by a central common area whose size is twice that of each side area. Although the central area is the widest of the three, the side areas play the main role in position sensing. With the same resolution and linearity, the active region of the original Backgammon pattern is doubled by using the MBSC pattern, and for the same length the linearity of TCD centroid finding is much better than that of the Backgammon charge division readout method. The linearity prediction for TCD centroid finding and the experimental results led us to an optimum truncation of the apices of the MBSC pattern in the central area. TCD centroid finding requires a special readout method, since charge must be collected from two segments in each side area and from three segments in the central area of the MBSC pattern. The so-called Graded Charge Division (GCD) is this special readout method for TCD; it combines charge division readout with sequential grading of serial segments. Position sensing with TCD centroid finding and GCD readout was performed with MBSC patterns of two sizes (200 mm and 80 mm), and a spatial resolution of about 1% of the detector length was achieved.

  4. Centroid crossing

    International Nuclear Information System (INIS)

    Swift, G.

    1990-01-01

    This paper presents an algorithm for finding peaks in data spectra. It is based on calculating a moving centroid across the spectrum and picking off the points between which the calculated centroid crosses the channel number; interpolation then yields a more precise peak location. The algorithm can be implemented very efficiently, requiring about one addition, subtraction, multiplication, and division operation per data point. With integer data and a centroid window equal to a power of two (so that the division can be done with shifts), the algorithm is particularly suited to efficient machine-language implementation. With suitable adjustments (involving little overhead except at suspected peaks), it is possible to minimize either false peak detections or missed genuine peaks. Extending the method to more dimensions is straightforward, although interpolation is more difficult. The algorithm has been used on a variety of nuclear data spectra with great success.
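
    The description above is concrete enough to sketch in code. The following is a minimal, unoptimised Python rendering of the moving-centroid crossing idea (window size, the toy spectrum, and the function name are my own choices, not the paper's); a production version would use running sums to approach the roughly one-operation-per-point cost quoted above.

    ```python
    import numpy as np

    def centroid_crossing_peaks(spectrum, half_width=8):
        """Find peaks where the moving-window centroid crosses the window centre.

        The window spans 2*half_width + 1 channels.  A peak is reported where the
        signed distance (centroid - centre channel) changes from positive to
        non-positive; linear interpolation of that zero crossing refines the
        peak position.
        """
        y = np.asarray(spectrum, dtype=float)
        peaks, prev_d = [], None
        for c in range(half_width, len(y) - half_width):
            w = y[c - half_width:c + half_width + 1]
            s = w.sum()
            if s <= 0:
                prev_d = None
                continue
            centroid = (np.arange(c - half_width, c + half_width + 1) * w).sum() / s
            d = centroid - c
            if prev_d is not None and prev_d > 0 >= d:
                peaks.append((c - 1) + prev_d / (prev_d - d))  # interpolated crossing
            prev_d = d
        return peaks

    # toy spectrum: flat background plus a Gaussian peak at channel 100
    x = np.arange(256)
    spec = 5.0 + 100.0 * np.exp(-0.5 * ((x - 100.0) / 3.0) ** 2)
    print(centroid_crossing_peaks(spec))   # -> [~100.0]
    ```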

  5. Statistical analysis of x-ray stress measurement by centroid method

    International Nuclear Information System (INIS)

    Kurita, Masanori; Amano, Jun; Sakamoto, Isao

    1982-01-01

    The X-ray technique allows a nondestructive and rapid measurement of residual stresses in metallic materials. The centroid method has an advantage over other X-ray methods in that it can determine the angular position of a diffraction line, from which the stress is calculated, even with an asymmetrical line profile. An equation for the standard deviation of the angular position of a diffraction line, σ_p, caused by statistical fluctuation was derived, which is a fundamental source of scatter in X-ray stress measurements. This equation shows that an increase of X-ray counts by a factor of k results in a decrease of σ_p by a factor of 1/√k. It also shows that σ_p increases rapidly as the angular range used in calculating the centroid increases. It is therefore important to calculate the centroid using the narrow angular range between the two ends of the diffraction line where it starts to deviate from the straight background line. By using quenched structural steels JIS S35C and S45C, the residual stresses and their standard deviations were calculated by the centroid, parabola, Gaussian curve, and half-width methods, and the results were compared. The centroid of a diffraction line was affected greatly by the background line used. The standard deviation of the stress measured by the centroid method was found to be the largest among the four methods. (author)
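
    As a small illustration of the recommendation above (compute the centroid over the narrow range between the two ends of the line, after removing a straight background), here is a hedged sketch; the background model and variable names are mine, not the paper's.

    ```python
    import numpy as np

    def diffraction_line_centroid(two_theta, counts, i_lo, i_hi):
        """Centroid of a diffraction line between channels i_lo and i_hi,
        with a straight background drawn between the two end points subtracted.
        Increasing the total counts by a factor k reduces the statistical
        spread of this centroid by roughly 1/sqrt(k)."""
        x = np.asarray(two_theta, dtype=float)[i_lo:i_hi + 1]
        y = np.asarray(counts, dtype=float)[i_lo:i_hi + 1]
        background = np.linspace(y[0], y[-1], len(y))
        net = y - background
        return np.sum(x * net) / np.sum(net)
    ```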

  6. Implementation of the Centroid Method for the Correction of Turbulence

    Directory of Open Access Journals (Sweden)

    Enric Meinhardt-Llopis

    2014-07-01

    The centroid method for the correction of turbulence consists in computing the Karcher-Fréchet mean of the sequence of input images. The direction of deformation between a pair of images is determined by the optical flow. A distinguishing feature of the centroid method is that it can produce useful results from an arbitrarily small set of input images.

  7. Comparison of performance of some common Hartmann-Shack centroid estimation methods

    Science.gov (United States)

    Thatiparthi, C.; Ommani, A.; Burman, R.; Thapa, D.; Hutchings, N.; Lakshminarayanan, V.

    2016-03-01

    The accuracy of the estimation of optical aberrations by measuring the distorted wavefront using a Hartmann-Shack wavefront sensor (HSWS) depends mainly on the measurement accuracy of the centroid of the focal spot. The most commonly used methods for centroid estimation, such as the brightest spot centroid, first moment centroid, weighted center of gravity, and intensity weighted center of gravity, are generally applied to the entire individual sub-apertures of the lenslet array. However, these centroid estimates are sensitive to the influence of reflections, scattered light, and noise, especially when the signal spot area is small compared to the whole sub-aperture area. In this paper, we compare the performance of the commonly used centroiding methods for the estimation of optical aberrations, with and without some pre-processing steps (thresholding, Gaussian smoothing, and adaptive windowing). As an example we use the aberrations of a human eye model. This is done using raw data collected from a custom-made ophthalmic aberrometer and a model eye emulating myopic and hyper-metropic defocus values up to 2 Diopters. We show that any simple centroiding algorithm is sufficient for ophthalmic applications when estimating aberrations within the typical clinically acceptable margin of a quarter Diopter, provided certain pre-processing steps are used to reduce the impact of external factors.
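
    For concreteness, the snippet below sketches three of the simple estimators named above as they might be applied to one sub-aperture image; the threshold fraction and the weighting exponent are illustrative assumptions, not values taken from the paper.

    ```python
    import numpy as np

    def first_moment_centroid(img):
        """Plain first-moment (centre-of-gravity) centroid of a sub-aperture."""
        img = np.asarray(img, dtype=float)
        ys, xs = np.indices(img.shape)
        total = img.sum()
        return (xs * img).sum() / total, (ys * img).sum() / total

    def thresholded_centroid(img, frac=0.2):
        """Centroid after subtracting a simple threshold (one pre-processing step)."""
        img = np.asarray(img, dtype=float)
        t = img.min() + frac * (img.max() - img.min())
        return first_moment_centroid(np.clip(img - t, 0.0, None))

    def weighted_cog(img, power=2):
        """One common 'intensity-weighted centre of gravity': weight pixels by I**power."""
        return first_moment_centroid(np.asarray(img, dtype=float) ** power)
    ```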

  8. The efficiency of the centroid method compared to a simple average

    DEFF Research Database (Denmark)

    Eskildsen, Jacob Kjær; Kristensen, Kai; Nielsen, Rikke

    Based on empirical data as well as a simulation study, this paper gives recommendations with respect to situations where a simple average of the manifest indicators can be used as a close proxy for the centroid method and when it cannot.

  9. Adaptive thresholding and dynamic windowing method for automatic centroid detection of digital Shack-Hartmann wavefront sensor

    International Nuclear Information System (INIS)

    Yin Xiaoming; Li Xiang; Zhao Liping; Fang Zhongping

    2009-01-01

    A Shack-Hartmann wavefront sensor (SHWS) splits the incident wavefront into many subsections and transfers the distorted wavefront detection into a centroid measurement. The accuracy of the centroid measurement determines the accuracy of the SHWS. Many methods have been presented to improve the accuracy of the wavefront centroid measurement. However, most of these methods are discussed from the point of view of optics, based on the assumption that the spot intensity of the SHWS has a Gaussian distribution, which is not applicable to the digital SHWS. In this paper, we present a centroid measurement algorithm based on adaptive thresholding and dynamic windowing, using image processing techniques, for practical application of the digital SHWS in surface profile measurement. The method can detect the centroid of each focal spot precisely and robustly by eliminating the influence of various noise sources, such as diffraction in the digital SHWS, unevenness and instability of the light source, as well as deviation between the centroid of the focal spot and the center of the detection area. The experimental results demonstrate that the algorithm has better precision, repeatability, and stability than other commonly used centroid methods, such as the statistical averaging, thresholding, and windowing algorithms.
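
    The abstract does not give the exact algorithm, but the combination it names (an adaptive threshold plus a window that re-centres itself on the spot) can be sketched as below; the window size, iteration count, and threshold rule are assumptions made for illustration.

    ```python
    import numpy as np

    def adaptive_window_centroid(img, guess, win=15, n_iter=3, k=3.0):
        """Iteratively re-centre a small window on a focal spot and recompute
        its centroid, thresholding each window at mean + k*std of its pixels."""
        img = np.asarray(img, dtype=float)
        x, y = guess                      # e.g. the nominal lenslet grid position
        for _ in range(n_iter):
            x0 = max(int(round(x)) - win, 0)
            y0 = max(int(round(y)) - win, 0)
            sub = img[y0:int(round(y)) + win + 1, x0:int(round(x)) + win + 1]
            net = np.clip(sub - (sub.mean() + k * sub.std()), 0.0, None)
            if net.sum() == 0:
                break                     # spot lost: keep the previous estimate
            ys, xs = np.indices(net.shape)
            x = x0 + (xs * net).sum() / net.sum()
            y = y0 + (ys * net).sum() / net.sum()
        return x, y
    ```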

  10. Noninvasive measurement of cardiopulmonary blood volume: evaluation of the centroid method

    International Nuclear Information System (INIS)

    Fouad, F.M.; MacIntyre, W.J.; Tarazi, R.C.

    1981-01-01

    Cardiopulmonary blood volume (CPV) and mean pulmonary transit time (MTT) determined by radionuclide measurements (Tc-99m HSA) were compared with values obtained from simultaneous dye-dilution (DD) studies (indocyanine green). The mean transit time was obtained from radionuclide curves by two methods: the peak-to-peak time and the interval between the two centroids determined from the right and left-ventricular time-concentration curves. Correlation of dye-dilution MTT and peak-to-peak time was significant (r = 0.79, p < 0.001), but its correlation with centroid-derived values was better (r = 0.86, p < 0.001). CPV values (using the centroid method for radionuclide technique) correlated significantly with values derived from dye-dilution curves (r = 0.74, p < 0.001). Discrepancies between the two were greater the more rapid the circulation (r = 0.61, p < 0.01), suggesting that minor inaccuracies of dye-dilution methods, due to positioning or delay of the system, can become magnified in hyperkinetic conditions. The radionuclide method is simple, repeatable, and noninvasive, and it provides simultaneous evaluation of pulmonary and systemic hemodynamics. Further, calculation of the ratio of cardiopulmonary to total blood volume can be used as an index of overall venous distensibility and relocation of intravascular blood volume
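
    The centroid step itself is simple; a hedged sketch (array names and the final volume relation are standard indicator-dilution bookkeeping, not details taken from this paper) is:

    ```python
    import numpy as np

    def curve_centroid(t, activity):
        """First-moment (centroid) time of a time-activity curve."""
        t = np.asarray(t, dtype=float)
        a = np.asarray(activity, dtype=float)
        return np.sum(t * a) / np.sum(a)

    def mean_pulmonary_transit_time(t, rv_curve, lv_curve):
        """MTT as the interval between the centroids of the right- and
        left-ventricular time-activity curves (the centroid method)."""
        return curve_centroid(t, lv_curve) - curve_centroid(t, rv_curve)

    # By the central volume principle, cardiopulmonary blood volume then follows
    # as CPV = flow * MTT (with flow, e.g. cardiac output, in consistent units).
    ```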

  11. A further investigation of the centroid-to-centroid method for stereotactic lung radiotherapy: A phantom study

    International Nuclear Information System (INIS)

    Lu, Bo; Samant, Sanjiv; Mittauer, Kathryn; Lee, Soyoung; Huang, Yin; Li, Jonathan; Kahler, Darren; Liu, Chihray

    2013-01-01

    Purpose: Our previous study [B. Lu et al., “A patient alignment solution for lung SBRT setups based on a deformable registration technique,” Med. Phys. 39(12), 7379–7389 (2012)] proposed a deformable-registration-based patient setup strategy called the centroid-to-centroid (CTC) method, which can perform an accurate alignment of internal-target-volume (ITV) centroids between averaged four-dimensional computed tomography and cone-beam computed tomography (CBCT) images. Scenarios with variations between CBCT and simulation CT caused by irregular breathing and/or tumor change were not specifically considered in the patient study [B. Lu et al., “A patient alignment solution for lung SBRT setups based on a deformable registration technique,” Med. Phys. 39(12), 7379–7389 (2012)] due to the lack of both a sufficiently large patient data sample and a method of tumor tracking. The aim of this study is to thoroughly investigate and compare the impacts of breathing pattern and tumor change on both the CTC and the translation-only (T-only) gray-value mode strategies by employing a four-dimensional (4D) lung phantom. Methods: A sophisticated anthropomorphic 4D phantom (CIRS Dynamic Thorax Phantom model 008) was employed to simulate all desired respiratory variations. The variation scenarios were classified into the following groups: inspiration-to-expiration ratio (IE ratio) change, tumor trajectory change, tumor position change, tumor size change, and combinations of these changes. For each category the authors designed several scenarios to demonstrate the effects of different levels of breathing variation on both the T-only and the CTC methods. Each scenario utilized 4DCT and CBCT scans. The ITV centroid alignment discrepancies for CTC and T-only were evaluated. The dose-volume histograms (DVHs) of ITVs for two extreme cases were analyzed. Results: Except for some extreme cases in the combined group, the accuracy of the CTC registration was about 2 mm for all cases for

  12. Multiple centroid method to evaluate the adaptability of alfalfa genotypes

    Directory of Open Access Journals (Sweden)

    Moysés Nascimento

    2015-02-01

    This study aimed to evaluate the efficiency of multiple centroids for studying the adaptability of alfalfa genotypes (Medicago sativa L.). In this method, the genotypes are compared with ideotypes defined by the bissegmented regression model, according to the researcher's interest. Thus, genotype classification is carried out as determined by the objective of the researcher and the proposed recommendation strategy. Despite the great potential of the method, it needs to be evaluated in a biological context (with real data). In this context, we used data on the dry matter production of 92 alfalfa cultivars, with 20 cuttings, from an experiment in randomized blocks with two replicates carried out from November 2004 to June 2006. The multiple centroid method proved efficient for classifying alfalfa genotypes. Moreover, it produced no ambiguous classifications, provided that the ideotypes were defined according to the researcher's interest, facilitating data interpretation.

  13. Fast centroid algorithm for determining the surface plasmon resonance angle using the fixed-boundary method

    International Nuclear Information System (INIS)

    Zhan, Shuyue; Wang, Xiaoping; Liu, Yuling

    2011-01-01

    To simplify the algorithm for determining the surface plasmon resonance (SPR) angle for special applications and development trends, a fast method for determining the SPR angle, called the fixed-boundary centroid algorithm, is proposed. Two experiments were conducted to compare three centroid algorithms in terms of operation time, sensitivity to shot noise, signal-to-noise ratio (SNR), resolution, and measurement range. Although the measurement range of this method is narrower, the other performance indices are all better than those of the other two centroid methods. The method offers high speed, good conformity, low error, and high SNR and resolution, and thus has the potential to be widely adopted.

  14. Centroid motion in periodically focused beams

    International Nuclear Information System (INIS)

    Moraes, J.S.; Pakter, R.; Rizzato, F.B.

    2005-01-01

    The role of the centroid dynamics in the transport of periodically focused particle beams is investigated. A Kapchinskij-Vladimirskij equilibrium distribution for an off-axis beam is derived. It is shown that centroid and envelope dynamics are uncoupled and that unstable regions for the centroid dynamics overlap with previously stable regions for the envelope dynamics alone. Multiparticle simulations validate the findings. The effects of a conducting pipe encapsulating the beam are also investigated. It is shown that the charge induced at the pipe may generate chaotic orbits which can be detrimental to the adequate functioning of the transport mechanism

  15. Performance Analysis of Combined Methods of Genetic Algorithm and K-Means Clustering in Determining the Value of Centroid

    Science.gov (United States)

    Adya Zizwan, Putra; Zarlis, Muhammad; Budhiarti Nababan, Erna

    2017-12-01

    The determination of the centroids in the K-Means algorithm directly affects the quality of the clustering results, and determining centroids using random numbers has many weaknesses. The GenClust algorithm, which combines Genetic Algorithms and K-Means, uses a genetic algorithm to determine the centroid of each cluster. GenClust uses 50% of the chromosomes obtained through deterministic calculation and 50% obtained by random number generation. This study modifies the GenClust algorithm so that the chromosomes used are 100% obtained through deterministic calculation. The results provide performance comparisons, expressed as mean square error, of centroid determination in the K-Means method using the GenClust method, the modified GenClust method, and classic K-Means.

  16. A physics-motivated Centroidal Voronoi Particle domain decomposition method

    Energy Technology Data Exchange (ETDEWEB)

    Fu, Lin, E-mail: lin.fu@tum.de; Hu, Xiangyu Y., E-mail: xiangyu.hu@tum.de; Adams, Nikolaus A., E-mail: nikolaus.adams@tum.de

    2017-04-15

    In this paper, we propose a novel domain decomposition method for large-scale simulations in continuum mechanics by merging the concepts of Centroidal Voronoi Tessellation (CVT) and Voronoi Particle dynamics (VP). The CVT is introduced to achieve a high-level compactness of the partitioning subdomains by the Lloyd algorithm which monotonically decreases the CVT energy. The number of computational elements between neighboring partitioning subdomains, which scales the communication effort for parallel simulations, is optimized implicitly as the generated partitioning subdomains are convex and simply connected with small aspect-ratios. Moreover, Voronoi Particle dynamics employing physical analogy with a tailored equation of state is developed, which relaxes the particle system towards the target partition with good load balance. Since the equilibrium is computed by an iterative approach, the partitioning subdomains exhibit locality and the incremental property. Numerical experiments reveal that the proposed Centroidal Voronoi Particle (CVP) based algorithm produces high-quality partitioning with high efficiency, independently of computational-element types. Thus it can be used for a wide range of applications in computational science and engineering.
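
    As a toy illustration of the CVT ingredient only (the paper couples it with Voronoi Particle dynamics, which is not reproduced here), a plain serial Lloyd iteration over a point sample looks like this; the generator count, sample size, and seeds are arbitrary.

    ```python
    import numpy as np

    def lloyd_cvt(points, n_generators=8, n_iter=50, seed=0):
        """Lloyd iteration towards a Centroidal Voronoi Tessellation of a point
        sample: assign points to the nearest generator, then move each generator
        to the centroid of its cell.  The CVT energy decreases monotonically."""
        rng = np.random.default_rng(seed)
        pts = np.asarray(points, dtype=float)
        gen = pts[rng.choice(len(pts), n_generators, replace=False)].copy()
        for _ in range(n_iter):
            d = np.linalg.norm(pts[:, None, :] - gen[None, :, :], axis=2)
            labels = d.argmin(axis=1)           # Voronoi partition of the samples
            for j in range(n_generators):
                cell = pts[labels == j]
                if len(cell):
                    gen[j] = cell.mean(axis=0)  # centroid update
        return gen, labels

    samples = np.random.default_rng(1).random((2000, 2))   # unit square
    generators, labels = lloyd_cvt(samples)
    ```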

  17. K-Means Algorithm Performance Analysis With Determining The Value Of Starting Centroid With Random And KD-Tree Method

    Science.gov (United States)

    Sirait, Kamson; Tulus; Budhiarti Nababan, Erna

    2017-12-01

    Clustering methods that have high accuracy and time efficiency are necessary for the filtering process. One method that is well known and widely applied in clustering is K-Means clustering. In its application, the determination of the initial cluster centers greatly affects the results of the K-Means algorithm. This research discusses the results of K-Means clustering with the starting centroids determined by a random method and by a KD-Tree method. Initial random centroid determination on a data set of 1000 student academic records, used to classify potential dropouts, gives an SSE value of 952972 for the quality variable and 232.48 for the GPA variable, whereas initial centroid determination by KD-Tree gives an SSE value of 504302 for the quality variable and 214.37 for the GPA variable. The smaller SSE values indicate that K-Means clustering with initial KD-Tree centroid selection has better accuracy than K-Means clustering with random initial centroid selection.
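
    To make the comparison concrete, the sketch below evaluates the SSE of k-means under a random seeding and under a simple deterministic (quantile-based) seeding; the deterministic rule stands in for the KD-Tree procedure, whose details are not given in the abstract, and all data are synthetic.

    ```python
    import numpy as np

    def kmeans(data, centroids, n_iter=100):
        """Basic Lloyd-style k-means; returns the final centroids and the SSE."""
        x = np.asarray(data, dtype=float)
        c = np.asarray(centroids, dtype=float).copy()
        labels = np.zeros(len(x), dtype=int)
        for _ in range(n_iter):
            labels = np.linalg.norm(x[:, None, :] - c[None, :, :], axis=2).argmin(axis=1)
            for j in range(len(c)):
                if np.any(labels == j):
                    c[j] = x[labels == j].mean(axis=0)
        sse = ((x - c[labels]) ** 2).sum()
        return c, sse

    rng = np.random.default_rng(0)
    data = rng.normal(size=(1000, 2))
    k = 5
    random_init = data[rng.choice(len(data), k, replace=False)]
    deterministic_init = np.quantile(data, np.linspace(0.1, 0.9, k), axis=0)

    print("SSE, random initial centroids       :", kmeans(data, random_init)[1])
    print("SSE, deterministic initial centroids:", kmeans(data, deterministic_init)[1])
    ```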

  18. Differential computation method used to calibrate the angle-centroid relationship in coaxial reverse Hartmann test

    Science.gov (United States)

    Li, Xinji; Hui, Mei; Zhao, Zhu; Liu, Ming; Dong, Liquan; Kong, Lingqin; Zhao, Yuejin

    2018-05-01

    A differential computation method is presented to improve the precision of the calibration for the coaxial reverse Hartmann test (RHT). In the calibration, the accuracy of the distance measurement greatly influences the surface shape test, as demonstrated in the mathematical analyses. However, high-precision absolute distance measurement is difficult in the calibration. Thus, a differential computation method that only requires the relative distance was developed. In the proposed method, a liquid crystal display screen successively displays two regular dot matrix patterns with different dot spacing. In a special case, the images on the detector exhibit similar centroid distributions during the reflector translation. Thus, the critical value of the relative displacement distance and the centroid distributions of the dots on the detector are used to establish the relationship between the rays at certain angles and the detector coordinates. Experiments revealed the approximately linear behavior of the centroid variation with the relative displacement distance. With the differential computation method, the precision of the traditional calibration was improved to the order of 10^-5 rad root mean square, and the precision of the RHT was increased by approximately 100 nm.

  19. Performance evaluation of the spectral centroid downshift method for attenuation estimation.

    Science.gov (United States)

    Samimi, Kayvan; Varghese, Tomy

    2015-05-01

    Estimation of frequency-dependent ultrasonic attenuation is an important aspect of tissue characterization. Along with other acoustic parameters studied in quantitative ultrasound, the attenuation coefficient can be used to differentiate normal and pathological tissue. The spectral centroid downshift (CDS) method is one of the most common frequency-domain approaches applied to this problem. In this study, a statistical analysis of this method's performance was carried out based on a parametric model of the signal power spectrum in the presence of electronic noise. The parametric model used for the power spectrum of the received RF data assumes a Gaussian spectral profile for the transmit pulse, and incorporates the effects of attenuation, windowing, and electronic noise. Spectral moments were calculated and used to estimate second-order centroid statistics. A theoretical expression for the variance of a maximum likelihood estimator of the attenuation coefficient was derived in terms of the centroid statistics and other model parameters, such as the transmit pulse center frequency and bandwidth, RF data window length, SNR, and number of regression points. Theoretically predicted estimation variances were compared with experimentally estimated variances on RF data sets from both computer-simulated and physical tissue-mimicking phantoms. Scan parameter ranges for this study were electronic SNR from 10 to 70 dB, transmit pulse standard deviation from 0.5 to 4.1 MHz, transmit pulse center frequency from 2 to 8 MHz, and data window length from 3 to 17 mm. Acceptable agreement was observed between theoretical predictions and experimentally estimated values, with differences smaller than 0.05 dB/cm/MHz across the parameter ranges investigated. This model helps predict the best attenuation estimation variance achievable with the CDS method in terms of said scan parameters.
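
    For orientation, a bare-bones CDS estimator under the usual Gaussian-spectrum, round-trip model is sketched below; the linear-downshift relation f_c(z) = f_0 - 4·β·σ²·z and the Np-to-dB conversion are textbook assumptions, not expressions quoted from this paper.

    ```python
    import numpy as np

    def spectral_centroid(freqs_mhz, power):
        """Centroid frequency of a power spectrum."""
        return np.sum(freqs_mhz * power) / np.sum(power)

    def cds_attenuation(depths_cm, centroids_mhz, sigma_mhz):
        """Attenuation slope from the downshift of the spectral centroid with depth.

        Assumes a Gaussian power spectrum of standard deviation sigma_mhz and
        attenuation linear in frequency, so that f_c(z) = f_0 - 4*beta*sigma^2*z
        with beta in Np/(cm*MHz); beta is then the regression slope rescaled.
        """
        slope = np.polyfit(np.asarray(depths_cm, float),
                           np.asarray(centroids_mhz, float), 1)[0]   # MHz per cm
        beta_np = -slope / (4.0 * sigma_mhz ** 2)                    # Np/(cm*MHz)
        return beta_np * 8.686                                       # dB/(cm*MHz)
    ```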

  20. A walk-free centroid method for lifetime measurement of the 207Pb 569.7 keV state

    International Nuclear Information System (INIS)

    Gu Jiahui; Liu Jingyi; Xiao Genlai

    1988-01-01

    An improvement has been made in acquiring delayed coincidence spectra with the ND-620 data acquisition system and an off-line data analysis program. The delayed and anti-delayed coincidence spectra can be obtained in one run. The difference of their centroids is the mean lifetime τ. The centroid position of one delayed coincidence spectrum serves as the zero time of the other, so the requirement of measuring a prompt time spectrum is avoided. The walk between prompt and delayed coincidence spectra coming from different runs is thereby eliminated, and the walk during the measurement is partly compensated. The delayed coincidence time spectra of the 207Pb 569.7 keV state were measured and the half-life was calculated via three different methods (slope method, convolution method, centroid shift). The final value of the half-life is 129.5±1.4 ps. The experimental reduced transition probability is compared with theoretical values.
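
    The centroid bookkeeping described above is straightforward; a sketch of it (the array names and the convention caveat are mine) is:

    ```python
    import numpy as np

    def spectrum_centroid(channels, counts):
        """Time centroid of a coincidence time spectrum."""
        ch = np.asarray(channels, dtype=float)
        c = np.asarray(counts, dtype=float)
        return np.sum(ch * c) / np.sum(c)

    def centroid_difference(t, delayed, anti_delayed):
        """Difference of the centroids of the delayed and anti-delayed spectra
        taken in the same run.  The record above identifies this difference with
        the mean lifetime tau; note that other centroid-shift conventions take
        half of the delayed/anti-delayed difference instead."""
        return spectrum_centroid(t, delayed) - spectrum_centroid(t, anti_delayed)
    ```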

  1. Research on Centroid Position for Stairs Climbing Stability of Search and Rescue Robot

    Directory of Open Access Journals (Sweden)

    Yan Guo

    2011-01-01

    This paper presents the relationship between stair-climbing stability and the centroid position of a search and rescue robot. The robot system is considered as a mass point-plane model and its kinematics features are analyzed to find the relationship between the centroid position and the maximal pitch angle of stairs the robot can climb. A computable function for this relationship is given in this paper. During stair climbing there is a maximal stability-keeping angle that depends on the centroid position and the pitch angle of the stairs, and a numerical formula is developed for the relationship between the maximal stability-keeping angle, the centroid position, and the pitch angle of the stairs. The experiment demonstrates the trustworthiness and correctness of the method presented in the paper.

  2. Lifetime measurements in 170Yb using the generalized centroid difference method

    Energy Technology Data Exchange (ETDEWEB)

    Karayonchev, Vasil; Regis, Jean-Marc; Jolie, Jan; Dannhoff, Moritz; Saed-Samii, Nima; Blazhev, Andrey [Institute of Nuclear Physics, University of Cologne, Cologne (Germany)

    2016-07-01

    An experiment using the electronic γ-γ "fast-timing" technique was performed at the 10 MV Tandem Van de Graaff accelerator of the Institute for Nuclear Physics, Cologne, in order to measure lifetimes of the yrast states in 170Yb. The lifetime of the first 2+ state was determined using the slope method, that is, by fitting an exponential decay to the "slope" seen in the energy-gated time-difference spectra. The value of τ = 2.201(57) ns is in good agreement with the lifetimes measured using other techniques. The lifetimes of the first 4+ and 6+ states are determined for the first time. They are in the ps range and were measured using the generalized centroid difference method, an extension of the well-known centroid-shift method developed for fast-timing arrays. The derived reduced transition probabilities B(E2) are compared with calculations done using the confined beta soft model and show good agreement within the experimental uncertainties.

  3. A quantum generalization of intrinsic reaction coordinate using path integral centroid coordinates

    International Nuclear Information System (INIS)

    Shiga, Motoyuki; Fujisaki, Hiroshi

    2012-01-01

    We propose a generalization of the intrinsic reaction coordinate (IRC) for quantum many-body systems described in terms of the mass-weighted ring polymer centroids in the imaginary-time path integral theory. This novel kind of reaction coordinate, which may be called the "centroid IRC," corresponds to the minimum free energy path connecting reactant and product states with the least amount of reversible work applied to the centers of mass of the quantum nuclei, i.e., the centroids. We provide a numerical procedure to obtain the centroid IRC based on first principles by combining ab initio path integral simulation with the string method. This approach is applied to the NH3 molecule and the N2H5- ion, as well as their deuterated isotopomers, to study the importance of nuclear quantum effects in intramolecular and intermolecular proton transfer reactions. We find that, in the intramolecular proton transfer (inversion) of NH3, the free energy barrier for the centroid variables decreases by about 20% compared to the classical one at room temperature. In the intermolecular proton transfer of N2H5-, the centroid IRC deviates largely from the "classical" IRC, and the free energy barrier is reduced even more drastically by the quantum effects.

  4. Diffeomorphic Iterative Centroid Methods for Template Estimation on Large Datasets

    OpenAIRE

    Cury , Claire; Glaunès , Joan Alexis; Colliot , Olivier

    2014-01-01

    A common approach for the analysis of anatomical variability relies on the estimation of a template representative of the population. The Large Deformation Diffeomorphic Metric Mapping is an attractive framework for that purpose. However, template estimation using LDDMM is computationally expensive, which is a limitation for the study of large datasets. This paper presents an iterative method which quickly provides a centroid of the population in the shape space. This centr...

  5. A variational centroid density procedure for the calculation of transmission coefficients for asymmetric barriers at low temperature

    International Nuclear Information System (INIS)

    Messina, M.; Schenter, G.K.; Garrett, B.C.

    1995-01-01

    The low temperature behavior of the centroid density method of Voth, Chandler, and Miller (VCM) [J. Chem. Phys. 91, 7749 (1989)] is investigated for tunneling through a one-dimensional barrier. We find that the bottleneck for a quantum activated process as defined by VCM does not correspond to the classical bottleneck for the case of an asymmetric barrier. If the centroid density is constrained to be at the classical bottleneck for an asymmetric barrier, the centroid density method can give transmission coefficients that are too large by as much as five orders of magnitude. We follow a variational procedure, as suggested by VCM, whereby the best transmission coefficient is found by varying the position of the centroid until the minimum value for this transmission coefficient is obtained. This is a procedure that is readily generalizable to multidimensional systems. We present calculations on several test systems which show that this variational procedure greatly enhances the accuracy of the centroid density method compared to when the centroid is constrained to be at the barrier top. Furthermore, the relation of this procedure to the low temperature periodic orbit or "instanton" approach is discussed. copyright 1995 American Institute of Physics

  6. Performance Evaluation of the Spectral Centroid Downshift Method for Attenuation Estimation

    OpenAIRE

    Samimi, Kayvan; Varghese, Tomy

    2015-01-01

    Estimation of frequency-dependent ultrasonic attenuation is an important aspect of tissue characterization. Along with other acoustic parameters studied in quantitative ultrasound, the attenuation coefficient can be used to differentiate normal and pathological tissue. The spectral centroid downshift (CDS) method is one of the most common frequency-domain approaches applied to this problem. In this study, a statistical analysis of this method's performance was carried out based on a parametric m...

  7. Two tree-formation methods for fast pattern search using nearest-neighbour and nearest-centroid matching

    NARCIS (Netherlands)

    Schomaker, Lambertus; Mangalagiu, D.; Vuurpijl, Louis; Weinfeld, M.; Schomaker, Lambert; Vuurpijl, Louis

    2000-01-01

    This paper describes tree-based classification of character images, comparing two methods of tree formation and two methods of matching: nearest neighbor and nearest centroid. The first method, Preprocess Using Relative Distances (PURD), is a tree-based reorganization of a flat list of patterns,

  8. An improved Q estimation approach: the weighted centroid frequency shift method

    Science.gov (United States)

    Li, Jingnan; Wang, Shangxu; Yang, Dengfeng; Dong, Chunhui; Tao, Yonghui; Zhou, Yatao

    2016-06-01

    Seismic wave propagation in subsurface media suffers from absorption, which can be quantified by the quality factor Q. Accurate estimation of the Q factor is of great importance for the resolution enhancement of seismic data, precise imaging and interpretation, and reservoir prediction and characterization. The centroid frequency shift method (CFS) is currently one of the most commonly used Q estimation methods. However, for seismic data that contain noise, the accuracy and stability of Q extracted using CFS depend on the choice of frequency band. In order to reduce the influence of frequency band choices and obtain Q with greater precision and robustness, we present an improved CFS Q measurement approach—the weighted CFS method (WCFS), which incorporates a Gaussian weighting coefficient into the calculation procedure of the conventional CFS. The basic idea is to enhance the proportion of advantageous frequencies in the amplitude spectrum and reduce the weight of disadvantageous frequencies. In this novel method, we first construct a Gauss function using the centroid frequency and variance of the reference wavelet. Then we employ it as the weighting coefficient for the amplitude spectrum of the original signal. Finally, the conventional CFS is adopted for the weighted amplitude spectrum to extract the Q factor. Numerical tests of noise-free synthetic data demonstrate that the WCFS is feasible and efficient, and produces more accurate results than the conventional CFS. Tests for noisy synthetic data indicate that the new method has better anti-noise capability than the CFS. The application to field vertical seismic profile (VSP) data further demonstrates its validity.
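
    A compact sketch of the weighting idea is given below; the Gaussian-spectrum CFS relation Q = π·t·σ²/(f_ref − f_target) is the commonly quoted Quan-Harris form and is an assumption of this sketch, as is applying the weight to both spectra.

    ```python
    import numpy as np

    def centroid_frequency(freqs, amp_spec):
        return np.sum(freqs * amp_spec) / np.sum(amp_spec)

    def weighted_cfs_q(freqs, ref_spec, target_spec, travel_time):
        """Weighted centroid-frequency-shift Q estimate (illustrative only).

        A Gaussian weight built from the centroid and variance of the reference
        wavelet's amplitude spectrum is applied before the conventional CFS
        relation for a Gaussian spectrum is evaluated.
        """
        fc_ref = centroid_frequency(freqs, ref_spec)
        var_ref = np.sum((freqs - fc_ref) ** 2 * ref_spec) / np.sum(ref_spec)
        weight = np.exp(-0.5 * (freqs - fc_ref) ** 2 / var_ref)   # Gaussian weight

        w_ref, w_tgt = ref_spec * weight, target_spec * weight
        fc_w_ref = centroid_frequency(freqs, w_ref)
        fc_w_tgt = centroid_frequency(freqs, w_tgt)
        var_w = np.sum((freqs - fc_w_ref) ** 2 * w_ref) / np.sum(w_ref)

        return np.pi * travel_time * var_w / (fc_w_ref - fc_w_tgt)
    ```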

  9. Improvement of correlation-based centroiding methods for point source Shack-Hartmann wavefront sensor

    Science.gov (United States)

    Li, Xuxu; Li, Xinyang; Wang, Caixia

    2018-03-01

    This paper proposes an efficient approach to decrease the computational costs of correlation-based centroiding methods used for point source Shack-Hartmann wavefront sensors. Four typical similarity functions have been compared, i.e. the absolute difference function (ADF), ADF square (ADF2), square difference function (SDF), and cross-correlation function (CCF) using the Gaussian spot model. By combining them with fast search algorithms, such as three-step search (TSS), two-dimensional logarithmic search (TDL), cross search (CS), and orthogonal search (OS), computational costs can be reduced drastically without affecting the accuracy of centroid detection. Specifically, OS reduces calculation consumption by 90%. A comprehensive simulation indicates that CCF exhibits a better performance than other functions under various light-level conditions. Besides, the effectiveness of fast search algorithms has been verified.

  10. A Hybridized Centroid Technique for 3D Molodensky-Badekas ...

    African Journals Online (AJOL)

    Richannan

    the same point in a second reference frame (Ghilani, 2010). ... widely used approach by most researchers to compute values of centroid coordinates in the ... choice of centroid method on the Veis model has been investigated by Ziggah et al.

  11. Noise in position measurement by centroid calculation

    International Nuclear Information System (INIS)

    Volkov, P.

    1996-01-01

    The position of a particle trajectory in a gaseous (or semiconductor) detector can be measured by calculating the centroid of the induced charge on the cathode plane. The charge amplifiers attached to each cathode strip introduce noise which is added to the signal, and this noise broadens the position resolution. Our article gives an analytical tool to estimate the resolution broadening due to the noise per strip and the number of strips involved in the centroid calculation. It is shown that the position resolution (line width) increases faster than the square root of the number of strips involved. We also consider the consequence of added interstrip capacitors, intended to diminish the differential nonlinearity. It is shown that the position error increases more slowly than linearly with the interstrip capacitance, due to the cancellation of correlated noise. The estimate we give can be applied to calculations of position broadening other than centroid finding. (orig.)
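
    The scaling statement above (the line width growing faster than the square root of the number of strips) is easy to reproduce numerically; the following Monte Carlo sketch uses an arbitrary Gaussian induced-charge profile and an arbitrary noise level, so only the trend, not the numbers, is meaningful.

    ```python
    import numpy as np

    def centroid_noise_std(n_strips, fwhm=2.0, noise_rms=0.02, n_trials=20000, seed=0):
        """Spread of the centroid position when independent amplifier noise of
        r.m.s. noise_rms (relative to the total charge) is added to every strip."""
        rng = np.random.default_rng(seed)
        x = np.arange(n_strips) - (n_strips - 1) / 2.0
        signal = np.exp(-0.5 * (x / (fwhm / 2.355)) ** 2)
        signal /= signal.sum()                       # total charge normalised to 1
        noisy = signal + noise_rms * rng.standard_normal((n_trials, n_strips))
        centroids = (noisy * x).sum(axis=1) / noisy.sum(axis=1)
        return centroids.std()

    for n in (5, 9, 15, 25):
        print(n, round(centroid_noise_std(n), 3))    # grows faster than sqrt(n)
    ```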

  12. Shack-Hartmann centroid detection method based on high dynamic range imaging and normalization techniques

    International Nuclear Information System (INIS)

    Vargas, Javier; Gonzalez-Fernandez, Luis; Quiroga, Juan Antonio; Belenguer, Tomas

    2010-01-01

    In the optical quality measuring process of an optical system, including diamond-turned components, the use of a laser light source can produce an undesirable speckle effect in a Shack-Hartmann (SH) CCD sensor. This speckle noise can deteriorate the precision and accuracy of the wavefront sensor measurement. Here we present an SH centroid detection method founded on computer-based techniques and capable of measurement in the presence of strong speckle noise. The method extends the dynamic range imaging capabilities of the SH sensor through the use of a set of different CCD integration times. The resultant extended-range spot map is normalized to accurately obtain the spot centroids. The proposed method has been applied to measure the optical quality of the main optical system (MOS) of the mid-infrared instrument telescope simulator. The wavefront at the exit of this optical system is affected by speckle noise when it is illuminated by a laser source, and by air turbulence because it has a long back focal length (3017 mm). Using the proposed technique, the MOS wavefront error was measured and satisfactory results were obtained.

  13. Optimisation of centroiding algorithms for photon event counting imaging

    International Nuclear Information System (INIS)

    Suhling, K.; Airey, R.W.; Morgan, B.L.

    1999-01-01

    Approaches to photon event counting imaging in which the output events of an image intensifier are located using a centroiding technique have long been plagued by fixed pattern noise in which a grid of dimensions similar to those of the CCD pixels is superimposed on the image. This is caused by a mismatch between the photon event shape and the centroiding algorithm. We have used hyperbolic cosine, Gaussian, Lorentzian, parabolic as well as 3-, 5-, and 7-point centre of gravity algorithms, and hybrids thereof, to assess means of minimising this fixed pattern noise. We show that fixed pattern noise generated by the widely used centre of gravity centroiding is due to intrinsic features of the algorithm. Our results confirm that the recently proposed use of Gaussian centroiding does indeed show a significant reduction of fixed pattern noise compared to centre of gravity centroiding (Michel et al., Mon. Not. R. Astron. Soc. 292 (1997) 611-620). However, the disadvantage of a Gaussian algorithm is a centroiding failure for small pulses, caused by a division by zero, which leads to a loss of detective quantum efficiency (DQE) and to small amounts of residual fixed pattern noise. Using both real data from an image intensifier system employing a progressive scan camera, framegrabber and PC, and also synthetic data from Monte-Carlo simulations, we find that hybrid centroiding algorithms can reduce the fixed pattern noise without loss of resolution or loss of DQE. Imaging a test pattern to assess the features of the different algorithms shows that a hybrid of Gaussian and 3-point centre of gravity centroiding algorithms results in an optimum combination of low fixed pattern noise (lower than a simple Gaussian), high DQE, and high resolution. The Lorentzian algorithm gives the worst results in terms of high fixed pattern noise and low resolution, and the Gaussian and hyperbolic cosine algorithms have the lowest DQEs
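
    Two of the simplest per-event estimators mentioned above can be written in a few lines; the synthetic event below is a pure Gaussian, so the 3-point Gaussian formula recovers its offset exactly, while the failure mode for small pulses (log of zero, division by zero) is visible in the code.

    ```python
    import numpy as np

    def cog3(left, centre, right):
        """3-point centre-of-gravity offset of an event peak, in pixels (-1..+1)."""
        return (right - left) / (left + centre + right)

    def gauss3(left, centre, right):
        """3-point Gaussian interpolation of the peak offset; fails for small
        pulses where a pixel value is zero (log/zero-division), costing DQE."""
        l, c, r = np.log(left), np.log(centre), np.log(right)
        return 0.5 * (l - r) / (l - 2.0 * c + r)

    # synthetic event whose true centre lies +0.3 pixels from the central pixel
    x = np.array([-1.0, 0.0, 1.0])
    event = np.exp(-0.5 * ((x - 0.3) / 0.6) ** 2)
    print(cog3(*event), gauss3(*event))   # ~0.28 vs exactly 0.30
    ```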

  14. The Centroid of a Lie Triple Algebra

    Directory of Open Access Journals (Sweden)

    Xiaohong Liu

    2013-01-01

    General results on the centroids of Lie triple algebras are developed. Centroids of the tensor product of a Lie triple algebra and a unitary commutative associative algebra are studied. Furthermore, the centroid of the tensor product of a simple Lie triple algebra and a polynomial ring is completely determined.

  15. Photon counting imaging and centroiding with an electron-bombarded CCD using single molecule localisation software

    International Nuclear Information System (INIS)

    Hirvonen, Liisa M.; Barber, Matthew J.; Suhling, Klaus

    2016-01-01

    Photon event centroiding in photon counting imaging and single-molecule localisation in super-resolution fluorescence microscopy share many traits. Although photon event centroiding has traditionally been performed with simple single-iteration algorithms, we recently reported that iterative fitting algorithms originally developed for single-molecule localisation fluorescence microscopy work very well when applied to centroiding photon events imaged with an MCP-intensified CMOS camera. Here, we have applied these algorithms for centroiding of photon events from an electron-bombarded CCD (EBCCD). We find that centroiding algorithms based on iterative fitting of the photon events yield excellent results and allow fitting of overlapping photon events, a feature not reported before and an important aspect to facilitate an increased count rate and shorter acquisition times.

  16. Radiographic measures of thoracic kyphosis in osteoporosis: Cobb and vertebral centroid angles

    International Nuclear Information System (INIS)

    Briggs, A.M.; Greig, A.M.; Wrigley, T.V.; Tully, E.A.; Adams, P.E.; Bennell, K.L.

    2007-01-01

    Several measures can quantify thoracic kyphosis from radiographs, yet their suitability for people with osteoporosis remains uncertain. The aim of this study was to examine the validity and reliability of the vertebral centroid and Cobb angles in people with osteoporosis. Lateral radiographs of the thoracic spine were captured in 31 elderly women with osteoporosis. Thoracic kyphosis was measured globally (T1-T12) and regionally (T4-T9) using Cobb and vertebral centroid angles. Multisegmental curvature was also measured by fitting polynomial functions to the thoracic curvature profile. Canonical and Pearson correlations were used to examine correspondence; agreement between measures was examined with linear regression. Moderate to high intra- and inter-rater reliability was achieved (SEM = 0.9-4.0°). Concurrent validity of the simple measures was established against multisegmental curvature (r = 0.88-0.98). Strong association was observed between the Cobb and centroid angles globally (r = 0.84) and regionally (r = 0.83). Correspondence between measures was moderate for the Cobb method (r = 0.72), yet stronger for the centroid method (r = 0.80). The Cobb angle was 20% greater for regional measures due to the influence of endplate tilt. Regional Cobb and centroid angles are valid and reliable measures of thoracic kyphosis in people with osteoporosis. However, the Cobb angle is biased by endplate tilt, suggesting that the centroid angle is more appropriate for this population. (orig.)

  17. Radiographic measures of thoracic kyphosis in osteoporosis: Cobb and vertebral centroid angles

    Energy Technology Data Exchange (ETDEWEB)

    Briggs, A.M.; Greig, A.M. [University of Melbourne, Centre for Health, Exercise and Sports Medicine, School of Physiotherapy, Victoria (Australia); University of Melbourne, Department of Medicine, Royal Melbourne Hospital, Victoria (Australia); Wrigley, T.V.; Tully, E.A.; Adams, P.E.; Bennell, K.L. [University of Melbourne, Centre for Health, Exercise and Sports Medicine, School of Physiotherapy, Victoria (Australia)

    2007-08-15

    Several measures can quantify thoracic kyphosis from radiographs, yet their suitability for people with osteoporosis remains uncertain. The aim of this study was to examine the validity and reliability of the vertebral centroid and Cobb angles in people with osteoporosis. Lateral radiographs of the thoracic spine were captured in 31 elderly women with osteoporosis. Thoracic kyphosis was measured globally (T1-T12) and regionally (T4-T9) using Cobb and vertebral centroid angles. Multisegmental curvature was also measured by fitting polynomial functions to the thoracic curvature profile. Canonical and Pearson correlations were used to examine correspondence; agreement between measures was examined with linear regression. Moderate to high intra- and inter-rater reliability was achieved (SEM = 0.9-4.0°). Concurrent validity of the simple measures was established against multisegmental curvature (r = 0.88-0.98). Strong association was observed between the Cobb and centroid angles globally (r = 0.84) and regionally (r = 0.83). Correspondence between measures was moderate for the Cobb method (r = 0.72), yet stronger for the centroid method (r = 0.80). The Cobb angle was 20% greater for regional measures due to the influence of endplate tilt. Regional Cobb and centroid angles are valid and reliable measures of thoracic kyphosis in people with osteoporosis. However, the Cobb angle is biased by endplate tilt, suggesting that the centroid angle is more appropriate for this population. (orig.)

  18. Centroids of effective interactions from measured single-particle energies: An application

    International Nuclear Information System (INIS)

    Cole, B.J.

    1990-01-01

    Centroids of the effective nucleon-nucleon interaction for the mass region A=28--64 are extracted directly from experimental single-particle spectra, by comparing single-particle energies relative to different cores. Uncertainties in the centroids are estimated at approximately 100 keV, except in cases of exceptional fragmentation of the single-particle strength. The use of a large number of inert cores allows the dependence of the interaction on mass or model space to be investigated. The method permits accurate empirical modifications to be made to realistic interactions calculated from bare nucleon-nucleon potentials, which are known to possess defective centroids in many cases. In addition, the centroids can be used as input to the more sophisticated fitting procedures that are employed to produce matrix elements of the effective interaction

  19. Study on Zero-Doppler Centroid Control for GEO SAR Ground Observation

    Directory of Open Access Journals (Sweden)

    Yicheng Jiang

    2014-01-01

    In geosynchronous Earth orbit SAR (GEO SAR), Doppler centroid compensation is a key step in the imaging process, and it can be performed by attitude steering of the satellite platform. However, this zero-Doppler centroid control method does not work well when the look angle of the radar is outside the expected range. This paper first analyzes the Doppler properties of GEO SAR in the Earth rectangular coordinate system. Then, according to the actual conditions of GEO SAR ground observation, the effective range is presented by the minimum and maximum possible look angles, which are directly related to the orbital parameters. Based on vector analysis, a new approach for zero-Doppler centroid control in GEO SAR, performing the attitude steering by a combination of pitch and roll rotation, is put forward. This approach, which takes the Earth's rotation and elliptical orbit effects into account, can accurately reduce the residual Doppler centroid. All the simulation results verify the correctness of the look angle range and the proposed steering method.

  20. Automatic centroid detection and surface measurement with a digital Shack–Hartmann wavefront sensor

    International Nuclear Information System (INIS)

    Yin, Xiaoming; Zhao, Liping; Li, Xiang; Fang, Zhongping

    2010-01-01

    With the breakthrough of manufacturing technologies, the measurement of surface profiles is becoming a big issue. A Shack–Hartmann wavefront sensor (SHWS) provides a promising technology for non-contact surface measurement with a number of advantages over interferometry. The SHWS splits the incident wavefront into many subsections and transfers the distorted wavefront detection into the centroid measurement. So the accuracy of the centroid measurement determines the accuracy of the SHWS. In this paper, we have presented a new centroid measurement algorithm based on an adaptive thresholding and dynamic windowing method by utilizing image-processing techniques. Based on this centroid detection method, we have developed a digital SHWS system which can automatically detect centroids of focal spots, reconstruct the wavefront and measure the 3D profile of the surface. The system has been tested with various simulated and real surfaces such as flat surfaces, spherical and aspherical surfaces as well as deformable surfaces. The experimental results demonstrate that the system has good accuracy, repeatability and immunity to optical misalignment. The system is also suitable for on-line applications of surface measurement

  1. Centroid vetting of transiting planet candidates from the Next Generation Transit Survey

    Science.gov (United States)

    Günther, Maximilian N.; Queloz, Didier; Gillen, Edward; McCormac, James; Bayliss, Daniel; Bouchy, Francois; Walker, Simon. R.; West, Richard G.; Eigmüller, Philipp; Smith, Alexis M. S.; Armstrong, David J.; Burleigh, Matthew; Casewell, Sarah L.; Chaushev, Alexander P.; Goad, Michael R.; Grange, Andrew; Jackman, James; Jenkins, James S.; Louden, Tom; Moyano, Maximiliano; Pollacco, Don; Poppenhaeger, Katja; Rauer, Heike; Raynard, Liam; Thompson, Andrew P. G.; Udry, Stéphane; Watson, Christopher A.; Wheatley, Peter J.

    2017-11-01

    The Next Generation Transit Survey (NGTS), operating at Paranal since 2016, is a wide-field survey to detect Neptunes and super-Earths transiting bright stars, which are suitable for precise radial velocity follow-up and characterization. Its sub-mmag photometric precision and its ability to identify false positives are therefore crucial. In particular, variable background objects blended into the photometric aperture frequently mimic Neptune-sized transits and are costly in follow-up time. These objects can best be identified with the centroiding technique: if the photometric flux is lost off-centre during an eclipse, the flux centroid shifts towards the centre of the target star. Although this method has successfully been employed by the Kepler mission, it has previously not been implemented from the ground. We present a fully automated centroid vetting algorithm developed for NGTS, enabled by our high-precision autoguiding. Our method allows detecting centroid shifts with an average precision of 0.75 milli-pixel (mpix), and down to 0.25 mpix for specific targets, for a pixel size of 4.97 arcsec. The algorithm is now part of the NGTS candidate vetting pipeline and automatically employed for all detected signals. Further, we develop a joint Bayesian fitting model for all photometric and centroid data, allowing us to disentangle which object (target or background) is causing the signal, and what its astrophysical parameters are. We demonstrate our method on two NGTS objects of interest. These achievements make NGTS the first ground-based wide-field transit survey ever to successfully apply the centroiding technique for automated candidate vetting, enabling the production of a robust candidate list before follow-up.

  2. Optimizing the calculation of point source count-centroid in pixel size measurement

    International Nuclear Information System (INIS)

    Zhou Luyi; Kuang Anren; Su Xianyu

    2004-01-01

    Purpose: Pixel size is an important parameter of gamma cameras and SPECT, and a number of methods are used for its accurate measurement. In the original count-centroid method, the image of a point source (PS) is acquired and its count-centroid calculated to represent the PS position in the image; background counts are inevitable. Thus the measured count-centroid (Xm) is an approximation of the true count-centroid (Xp) of the PS, i.e., Xm = Xp + (Xb - Xp)/(1 + Rp/Rb), where Rp is the net counting rate of the PS, Xb the background count-centroid and Rb the background counting rate. To get an accurate measurement, Rp must be very large, which is impractical, resulting in variation of the measured pixel size. An Rp-independent calculation of the PS count-centroid is desired. Methods: The proposed method attempts to eliminate the effect of the term (Xb - Xp)/(1 + Rp/Rb) by bringing Xb closer to Xp and by reducing Rb. In the acquired PS image, a circular ROI was generated to enclose the PS, with the pixel having the maximum count as the center of the ROI. To choose the diameter (D) of the ROI, a Gaussian count distribution was assumed for the PS; accordingly, K = [1 - (0.5)^(D/R)] x 100 percent of the total PS counts fall within the ROI, R being the full width at half maximum of the PS count distribution. D was set to 6R to enclose most (K = 98.4%) of the PS counts. The count-centroid of the ROI was calculated to represent Xp. The proposed method was tested in measuring the pixel size of a well-tuned SPECT, whose pixel size was estimated to be 3.02 mm according to its mechanical and electronic settings (128x128 matrix, 387 mm UFOV, ZOOM = 1). For comparison, the original method, which was used in former versions of some commercial SPECT software, was also tested. 12 PSs were prepared and their images acquired and stored. The net counting rate of the PSs increased from 10 cps to 1183 cps. Results: Using the proposed method, the measured pixel size (in mm) varied only between 3.00 and 3.01 (mean = 3.01±0.00) as Rp increased
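
    The ROI construction described in the Methods is simple to express in code; the sketch below follows the record's own prescription (maximum-count pixel as the ROI centre, diameter 6×FWHM), with everything else illustrative.

    ```python
    import numpy as np

    def point_source_centroid(image, fwhm):
        """Count-centroid of a point source inside a circular ROI of diameter
        6*FWHM centred on the maximum-count pixel.  Under the record's Gaussian
        assumption, K = 1 - 0.5**(D/R) of the counts (~98.4% for D = 6R) lie
        inside the ROI, so little signal is lost while background is excluded."""
        img = np.asarray(image, dtype=float)
        cy, cx = np.unravel_index(np.argmax(img), img.shape)
        ys, xs = np.indices(img.shape)
        roi = (xs - cx) ** 2 + (ys - cy) ** 2 <= (3.0 * fwhm) ** 2
        w = np.where(roi, img, 0.0)
        return (xs * w).sum() / w.sum(), (ys * w).sum() / w.sum()
    ```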

  3. Alteration of the centroid method to evaluate genotypic adaptability

    Directory of Open Access Journals (Sweden)

    Moysés Nascimento

    2009-03-01

    The objective of this work was to alter the centroid method of evaluating the phenotypic adaptability and stability of genotypes, in order to give it greater biological meaning and to improve quantitative and qualitative aspects of its analysis. The alteration consisted of adding three more ideotypes, defined according to the mean values of the genotypes in the environments. Data from an experiment on the dry matter production of 92 alfalfa (Medicago sativa) genotypes, carried out in randomized blocks with two replicates, were used. The genotypes were subjected to 20 cuttings between November 2004 and June 2006, and each cutting was considered an environment. The inclusion of the ideotypes with greater biological meaning (mean values in the environments) resulted in a graphical dispersion in the shape of an arrow pointing to the right, in which the most productive genotypes lie near the tip of the arrow. With the alteration, only five genotypes were classified into the same classes as in the original centroid method. The arrow-shaped figure provides a direct comparison of the genotypes through the formation of a productivity gradient. The alteration keeps the ease of interpreting the results for genotype recommendation present in the original method and does not allow ambiguous interpretation of the results.

  4. Analysis of the position resolution in centroid measurements in MWPC

    International Nuclear Information System (INIS)

    Gatti, E.; Longoni, A.

    1981-01-01

    Resolution limits in avalanche localization along the anode wires of an MWPC with cathodes connected by resistors and equally spaced amplifiers are evaluated. A simple weighted-centroid method and a highly linear method based on a linear centroid finding filter are considered. The contributions to the variance of the estimator of the avalanche position, due to the series noise of the amplifiers and to the thermal noise of the resistive line, are separately calculated and compared. A comparison is made with the resolution of the MWPC with isolated cathodes. The calculations are performed with a distributed model of the diffusive line formed by the cathodes and the resistors. A comparison is also made with the results obtained with a simple lumped model of the diffusive line. A number of graphs useful in determining the best parameters of an MWPC with a specified position and time resolution are given. It has been found that, for short resolution times, an MWPC with cathodes connected by resistors presents better resolution (lower variance of the estimator of the avalanche position) than an MWPC with isolated cathodes. Conversely, for long resolution times, the variance of the estimator of the avalanche position is lower in an MWPC with isolated cathodes. (orig.)
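
    As a rough illustration of the simple weighted-centroid estimator mentioned above (not of the linear centroid finding filter derived in the paper), the avalanche position can be estimated as the charge-weighted mean of the sampled amplifier outputs; the variable names and amplifier spacing below are assumptions.

        import numpy as np

        def weighted_centroid(charges, positions):
            """Charge-weighted centroid of the signals seen by equally spaced
            cathode amplifiers; a sketch of the simple estimator, ignoring noise terms."""
            q = np.asarray(charges, dtype=float)
            x = np.asarray(positions, dtype=float)
            return (q * x).sum() / q.sum()

        # example: five amplifiers spaced 10 mm apart
        positions = np.arange(5) * 10.0                    # mm
        charges = np.array([0.05, 0.30, 0.45, 0.15, 0.05])
        print(weighted_centroid(charges, positions))       # ~18.5 mm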

  5. The mirror symmetric centroid difference method for picosecond lifetime measurements via {gamma}-{gamma} coincidences using very fast LaBr{sub 3}(Ce) scintillator detectors

    Energy Technology Data Exchange (ETDEWEB)

    Regis, J.-M., E-mail: regis@ikp.uni-koeln.de [Institut fuer Kernphysik, Universitaet zu Koeln, Zuelpicher Str. 77, 50937 Koeln (Germany); Pascovici, G.; Jolie, J.; Rudigier, M. [Institut fuer Kernphysik, Universitaet zu Koeln, Zuelpicher Str. 77, 50937 Koeln (Germany)

    2010-10-01

    The ultra-fast timing technique was introduced in the 1980s and is capable of measuring picosecond lifetimes of nuclear excited states with about 3 ps accuracy. Very fast scintillator detectors are connected to an electronic timing circuit and detector vs. detector time spectra are analyzed by means of the centroid shift method. The very good 3% energy resolution of the nowadays available LaBr{sub 3}(Ce) scintillator detectors for {gamma}-rays has made possible an extension of the well-established fast timing technique. The energy dependent fast timing characteristics, or the prompt curve, of the LaBr{sub 3}(Ce) scintillator detector have been measured using a standard {sup 152}Eu {gamma}-ray source. For any energy combination in the range above 200 keV, a centroid shift method providing very attractive features for picosecond lifetime measurements is presented. The mirror symmetric centroid difference method takes advantage of the symmetry obtained when performing {gamma}-{gamma} lifetime measurements using a pair of almost identical very fast scintillator detectors. In particular cases, the use of the mirror symmetric centroid difference method also allows the direct determination of picosecond lifetimes, hence without the need of calibrating the prompt curve.
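
    A hedged sketch of how a centroid difference could be evaluated from a pair of gated gamma-gamma time spectra. The relation tau = (delta_C - PRD)/2, with PRD the prompt response difference taken from a calibrated prompt curve, follows the centroid-difference picture described above, but the function names, array format and binning are illustrative assumptions rather than the authors' implementation.

        import numpy as np

        def centroid(time_axis, counts):
            """First moment (centroid) of a time spectrum."""
            t = np.asarray(time_axis, dtype=float)
            n = np.asarray(counts, dtype=float)
            return (t * n).sum() / n.sum()

        def lifetime_from_centroid_difference(t, delayed_counts, antidelayed_counts, prd):
            """Mean lifetime from the centroid difference of the 'delayed' and
            'anti-delayed' (reversed gating) spectra, given the prompt response
            difference PRD of the detector pair (all quantities in the same time units)."""
            delta_c = centroid(t, delayed_counts) - centroid(t, antidelayed_counts)
            return 0.5 * (delta_c - prd)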

  6. A walk-free centroid method for lifetime measurements with pulsed beams

    International Nuclear Information System (INIS)

    Julin, R.; Kantele, J.; Luontama, M.; Passoja, A.; Poikolainen, T.

    1977-09-01

    A delayed-coincidence lifetime measurement method based on a comparison of walk-free centroids of time spectra is presented. The time is measured between the cyclotron RF signal and the pulse from a plastic scintillation detector followed by a fixed energy selection. The events to be time-analyzed are selected from the associated charged-particle spectrum of a silicon detector which is operated in coincidence with the scintillator, i.e., independently of the formation of the signal containing the time information. With this technique, with a micropulse FWHM of typically 500 to 700 ps, half-lives down to the 10 ps region can be measured. The following half-lives are obtained with the new method: 160±6 ps for the 2032 keV level in ²⁰⁹Pb; 45±10 ps and 160±20 ps for the 1756.8 keV (0₂⁺) and 2027.3 keV (0₃⁺) levels in ¹¹⁶Sn, respectively. (author)

  7. Automatic extraction of nuclei centroids of mouse embryonic cells from fluorescence microscopy images.

    Directory of Open Access Journals (Sweden)

    Md Khayrul Bashar

    Full Text Available Accurate identification of cell nuclei and their tracking using three-dimensional (3D) microscopic images is a demanding task in many biological studies. Manual identification of nuclei centroids from images is an error-prone task, sometimes impossible to accomplish due to low contrast and the presence of noise. Nonetheless, only a few methods are available for 3D bioimaging applications, which sharply contrasts with 2D analysis, where many methods already exist. In addition, most methods essentially adopt segmentation, for which a reliable solution is still unknown, especially for 3D bio-images having juxtaposed cells. In this work, we propose a new method that can directly extract nuclei centroids from fluorescence microscopy images. This method involves three steps: (i) pre-processing, (ii) local enhancement, and (iii) centroid extraction. The first step includes two variations: the first variation (Variant-1) uses the whole 3D pre-processed image, whereas the second one (Variant-2) reduces the pre-processed image to the candidate regions or the candidate hybrid image for further processing. At the second step, multiscale cube filtering is employed in order to locally enhance the pre-processed image. Centroid extraction in the third step consists of three stages. In Stage-1, we compute a local characteristic ratio at every voxel and extract local maxima regions as candidate centroids using a ratio threshold. Stage-2 processing removes spurious centroids from the Stage-1 results by analyzing the shapes of intensity profiles from the enhanced image. An iterative procedure based on the nearest-neighborhood principle is then proposed to combine fragmented nuclei, if present. Both qualitative and quantitative analyses on a set of 100 images of 3D mouse embryos are performed. Investigations reveal a promising achievement of the presented technique in terms of average sensitivity and precision (i.e., 88.04% and 91.30% for Variant-1; 86.19% and 95.00% for Variant-2).
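
    The pipeline above is only summarized in the abstract; the sketch below shows one plausible way to extract candidate nuclei centroids from a 3-D stack by smoothing followed by local-maximum detection with SciPy. The filter size, threshold and function names are assumptions for illustration, not the authors' implementation.

        import numpy as np
        from scipy.ndimage import gaussian_filter, maximum_filter

        def candidate_centroids(volume, sigma=2.0, window=5, threshold=0.2):
            """Return voxel coordinates of local intensity maxima in a 3-D image,
            used here as candidate nuclei centroids (illustrative only)."""
            vol = gaussian_filter(np.asarray(volume, dtype=float), sigma)  # local enhancement
            local_max = maximum_filter(vol, size=window) == vol            # local maxima
            strong = vol > threshold * vol.max()                           # discard weak maxima
            coords = np.argwhere(local_max & strong)                       # (z, y, x) triplets
            return coords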

  8. Optimizing the calculation of point source count-centroid in pixel size measurement

    International Nuclear Information System (INIS)

    Zhou Luyi; Kuang Anren; Su Xianyu

    2004-01-01

    Pixel size is an important parameter of gamma camera and SPECT. A number of methods are used for its accurate measurement. In the original count-centroid method, where the image of a point source (PS) is acquired and its count-centroid calculated to represent the PS position in the image, background counts are inevitable. Thus the measured count-centroid (Xm) is an approximation of the true count-centroid (Xp) of the PS, i.e. Xm = Xp + (Xb-Xp)/(1+Rp/Rb), where Rp is the net counting rate of the PS, Xb the background count-centroid and Rb the background counting rate. To get an accurate measurement, Rp must be very large, which is impractical and results in variation of the measured pixel size. An Rp-independent calculation of the PS count-centroid is desired. Methods: The proposed method attempted to eliminate the effect of the term (Xb-Xp)/(1+Rp/Rb) by bringing Xb closer to Xp and by reducing Rb. In the acquired PS image, a circular ROI was generated to enclose the PS, the pixel with the maximum count being the center of the ROI. To choose the diameter (D) of the ROI, a Gaussian count distribution was assumed for the PS; accordingly, a fraction K = 1-(0.5)^(D/R) of the total PS counts was in the ROI, R being the full width at half maximum of the PS count distribution. D was set to 6*R to enclose most (K=98.4%) of the PS counts. The count-centroid of the ROI was calculated to represent Xp. The proposed method was tested in measuring the pixel size of a well-tuned SPECT, whose pixel size was estimated to be 3.02 mm according to its mechanical and electronic settings (128 x 128 matrix, 387 mm UFOV, ZOOM=1). For comparison, the original method, which was used in former versions of some commercial SPECT software, was also tested. 12 PSs were prepared and their images acquired and stored. The net counting rate of the PSs increased from 10 cps to 1183 cps. Results: Using the proposed method, the measured pixel size (in mm) varied only between 3.00 and 3.01 (mean

  9. Systematic shifts of evaluated charge centroid for the cathode read-out multiwire proportional chamber

    International Nuclear Information System (INIS)

    Endo, I.; Kawamoto, T.; Mizuno, Y.; Ohsugi, T.; Taniguchi, T.; Takeshita, T.

    1981-01-01

    We have investigated the systematic error associated with the charge centroid evaluation for the cathode read-out multiwire proportional chamber. Correction curves for the systematic error according to six centroid finding algorithms have been obtained by using the charge distribution calculated in a simple electrostatic model. They have been experimentally examined and proved to be essential for the accurate determination of the irradiated position. (orig.)

  10. Star point centroid algorithm based on background forecast

    Science.gov (United States)

    Wang, Jin; Zhao, Rujin; Zhu, Nan

    2014-09-01

    The calculation of the star point centroid is a key step in reducing the measurement error of a star tracker. A star map captured by an APS detector contains several types of noise, which strongly affect the accuracy of the star point centroid calculation. Based on an analysis of the characteristics of star map noise, an algorithm for calculating the star point centroid based on background forecasting is presented in this paper. Experiments prove the validity of the algorithm. Compared with the classic algorithm, this algorithm not only improves the accuracy of the star point centroid calculation, but also needs no memory for calibration data. The algorithm has been applied successfully in a star tracker.
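
    A minimal sketch of a background-subtracted star-point centroid, in the spirit of the background-forecast idea above; here the background is simply forecast as the median of the window border, which is an illustrative assumption and not the paper's algorithm.

        import numpy as np

        def star_centroid(window):
            """Intensity-weighted centroid of a star spot inside a small image window,
            after subtracting a forecast background level (border median).
            Assumes the spot is brighter than the background."""
            w = np.asarray(window, dtype=float)
            border = np.concatenate([w[0, :], w[-1, :], w[:, 0], w[:, -1]])
            bg = np.median(border)                      # crude background forecast
            signal = np.clip(w - bg, 0.0, None)         # negative residuals set to zero
            yy, xx = np.indices(w.shape)
            total = signal.sum()
            return (signal * xx).sum() / total, (signal * yy).sum() / total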

  11. Networks and centroid metrics for understanding football

    African Journals Online (AJOL)

    Gonçalo Dias

    games. However, it seems that the centroid metric, supported only by the position of players in the field ...... the strategy adopted by the coach (Gama et al., 2014). ... centroid distance as measures of team's tactical performance in youth football.

  12. Transverse centroid oscillations in solenoidally focused beam transport lattices

    International Nuclear Information System (INIS)

    Lund, Steven M.; Wootton, Christopher J.; Lee, Edward P.

    2009-01-01

    Transverse centroid oscillations are analyzed for a beam in a solenoid transport lattice. Linear equations of motion are derived that describe small-amplitude centroid oscillations induced by displacement and rotational misalignments of the focusing solenoids in the transport lattice, dipole steering elements, and initial centroid offset errors. These equations are analyzed in a local rotating Larmor frame to derive complex-variable 'alignment functions' and 'bending functions' that efficiently describe the characteristics of the centroid oscillations induced by both mechanical misalignments of the solenoids and dipole steering elements. The alignment and bending functions depend only on the properties of the ideal lattice in the absence of errors and steering, and have associated expansion amplitudes set by the misalignments and steering fields, respectively. Applications of this formulation are presented for statistical analysis of centroid oscillations, calculation of actual lattice misalignments from centroid measurements, and optimal beam steering.

  13. Doppler Centroid Estimation for Airborne SAR Supported by POS and DEM

    Directory of Open Access Journals (Sweden)

    CHENG Chunquan

    2015-05-01

    Full Text Available It is difficult to estimate the Doppler frequency and modulation rate for airborne SAR using the traditional vector method, due to unstable flight and complex terrain. In this paper, the impacts of POS, DEM and their errors on airborne SAR Doppler parameters are qualitatively analyzed. Then an innovative vector method based on the range-coplanarity equation is presented to estimate the Doppler centroid, taking the POS and DEM as auxiliary data. The effectiveness of the proposed method is validated and analyzed via simulation experiments. The theoretical analysis and experimental results show that the method can be used to estimate the Doppler centroid with high accuracy even in the cases of high relief, unstable flight, and large-squint SAR.

  14. Networks and centroid metrics for understanding football | Gama ...

    African Journals Online (AJOL)

    This study aimed to verify the network of contacts resulting from the collective behaviour of professional football teams through the centroid method and networks as well, thereby providing detailed information about the match to coaches and sport analysts. For this purpose, 999 collective attacking actions from two teams were ...

  15. Bayesian centroid estimation for motif discovery.

    Science.gov (United States)

    Carvalho, Luis

    2013-01-01

    Biological sequences may contain patterns that signal important biomolecular functions; a classical example is regulation of gene expression by transcription factors that bind to specific patterns in genomic promoter regions. In motif discovery we are given a set of sequences that share a common motif and aim to identify not only the motif composition, but also the binding sites in each sequence of the set. We propose a new centroid estimator that arises from a refined and meaningful loss function for binding site inference. We discuss the main advantages of centroid estimation for motif discovery, including computational convenience, and how its principled derivation offers further insights about the posterior distribution of binding site configurations. We also illustrate, using simulated and real datasets, that the centroid estimator can differ from the traditional maximum a posteriori or maximum likelihood estimators.
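
    As a rough illustration of the centroid idea (not of the refined loss function proposed in the paper), a centroid configuration under a simple site-wise Hamming-type loss can be read off posterior samples of binding-site indicators by thresholding the marginal probabilities; the sample format and function name are assumptions.

        import numpy as np

        def centroid_binding_sites(samples):
            """Centroid estimate from MCMC samples of binary binding-site indicator
            vectors (one row per sample, one column per sequence position): keep the
            positions whose posterior marginal exceeds 0.5 (Hamming-loss centroid)."""
            s = np.asarray(samples, dtype=float)
            marginals = s.mean(axis=0)
            return marginals > 0.5

        # example with three posterior samples over eight positions
        samples = np.array([[0, 0, 1, 1, 1, 0, 0, 0],
                            [0, 1, 1, 1, 0, 0, 0, 0],
                            [0, 0, 1, 1, 1, 0, 0, 0]])
        print(centroid_binding_sites(samples))   # positions 2, 3 and 4 (0-based) are selected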

  16. Bayesian centroid estimation for motif discovery.

    Directory of Open Access Journals (Sweden)

    Luis Carvalho

    Full Text Available Biological sequences may contain patterns that signal important biomolecular functions; a classical example is regulation of gene expression by transcription factors that bind to specific patterns in genomic promoter regions. In motif discovery we are given a set of sequences that share a common motif and aim to identify not only the motif composition, but also the binding sites in each sequence of the set. We propose a new centroid estimator that arises from a refined and meaningful loss function for binding site inference. We discuss the main advantages of centroid estimation for motif discovery, including computational convenience, and how its principled derivation offers further insights about the posterior distribution of binding site configurations. We also illustrate, using simulated and real datasets, that the centroid estimator can differ from the traditional maximum a posteriori or maximum likelihood estimators.

  17. Formulation of state projected centroid molecular dynamics: Microcanonical ensemble and connection to the Wigner distribution.

    Science.gov (United States)

    Orr, Lindsay; Hernández de la Peña, Lisandro; Roy, Pierre-Nicholas

    2017-06-07

    A derivation of quantum statistical mechanics based on the concept of a Feynman path centroid is presented for the case of generalized density operators using the projected density operator formalism of Blinov and Roy [J. Chem. Phys. 115, 7822-7831 (2001)]. The resulting centroid densities, centroid symbols, and centroid correlation functions are formulated and analyzed in the context of the canonical equilibrium picture of Jang and Voth [J. Chem. Phys. 111, 2357-2370 (1999)]. The case where the density operator projects onto a particular energy eigenstate of the system is discussed, and it is shown that one can extract microcanonical dynamical information from double Kubo transformed correlation functions. It is also shown that the proposed projection operator approach can be used to formally connect the centroid and Wigner phase-space distributions in the zero reciprocal temperature β limit. A Centroid Molecular Dynamics (CMD) approximation to the state-projected exact quantum dynamics is proposed and proven to be exact in the harmonic limit. The state projected CMD method is also tested numerically for a quartic oscillator and a double-well potential and found to be more accurate than canonical CMD. In the case of a ground state projection, this method can resolve tunnelling splittings of the double well problem in the higher barrier regime where canonical CMD fails. Finally, the state-projected CMD framework is cast in a path integral form.

  18. Formulation of state projected centroid molecular dynamics: Microcanonical ensemble and connection to the Wigner distribution

    Science.gov (United States)

    Orr, Lindsay; Hernández de la Peña, Lisandro; Roy, Pierre-Nicholas

    2017-06-01

    A derivation of quantum statistical mechanics based on the concept of a Feynman path centroid is presented for the case of generalized density operators using the projected density operator formalism of Blinov and Roy [J. Chem. Phys. 115, 7822-7831 (2001)]. The resulting centroid densities, centroid symbols, and centroid correlation functions are formulated and analyzed in the context of the canonical equilibrium picture of Jang and Voth [J. Chem. Phys. 111, 2357-2370 (1999)]. The case where the density operator projects onto a particular energy eigenstate of the system is discussed, and it is shown that one can extract microcanonical dynamical information from double Kubo transformed correlation functions. It is also shown that the proposed projection operator approach can be used to formally connect the centroid and Wigner phase-space distributions in the zero reciprocal temperature β limit. A Centroid Molecular Dynamics (CMD) approximation to the state-projected exact quantum dynamics is proposed and proven to be exact in the harmonic limit. The state projected CMD method is also tested numerically for a quartic oscillator and a double-well potential and found to be more accurate than canonical CMD. In the case of a ground state projection, this method can resolve tunnelling splittings of the double well problem in the higher barrier regime where canonical CMD fails. Finally, the state-projected CMD framework is cast in a path integral form.

  19. Hybridized centroid technique for 3D Molodensky-Badekas ...

    African Journals Online (AJOL)

    In view of this, the present study developed and tested two new hybrid centroid techniques known as the harmonic-quadratic mean and arithmetic-quadratic mean centroids. The proposed hybrid approaches were compared with the geometric mean, harmonic mean, median, quadratic mean and arithmetic mean. In addition ...

  20. Comparison of pure and 'Latinized' centroidal Voronoi tessellation against various other statistical sampling methods

    International Nuclear Information System (INIS)

    Romero, Vicente J.; Burkardt, John V.; Gunzburger, Max D.; Peterson, Janet S.

    2006-01-01

    A recently developed centroidal Voronoi tessellation (CVT) sampling method is investigated here to assess its suitability for use in statistical sampling applications. CVT efficiently generates a highly uniform distribution of sample points over arbitrarily shaped M-dimensional parameter spaces. On several 2-D test problems CVT has recently been found to provide exceedingly effective and efficient point distributions for response surface generation. Additionally, for statistical function integration and estimation of response statistics associated with uniformly distributed random-variable inputs (uncorrelated), CVT has been found in initial investigations to provide superior point sets when compared against Latin hypercube and simple-random Monte Carlo methods and Halton and Hammersley quasi-random sequence methods. In this paper, the performance of all these sampling methods and a new variant ('Latinized' CVT) are further compared for non-uniform input distributions. Specifically, given uncorrelated normal inputs in a 2-D test problem, statistical sampling efficiencies are compared for resolving various statistics of response: mean, variance, and exceedance probabilities

  1. Ranking Fuzzy Numbers with a Distance Method using Circumcenter of Centroids and an Index of Modality

    Directory of Open Access Journals (Sweden)

    P. Phani Bushan Rao

    2011-01-01

    Full Text Available Ranking fuzzy numbers is an important aspect of decision making in a fuzzy environment. Since their inception in 1965, many authors have proposed different methods for ranking fuzzy numbers. However, there is no method which gives a satisfactory result in all situations. Most of the methods proposed so far are non-discriminating and counterintuitive. This paper proposes a new method for ranking fuzzy numbers based on the Circumcenter of Centroids and uses an index of optimism to reflect the decision maker's optimistic attitude, as well as an index of modality that represents the neutrality of the decision maker. This method ranks various types of fuzzy numbers, including normal, generalized trapezoidal, and triangular fuzzy numbers, along with crisp numbers, which are treated as particular cases of fuzzy numbers.

  2. Depth to the bottom of magnetic sources (DBMS) from aeromagnetic data of Central India using modified centroid method for fractal distribution of sources

    Science.gov (United States)

    Bansal, A. R.; Anand, S. P.; Rajaram, Mita; Rao, V. K.; Dimri, V. P.

    2013-09-01

    The depth to the bottom of the magnetic sources (DBMS) has been estimated from aeromagnetic data of Central India. The conventional centroid method of DBMS estimation assumes a random uniform uncorrelated distribution of sources; to overcome this limitation, a modified centroid method based on a scaling distribution has been proposed. Shallower values of the DBMS are found for the southwestern region. The DBMS values are as low as 22 km in the southwestern Deccan-trap-covered regions and as deep as 43 km in the Chhattisgarh Basin. In most places the DBMS is much shallower than the Moho depth found earlier from seismic studies and may represent thermal/compositional/petrological boundaries. The large variation in the DBMS indicates the complex nature of the Indian crust.
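
    The conventional centroid method referred to above (before the fractal modification proposed in the paper) estimates the top depth Zt and the centroid depth Z0 from linear fits to the radially averaged power spectrum and combines them as Zb = 2*Z0 - Zt. The sketch below assumes the radially averaged spectrum P(k) is already available with k in rad/km; the band limits and function name are illustrative assumptions.

        import numpy as np

        def depth_to_bottom(k, power, top_band=(0.5, 2.0), centroid_band=(0.05, 0.5)):
            """Conventional centroid method: Zb = 2*Z0 - Zt, with depths taken from the
            slopes of ln(sqrt(P)) (top) and ln(sqrt(P)/k) (centroid) versus k.
            k is the radial wavenumber in rad/km, power the radially averaged spectrum."""
            k = np.asarray(k, dtype=float)
            p = np.asarray(power, dtype=float)

            hi = (k >= top_band[0]) & (k <= top_band[1])
            z_top = -np.polyfit(k[hi], np.log(np.sqrt(p[hi])), 1)[0]

            lo = (k >= centroid_band[0]) & (k <= centroid_band[1])
            z_centroid = -np.polyfit(k[lo], np.log(np.sqrt(p[lo]) / k[lo]), 1)[0]

            return 2.0 * z_centroid - z_top   # depth to the bottom of magnetic sources (km)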

  3. Centroid and full-width at half maximum uncertainties of histogrammed data with an underlying Gaussian distribution -- The moments method

    International Nuclear Information System (INIS)

    Valentine, J.D.; Rana, A.E.

    1996-01-01

    The effect of approximating a continuous Gaussian distribution with histogrammed data is studied. The expressions for theoretical uncertainties in the centroid and full-width at half maximum (FWHM), as determined by calculation of moments, are derived using the error propagation method for a histogrammed Gaussian distribution. The results are compared with the corresponding pseudo-experimental uncertainties for computer-generated histogrammed Gaussian peaks to demonstrate the effect of binning the data. It is shown that increasing the number of bins in the histogram improves the continuous distribution approximation. For example, a FWHM spanning at least 9 and 12 bins, respectively, is needed to reduce the pseudo-experimental standard deviation of the FWHM to within about 5% and 1% of the theoretical value for a peak containing 10,000 counts. In addition, the uncertainties in the centroid and FWHM as a function of peak area are studied. Finally, Sheppard's correction is applied to partially correct for the binning effect
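
    A brief sketch of the moments calculation discussed above: centroid and FWHM of a histogrammed, approximately Gaussian peak from its first and second moments, with Sheppard's correction for the bin width applied optionally. The interface and variable names are illustrative.

        import numpy as np

        def peak_moments(bin_centers, counts, sheppard=True):
            """Centroid and FWHM of a histogrammed Gaussian-like peak via moments.
            FWHM = 2*sqrt(2*ln 2)*sigma; Sheppard's correction subtracts h^2/12
            from the variance to partially compensate for the binning."""
            x = np.asarray(bin_centers, dtype=float)
            n = np.asarray(counts, dtype=float)
            total = n.sum()
            centroid = (n * x).sum() / total
            variance = (n * (x - centroid) ** 2).sum() / total
            if sheppard:
                h = x[1] - x[0]                       # bin width (assumed uniform)
                variance = max(variance - h ** 2 / 12.0, 0.0)
            fwhm = 2.0 * np.sqrt(2.0 * np.log(2.0)) * np.sqrt(variance)
            return centroid, fwhm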

  4. A focal plane metrology system and PSF centroiding experiment

    Science.gov (United States)

    Li, Haitao; Li, Baoquan; Cao, Yang; Li, Ligang

    2016-10-01

    In this paper, we present an overview of a detector array equipment metrology testbed and a micro-pixel centroiding experiment currently under development at the National Space Science Center, Chinese Academy of Sciences. We discuss on-going development efforts aimed at calibrating the intra-/inter-pixel quantum efficiency and pixel positions for scientific grade CMOS detector, and review significant progress in achieving higher precision differential centroiding for pseudo star images in large area back-illuminated CMOS detector. Without calibration of pixel positions and intrapixel response, we have demonstrated that the standard deviation of differential centroiding is below 2.0e-3 pixels.

  5. Accurate Alignment of Plasma Channels Based on Laser Centroid Oscillations

    International Nuclear Information System (INIS)

    Gonsalves, Anthony; Nakamura, Kei; Lin, Chen; Osterhoff, Jens; Shiraishi, Satomi; Schroeder, Carl; Geddes, Cameron; Toth, Csaba; Esarey, Eric; Leemans, Wim

    2011-01-01

    A technique has been developed to accurately align a laser beam through a plasma channel by minimizing the shift in laser centroid and angle at the channel output. If only the shift in centroid or angle is measured, then accurate alignment is provided by minimizing laser centroid motion at the channel exit as the channel properties are scanned. The improvement in alignment accuracy provided by this technique is important for minimizing electron beam pointing errors in laser plasma accelerators.

  6. Plasma Channel Diagnostic Based on Laser Centroid Oscillations

    International Nuclear Information System (INIS)

    Gonsalves, Anthony; Nakamura, Kei; Lin, Chen; Osterhoff, Jens; Shiraishi, Satomi; Schroeder, Carl; Geddes, Cameron; Toth, Csaba; Esarey, Eric; Leemans, Wim

    2010-01-01

    A technique has been developed for measuring the properties of discharge-based plasma channels by monitoring the centroid location of a laser beam exiting the channel as a function of input alignment offset between the laser and the channel. The centroid position of low-intensity (∼10¹⁴ W cm⁻²) laser pulses focused at the input of a hydrogen-filled capillary discharge waveguide was scanned and the exit positions recorded to determine the channel shape and depth with an accuracy of a few %. In addition, accurate alignment of the laser beam through the plasma channel can be provided by minimizing laser centroid motion at the channel exit as the channel depth is scanned, either by scanning the plasma density or the discharge timing. The improvement in alignment accuracy provided by this technique will be crucial for minimizing electron beam pointing errors in laser plasma accelerators.

  7. An Adaptive Connectivity-based Centroid Algorithm for Node Positioning in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Aries Pratiarso

    2015-06-01

    Full Text Available In wireless sensor network applications, the position of nodes is randomly distributed following the contour of the observation area. A simple solution without any measurement tools is provided by the range-free method. However, this method yields only a coarse estimate of the node positions. In this paper, we propose the Adaptive Connectivity-based Centroid (ACC) algorithm. This algorithm is a combination of Centroid, as a range-free based algorithm, and a hop-based connectivity algorithm. Nodes can estimate their own position based on the connectivity level between them and their reference nodes. Each node divides its communication range into several regions, each of which has a certain weight depending on the received signal strength. The weighted values are used to obtain the estimated position of the nodes. Simulation results show that the proposed algorithm has up to 3 meters of position error on a 100 m x 100 m observation area, with up to 3 hop counts for an 80 m communication range. The proposed algorithm achieves an average positioning error up to 10 meters lower than the Weighted Centroid algorithm. Keywords: adaptive, connectivity, centroid, range-free.
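
    For comparison with the connectivity-based scheme above, the plain weighted-centroid baseline can be written in a few lines: the unknown node is placed at the weighted mean of its reference (anchor) node positions, with weights derived from received signal strength. The weighting rule below is a common illustrative choice, not the paper's ACC algorithm.

        import numpy as np

        def weighted_centroid_position(anchor_xy, rssi_dbm, exponent=1.0):
            """Estimate a node position as the RSSI-weighted centroid of its anchors.
            Stronger (less negative) RSSI -> larger weight; the mapping is illustrative."""
            xy = np.asarray(anchor_xy, dtype=float)          # shape (n_anchors, 2)
            rssi = np.asarray(rssi_dbm, dtype=float)
            weights = (10.0 ** (rssi / 10.0)) ** exponent    # convert dBm to linear power
            return (weights[:, None] * xy).sum(axis=0) / weights.sum()

        # example: three anchors on a 100 m x 100 m field
        anchors = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0)]
        rssi = [-60.0, -70.0, -75.0]
        print(weighted_centroid_position(anchors, rssi))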

  8. Uncertainty Quantification in Earthquake Source Characterization with Probabilistic Centroid Moment Tensor Inversion

    Science.gov (United States)

    Dettmer, J.; Benavente, R. F.; Cummins, P. R.

    2017-12-01

    This work considers probabilistic, non-linear centroid moment tensor inversion of data from earthquakes at teleseismic distances. The moment tensor is treated as deviatoric and centroid location is parametrized with fully unknown latitude, longitude, depth and time delay. The inverse problem is treated as fully non-linear in a Bayesian framework and the posterior density is estimated with interacting Markov chain Monte Carlo methods which are implemented in parallel and allow for chain interaction. The source mechanism and location, including uncertainties, are fully described by the posterior probability density and complex trade-offs between various metrics are studied. These include the percent of double couple component as well as fault orientation and the probabilistic results are compared to results from earthquake catalogs. Additional focus is on the analysis of complex events which are commonly not well described by a single point source. These events are studied by jointly inverting for multiple centroid moment tensor solutions. The optimal number of sources is estimated by the Bayesian information criterion to ensure parsimonious solutions. [Supported by NSERC.

  9. On the timing properties of germanium detectors: The centroid diagrams of prompt photopeaks and Compton events

    International Nuclear Information System (INIS)

    Penev, I.; Andrejtscheff, W.; Protochristov, Ch.; Zhelev, Zh.

    1987-01-01

    In applications of the generalized centroid shift method with germanium detectors, the energy dependences of the time centroids of prompt photopeaks (zero-time line) and of Compton background events reveal a peculiar behavior, crossing each other at about 100 keV. The effect is plausibly explained as being associated with the ratio of γ-quanta causing the photoeffect and Compton scattering, respectively, at the boundaries of the detector. (orig.)

  10. Mobius-strip-like columnar functional connections are revealed in somato-sensory receptive field centroids.

    Directory of Open Access Journals (Sweden)

    James Joseph Wright

    2014-10-01

    Full Text Available Receptive fields of neurons in the forelimb region of areas 3b and 1 of primary somatosensory cortex, in cats and monkeys, were mapped using extracellular recordings obtained sequentially from nearly radial penetrations. Locations of the field centroids indicated the presence of a functional system, in which cortical homotypic representations of the limb surfaces are entwined in three-dimensional Mobius-strip-like patterns of synaptic connections. Boundaries of somatosensory receptive field in nested groups irregularly overlie the centroid order, and are interpreted as arising from the superposition of learned connections upon the embryonic order. Since the theory of embryonic synaptic self-organisation used to model these results was devised and earlier used to explain findings in primary visual cortex, the present findings suggest the theory may be of general application throughout cortex, and may reveal a modular functional synaptic system, which, only in some parts of the cortex, and in some species, is manifest as anatomical ordering into columns.

  11. The generalized centroid difference method for picosecond sensitive determination of lifetimes of nuclear excited states using large fast-timing arrays

    Energy Technology Data Exchange (ETDEWEB)

    Régis, J.-M., E-mail: regis@ikp.uni-koeln.de [Institut für Kernphysik der Universität zu Köln, Zülpicher Str. 77, 50937 Köln (Germany); Mach, H. [Departamento de Física Atómica y Nuclear, Universidad Complutense, 28040 Madrid (Spain); Simpson, G.S. [Laboratoire de Physique Subatomique et de Cosmologie Grenoble, 53, rue des Martyrs, 38026 Grenoble Cedex (France); Jolie, J.; Pascovici, G.; Saed-Samii, N.; Warr, N. [Institut für Kernphysik der Universität zu Köln, Zülpicher Str. 77, 50937 Köln (Germany); Bruce, A. [School of Computing, Engineering and Mathematics, University of Brighton, Lewes Road, Brighton BN2 4GJ (United Kingdom); Degenkolb, J. [Institut für Kernphysik der Universität zu Köln, Zülpicher Str. 77, 50937 Köln (Germany); Fraile, L.M. [Departamento de Física Atómica y Nuclear, Universidad Complutense, 28040 Madrid (Spain); Fransen, C. [Institut für Kernphysik der Universität zu Köln, Zülpicher Str. 77, 50937 Köln (Germany); Ghita, D.G. [Horia Hulubei National Institute for Physics and Nuclear Engineering, 77125 Bucharest (Romania); and others

    2013-10-21

    A novel method for direct electronic “fast-timing” lifetime measurements of nuclear excited states via γ–γ coincidences using an array equipped with N ∈ ℕ equally shaped very fast high-resolution LaBr{sub 3}(Ce) scintillator detectors is presented. Analogous to the mirror symmetric centroid difference method, the generalized centroid difference method provides two independent “start” and “stop” time spectra obtained by a superposition of the N(N−1) γ–γ time difference spectra of the N detector fast-timing system. The two fast-timing array time spectra correspond to a forward and reverse gating of a specific γ–γ cascade. Provided that the energy response and the electronic time pick-off of the detectors are almost equal, a mean prompt response difference between start and stop events is calibrated and used as a single correction for lifetime determination. The combined fast-timing array's mean γ–γ time-walk characteristics can be determined for γ-ray energies above 40 keV. The precision of the method over the total dynamic range is mainly determined by the statistics. The setup of an N=4 detector fast-timing array delivered an absolute time resolving power of 3 ps for 10 000 γ–γ events per total fast-timing array start and stop time spectrum. The new method is tested over the total dynamic range by measurements of known picosecond lifetimes in standard γ-ray sources.

  12. FINGERPRINT MATCHING BASED ON PORE CENTROIDS

    Directory of Open Access Journals (Sweden)

    S. Malathi

    2011-05-01

    Full Text Available In recent years there has been exponential growth in the use of biometrics for user authentication applications. Automated Fingerprint Identification systems have become a popular tool in many security and law enforcement applications. Most of these systems rely on minutiae (ridge ending and bifurcation) features. With the advancement in sensor technology, high resolution fingerprint images (1000 dpi) provide micro-level features (pores) that have proven to be useful features for identification. In this paper, we propose a new strategy for fingerprint matching based on pores by reliably extracting the pore features. The extraction of pores is done by the marker-controlled watershed segmentation method, and the centroids of each pore are considered as feature vectors for matching of two fingerprint images. Experimental results show that the proposed method has better performance, with lower false rates and higher accuracy.

  13. Major shell centroids in the symplectic collective model

    International Nuclear Information System (INIS)

    Draayer, J.P.; Rosensteel, G.; Tulane Univ., New Orleans, LA

    1983-01-01

    Analytic expressions are given for the major shell centroids of the collective potential V(β, γ) and the shape observable β² in the Sp(3,R) symplectic model. The tools of statistical spectroscopy are shown to be useful, firstly, in translating a requirement that the underlying shell structure be preserved into constraints on the parameters of the collective potential and, secondly, in giving a reasonable estimate for a truncation of the infinite dimensional symplectic model space from experimental B(E2) transition strengths. Results based on the centroid information are shown to compare favorably with results from exact calculations in the case of ²⁰Ne. (orig.)

  14. Measurement of centroid trajectory of Dragon-I electron beam

    International Nuclear Information System (INIS)

    Jiang Xiaoguo; Wang Yuan; Zhang Wenwei; Zhang Kaizhi; Li Jing; Li Chenggang; Yang Guojun

    2005-01-01

    The control of the electron beam in an intense-current linear induction accelerator (LIA) is very important. The center position of the electron beam and the beam profile are two important parameters which should be measured accurately. The setup of a time-resolved measurement system and a data processing method for determining the beam center position are introduced for the purpose of obtaining the Dragon-I electron beam centroid trajectory and beam profile. The results show that the centroid position error can be kept within one to two pixels. The time-resolved beam centroid trajectory of Dragon-I (18.5 MeV, 2 kA, 90 ns) was recently obtained at 10 ns intervals with 3 ns exposure time using a multi-frame gated camera. The results show that the screw motion of the electron beam is mainly confined to an area with a radius of 0.5 mm and that the time-resolved diameters of the beam are 8.4 mm, 8.8 mm, 8.5 mm, 9.3 mm and 7.6 mm. These results provide important support for several research areas, such as beam trajectory tuning and beam transmission. (authors)

  15. Relation between medium fluid temperature and centroid subchannel temperatures of a nuclear fuel bundle mock-up

    International Nuclear Information System (INIS)

    Carvalho Tofani, P. de.

    1986-01-01

    The subchannel method used in nuclear fuel bundle thermal-hydraulic analysis lies in the statement that subchannel fluid temperatures are taken at mixed mean values. However, the development of mixing correlations and code assessment procedures are, sometimes in the literature, based upon the assumption of identity between lumped and local (subchannel centroid) temperature values. The present paper is concerned with the presentation of an approach for correlating lumped to centroid subchannel temperatures, based upon models previously formulated by the author, applied to a nine heated tube bundle experimental data set. (Author)

  16. Relation between medium fluid temperature and centroid subchannel temperatures of a nuclear fuel bundle mock-up

    International Nuclear Information System (INIS)

    Carvalho Tofani, P. de.

    1986-01-01

    The subchannel method used in nuclear fuel bundle thermal-hydraulic analysis lies in the statement that subchannel fluid temperatures are taken at mixed mean values. However, the development of mixing correlations and code assessment procedures are, sometimes in the literature, based upon the assumption of identity between lumped and local (subchannel centroid) temperature values. The present paper is concerned with the presentation of an approach for correlating lumped to centroid subchannel temperatures, based upon previously formulated models by the author, applied to a nine heated tube bundle experimental data set. (Author) [pt

  17. Communications: On artificial frequency shifts in infrared spectra obtained from centroid molecular dynamics: Quantum liquid water

    Science.gov (United States)

    Ivanov, Sergei D.; Witt, Alexander; Shiga, Motoyuki; Marx, Dominik

    2010-01-01

    Centroid molecular dynamics (CMD) is a popular method to extract approximate quantum dynamics from path integral simulations. Very recently we have shown that CMD gas phase infrared spectra exhibit significant artificial redshifts of stretching peaks, due to the so-called "curvature problem" imprinted by the effective centroid potential. Here we provide evidence that for condensed phases, and in particular for liquid water, CMD produces pronounced artificial redshifts for high-frequency vibrations such as the OH stretching band. This peculiar behavior intrinsic to the CMD method explains part of the unexpectedly large quantum redshifts of the stretching band of liquid water compared to classical frequencies, which is improved after applying a simple and rough "harmonic curvature correction."

  18. The effect of event shape on centroiding in photon counting detectors

    International Nuclear Information System (INIS)

    Kawakami, Hajime; Bone, David; Fordham, John; Michel, Raul

    1994-01-01

    High resolution, CCD readout, photon counting detectors employ simple centroiding algorithms for defining the spatial position of each detected event. The accuracy of centroiding is very dependent upon a number of parameters including the profile, energy and width of the intensified event. In this paper, we provide an analysis of how the characteristics of an intensified event change as the input count rate increases and the consequent effect on centroiding. The changes in these parameters are applied in particular to the MIC photon counting detector developed at UCL for ground and space based astronomical applications. This detector has a maximum format of 3072x2304 pixels, permitting its use in the highest resolution applications. Individual events, at light levels from 5 to 1000k events/s over the detector area, were analysed. It was found that both the asymmetry and width of event profiles were strongly dependent upon the energy of the intensified event. The variation in profile then affected the centroiding accuracy, leading to loss of resolution. These inaccuracies have been quantified for two different 3-CCD-pixel centroiding algorithms and one 2-pixel algorithm. The results show that a maximum error of less than 0.05 CCD pixel occurs with the 3-pixel algorithms and 0.1 CCD pixel with the 2-pixel algorithm. An improvement is proposed by utilising straight pore MCPs in the intensifier and a 70 μm air gap in front of the CCD. (orig.)

  19. Intraoperative cyclorotation and pupil centroid shift during LASIK and PRK.

    Science.gov (United States)

    Narváez, Julio; Brucks, Matthew; Zimmerman, Grenith; Bekendam, Peter; Bacon, Gregory; Schmid, Kristin

    2012-05-01

    To determine the degree of cyclorotation and centroid shift in the x and y axes that occurs intraoperatively during LASIK and photorefractive keratectomy (PRK). Intraoperative cyclorotation and centroid shift were measured in 63 eyes from 34 patients with a mean age of 34 years (range: 20 to 56 years) undergoing either LASIK or PRK. Preoperatively, an iris image of each eye was obtained with the VISX WaveScan Wavefront System (Abbott Medical Optics Inc) with iris registration. A VISX Star S4 (Abbott Medical Optics Inc) laser was later used to measure cyclotorsion and pupil centroid shift at the beginning of the refractive procedure and after flap creation or epithelial removal. The mean change in intraoperative cyclorotation was 1.48±1.11° in LASIK eyes and 2.02±2.63° in PRK eyes. Cyclorotation direction changed by >2° in 21% of eyes after flap creation in LASIK and in 32% of eyes after epithelial removal in PRK. The mean intraoperative shifts in the x axis and y axis were 0.13±0.15 mm and 0.17±0.14 mm, respectively, in LASIK eyes, and 0.09±0.07 mm and 0.10±0.13 mm in PRK eyes. Intraoperative centroid shifts >100 μm in either the x axis or y axis occurred in 71% of LASIK eyes and 55% of PRK eyes. Significant changes in cyclotorsion and centroid shift were noted prior to surgery as well as intraoperatively with both LASIK and PRK. It may be advantageous to engage iris registration immediately prior to ablation to provide a reference point representative of eye position at the initiation of laser delivery. Copyright 2012, SLACK Incorporated.

  20. A centroid model of species distribution with applications to the Carolina wren Thryothorus ludovicianus and house finch Haemorhous mexicanus in the United States

    Science.gov (United States)

    Huang, Qiongyu; Sauer, John R.; Swatantran, Anu; Dubayah, Ralph

    2016-01-01

    Drastic shifts in species distributions are a cause of concern for ecologists. Such shifts pose a great threat to biodiversity, especially under unprecedented anthropogenic and natural disturbances. Many studies have documented recent shifts in species distributions. However, most of these studies are limited to regional scales and do not consider the abundance structure within species ranges. Developing methods to detect systematic changes in species distributions over their full ranges is critical for understanding the impact of changing environments and for successful conservation planning. Here, we demonstrate a centroid model for range-wide analysis of distribution shifts using the North American Breeding Bird Survey. The centroid model is based on a hierarchical Bayesian framework which models population change within physiographic strata while accounting for several factors affecting species detectability. Yearly abundance-weighted range centroids are estimated. As case studies, we derive annual centroids for the Carolina wren and house finch over their ranges in the U.S. We further evaluate the first-difference correlation between each species' centroid movement and changes in winter severity and total population abundance. We also examine associations with changes in centroids of sub-ranges. Changes in the full-range centroid movement of the Carolina wren correlate significantly with snow-cover days (r = −0.58). For both species, the full-range centroid shifts also correlate strongly with total abundance (r = 0.65 and 0.51, respectively). The movements of the full-range centroids of the two species are correlated strongly (up to r = 0.76) with those of the sub-ranges with more drastic population changes. Our study demonstrates the usefulness of centroids for analyzing distribution changes in a two-dimensional spatial context. In particular, it highlights applications that associate the centroid with factors such as environmental stressors and population characteristics
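
    Stripped of the hierarchical Bayesian abundance model, the yearly abundance-weighted centroid itself is a simple quantity; the sketch below assumes stratum coordinates and estimated abundances are already available, and ignores map-projection issues for brevity.

        import numpy as np

        def abundance_weighted_centroid(lon, lat, abundance):
            """Abundance-weighted range centroid (longitude, latitude) for one year,
            given per-stratum coordinates and estimated abundances (illustrative)."""
            lon = np.asarray(lon, dtype=float)
            lat = np.asarray(lat, dtype=float)
            w = np.asarray(abundance, dtype=float)
            return (w * lon).sum() / w.sum(), (w * lat).sum() / w.sum()

        # tracking the centroid year by year then gives the distribution-shift trajectory:
        # centroids = [abundance_weighted_centroid(lon, lat, n) for n in abundance_by_year]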

  1. Combined centroid-envelope dynamics of intense, magnetically focused charged beams surrounded by conducting walls

    International Nuclear Information System (INIS)

    Fiuza, K.; Rizzato, F.B.; Pakter, R.

    2006-01-01

    In this paper we analyze the combined envelope-centroid dynamics of magnetically focused high-intensity charged beams surrounded by conducting walls. Similar to the case where conducting walls are absent, it is shown that the envelope and centroid dynamics decouple from each other. Mismatched envelopes still decay into equilibrium with simultaneous emittance growth, but the centroid keeps oscillating with no appreciable energy loss. Some estimates are performed to analytically obtain characteristics of halo formation seen in the full simulations

  2. Ambiguity Of Doppler Centroid In Synthetic-Aperture Radar

    Science.gov (United States)

    Chang, Chi-Yung; Curlander, John C.

    1991-01-01

    Paper discusses performances of two algorithms for resolution of ambiguity in estimated Doppler centroid frequency of echoes in synthetic-aperture radar. One based on range-cross-correlation technique, other based on multiple-pulse-repetition-frequency technique.

  3. Prediction of RNA secondary structure using generalized centroid estimators.

    Science.gov (United States)

    Hamada, Michiaki; Kiryu, Hisanori; Sato, Kengo; Mituyama, Toutai; Asai, Kiyoshi

    2009-02-15

    Recent studies have shown that methods for predicting secondary structures of RNAs on the basis of posterior decoding of the base-pairing probabilities have an advantage with respect to prediction accuracy over the conventionally utilized minimum free energy methods. However, there is room for improvement in the objective functions presented in previous studies, which are maximized in the posterior decoding with respect to the accuracy measures for secondary structures. We propose novel estimators which improve the accuracy of secondary structure prediction of RNAs. The proposed estimators maximize an objective function which is the weighted sum of the expected number of true positives and that of true negatives of the base pairs. The proposed estimators are also improved versions of the ones used in previous works, namely CONTRAfold for secondary structure prediction from a single RNA sequence and McCaskill-MEA for common secondary structure prediction from multiple alignments of RNA sequences. We clarify the relations between the proposed estimators and the estimators presented in previous works, and theoretically show that the previous estimators include additional unnecessary terms in the evaluation measures with respect to the accuracy. Furthermore, computational experiments confirm the theoretical analysis by indicating improvement in the empirical accuracy. The proposed estimators represent extensions of the centroid estimators proposed in Ding et al. and Carvalho and Lawrence, and are applicable to a wide variety of problems in bioinformatics. Supporting information and the CentroidFold software are available online at: http://www.ncrna.org/software/centroidfold/.

  4. User Manual and Supporting Information for Library of Codes for Centroidal Voronoi Point Placement and Associated Zeroth, First, and Second Moment Determination; TOPICAL

    International Nuclear Information System (INIS)

    BURKARDT, JOHN; GUNZBURGER, MAX; PETERSON, JANET; BRANNON, REBECCA M.

    2002-01-01

    The theory, numerical algorithm, and user documentation are provided for a new "Centroidal Voronoi Tessellation (CVT)" method of filling a region of space (2D or 3D) with particles at any desired particle density. "Clumping" is entirely avoided and the boundary is optimally resolved. This particle placement capability is needed for any so-called "mesh-free" method in which physical fields are discretized via arbitrary-connectivity discrete points. CVT exploits efficient statistical methods to avoid expensive generation of Voronoi diagrams. Nevertheless, if a CVT particle's Voronoi cell were to be explicitly computed, then it would have a centroid that coincides with the particle itself and a minimized rotational moment. The CVT code provides each particle's volume and centroid, and also the rotational moment matrix needed to approximate a particle by an ellipsoid (instead of a simple sphere). DIATOM region specification is supported
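
    A minimal sketch of the statistical (Voronoi-diagram-free) CVT iteration alluded to above, in the spirit of Lloyd/MacQueen-style sampling: each generator is repeatedly moved to the mean of the random sample points nearest to it. The unit-square region, sample sizes and iteration count are illustrative assumptions, not the library's actual algorithm or interface.

        import numpy as np

        def cvt_points(n_points, n_samples=20000, n_iter=50, dim=2, seed=0):
            """Approximate centroidal Voronoi generators in the unit hypercube using
            Monte Carlo samples instead of explicit Voronoi diagrams (sketch only)."""
            rng = np.random.default_rng(seed)
            generators = rng.random((n_points, dim))
            for _ in range(n_iter):
                samples = rng.random((n_samples, dim))
                # index of the nearest generator for every sample point
                d2 = ((samples[:, None, :] - generators[None, :, :]) ** 2).sum(axis=2)
                nearest = d2.argmin(axis=1)
                for i in range(n_points):
                    cell = samples[nearest == i]
                    if len(cell):
                        generators[i] = cell.mean(axis=0)   # move to the cell centroid
            return generators

        print(cvt_points(10)[:3])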

  5. Finger vein identification using fuzzy-based k-nearest centroid neighbor classifier

    Science.gov (United States)

    Rosdi, Bakhtiar Affendi; Jaafar, Haryati; Ramli, Dzati Athiar

    2015-02-01

    In this paper, a new approach for personal identification using finger vein image is presented. Finger vein is an emerging type of biometrics that attracts attention of researchers in biometrics area. As compared to other biometric traits such as face, fingerprint and iris, finger vein is more secured and hard to counterfeit since the features are inside the human body. So far, most of the researchers focus on how to extract robust features from the captured vein images. Not much research was conducted on the classification of the extracted features. In this paper, a new classifier called fuzzy-based k-nearest centroid neighbor (FkNCN) is applied to classify the finger vein image. The proposed FkNCN employs a surrounding rule to obtain the k-nearest centroid neighbors based on the spatial distributions of the training images and their distance to the test image. Then, the fuzzy membership function is utilized to assign the test image to the class which is frequently represented by the k-nearest centroid neighbors. Experimental evaluation using our own database which was collected from 492 fingers shows that the proposed FkNCN has better performance than the k-nearest neighbor, k-nearest-centroid neighbor and fuzzy-based-k-nearest neighbor classifiers. This shows that the proposed classifier is able to identify the finger vein image effectively.

  6. Estimating the Doppler centroid of SAR data

    DEFF Research Database (Denmark)

    Madsen, Søren Nørvang

    1989-01-01

    After reviewing frequency-domain techniques for estimating the Doppler centroid of synthetic-aperture radar (SAR) data, the author describes a time-domain method and highlights its advantages. In particular, a nonlinear time-domain algorithm called the sign-Doppler estimator (SDE) is shown to have attractive properties. An evaluation based on an existing SEASAT processor is reported. The time-domain algorithms are shown to be extremely efficient with respect to requirements on calculations and memory, and hence they are well suited to real-time systems where the Doppler estimation is based on raw SAR data. For offline processors where the Doppler estimation is performed on processed data, which removes the problem of partial coverage of bright targets, the ΔE estimator and the CDE (correlation Doppler estimator) algorithm give similar performance. However, for nonhomogeneous scenes it is found

  7. Improved measurements of RNA structure conservation with generalized centroid estimators

    Directory of Open Access Journals (Sweden)

    Yohei eOkada

    2011-08-01

    Full Text Available Identification of non-protein-coding RNAs (ncRNAs) in genomes is a crucial task for not only molecular cell biology but also bioinformatics. Secondary structures of ncRNAs are employed as a key feature of ncRNA analysis since biological functions of ncRNAs are deeply related to their secondary structures. Although the minimum free energy (MFE) structure of an RNA sequence is regarded as the most stable structure, MFE alone could not be an appropriate measure for identifying ncRNAs since the free energy is heavily biased by the nucleotide composition. Therefore, instead of MFE itself, several alternative measures for identifying ncRNAs have been proposed, such as the structure conservation index (SCI) and the base pair distance (BPD), both of which employ MFE structures. However, these measurements are unfortunately not suitable for identifying ncRNAs in some cases, including the genome-wide search, and incur a high false discovery rate. In this study, we propose improved measurements based on SCI and BPD, applying generalized centroid estimators to incorporate robustness against low quality multiple alignments. Our experiments show that our proposed methods achieve higher accuracy than the original SCI and BPD for not only human-curated structural alignments but also low quality alignments produced by CLUSTAL W. Furthermore, the centroid-based SCI on CLUSTAL W alignments is more accurate than or comparable with that of the original SCI on structural alignments generated with RAF, a high quality structural aligner, for which two-fold more computational time is required on average. We conclude that our methods are more suitable than the original SCI and BPD for genome-wide alignments, which are of low quality from the point of view of secondary structures.

  8. Feature selection and nearest centroid classification for protein mass spectrometry

    Directory of Open Access Journals (Sweden)

    Levner Ilya

    2005-03-01

    selection methods using the nearest centroid classifier and found that several reportedly state-of-the-art algorithms in fact perform rather poorly when tested via stratified cross-validation. The revealed inconsistencies provide clear evidence that algorithm evaluation should be performed on several data sets using a consistent (i.e., non-randomized, stratified) cross-validation procedure in order for the conclusions to be statistically sound.
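
    For reference, the nearest centroid classifier used as the baseline above is only a few lines: each class is represented by the mean of its training feature vectors, and a test spectrum is assigned to the closest class mean. The interface below is an assumption (feature selection and cross-validation are omitted).

        import numpy as np

        class NearestCentroidClassifier:
            """Minimal nearest centroid classifier (Euclidean distance)."""

            def fit(self, X, y):
                X, y = np.asarray(X, dtype=float), np.asarray(y)
                self.classes_ = np.unique(y)
                self.centroids_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
                return self

            def predict(self, X):
                X = np.asarray(X, dtype=float)
                d2 = ((X[:, None, :] - self.centroids_[None, :, :]) ** 2).sum(axis=2)
                return self.classes_[d2.argmin(axis=1)]

        # example usage
        clf = NearestCentroidClassifier().fit([[0, 0], [0, 1], [5, 5], [6, 5]], ["a", "a", "b", "b"])
        print(clf.predict([[0.2, 0.3], [5.5, 5.1]]))   # -> ['a' 'b']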

  9. Robustness of regularities for energy centroids in the presence of random interactions

    International Nuclear Information System (INIS)

    Zhao, Y.M.; Arima, A.; Yoshida, N.; Ogawa, K.; Yoshinaga, N.; Kota, V. K. B.

    2005-01-01

    In this paper we study energy centroids such as those with fixed spin and isospin and those with fixed irreducible representations for both bosons and fermions, in the presence of random two-body and/or three-body interactions. Our results show that regularities of energy centroids of fixed-spin states reported in earlier works are very robust in these more complicated cases. We suggest that these behaviors might be intrinsic features of quantum many-body systems interacting by random forces

  10. Determination of star bodies from p-centroid bodies

    Indian Academy of Sciences (India)

    An immediate consequence of the definition of the p-centroid body of K is that for any ... The dual mixed volume Ṽ−p(K, L) of star bodies K, L can be defined by ...

  11. Empirical Centroid Fictitious Play: An Approach For Distributed Learning In Multi-Agent Games

    OpenAIRE

    Swenson, Brian; Kar, Soummya; Xavier, Joao

    2013-01-01

    The paper is concerned with distributed learning in large-scale games. The well-known fictitious play (FP) algorithm is addressed, which, despite theoretical convergence results, might be impractical to implement in large-scale settings due to intense computation and communication requirements. An adaptation of the FP algorithm, designated as the empirical centroid fictitious play (ECFP), is presented. In ECFP players respond to the centroid of all players' actions rather than track and respo...

  12. Non-obtuse Remeshing with Centroidal Voronoi Tessellation

    KAUST Repository

    Yan, Dongming; Wonka, Peter

    2015-01-01

    We present a novel remeshing algorithm that avoids triangles with small angles and triangles with large (obtuse) angles. Our solution is based on an extension of Centroidal Voronoi Tessellation (CVT). We augment the original CVT formulation with a penalty term that penalizes short Voronoi edges, while the CVT term helps to avoid small angles. Our results show significant improvements of the remeshing quality over the state of the art.
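
    The CVT component can be illustrated with a plain Lloyd iteration; this sketch ignores the short-edge penalty term the paper adds and approximates each Voronoi cell's centroid with a dense Monte-Carlo sample, so it is only a generic CVT routine, not the authors' remesher.

```python
import numpy as np

def lloyd_cvt(seeds: np.ndarray, n_iter: int = 50, n_samples: int = 50_000,
              rng=np.random.default_rng(0)) -> np.ndarray:
    """Approximate a centroidal Voronoi tessellation of the unit square:
    each iteration moves every seed to the centroid of its Voronoi cell,
    estimated from uniformly sampled points."""
    seeds = seeds.copy()
    for _ in range(n_iter):
        pts = rng.uniform(0.0, 1.0, size=(n_samples, 2))
        # Assign each sample to its nearest seed (i.e., its Voronoi cell).
        d2 = ((pts[:, None, :] - seeds[None, :, :]) ** 2).sum(axis=2)
        nearest = d2.argmin(axis=1)
        for k in range(len(seeds)):
            cell = pts[nearest == k]
            if len(cell):
                seeds[k] = cell.mean(axis=0)   # centroid of the cell
    return seeds

seeds0 = np.random.default_rng(1).uniform(0, 1, size=(20, 2))
print(lloyd_cvt(seeds0)[:3])
```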

  13. Non-obtuse Remeshing with Centroidal Voronoi Tessellation

    KAUST Repository

    Yan, Dongming

    2015-12-03

    We present a novel remeshing algorithm that avoids triangles with small angles and triangles with large (obtuse) angles. Our solution is based on an extension of Centroidal Voronoi Tessellation (CVT). We augment the original CVT formulation with a penalty term that penalizes short Voronoi edges, while the CVT term helps to avoid small angles. Our results show significant improvements of the remeshing quality over the state of the art.

  14. Determination of star bodies from p-centroid bodies

    Indian Academy of Sciences (India)

    In this paper, we prove that an origin-symmetric star body is uniquely determined by its -centroid body. Furthermore, using spherical harmonics, we establish a result for non-symmetric star bodies. As an application, we show that there is a unique member of p ⟨ K ⟩ characterized by having larger volume than any other ...

  15. Establishing the soft and hard tissue area centers (centroids) for the skull and introducing a new non-anatomical cephalometric line

    International Nuclear Information System (INIS)

    AlBalkhi, Khalid M; AlShahrani, Ibrahim; AlMadi, Abdulaziz

    2008-01-01

    The purpose of this study was to demonstrate how to establish the area center (centroid) of both the soft and hard tissues of the outline of the lateral cephalometric skull image, and to introduce the concept of a new non-anatomical centroid line. Lateral cephalometric radiographs, size 12 x 14 inch, of fifty-seven adult subjects were selected based on their pleasant, balanced profile, Class I skeletal and dental relationship, and absence of major dental malocclusion or malrelationship. The area centers (centroids) of both the soft and hard tissue skull outlines were established using a customized computer program called the 'm-file'. Connecting the two centers introduced the concept of a new non-anatomical soft and hard centroid line. (author)
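
    The area center of a closed outline can be computed from its digitized boundary points with the standard shoelace formulas; the sketch below is a generic polygon-centroid routine with made-up coordinates, not the study's 'm-file' program.

```python
def polygon_centroid(points):
    """Area centroid of a simple closed polygon given as (x, y) vertices."""
    n = len(points)
    a = cx = cy = 0.0
    for i in range(n):
        x0, y0 = points[i]
        x1, y1 = points[(i + 1) % n]
        cross = x0 * y1 - x1 * y0
        a += cross
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
    a *= 0.5
    return cx / (6.0 * a), cy / (6.0 * a)

# Unit square traced counter-clockwise -> centroid at (0.5, 0.5).
print(polygon_centroid([(0, 0), (1, 0), (1, 1), (0, 1)]))
```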

  16. An Investigation on the Use of Different Centroiding Algorithms and Star Catalogs in Astro-Geodetic Observations

    Science.gov (United States)

    Basoglu, Burak; Halicioglu, Kerem; Albayrak, Muge; Ulug, Rasit; Tevfik Ozludemir, M.; Deniz, Rasim

    2017-04-01

    In the last decade, the importance of high-precision geoid determination at the local or national level has been pointed out by the Turkish National Geodesy Commission. The Commission has also put the modernization of Turkey's national height system on the agenda. Meanwhile, several projects have been realized in recent years. In Istanbul, a GNSS/levelling geoid was defined in 2005 for the metropolitan area of the city with an accuracy of ±3.5 cm. In order to achieve a better accuracy in this area, the project "Local Geoid Determination with Integration of GNSS/Levelling and Astro-Geodetic Data" has been conducted at Istanbul Technical University and Bogazici University KOERI since January 2016. The project is funded by The Scientific and Technological Research Council of Turkey. Within the scope of the project, modernization studies of the Digital Zenith Camera System are being carried out in terms of hardware components and software development. The central subjects are the star catalogues and the centroiding algorithms used to identify the stars in the zenithal star field. During the test observations of the Digital Zenith Camera System performed between 2013 and 2016, final results were calculated using the PSF method for star centroiding and the second USNO CCD Astrograph Catalogue (UCAC2) for the reference star positions. This study aims to investigate the position accuracy of the star images by comparing different centroiding algorithms and available star catalogs used in astro-geodetic observations conducted with the digital zenith camera system.
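
    A minimal sketch of the simplest class of centroiding algorithms such a comparison would include, a background-subtracted centre of gravity over a star-image cutout; the PSF-fitting method used in the project is not reproduced here, and the test image is synthetic.

```python
import numpy as np

def centre_of_gravity(cutout: np.ndarray) -> tuple[float, float]:
    """Background-subtracted intensity-weighted centroid of a star cutout."""
    img = cutout - np.median(cutout)        # crude background estimate
    img = np.clip(img, 0.0, None)
    total = img.sum()
    ys, xs = np.indices(img.shape)
    return (xs * img).sum() / total, (ys * img).sum() / total

# Synthetic Gaussian star at (x, y) = (12.3, 9.7) on a 24x24 grid.
ys, xs = np.indices((24, 24))
star = 100.0 * np.exp(-((xs - 12.3) ** 2 + (ys - 9.7) ** 2) / (2 * 1.5 ** 2)) + 5.0
print(centre_of_gravity(star))  # close to (12.3, 9.7)
```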

  17. A general centroid determination methodology, with application to multilayer dielectric structures and thermally stimulated current measurements

    International Nuclear Information System (INIS)

    Miller, S.L.; Fleetwood, D.M.; McWhorter, P.J.; Reber, R.A. Jr.; Murray, J.R.

    1993-01-01

    A general methodology is developed to experimentally characterize the spatial distribution of occupied traps in dielectric films on a semiconductor. The effects of parasitics such as leakage, charge transport through more than one interface, and interface trap charge are quantitatively addressed. Charge transport with contributions from multiple charge species is rigorously treated. The methodology is independent of the charge transport mechanism(s), and is directly applicable to multilayer dielectric structures. The centroid capacitance, rather than the centroid itself, is introduced as the fundamental quantity that permits the generic analysis of multilayer structures. In particular, the form of many equations describing stacked dielectric structures becomes independent of the number of layers comprising the stack if they are expressed in terms of the centroid capacitance and/or the flatband voltage. The experimental methodology is illustrated with an application using thermally stimulated current (TSC) measurements. The centroid of changes (via thermal emission) in the amount of trapped charge was determined for two different samples of a triple-layer dielectric structure. A direct consequence of the TSC analyses is the rigorous proof that changes in interface trap charge can contribute, though typically not significantly, to thermally stimulated current

  18. Peak-locking centroid bias in Shack-Hartmann wavefront sensing

    Science.gov (United States)

    Anugu, Narsireddy; Garcia, Paulo J. V.; Correia, Carlos M.

    2018-05-01

    Shack-Hartmann wavefront sensing relies on accurate spot centre measurement. Several algorithms were developed with this aim, mostly focused on precision, i.e. minimizing random errors. In the solar and extended scene community, the importance of the accuracy (bias error due to peak-locking, quantization, or sampling) of the centroid determination was identified and solutions proposed. But these solutions only allow partial bias corrections. To date, no systematic study of the bias error had been conducted. This article bridges the gap by quantifying the bias error for different correlation peak-finding algorithms and types of sub-aperture images and by proposing a practical solution to minimize its effects. Four classes of sub-aperture images (point source, elongated laser guide star, crowded field, and solar extended scene) together with five types of peak-finding algorithms (1D parabola, the centre of gravity, Gaussian, 2D quadratic polynomial, and pyramid) are considered, in a variety of signal-to-noise conditions. The best performing peak-finding algorithm depends on the sub-aperture image type, but none is satisfactory with respect to both bias and random errors. A practical solution is proposed that relies on the antisymmetric response of the bias to the sub-pixel position of the true centre. The solution decreases the bias by a factor of ~7 to values of ≲ 0.02 pix. The computational cost is typically twice that of current cross-correlation algorithms.
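
    As a hedged sketch of one of the peak-finding algorithms compared above, the 1D parabola interpolation of a correlation peak is shown below; the antisymmetric bias-calibration step proposed in the paper is not implemented, and the test curve is synthetic.

```python
import numpy as np

def parabolic_subpixel_peak(c: np.ndarray) -> float:
    """Sub-pixel peak position from a 1D correlation curve: fit a parabola
    through the maximum sample and its two neighbours."""
    k = int(np.argmax(c))
    if k == 0 or k == len(c) - 1:
        return float(k)                      # peak on the edge: no refinement
    ym, y0, yp = c[k - 1], c[k], c[k + 1]
    delta = 0.5 * (ym - yp) / (ym - 2.0 * y0 + yp)
    return k + delta

# A sampled quadratic with its true peak at 5.3.
x = np.arange(10, dtype=float)
curve = -(x - 5.3) ** 2
print(parabolic_subpixel_peak(curve))  # 5.3 (exact for a quadratic)
```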

  19. Simplex-centroid mixture formulation for optimised composting of kitchen waste.

    Science.gov (United States)

    Abdullah, N; Chin, N L

    2010-11-01

    Composting is a good recycling method to fully utilise the organic wastes present in kitchen waste, owing to their high content of nutritious matter. In this study, the optimised mixture proportions of kitchen waste containing vegetable scraps (V), fish processing waste (F) and newspaper (N) or onion peels (O) were determined by applying the simplex-centroid mixture design method to achieve the desired initial moisture content and carbon-to-nitrogen (CN) ratio for an effective composting process. The best mixture was 48.5% V, 17.7% F and 33.7% N for blends with newspaper, while for blends with onion peels the mixture proportion was 44.0% V, 19.7% F and 36.2% O. The predicted responses from these mixture proportions fall within the acceptable limits of a moisture content of 50% to 65% and a CN ratio of 20-40, and were also validated experimentally. Copyright 2010 Elsevier Ltd. All rights reserved.
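
    A small sketch of how the design points of a three-component simplex-centroid design can be enumerated (pure components, binary blends, and the overall centroid); the component names follow the study, but this is a generic design generator, not the authors' software.

```python
from itertools import combinations

def simplex_centroid_design(components):
    """All 2**q - 1 blends of a simplex-centroid design: every non-empty
    subset of components mixed in equal proportions."""
    q = len(components)
    runs = []
    for size in range(1, q + 1):
        for subset in combinations(range(q), size):
            blend = [1.0 / size if i in subset else 0.0 for i in range(q)]
            runs.append(dict(zip(components, blend)))
    return runs

for run in simplex_centroid_design(["V", "F", "N"]):
    print(run)
# 7 runs: the three vertices, the three 50/50 binary blends, and (1/3, 1/3, 1/3).
```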

  20. Modelling Perception of Structure and Affect in Music: Spectral Centroid and Wishart's Red Bird

    Directory of Open Access Journals (Sweden)

    Roger T. Dean

    2011-12-01

    Full Text Available Pearce (2011) provides a positive and interesting response to our article on time series analysis of the influences of acoustic properties on real-time perception of structure and affect in a section of Trevor Wishart’s Red Bird (Dean & Bailes, 2010). We address the following topics raised in the response and our paper. First, we analyse in depth the possible influence of spectral centroid, a timbral feature of the acoustic stream distinct from the high level general parameter we used initially, spectral flatness. We find that spectral centroid, like spectral flatness, is not a powerful predictor of real-time responses, though it does show some features that encourage its continued consideration. Second, we discuss further the issue of studying both individual responses and, as in our paper, group-averaged responses. We show that a multivariate Vector Autoregression model handles the grand average series quite similarly to those of individual members of our participant groups, and we analyse this in greater detail with a wide range of approaches in work which is in press and continuing. Lastly, we discuss the nature and intent of computational modelling of cognition using acoustic and music- or information theoretic data streams as predictors, and how the music- or information theoretic approaches may be applied to electroacoustic music, which is ‘sound-based’ rather than note-centred like Western classical music.
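
    For readers unfamiliar with the acoustic feature discussed above, here is a minimal sketch of computing the spectral centroid of a single audio frame; the synthetic signal is an assumption, and this is not the time-series modelling used in the study.

```python
import numpy as np

def spectral_centroid(frame: np.ndarray, sample_rate: float) -> float:
    """Amplitude-weighted mean frequency of one windowed audio frame, in Hz."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    return float((freqs * spectrum).sum() / spectrum.sum())

# Two sines at 440 Hz and 880 Hz with equal amplitude: centroid near 660 Hz.
sr = 44100
t = np.arange(4096) / sr
frame = np.sin(2 * np.pi * 440 * t) + np.sin(2 * np.pi * 880 * t)
print(round(spectral_centroid(frame, sr), 1))
```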

  1. Centroid and Envelope Dynamics of Charged Particle Beams in an Oscillating Wobbler and External Focusing Lattice for Heavy Ion Fusion Applications

    International Nuclear Information System (INIS)

    Davidson, Ronald C.; Logan, B. Grant

    2011-01-01

    Recent heavy ion fusion target studies show that it is possible to achieve ignition with direct drive and energy gain larger than 100 at 1MJ. To realize these advanced, high-gain schemes based on direct drive, it is necessary to develop a reliable beam smoothing technique to mitigate instabilities and facilitate uniform deposition on the target. The dynamics of the beam centroid can be explored as a possible beam smoothing technique to achieve a uniform illumination over a suitably chosen region of the target. The basic idea of this technique is to induce an oscillatory motion of the centroid for each transverse slice of the beam in such a way that the centroids of different slices strike different locations on the target. The centroid dynamics is controlled by a set of biased electrical plates called 'wobblers'. Using a model based on moments of the Vlasov-Maxwell equations, we show that the wobbler deflection force acts only on the centroid motion, and that the envelope dynamics are independent of the wobbler fields. If the conducting wall is far away from the beam, then the envelope dynamics and centroid dynamics are completely decoupled. This is a preferred situation for the beam wobbling technique, because the wobbler system can be designed to generate the desired centroid motion on the target without considering its effects on the envelope and emittance. A conceptual design of the wobbler system for a heavy ion fusion driver is briefly summarized.

  2. A Novel Approach Based on MEMS-Gyro's Data Deep Coupling for Determining the Centroid of Star Spot

    Directory of Open Access Journals (Sweden)

    Xing Fei

    2012-01-01

    Full Text Available The traditional approach of a star tracker for determining the centroid of a spot requires enough energy and a good spot shape, so a relatively long exposure time and a stable three-axis state become necessary conditions for maintaining high accuracy; these limit its update rate and dynamic performance. In view of these issues, this paper presents an approach for determining the centroid of a star spot based on deep coupling of MEMS-Gyro data: it achieves deep fusion of the data of the star tracker and the MEMS-Gyro at the star-map level through the introduction of an EKF. The trajectory predicted by using the angular velocity of the three axes can be used to set the extraction window, which enhances the dynamic performance through accurate extraction when the satellite has angular speed. The optimal estimates of the centroid position and of the drift in the output signal of the MEMS-Gyro obtained through this approach reduce the influence of detector noise on the accuracy of the traditional centroiding approach and effectively correct the output signal of the MEMS-Gyro. At the end of this paper, the feasibility of this approach is verified by simulation.

  3. A double inequality for bounding Toader mean by the centroidal mean

    Indian Academy of Sciences (India)

    A double inequality for bounding Toader mean by the centroidal mean. Yun Hua (Department of Information Engineering, Weihai Vocational College, Weihai City, Shandong Province 264210, China) and Feng Qi (College of Mathematics, Inner Mongolia University for Nationalities, Tongliao City, Inner Mongolia ...)

  4. A double inequality for bounding Toader mean by the centroidal mean

    Indian Academy of Sciences (India)

    Proceedings – Mathematical Sciences, Volume 124, Issue 4: A double inequality for bounding Toader mean by the centroidal mean. Yun Hua, Feng Qi.

  5. Centroid Localization of Uncooperative Nodes in Wireless Networks Using a Relative Span Weighting Method

    Directory of Open Access Journals (Sweden)

    Christine Laurendeau

    2010-01-01

    Full Text Available Increasingly ubiquitous wireless technologies require novel localization techniques to pinpoint the position of an uncooperative node, whether the target is a malicious device engaging in a security exploit or a low-battery handset in the middle of a critical emergency. Such scenarios necessitate that a radio signal source be localized by other network nodes efficiently, using minimal information. We propose two new algorithms for estimating the position of an uncooperative transmitter, based on the received signal strength (RSS of a single target message at a set of receivers whose coordinates are known. As an extension to the concept of centroid localization, our mechanisms weigh each receiver's coordinates based on the message's relative RSS at that receiver, with respect to the span of RSS values over all receivers. The weights may decrease from the highest RSS receiver either linearly or exponentially. Our simulation results demonstrate that for all but the most sparsely populated wireless networks, our exponentially weighted mechanism localizes a target node within the regulations stipulated for emergency services location accuracy.
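
    A minimal sketch of relative-span weighted centroid localization in the spirit of the mechanisms described above; the weighting exponent and the sample receiver data are assumptions, not the paper's calibrated values.

```python
import numpy as np

def weighted_centroid(positions, rss, mode="exp", alpha=3.0):
    """Estimate a transmitter position as a weighted centroid of receiver
    coordinates, weighting each receiver by its RSS relative to the span
    of RSS values over all receivers."""
    positions = np.asarray(positions, dtype=float)
    rss = np.asarray(rss, dtype=float)
    span = rss.max() - rss.min()
    rel = (rss - rss.min()) / span if span > 0 else np.ones_like(rss)
    weights = np.exp(alpha * rel) if mode == "exp" else rel + 1e-9
    return (weights[:, None] * positions).sum(axis=0) / weights.sum()

receivers = [(0, 0), (100, 0), (0, 100), (100, 100)]
rss_dbm = [-48.0, -71.0, -60.0, -83.0]   # strongest signal near receiver (0, 0)
print(weighted_centroid(receivers, rss_dbm, mode="exp"))
print(weighted_centroid(receivers, rss_dbm, mode="linear"))
```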

  6. The Single-Molecule Centroid Localization Algorithm Improves the Accuracy of Fluorescence Binding Assays.

    Science.gov (United States)

    Hua, Boyang; Wang, Yanbo; Park, Seongjin; Han, Kyu Young; Singh, Digvijay; Kim, Jin H; Cheng, Wei; Ha, Taekjip

    2018-03-13

    Here, we demonstrate that the use of the single-molecule centroid localization algorithm can improve the accuracy of fluorescence binding assays. Two major artifacts in this type of assay, i.e., nonspecific binding events and optically overlapping receptors, can be detected and corrected during analysis. The effectiveness of our method was confirmed by measuring two weak biomolecular interactions, the interaction between the B1 domain of streptococcal protein G and immunoglobulin G and the interaction between double-stranded DNA and the Cas9-RNA complex with limited sequence matches. This analysis routine requires little modification to common experimental protocols, making it readily applicable to existing data and future experiments.

  7. Observations of sensor bias dependent cluster centroid shifts in a prototype sensor for the LHCb Vertex Locator detector

    CERN Document Server

    Papadelis, Aras

    2006-01-01

    We present results from a recent beam test of a prototype sensor for the LHCb Vertex Locator detector, read out with the Beetle 1.3 front-end chip. We have studied the effect of the sensor bias voltage on the reconstructed cluster positions in a sensor placed in a 120GeV pion beam at a 10° incidence angle. We find an unexplained systematic shift in the reconstructed cluster centroid when increasing the bias voltage on an already overdepleted sensor. The shift is independent of strip pitch and sensor thickness.

  8. Centroid and Theoretical Rotation: Justification for Their Use in Q Methodology Research

    Science.gov (United States)

    Ramlo, Sue

    2016-01-01

    This manuscript's purpose is to introduce Q as a methodology before providing clarification about the preferred factor analytical choices of centroid and theoretical (hand) rotation. Stephenson, the creator of Q, designated that only these choices allowed for scientific exploration of subjectivity while not violating assumptions associated with…

  9. Target Centroid Position Estimation of Phase-Path Volume Kalman Filtering

    Directory of Open Access Journals (Sweden)

    Fengjun Hu

    2016-01-01

    Full Text Available To address the problem of easily losing the target when obstacles appear during intelligent robot target tracking, this paper proposes a target tracking algorithm that integrates a reduced-dimension optimal Kalman filtering algorithm, based on the phase-path volume integral, with the Camshift algorithm. After analyzing the defects of the Camshift algorithm and comparing its performance with the SIFT and Mean Shift algorithms, a Kalman filtering algorithm is used for fusion optimization aimed at those defects. Then, to cope with the increased amount of calculation in the integrated algorithm, the dimension is reduced by using the phase-path volume integral instead of the Gaussian integral in the Kalman algorithm, and the number of sampling points in the filtering process is reduced without affecting the precision of the original algorithm. Finally, the target centroid position from the Camshift iteration is used as the observation value of the improved Kalman filter to correct the predicted value, yielding an optimal estimate of the target centroid position and keeping the target in track so that the robot can understand the environmental scene and react correctly and in time to changes. The experiments show that the improved algorithm proposed in this paper performs well in target tracking with obstructions and reduces the computational complexity of the algorithm through the dimension reduction.

  10. Automatic localization of the left ventricular blood pool centroid in short axis cardiac cine MR images.

    Science.gov (United States)

    Tan, Li Kuo; Liew, Yih Miin; Lim, Einly; Abdul Aziz, Yang Faridah; Chee, Kok Han; McLaughlin, Robert A

    2018-06-01

    In this paper, we develop and validate an open source, fully automatic algorithm to localize the left ventricular (LV) blood pool centroid in short axis cardiac cine MR images, enabling follow-on automated LV segmentation algorithms. The algorithm comprises four steps: (i) quantify motion to determine an initial region of interest surrounding the heart, (ii) identify potential 2D objects of interest using an intensity-based segmentation, (iii) assess contraction/expansion, circularity, and proximity to lung tissue to score all objects of interest in terms of their likelihood of constituting part of the LV, and (iv) aggregate the objects into connected groups and construct the final LV blood pool volume and centroid. This algorithm was tested against 1140 datasets from the Kaggle Second Annual Data Science Bowl, as well as 45 datasets from the STACOM 2009 Cardiac MR Left Ventricle Segmentation Challenge. Correct LV localization was confirmed in 97.3% of the datasets. The mean absolute error between the gold standard and localization centroids was 2.8 to 4.7 mm, or 12 to 22% of the average endocardial radius. Graphical abstract Fully automated localization of the left ventricular blood pool in short axis cardiac cine MR images.
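
    The final centroid step of such a pipeline can be sketched with scipy: take the largest connected component of a binary blood-pool mask and return its centre of mass. The mask here is synthetic, and the motion, intensity, and object-scoring stages of the published algorithm are not reproduced.

```python
import numpy as np
from scipy import ndimage

def largest_component_centroid(mask: np.ndarray) -> tuple[float, float]:
    """Centroid (row, col) of the largest connected component of a 2D mask."""
    labels, n = ndimage.label(mask)
    if n == 0:
        raise ValueError("empty mask")
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    largest = int(np.argmax(sizes)) + 1
    return ndimage.center_of_mass(mask, labels, largest)

# Synthetic mask: a filled disc centred at (row, col) = (40, 60).
rr, cc = np.indices((96, 96))
mask = ((rr - 40) ** 2 + (cc - 60) ** 2) < 15 ** 2
print(largest_component_centroid(mask))  # close to (40.0, 60.0)
```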

  11. The strengths and limitations of effective centroid force models explored by studying isotopic effects in liquid water

    Science.gov (United States)

    Yuan, Ying; Li, Jicun; Li, Xin-Zheng; Wang, Feng

    2018-05-01

    The development of effective centroid potentials (ECPs) is explored with both the constrained-centroid and quasi-adiabatic force matching using liquid water as a test system. A trajectory integrated with the ECP is free of statistical noises that would be introduced when the centroid potential is approximated on the fly with a finite number of beads. With the reduced cost of ECP, challenging experimental properties can be studied in the spirit of centroid molecular dynamics. The experimental number density of H2O is 0.38% higher than that of D2O. With the ECP, the H2O number density is predicted to be 0.42% higher, when the dispersion term is not refit. After correction of finite size effects, the diffusion constant of H2O is found to be 21% higher than that of D2O, which is in good agreement with the 29.9% higher diffusivity for H2O observed experimentally. Although the ECP is also able to capture the redshifts of both the OH and OD stretching modes in liquid water, there are a number of properties that a classical simulation with the ECP will not be able to recover. For example, the heat capacities of H2O and D2O are predicted to be almost identical and higher than the experimental values. Such a failure is simply a result of not properly treating quantized vibrational energy levels when the trajectory is propagated with classical mechanics. Several limitations of the ECP based approach without bead population reconstruction are discussed.

  12. Hough transform used on the spot-centroiding algorithm for the Shack-Hartmann wavefront sensor

    Science.gov (United States)

    Chia, Chou-Min; Huang, Kuang-Yuh; Chang, Elmer

    2016-01-01

    An approach to the spot-centroiding algorithm for the Shack-Hartmann wavefront sensor (SHWS) is presented. The SHWS has a common problem, in that while measuring high-order wavefront distortion, the spots may exceed each of the subapertures, which are used to restrict the displacement of spots. This artificial restriction may limit the dynamic range of the SHWS. When using the SHWS to measure adaptive optics or aspheric lenses, the accuracy of the traditional spot-centroiding algorithm may be uncertain because the spots leave or cross the confined area of the subapertures. The proposed algorithm combines the Hough transform with an artificial neural network, which requires no confined subapertures, to increase the dynamic range of the SHWS. This algorithm is then explored in comprehensive simulations and the results are compared with those of the existing algorithm.

  13. CentroidFold: a web server for RNA secondary structure prediction

    OpenAIRE

    Sato, Kengo; Hamada, Michiaki; Asai, Kiyoshi; Mituyama, Toutai

    2009-01-01

    The CentroidFold web server (http://www.ncrna.org/centroidfold/) is a web application for RNA secondary structure prediction powered by one of the most accurate prediction engines. The server accepts two kinds of sequence data: a single RNA sequence and a multiple alignment of RNA sequences. It responds with a prediction result shown as a popular base-pair notation and a graph representation. A PDF version of the graph representation is also available. For a multiple alignment sequence, the ser...

  14. A Proposal to Speed up the Computation of the Centroid of an Interval Type-2 Fuzzy Set

    Directory of Open Access Journals (Sweden)

    Carlos E. Celemin

    2013-01-01

    Full Text Available This paper presents two new algorithms that speed up the centroid computation of an interval type-2 fuzzy set. The algorithms include precomputation of the main operations and initialization based on the concept of uncertainty bounds. Simulations over different kinds of footprints of uncertainty reveal that the new algorithms achieve computation time reductions with respect to the Enhanced-Karnik algorithm, ranging from 40 to 70%. The results suggest that the initialization used in the new algorithms effectively reduces the number of iterations to compute the extreme points of the interval centroid while precomputation reduces the computational cost of each iteration.
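
    The algorithms above are benchmarked against an enhanced variant of the iterative Karnik-Mendel (KM) procedure; for orientation, a plain KM sketch is given below, without the precomputation or initialization tricks proposed in the paper, and with a made-up discretized footprint of uncertainty.

```python
import numpy as np

def km_centroid(x, lower, upper, tol=1e-9):
    """Interval [c_l, c_r] centroid of an interval type-2 fuzzy set via the
    classic Karnik-Mendel iterations (x must be sorted ascending)."""
    x, lo, up = map(np.asarray, (x, lower, upper))

    def endpoint(right: bool) -> float:
        theta = (lo + up) / 2.0
        c = (x * theta).sum() / theta.sum()
        while True:
            # Switch point: lower memberships on one side, upper on the other.
            mask = x <= c
            theta = np.where(mask, lo if right else up, up if right else lo)
            c_new = (x * theta).sum() / theta.sum()
            if abs(c_new - c) < tol:
                return c_new
            c = c_new

    return endpoint(right=False), endpoint(right=True)

x = np.linspace(0.0, 10.0, 101)
upper = np.exp(-0.5 * ((x - 5.0) / 2.0) ** 2)          # upper membership
lower = 0.6 * np.exp(-0.5 * ((x - 5.0) / 1.5) ** 2)    # lower membership
print(km_centroid(x, lower, upper))  # symmetric FOU -> endpoints symmetric about 5
```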

  15. A New Scrambling Evaluation Scheme Based on Spatial Distribution Entropy and Centroid Difference of Bit-Plane

    Science.gov (United States)

    Zhao, Liang; Adhikari, Avishek; Sakurai, Kouichi

    Watermarking is one of the most effective techniques for copyright protection and information hiding, and it can be applied in many fields of society. Nowadays, some image scrambling schemes are used as one part of a watermarking algorithm to enhance its security. Therefore, how to select an image scrambling scheme, and what kind of image scrambling scheme may be used for watermarking, are key problems. An evaluation method for image scrambling schemes can be seen as a useful test tool for revealing the properties or flaws of an image scrambling method. In this paper, a new scrambling evaluation system based on spatial distribution entropy and the centroid difference of bit-planes is presented to obtain the scrambling degree of image scrambling schemes. Our scheme is illustrated and justified through computer simulations. The experimental results show (in Figs. 6 and 7) that for a general gray-scale image, the evaluation degree of the corresponding cipher image for the first 4 significant bit-planes is nearly the same as that for all 8 bit-planes. That is why, instead of taking 8 bit-planes of a gray-scale image, it is sufficient to take only the first 4 significant bit-planes to find the scrambling degree. This 50% reduction in the computational cost makes our scheme efficient.

  16. Optimization of soy isoflavone extraction with different solvents using the simplex-centroid mixture design.

    Science.gov (United States)

    Yoshiara, Luciane Yuri; Madeira, Tiago Bervelieri; Delaroza, Fernanda; da Silva, Josemeyre Bonifácio; Ida, Elza Iouko

    2012-12-01

    The objective of this study was to optimize the extraction of different isoflavone forms (glycosidic, malonyl-glycosidic, aglycone and total) from defatted cotyledon soy flour using the simplex-centroid experimental design with four solvents of varying polarity (water, acetone, ethanol and acetonitrile). The obtained extracts were then analysed by high-performance liquid chromatography. The profile of the different soy isoflavone forms varied with the different extraction solvents. By varying the solvent or mixture used, the extraction of different isoflavones was optimized using the simplex-centroid mixture design. The special cubic model best fitted the four solvents and their combinations for soy isoflavone extraction. For glycosidic isoflavones, the polar ternary mixture (water, acetone and acetonitrile) achieved the best extraction; malonyl-glycosidic forms were better extracted with mixtures of water, acetone and ethanol. Aglycone isoflavones were best extracted with a water and acetone mixture, and for total isoflavones the best solvent was a ternary mixture of water, acetone and ethanol.

  17. 4f^(n-1)5d centroid shift in lanthanides and relation with anion polarizability, covalency, and cation electronegativity

    International Nuclear Information System (INIS)

    Dorenbos, P.; Andriessen, J.; Eijk, C.W.E. van

    2003-01-01

    Data collected on the centroid shift of the 5d-configuration of Ce3+ in oxide and fluoride compounds were recently analyzed with a model involving the correlated motion between the 5d-electron and the ligand electrons. The correlation effects are proportional to the polarizability of the anion ligands and, like covalency, lead to a lowering of the 5d-orbital energies. By means of ab initio Hartree-Fock-LCAO calculations including configuration interaction, the contributions from covalency and correlated motion to the centroid shift are determined separately for Ce3+ in various compounds. It is shown that in fluoride compounds covalency provides an insignificant contribution, whereas in oxides polarizability appears to be of comparable importance to covalency.

  18. Oscillations of centroid position and surface area of soccer teams in small-sided games

    NARCIS (Netherlands)

    Frencken, Wouter; Lemmink, Koen; Delleman, Nico; Visscher, Chris

    2011-01-01

    There is a need for a collective variable that captures the dynamics of team sports like soccer at match level. The centroid positions and surface areas of two soccer teams potentially describe the coordinated flow of attacking and defending in small-sided soccer games at team level. The aim of the

  19. Centroid based clustering of high throughput sequencing reads based on n-mer counts.

    Science.gov (United States)

    Solovyov, Alexander; Lipkin, W Ian

    2013-09-08

    Many problems in computational biology require alignment-free sequence comparisons. One of the common tasks involving sequence comparison is sequence clustering. Here we apply methods of alignment-free comparison (in particular, comparison using sequence composition) to the challenge of sequence clustering. We study several centroid-based algorithms for clustering sequences based on word counts. Study of their performance shows that using the k-means algorithm, with or without data whitening, is efficient from the computational point of view. A higher clustering accuracy can be achieved using the soft expectation-maximization method, whereby each sequence is attributed to each cluster with a specific probability. We implement an open source tool for alignment-free clustering. It is publicly available from github: https://github.com/luscinius/afcluster. We show the utility of alignment-free sequence clustering for high throughput sequencing analysis despite its limitations. In particular, it allows one to perform assembly with reduced resources and a minimal loss of quality. The major factor affecting the performance of alignment-free read clustering is the read length.
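
    A compact sketch of the overall idea: count n-mer frequencies per read, then cluster the count vectors with k-means. This uses scikit-learn rather than the authors' afcluster tool, and the reads are made up.

```python
from itertools import product
import numpy as np
from sklearn.cluster import KMeans

def kmer_vector(read: str, k: int = 3) -> np.ndarray:
    """Normalized k-mer count vector of a DNA read (ACGT alphabet)."""
    index = {''.join(p): i for i, p in enumerate(product("ACGT", repeat=k))}
    v = np.zeros(len(index))
    for i in range(len(read) - k + 1):
        kmer = read[i:i + k]
        if kmer in index:                      # skips k-mers containing N, etc.
            v[index[kmer]] += 1
    return v / max(v.sum(), 1.0)

reads = ["ACGTACGTACGTACGT", "ACGAACGTACGTTCGT",    # composition A
         "GGGGCCCCGGGGCCCC", "GGGCCCCGGGGGCCCC"]     # composition B
X = np.vstack([kmer_vector(r) for r in reads])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)   # reads with similar composition share a cluster label
```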

  20. THE ENERGY DEPENDENCE OF THE CENTROID FREQUENCY AND PHASE LAG OF THE QUASI-PERIODIC OSCILLATIONS IN GRS 1915+105

    International Nuclear Information System (INIS)

    Qu, J. L.; Lu, F. J.; Lu, Y.; Song, L. M.; Zhang, S.; Wang, J. M.; Ding, G. Q.

    2010-01-01

    We present a study of the centroid frequencies and phase lags of quasi-periodic oscillations (QPOs) as functions of photon energy for GRS 1915+105. It is found that the centroid frequencies of the 0.5-10 Hz QPOs and their phase lags are both energy dependent, and there exists an anticorrelation between the QPO frequency and phase lag. These new results challenge the popular QPO models, because none of them can fully explain the observed properties. We suggest that the observed QPO phase lags are partially due to the variation of the QPO frequency with energy, especially for those with frequency higher than 3.5 Hz.

  1. A Reference Point Construction Method Using Mobile Terminals and the Indoor Localization Evaluation in the Centroid Method

    Directory of Open Access Journals (Sweden)

    Takahiro Yamaguchi

    2015-05-01

    Full Text Available As smartphones become widespread, a variety of smartphone applications are being developed. This paper proposes a method for indoor localization (i.e., positioning) that uses only smartphones, which are general-purpose mobile terminals, as reference point devices. This method has the following features: (a) the localization system is built with smartphones whose movements are confined to respective limited areas, and no fixed reference point devices are used; (b) the method does not depend on the wireless performance of smartphones and does not require information about the propagation characteristics of the radio waves sent from reference point devices; and (c) the method determines the location at the application layer, at which location information can be easily incorporated into high-level services. We have evaluated the level of localization accuracy of the proposed method by building a software emulator that modeled an underground shopping mall. We have confirmed that the determined location is within a small area in which the user can find target objects visually.

  2. The centroid shift of the 5d levels of Ce3+ with respect to the 4f levels in ionic crystals, a theoretical investigation

    International Nuclear Information System (INIS)

    Andriessen, J.; Dorenbos, P.; Eijk, C.W.E van

    2002-01-01

    The centroid shifts of the 5d level of Ce3+ in BaF2, LaAlO3 and LaCl3 have been calculated using the ionic cluster approach. By applying configuration interaction as an extension of the basic HF-LCAO approach, the dynamical polarization contribution to the centroid shift was calculated. This was found to be successful only if basis sets optimized for polarization of the anions are used.

  3. Centroid moment tensor catalogue using a 3-D continental scale Earth model: Application to earthquakes in Papua New Guinea and the Solomon Islands

    Science.gov (United States)

    Hejrani, Babak; Tkalčić, Hrvoje; Fichtner, Andreas

    2017-07-01

    Although both the earthquake mechanism and 3-D Earth structure contribute to the seismic wavefield, the latter is usually assumed to be layered in source studies, which may limit the quality of the source estimate. To overcome this limitation, we implement a method that takes advantage of a 3-D heterogeneous Earth model, recently developed for the Australasian region. We calculate centroid moment tensors (CMTs) for earthquakes in Papua New Guinea (PNG) and the Solomon Islands. Our method is based on a library of Green's functions for each source-station pair for selected Geoscience Australia and Global Seismic Network stations in the region, distributed on a 3-D grid covering the seismicity down to 50 km depth. For the calculation of Green's functions, we utilize a spectral-element method for the solution of the seismic wave equation. Seismic moment tensors were calculated using least squares inversion, and the 3-D location of the centroid is found by grid search. Through several synthetic tests, we confirm a trade-off between the location and the correct input moment tensor components when using a 1-D Earth model to invert synthetics produced in a 3-D heterogeneous Earth. Our CMT catalogue for PNG, in comparison to the global CMT, shows a meaningful increase in the double-couple percentage (up to 70%). Another significant difference that we observe is in the mechanism of events with depth shallower than 15 km and Mw region.
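
    The core numerical step, linear least squares for the six moment-tensor components at each trial centroid combined with a grid search over trial locations, can be sketched as below; the Green's functions here are random stand-ins, not the spectral-element library described in the paper.

```python
import numpy as np

def invert_cmt(data: np.ndarray, greens_by_node: list[np.ndarray]):
    """For each trial centroid node, solve data ~= G @ m for the six moment
    tensor components by least squares; return the best-fitting node and m."""
    best = None
    for node, G in enumerate(greens_by_node):           # G: (n_samples, 6)
        m, *_ = np.linalg.lstsq(G, data, rcond=None)
        misfit = np.linalg.norm(data - G @ m)
        if best is None or misfit < best[0]:
            best = (misfit, node, m)
    return best   # (misfit, node index, moment tensor components)

rng = np.random.default_rng(0)
greens = [rng.standard_normal((500, 6)) for _ in range(27)]   # 3x3x3 trial grid
m_true = np.array([1.0, -0.5, -0.5, 0.2, 0.0, 0.1])           # synthetic source
data = greens[13] @ m_true + 0.01 * rng.standard_normal(500)
misfit, node, m_est = invert_cmt(data, greens)
print(node, np.round(m_est, 2))   # recovers node 13 and m_true approximately
```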

  4. Reliability of an experimental method to analyse the impact point on a golf ball during putting.

    Science.gov (United States)

    Richardson, Ashley K; Mitchell, Andrew C S; Hughes, Gerwyn

    2015-06-01

    This study aimed to examine the reliability of an experimental method identifying the location of the impact point on a golf ball during putting. Forty trials were completed using a mechanical putting robot set to reproduce a putt of 3.2 m, with four different putter-ball combinations. After locating the centre of the dimple pattern (centroid) the following variables were tested; distance of the impact point from the centroid, angle of the impact point from the centroid and distance of the impact point from the centroid derived from the X, Y coordinates. Good to excellent reliability was demonstrated in all impact variables reflected in very strong relative (ICC = 0.98-1.00) and absolute reliability (SEM% = 0.9-4.3%). The highest SEM% observed was 7% for the angle of the impact point from the centroid. In conclusion, the experimental method was shown to be reliable at locating the centroid location of a golf ball, therefore allowing for the identification of the point of impact with the putter head and is suitable for use in subsequent studies.

  5. Quantum size correction to the work function and centroid of excess charge in positively ionized simple metal clusters

    International Nuclear Information System (INIS)

    Payami, M.

    2004-01-01

    In this work, we have shown the important role of the finite-size correction to the work function in predicting the correct position of the centroid of excess charge in positively charged simple metal clusters with different r_s values (2 ≤ r_s ≤ 7). For this purpose, we have first calculated the self-consistent Kohn-Sham energies of neutral and singly-ionized clusters with sizes 2 ≤ N ≤ 100 in the framework of the local spin-density approximation and the stabilized jellium model, as well as the simple jellium model with rigid jellium. Secondly, we have fitted our results to the asymptotic ionization formulas both with and without the size correction to the work function. The results of the fittings show that the formula containing the size correction predicts a correct position of the centroid inside the jellium, while the other predicts a false position, outside the jellium sphere.

  6. Demonstration of biased membrane static figure mapping by optical beam subpixel centroid shift

    Energy Technology Data Exchange (ETDEWEB)

    Pinto, Fabrizio, E-mail: fpinto@jazanu.edu.sa [Laboratory for Quantum Vacuum Applications, Department of Physics, Faculty of Science, Jazan University, P.O. Box 114, Gizan 45142 (Saudi Arabia)

    2016-06-10

    The measurement of Casimir forces by means of condenser microphones has been shown to be quite promising since its early introduction almost half-a-century ago. However, unlike the remarkable progress achieved in characterizing the vibrating membrane in the dynamical case, the accurate determination of the membrane static figure under electrostatic bias remains a challenge. In this paper, we discuss our first data obtained by measuring the centroid shift of an optical beam with subpixel accuracy by charge coupled device (CCD) and by an extensive analysis of noise sources present in the experimental setup.

  7. Quantum size correction to the work function and the centroid of excess charge in positively ionized simple metal clusters

    Directory of Open Access Journals (Sweden)

    M. Payami

    2003-12-01

    Full Text Available In this work, we have shown the important role of the finite-size correction to the work function in predicting the correct position of the centroid of excess charge in positively charged simple metal clusters with different r_s values (2 ≤ r_s ≤ 7). For this purpose, we have first calculated the self-consistent Kohn-Sham energies of neutral and singly-ionized clusters with sizes 2 ≤ N ≤ 100 in the framework of the local spin-density approximation and the stabilized jellium model (SJM) as well as the simple jellium model (JM) with rigid jellium. Secondly, we have fitted our results to the asymptotic ionization formulas both with and without the size correction to the work function. The results of the fittings show that the formula containing the size correction predicts a correct position of the centroid inside the jellium, while the other predicts a false position, outside the jellium sphere.

  8. The centroidal algorithm in molecular similarity and diversity calculations on confidential datasets

    Science.gov (United States)

    Trepalin, Sergey; Osadchiy, Nikolay

    2005-09-01

    Chemical structure provides an exhaustive description of a compound, but it is often proprietary and thus an impediment to the exchange of information. For example, structure disclosure is often needed for the selection of the most similar or dissimilar compounds. The authors propose a centroidal algorithm based on structural fragments (screens) that can be efficiently used for similarity and diversity selections without disclosing the structures in the reference set. For increased security, the authors recommend that such a set contain at least some tens of structures. Analysis of the feasibility of reverse engineering showed that the difficulty of the problem grows as the screen radius decreases. The algorithm is illustrated with concrete calculations on known steroidal, quinoline, and quinazoline drugs. We also investigate the problem of scaffold identification in a combinatorial library dataset. The results show that relatively small screens of radius equal to 2 bond lengths perform well in similarity sorting, while radius 4 screens yield better results in diversity sorting. The software implementation of the algorithm takes an SDF file with a reference set and generates screens of various radii, which are subsequently used for the similarity and diversity sorting of external SDFs. Since reverse engineering of the reference set molecules from their screens has the same difficulty as the RSA asymmetric encryption algorithm, the generated screens can be stored openly without further encryption. This approach ensures that an end user transfers only a set of structural fragments and no other data. Like other encryption algorithms, the centroid algorithm cannot give a 100% guarantee of protecting a chemical structure from the dataset, but the probability of identifying an initial structure is very small, on the order of 10^-40 in typical cases.

  9. A NEW APPLICATION OF THE ASTROMETRIC METHOD TO BREAK SEVERE DEGENERACIES IN BINARY MICROLENSING EVENTS

    International Nuclear Information System (INIS)

    Chung, Sun-Ju; Park, Byeong-Gon; Humphrey, Andrew; Ryu, Yoon-Hyun

    2009-01-01

    When a source star is microlensed by one stellar component of a widely separated binary, an additional event induced by the other binary component can be detected after the first lensing event has finished. In this paper, we investigate whether the close/wide degeneracies in binary lensing events can be resolved by detecting the additional centroid shift of the source images induced by the secondary binary star in wide binary lensing events. From this investigation, we find that if the source star passes close to the Einstein ring of the secondary companion, the degeneracy can be easily resolved by using future astrometric follow-up observations with high astrometric precision. We determine the probability of detecting the additional centroid shift in binary lensing events with high magnification. From this, we find that the degeneracy of binary lensing events with a separation of ≲ 20.0 AU can be resolved with a significant efficiency. We also estimate the waiting time for the detection of the additional centroid shift in wide binary lensing events. We find that for typical Galactic lensing events with a separation of ≲ 20.0 AU, the additional centroid shift can be detected within 100 days, and thus the degeneracy of those events can be sufficiently broken within a year.

  10. Experimental Test of Data Analysis Methods from Staggered Pair X-ray Beam Position Monitors at Bending Magnet Beamlines

    Science.gov (United States)

    Buth, G.; Huttel, E.; Mangold, S.; Steininger, R.; Batchelor, D.; Doyle, S.; Simon, R.

    2013-03-01

    Different methods have been proposed to calculate the vertical position of the photon beam centroid from the four blade currents of staggered-pair X-ray beam position monitors (XBPMs) at bending magnet beamlines since they emerged about 15 years ago. The original difference-over-sum method introduced by Peatman and Holldack is still widely used, even though it has been proven to be rather inaccurate at large beam displacements. By systematically generating bumps in the electron orbit of the ANKA storage ring and comparing synchronized data from electron BPMs and XBPM blade currents, we have been able to show that the log-ratio method by S. F. Lin, B. G. Sun et al. is superior (meaning the characteristic is closer to linear) to the ratio method, which in turn is superior to the difference-over-sum method. These findings are supported by simulations of the XBPM response to changes of the beam centroid. The heuristic basis for each of the methods is investigated. The implications of using XBPM readings for orbit correction are discussed.
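
    For reference, both position estimators discussed above reduce to one-line formulas in the blade currents. The sketch below labels the four blades as two top and two bottom blades, which simplifies the staggered-pair geometry, and uses an assumed calibration constant and made-up currents, so it only illustrates the functional forms.

```python
import numpy as np

def vertical_position(i_tl, i_tr, i_bl, i_br, k=1.0, method="log-ratio"):
    """Vertical beam position from the four blade currents of an XBPM
    (top-left/right, bottom-left/right), up to a calibration constant k."""
    top, bottom = i_tl + i_tr, i_bl + i_br
    if method == "diff-over-sum":
        return k * (top - bottom) / (top + bottom)
    if method == "log-ratio":
        return k * np.log(top / bottom)
    raise ValueError(method)

blades = dict(i_tl=1.30, i_tr=1.25, i_bl=0.90, i_br=0.95)   # arbitrary currents
print(vertical_position(**blades, method="diff-over-sum"))
print(vertical_position(**blades, method="log-ratio"))
```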

  11. Method of transient identification based on a possibilistic approach, optimized by genetic algorithm

    International Nuclear Information System (INIS)

    Almeida, Jose Carlos Soares de

    2001-02-01

    This work develops a method for transient identification based on a possibilistic approach, optimized by a genetic algorithm that determines the number of centroids of the classes that represent the transients. The basic idea of the proposed method is to optimize the partition of the search space, generating subsets of the classes within a partition, defined as subclasses, whose centroids are able to distinguish the classes with the maximum number of correct classifications. The interpretation of the subclasses as fuzzy sets and the possibilistic approach provide a heuristic to establish influence zones of the centroids, allowing the method to give a 'don't know' answer for unknown transients, that is, those outside the training set. (author)

  12. Analysis of k-means clustering approach on the breast cancer Wisconsin dataset.

    Science.gov (United States)

    Dubey, Ashutosh Kumar; Gupta, Umesh; Jain, Sonal

    2016-11-01

    Breast cancer is one of the most common cancers found worldwide and the one most frequently found in women. Early detection of breast cancer provides the possibility of its cure; therefore, a large number of studies are currently under way to identify methods that can detect breast cancer in its early stages. This study aimed to find the effects of the k-means clustering algorithm with different computation measures, such as centroid, distance, split method, epoch, attribute, and iteration, and to identify the combination of measures with the potential for highly accurate clustering. The k-means algorithm was used to evaluate the impact of clustering using centroid initialization, distance measures, and split methods. The experiments were performed using the breast cancer Wisconsin (BCW) diagnostic dataset. Foggy and random centroids were used for the centroid initialization. For the foggy centroid, the first centroid was calculated based on random values. For the random centroid, the initial centroid was taken as (0, 0). The results were obtained by employing the k-means algorithm and are discussed for different cases with variable parameters. The calculations were based on the centroid (foggy/random), distance (Euclidean/Manhattan/Pearson), split (simple/variance), threshold (constant epoch/same centroid), attribute (2-9), and iteration (4-10). Approximately 92% average positive prediction accuracy was obtained with this approach. Better results were found for the same centroid and the highest variance. The results achieved using Euclidean and Manhattan distances were better than those with the Pearson correlation. The findings of this work provide an extensive understanding of the computational parameters that can be used with k-means. The results indicate that k-means has the potential to classify the BCW dataset.
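
    A minimal reproduction of the general setup, k-means on the breast cancer Wisconsin features with cluster labels compared against the diagnosis, using scikit-learn; the parameter sweep over centroid initializations, distance measures, and split methods reported in the study is not reproduced here.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_breast_cancer
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X = StandardScaler().fit_transform(X)

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Clusters are unlabeled, so check agreement under both label assignments.
agreement = max(np.mean(labels == y), np.mean(labels == 1 - y))
print(f"cluster/diagnosis agreement: {agreement:.3f}")   # roughly 0.9
```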

  13. Bayesian ISOLA: new tool for automated centroid moment tensor inversion

    Science.gov (United States)

    Vackář, Jiří; Burjánek, Jan; Gallovič, František; Zahradník, Jiří; Clinton, John

    2017-04-01

    Focal mechanisms are important for understanding the seismotectonics of a region, and they serve as a basic input for seismic hazard assessment. Usually, the point source approximation and the moment tensor (MT) are used. We have developed a new, fully automated tool for centroid moment tensor (CMT) inversion in a Bayesian framework. It includes automated data retrieval, data selection in which station components with various instrumental disturbances or an insufficient signal-to-noise ratio are rejected, and full-waveform inversion in a space-time grid around a provided hypocenter. The method is innovative in the following aspects: (i) The CMT inversion is fully automated; no user interaction is required, although the details of the process can be visually inspected later on the many figures which are automatically plotted. (ii) The automated process includes detection of disturbances based on the MouseTrap code, so disturbed recordings do not affect the inversion. (iii) A data covariance matrix calculated from pre-event noise yields an automated weighting of the station recordings according to their noise levels and also serves as an automated frequency filter suppressing noisy frequencies. (iv) A Bayesian approach is used, so not only the best solution is obtained but also the posterior probability density function. (v) A space-time grid search, effectively combined with the least-squares inversion of moment tensor components, speeds up the inversion and allows more accurate results to be obtained compared to stochastic methods. The method has been tested on synthetic and observed data. It has been tested by comparison with manually processed moment tensors of all events with M ≥ 3 in the Swiss catalogue over 16 years using data available at the Swiss data center (http://arclink.ethz.ch). The quality of the results of the presented automated process is comparable with careful manual processing of data. The software package programmed in Python has been designed to be as versatile as possible in

  14. Computing travel time when the exact address is unknown: a comparison of point and polygon ZIP code approximation methods.

    Science.gov (United States)

    Berke, Ethan M; Shi, Xun

    2009-04-29

    Travel time is an important metric of geographic access to health care. We compared strategies of estimating travel times when only subject ZIP code data were available. Using simulated data from New Hampshire and Arizona, we estimated travel times to nearest cancer centers by using: 1) geometric centroid of ZIP code polygons as origins, 2) population centroids as origin, 3) service area rings around each cancer center, assigning subjects to rings by assuming they are evenly distributed within their ZIP code, 4) service area rings around each center, assuming the subjects follow the population distribution within the ZIP code. We used travel times based on street addresses as true values to validate estimates. Population-based methods have smaller errors than geometry-based methods. Within categories (geometry or population), centroid and service area methods have similar errors. Errors are smaller in urban areas than in rural areas. Population-based methods are superior to the geometry-based methods, with the population centroid method appearing to be the best choice for estimating travel time. Estimates in rural areas are less reliable.

  15. Detection of a surface breaking crack by using the centroid variations of laser ultrasonic spectrums

    International Nuclear Information System (INIS)

    Park, Seung Kyu; Baik, Sung Hoon; Lim, Chang Hwan; Joo, Young Sang; Jung, Hyun Kyu; Cha, Hyung Ki; Kang, Young June

    2006-01-01

    A laser ultrasonic system is a non-contact inspection device with a wide-band spectrum and a high spatial resolution. It provides absolute measurements of the moving distance and can be applied to hard-to-access locations, including curved or rough surfaces like those in a nuclear power plant. In this paper, we have investigated methods for detecting the depth of a surface-breaking crack by using the surface wave of laser ultrasound. The filtering function of a surface-breaking crack is that of a kind of low-pass filter: the higher frequency components are attenuated more strongly in proportion to the crack depth. Also, the center frequency of each ultrasound spectrum decreases in proportion to the crack depth. We extracted the depth information of a surface-breaking crack by observing the centroid variation of the frequency spectrum. We describe experimental results on detecting the crack depth by using the peak-to-valley values in the time domain and the center frequency values in the frequency domain.

  16. Modified enthalpy method for the simulation of melting and ...

    Indian Academy of Sciences (India)

    These include the implicit time stepping method of Voller & Cross (1981), the explicit enthalpy method of Tacke (1985), and the centroidal temperature correction method ... In the variable viscosity method, viscosity is written as a function of the liquid fraction.

  17. Ce3+ 5d-centroid shift and vacuum referred 4f-electron binding energies of all lanthanide impurities in 150 different compounds

    International Nuclear Information System (INIS)

    Dorenbos, Pieter

    2013-01-01

    A review of the wavelengths of all five 4f–5d transitions for Ce3+ in about 150 different inorganic compounds (fluorides, chlorides, bromides, iodides, oxides, sulfides, selenides, nitrides) is presented. It provides data on the centroid shift and the crystal field splitting of the 5d-configuration, which are then used to estimate the Eu2+ inter-4f-electron Coulomb repulsion energy U(6,A) in compound A. The four semi-empirical models (the redshift model, the centroid shift model, the charge transfer model, and the chemical shift model) on lanthanide levels that were developed over the past 12 years are briefly reviewed. It will be demonstrated how those models, together with the collected data of this work and elsewhere, can be united to construct schemes that contain the binding energy of electrons in the 4f and 5d states for each divalent and each trivalent lanthanide ion relative to the vacuum energy. As an example, the vacuum referred binding energy schemes for LaF3 and La2O3 will be constructed. - Highlights: ► A compilation of all five Ce3+ 4f–5d energies in 150 inorganic compounds is presented. ► The relationship between the 5d centroid shift and the host cation electronegativity is demonstrated. ► The electronic structure schemes of the lanthanides in La2O3 and LaF3 are presented.

  18. Two methods to estimate the position resolution for straw chambers with strip readout

    International Nuclear Information System (INIS)

    Golutvin, I.A.; Movchan, S.A.; Peshekhonov, V.D.; Preda, T.

    1992-01-01

    The centroid and charge-ratio methods are presented to estimate the position resolution of straw chambers with strip readout. For straw chambers of 10 mm diameter, the highest position resolution was obtained for a strip pitch of 5 mm. With the centroid method and a perpendicular X-ray beam, the position resolution was ≅120 μm for a signal-to-noise ratio of 60-65. The charge-ratio method demonstrated ≅10% better position resolution at the edges of the strips. 6 refs.; 5 figs
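
    Both readout estimates reduce to simple functions of the induced strip charges; the sketch below uses made-up charges, a 5 mm pitch, and a simple linear charge-sharing model for the ratio method, so the detector-specific calibration is not included.

```python
import numpy as np

PITCH = 5.0  # mm, strip pitch of the best configuration reported above

def centroid_position(charges, pitch=PITCH):
    """Charge-weighted centroid over all strips, in mm from strip 0."""
    q = np.asarray(charges, dtype=float)
    strips = np.arange(len(q))
    return pitch * (strips * q).sum() / q.sum()

def charge_ratio_position(q_left, q_right, left_index, pitch=PITCH):
    """Position from the charge ratio of the two highest neighbouring strips,
    measured from strip 0, assuming linear charge sharing."""
    return pitch * (left_index + q_right / (q_left + q_right))

charges = [0.05, 0.60, 0.40, 0.03]          # cluster shared by strips 1 and 2
print(centroid_position(charges))
print(charge_ratio_position(0.60, 0.40, left_index=1))
```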

  19. Weierstrass method for quaternionic polynomial root-finding

    Science.gov (United States)

    Falcão, M. Irene; Miranda, Fernando; Severino, Ricardo; Soares, M. Joana

    2018-01-01

    Quaternions, introduced by Hamilton in 1843 as a generalization of complex numbers, have found, in more recent years, a wealth of applications in a number of different areas which motivated the design of efficient methods for numerically approximating the zeros of quaternionic polynomials. In fact, one can find in the literature recent contributions to this subject based on the use of complex techniques, but numerical methods relying on quaternion arithmetic remain scarce. In this paper we propose a Weierstrass-like method for finding simultaneously all the zeros of unilateral quaternionic polynomials. The convergence analysis and several numerical examples illustrating the performance of the method are also presented.
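    For readers unfamiliar with the underlying iteration, the sketch below shows the classical complex-coefficient Weierstrass (Durand-Kerner) method that this paper generalizes to quaternions; it is my own illustration of the basic idea, not the authors' quaternionic algorithm, and the starting points and iteration count are arbitrary choices.

```python
# Minimal sketch of the classical (complex-coefficient) Weierstrass /
# Durand-Kerner iteration that the paper generalizes to quaternions; this is
# my own illustration of the underlying idea, not the authors' algorithm.
import numpy as np

def weierstrass_roots(coeffs, iters=100):
    """Simultaneously approximate all roots of a monic polynomial.

    coeffs: [1, a_{n-1}, ..., a_0] in numpy.polyval ordering (highest first).
    """
    n = len(coeffs) - 1
    # Distinct, non-real starting points spread on a spiral
    z = 0.4 + 0.9j
    roots = np.array([z ** k for k in range(1, n + 1)], dtype=complex)
    for _ in range(iters):
        for i in range(n):
            diff = roots[i] - np.delete(roots, i)
            roots[i] -= np.polyval(coeffs, roots[i]) / np.prod(diff)
    return roots

# Example: z^3 - 6z^2 + 11z - 6 = (z - 1)(z - 2)(z - 3)
print(np.sort_complex(weierstrass_roots([1, -6, 11, -6])))
```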

  20. Acquisition and Initial Analysis of H+- and H--Beam Centroid Jitter at LANSCE

    Science.gov (United States)

    Gilpatrick, J. D.; Bitteker, L.; Gulley, M. S.; Kerstiens, D.; Oothoudt, M.; Pillai, C.; Power, J.; Shelley, F.

    2006-11-01

    During the 2005 Los Alamos Neutron Science Center (LANSCE) beam runs, beam current and centroid-jitter data were observed, acquired, analyzed, and documented for both the LANSCE H+ and H- beams. These data were acquired using three beam position monitors (BPMs) from the 100-MeV Isotope Production Facility (IPF) beam line and three BPMs from the Switchyard transport line at the end of the LANSCE 800-MeV linac. The two types of data acquired, intermacropulse and intramacropulse, were analyzed for statistical and frequency characteristics as well as various other correlations, including a comparison of their phase-space-like characteristics in a coordinate system of transverse angle versus transverse position. This paper briefly describes the measurements required to acquire these data, the initial analysis of these jitter data, and some interesting dilemmas these data presented.

  1. Collective centroid oscillations as an emittance preservation diagnostic in linear collider linacs

    International Nuclear Information System (INIS)

    Adolphsen, C.E.; Bane, K.L.F.; Spence, W.L.; Woodley, M.D.

    1997-08-01

    Transverse bunch centroid oscillations, induced at operating beam currents at which transverse wakefields are substantial, and observed at Beam Position Monitors, are sensitive to the actual magnetic focusing, energy gain, and rf phase profiles in a linac, and are insensitive to misalignments and jitter sources. In the pulse stealing set-up implemented at the SLC, they thus allow frequent monitoring of the stability of the in-place emittance-growth inhibiting or mitigating measures--primarily the energy-scaled magnetic lattice and the rf phases necessary for BNS damping--independent of the actual emittance growth as driven by misalignments and jitter. The authors have developed a physically based analysis technique to meaningfully reduce the data. Oscillation beta-beating is a primary indicator of beam energy errors; shifts in the invariant amplitude reflect differential internal motion along the longitudinally extended bunch and thus are a sensitive indicator of the real rf phases in the machine; shifts in betatron phase advance contain corroborative information sensitive to both effects.

  2. Model Independent Analysis of Beam Centroid Dynamics in Accelerators

    International Nuclear Information System (INIS)

    Wang, Chun-xi

    2003-01-01

    Fundamental issues in Beam-Position-Monitor (BPM)-based beam dynamics observations are studied in this dissertation. The major topic is the Model-Independent Analysis (MIA) of beam centroid dynamics. Conventional beam dynamics analysis requires a certain machine model, which itself often needs to be refined by beam measurements. Instead of using any particular machine model, MIA relies on a statistical analysis of the vast amount of BPM data that often can be collected non-invasively during normal machine operation. There are two major parts in MIA. One is noise reduction and degrees-of-freedom analysis using a singular value decomposition of a BPM-data matrix, which constitutes a principal component analysis of BPM data. The other is a physical base decomposition of the BPM-data matrix based on the time structure of pulse-by-pulse beam and/or machine parameters. The combination of these two methods allows one to break the resolution limit set by individual BPMs and observe beam dynamics at more accurate levels. A physical base decomposition is particularly useful for understanding various beam dynamics issues. MIA improves observation and analysis of beam dynamics and thus leads to better understanding and control of beams in both linacs and rings. The statistical nature of MIA makes it potentially useful in other fields. Another important topic discussed in this dissertation is the measurement of a nonlinear Poincare section (one-turn) map in circular accelerators. The beam dynamics in a ring is intrinsically nonlinear. In fact, nonlinearities are a major factor that limits stability and influences the dynamics of halos. The Poincare section map plays a basic role in characterizing and analyzing such a periodic nonlinear system. Although many kinds of nonlinear beam dynamics experiments have been conducted, no direct measurement of a nonlinear map has been reported for a ring in normal operation mode. This dissertation analyzes various issues concerning map

  3. Model Independent Analysis of Beam Centroid Dynamics in Accelerators

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Chun-xi

    2003-04-21

    Fundamental issues in Beam-Position-Monitor (BPM)-based beam dynamics observations are studied in this dissertation. The major topic is the Model-Independent Analysis (MIA) of beam centroid dynamics. Conventional beam dynamics analysis requires a certain machine model, which itself often needs to be refined by beam measurements. Instead of using any particular machine model, MIA relies on a statistical analysis of the vast amount of BPM data that often can be collected non-invasively during normal machine operation. There are two major parts in MIA. One is noise reduction and degrees-of-freedom analysis using a singular value decomposition of a BPM-data matrix, which constitutes a principal component analysis of BPM data. The other is a physical base decomposition of the BPM-data matrix based on the time structure of pulse-by-pulse beam and/or machine parameters. The combination of these two methods allows one to break the resolution limit set by individual BPMs and observe beam dynamics at more accurate levels. A physical base decomposition is particularly useful for understanding various beam dynamics issues. MIA improves observation and analysis of beam dynamics and thus leads to better understanding and control of beams in both linacs and rings. The statistical nature of MIA makes it potentially useful in other fields. Another important topic discussed in this dissertation is the measurement of a nonlinear Poincare section (one-turn) map in circular accelerators. The beam dynamics in a ring is intrinsically nonlinear. In fact, nonlinearities are a major factor that limits stability and influences the dynamics of halos. The Poincare section map plays a basic role in characterizing and analyzing such a periodic nonlinear system. Although many kinds of nonlinear beam dynamics experiments have been conducted, no direct measurement of a nonlinear map has been reported for a ring in normal operation mode. This dissertation analyzes various issues concerning map

  4. Simulation of plume rise: Study the effect of stably stratified turbulence layer on the rise of a buoyant plume from a continuous source by observing the plume centroid

    Science.gov (United States)

    Bhimireddy, Sudheer Reddy; Bhaganagar, Kiran

    2016-11-01

    Buoyant plumes are common in the atmosphere whenever there is a difference in temperature or density between a source and its ambience. In a stratified environment, plume rise continues as long as a buoyancy difference exists between the plume and the ambience. In a calm, no-wind ambience, this plume rise is purely vertical, and entrainment happens because of the relative motion of the plume with respect to the ambience and also because of ambient turbulence. In this study, the plume centroid is defined as the plume's mass center and is calculated from the kinematic equation that relates the rate of change of the centroid's position to the plume rise velocity. The parameters used to describe the plume are its radius, its vertical velocity, and its local buoyancy. The plume rise velocity is calculated from the mass, momentum, and heat conservation equations in their differential form. Our study focuses on the entrainment velocity, as it determines the extent of plume growth. This entrainment velocity is modeled as a weighted sum of the plume's relative velocity and the ambient turbulence. From the results, we study the effect of turbulence on plume growth by observing the variation in the plume radius at different heights and the centroid height reached before the plume loses its buoyancy.
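    Since the record defines the plume centroid as the plume's mass center, a minimal sketch of that definition is given below (my own illustration, not the authors' simulation): the centroid is simply the concentration-weighted mean position of a scalar field on a grid. The toy Gaussian field and the grid sizes are assumptions.

```python
# Hedged sketch (assumption of mine, not the authors' code): the plume centroid
# is described in the abstract as the plume's mass centre, so here it is simply
# the concentration-weighted mean position of a scalar field c(x, y, z).
import numpy as np

def plume_centroid(c, x, y, z):
    """Mass-weighted centroid of a 3-D concentration field on a regular grid."""
    X, Y, Z = np.meshgrid(x, y, z, indexing="ij")
    total = c.sum()
    return (np.sum(X * c) / total,
            np.sum(Y * c) / total,
            np.sum(Z * c) / total)

# Toy field: a Gaussian blob centred at (0, 0, 500 m)
x = y = np.linspace(-200.0, 200.0, 41)
z = np.linspace(0.0, 1000.0, 51)
X, Y, Z = np.meshgrid(x, y, z, indexing="ij")
c = np.exp(-((X**2 + Y**2) / 50.0**2 + (Z - 500.0)**2 / 100.0**2))
print(plume_centroid(c, x, y, z))   # approximately (0, 0, 500)
```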

  5. Bayesian inference and interpretation of centroid moment tensors of the 2016 Kumamoto earthquake sequence, Kyushu, Japan

    Science.gov (United States)

    Hallo, Miroslav; Asano, Kimiyuki; Gallovič, František

    2017-09-01

    On April 16, 2016, Kumamoto prefecture in the Kyushu region, Japan, was devastated by a shallow M JMA 7.3 earthquake. The series of foreshocks started with an M JMA 6.5 foreshock 28 h before the mainshock. They originated in the Hinagu fault zone, which intersects the mainshock's Futagawa fault zone; hence, the tectonic background for this earthquake sequence is rather complex. Here we infer centroid moment tensors (CMTs) for 11 events with M JMA between 4.8 and 6.5, using strong motion records of the K-NET, KiK-net and F-net networks. We use the upgraded Bayesian full-waveform inversion code ISOLA-ObsPy, which takes into account the uncertainty of the velocity model. Such an approach allows us to reliably assess the uncertainty of the CMT parameters, including the centroid position. The solutions show significant systematic spatial and temporal variations throughout the sequence. Foreshocks are right-lateral, steeply dipping strike-slip events connected to the NE-SW shear zone. Those located close to the intersection of the Hinagu and Futagawa fault zones dip slightly to the ESE, while those in the southern area dip to the WNW. Contrarily, aftershocks are mostly normal dip-slip events related to the N-S extensional tectonic regime. Most of the deviatoric moment tensors contain only a minor CLVD component, which can be attributed to the velocity model uncertainty. Nevertheless, two of the CMTs involve a significant CLVD component, which may reflect a complex rupture process. Decomposition of those moment tensors into two pure shear moment tensors suggests combined right-lateral strike-slip and normal dip-slip mechanisms, consistent with the tectonic setting of the intersection of the Hinagu and Futagawa fault zones.

  6. Text-in-context: a method for extracting findings in mixed-methods mixed research synthesis studies.

    Science.gov (United States)

    Sandelowski, Margarete; Leeman, Jennifer; Knafl, Kathleen; Crandell, Jamie L

    2013-06-01

    Our purpose in this paper is to propose a new method for extracting findings from research reports included in mixed-methods mixed research synthesis studies. International initiatives in the domains of systematic review and evidence synthesis have focused on broadening the conceptualization of evidence, increasing methodological inclusiveness, and producing evidence syntheses that will be accessible to and usable by a wider range of consumers. Initiatives in the general mixed-methods research field have focused on developing truly integrative approaches to data analysis and interpretation. The data extraction challenges described here were encountered, and the method proposed for addressing these challenges was developed, in the first year of the ongoing (2011-2016) study: Mixed-Methods Synthesis of Research on Childhood Chronic Conditions and Family. To preserve the text-in-context of findings in research reports, we describe a method whereby findings are transformed into portable statements that anchor results to relevant information about sample, source of information, time, comparative reference point, magnitude and significance, and study-specific conceptions of phenomena. The data extraction method featured here was developed specifically to accommodate mixed-methods mixed research synthesis studies conducted in nursing and other health sciences, but reviewers might find it useful in other kinds of research synthesis studies. This data extraction method itself constitutes a type of integration that preserves the methodological context of findings when statements are read individually and in comparison to each other. © 2012 Blackwell Publishing Ltd.

  7. Power centroid radar and its rise from the universal cybernetics duality

    Science.gov (United States)

    Feria, Erlan H.

    2014-05-01

    Power centroid radar (PC-Radar) is a fast and powerful adaptive radar scheme that naturally surfaced from the recent discovery of the time-dual of information theory, which has been named "latency theory." Latency theory itself was born from the universal cybernetics duality (UC-Duality), first identified in the late 1970s, which has also delivered a time dual for thermodynamics, named "lingerdynamics," that anchors an emerging lifespan theory for biological systems. In this paper the rise of PC-Radar from the UC-Duality is described. The development of PC-Radar, US patented, started with Defense Advanced Research Projects Agency (DARPA) funded research on knowledge-aided (KA) adaptive radar of the last decade. The outstanding signal to interference plus noise ratio (SINR) performance of PC-Radar under severely taxing environmental disturbances will be established. More specifically, it will be seen that the SINR performance of PC-Radar, either KA or knowledge-unaided (KU), approximates that of an optimum KA radar scheme. The explanation for this remarkable result is that PC-Radar inherently arises from the UC-Duality, which advances a "first principles" duality guidance theory for the derivation of synergistic storage-space/computational-time compression solutions. Real-world synthetic aperture radar (SAR) images will be used as prior knowledge to illustrate these results.

  8. The impact of the in-orbit background and the X-ray source intensity on the centroiding accuracy of the Swift X-ray telescope

    CERN Document Server

    Ambrosi, R M; Hill, J; Cheruvu, C; Abbey, A F; Short, A D T

    2002-01-01

    The optical components of the Swift Gamma Ray Burst Explorer X-ray Telescope (XRT), consisting of the JET-X spare flight mirror and a charge coupled device of the type used in the EPIC program, were used in a re-calibration study carried out at the Panter facility, which is part of the Max Planck Institute for Extraterrestrial Physics. The objective of this study was to check the focal length and the off-axis performance of the mirrors and to show that the half energy width (HEW) of the on-axis point spread function (PSF) was of the order of 16 arcsec at 1.5 keV (Nucl. Instr. and Meth. A 488 (2002) 543; SPIE 4140 (2000) 64) and that a centroiding accuracy better than 1 arcsec could be achieved within the 4 arcmin sampling area designated by the Burst Alert Telescope (Nucl. Instr. and Meth. A 488 (2002) 543). The centroiding accuracy of the Swift XRT's optical components was tested as a function of distance from the focus and off-axis position of the PSF (Nucl. Instr. and Meth. A 488 (2002) 543). The presence ...

  9. Path integral centroid molecular dynamics simulations of semiinfinite slab and bulk liquid of para-hydrogen

    Energy Technology Data Exchange (ETDEWEB)

    Kinugawa, Kenichi [Nara Women's Univ., Nara (Japan). Dept. of Chemistry]

    1998-10-01

    Numerically solving a set of time-dependent Schroedinger equations for many-body quantum systems which involve, e.g., a number of hydrogen molecules, protons, and excess electrons at low temperature, where quantum effects evidently appear, has so far been unsuccessful. This undesirable situation is fatal for the investigation of real low-temperature chemical systems because they are essentially composed of many quantum degrees of freedom. However, if we use a new technique called 'path integral centroid molecular dynamics (CMD) simulation', proposed by Cao and Voth in 1994, the real-time semi-classical dynamics of many degrees of freedom can be computed by utilizing the techniques already developed for traditional classical molecular dynamics (MD) simulations. Therefore, the CMD simulation is expected to be a very powerful tool for quantum dynamics studies of real substances. (J.P.N.)

  10. Finding-equal regression method and its application in predication of U resources

    International Nuclear Information System (INIS)

    Cao Huimo

    1995-03-01

    The deposit model method commonly adopted in mineral resource prediction has two main parts: model data that express the geological mineralization law of the deposit, and a statistical prediction method that suits the character of those data, namely an appropriate regression method. The regression method introduced here may be called finding-equal regression, which is composed of linear regression and a distribution finding-equal method. Because the distribution finding-equal method is a data pretreatment that satisfies the mathematical precondition of linear regression, namely the equal-distribution assumption, and because this pretreatment can be realized in practice, finding-equal regression not only overcomes the nonlinear limitations that commonly occur in traditional linear regression and other regressions and often leave them without a solution, but can also identify outliers and suppress their influence, which usually appears when robust regression encounters outliers in the independent variables. Thus this new finding-equal regression holds the best status among all kinds of regression methods. Finally, two good examples of quantitative prediction of U resources are provided.

  11. Alternative Polyadenylation: Methods, Findings, and Impacts

    Directory of Open Access Journals (Sweden)

    Wei Chen

    2017-10-01

    Full Text Available Alternative polyadenylation (APA), a phenomenon in which RNA molecules with different 3′ ends originate from distinct polyadenylation sites of a single gene, is emerging as a mechanism widely used to regulate gene expression. In the present review, we first summarize the various methods prevalently adopted in APA studies, focusing mainly on the next-generation sequencing (NGS)-based techniques specially designed for APA identification, the related bioinformatics methods, and the strategies for APA study in single cells. We then summarize the main findings and advances so far based on these methods, including the preferences of alternative polyA (pA) sites, the biological processes involved, and the corresponding consequences. We especially categorize the APA changes discovered so far and discuss their potential functions under given conditions, along with the possible underlying molecular mechanisms. With more in-depth studies on extensive samples, more signatures and functions of APA will be revealed, and its diverse roles will gradually come into view. Keywords: Alternative polyadenylation, Next-generation sequencing, 3′UTR, Alternative splicing, Gene regulation

  12. Neutron radiography with sub-15 μm resolution through event centroiding

    Energy Technology Data Exchange (ETDEWEB)

    Tremsin, Anton S., E-mail: ast@ssl.berkeley.edu [Space Sciences Laboratory, University of California at Berkeley, Berkeley, CA 94720 (United States); McPhate, Jason B.; Vallerga, John V.; Siegmund, Oswald H.W. [Space Sciences Laboratory, University of California at Berkeley, Berkeley, CA 94720 (United States); Bruce Feller, W. [NOVA Scientific, Inc. 10 Picker Road, Sturbridge, MA 01566 (United States); Lehmann, Eberhard; Kaestner, Anders; Boillat, Pierre; Panzner, Tobias; Filges, Uwe [Spallation Neutron Source Division, Paul Scherrer Institute, CH-5232 Villigen (Switzerland)

    2012-10-01

    Conversion of thermal and cold neutrons into a strong ~1 ns electron pulse with an absolute neutron detection efficiency as high as 50-70% makes detectors with ¹⁰B-doped Microchannel Plates (MCPs) very attractive for neutron radiography and microtomography applications. The subsequent signal amplification preserves the location of the event within the MCP pore (typically 6-10 μm in diameter), providing the possibility to perform neutron counting with high spatial resolution. Different event centroiding techniques of the charge landing on a patterned anode enable accurate reconstruction of the neutron position, provided the charge footprints do not overlap within the time required for event processing. The new fast 2 × 2 Timepix readout with >1.2 kHz frame rates provides the unique possibility to detect neutrons with sub-15 μm resolution at several MHz/cm² counting rates. The results of high resolution neutron radiography experiments presented in this paper demonstrate the sub-15 μm resolution capability of our detection system. The high degree of collimation and cold spectrum of the ICON and BOA beamlines combined with the high spatial resolution and detection efficiency of MCP-Timepix detectors are crucial for high contrast neutron radiography and microtomography with high spatial resolution. The next generation of Timepix electronics with sparsified readout should enable counting rates in excess of 10⁷ n/cm²/s, taking full advantage of the high beam intensity of the present brightest neutron imaging facilities.

  13. METHODS OF FINDING BUSINESS PARTNERS OF MANUFACTURING FIRMS IN JAPAN

    Directory of Open Access Journals (Sweden)

    Nobuhiro Takahashi

    2017-09-01

    Full Text Available This paper addresses new methods of finding business partners for joint development in Japan. These methods create opportunities for a manufacturing firm with excellent technology to find an appropriate business partner. We call these methods the Osaka model. In the model, a company finds a business partner among plenty of companies, while the two establish mutual trust and solve higher levels of technological difficulty. The model shares the strong points of both spot transactions and conventional co-development; in other words, it takes advantage of both the Western style and the Japanese style of inter-business relationship. This paper also discusses the environment needed to encourage the model. The area should be a place where many sources of information can be obtained. Increasing the number of meetings or facilities on specific themes is an effective way to encourage the model in the area. In addition, a database in which each technological seed is already matched with the corresponding technological needs is beneficial for matching them. If such a database becomes shared knowledge in the area, the model will be promoted.

  14. Observational Evidence for the Effect of Amplification Bias in Gravitational Microlensing Experiments

    Science.gov (United States)

    Han, Cheongho; Jeong, Youngjin; Kim, Ho-Il

    1998-11-01

    Recently Alard, Mao, & Guibert and Alard proposed to detect the shift of a star's image centroid, δx, as a method to identify the lensed source among blended stars. Goldberg & Woźniak actually applied this method to the OGLE-1 database and found that seven of 15 events showed significant centroid shifts of δx >~ 0.2". The amount of centroid shift has been estimated theoretically by Goldberg; however, he treated the problem in general and did not apply it to a particular survey or field and therefore based his estimate on simple toy model luminosity functions (i.e., power laws). In this paper, we construct the expected distribution of δx for Galactic bulge events based on the precise stellar luminosity function observed by Holtzman et al. using the Hubble Space Telescope. Their luminosity function is complete up to MI ~ 9.0 (MV ~ 12), which corresponds to faint M-type stars. In our analysis we find that regular blending cannot produce a large fraction of events with measurable centroid shifts. By contrast, a significant fraction of events would have measurable centroid shifts if they are affected by amplification-bias blending. Therefore, the measurements of large centroid shifts for an important fraction of microlensing events of Goldberg & Woźniak confirm the prediction of Han & Alard that a large fraction of Galactic bulge events are affected by amplification-bias blending.

  15. Investigation on method of estimating the excitation spectrum of vibration source

    International Nuclear Information System (INIS)

    Zhang Kun; Sun Lei; Lin Song

    2010-01-01

    In practical engineering it is hard to obtain the excitation spectrum of the auxiliary machines of a nuclear reactor through direct measurement. To solve this problem, a general method of estimating the excitation spectrum of a vibration source through indirect measurement is proposed. First, the dynamic transfer matrix between the virtual excitation points and the measurement points is obtained through experiment. This matrix, combined with the response spectrum at the measurement points under practical working conditions, can be used to calculate the excitation spectrum acting on the virtual excitation points. Then a simplified method is proposed, based on the assumption that the vibrating machine can be regarded as a rigid body. The method treats the centroid as the excitation point, and the dynamic transfer matrix is derived by using the substructure mobility synthesis method. Thus, the excitation spectrum can be obtained from the inverse of the transfer matrix combined with the response spectrum at the measurement points. Based on the above method, a computational example is carried out to estimate the excitation spectrum acting on the centroid of an electrical pump. By comparing the input excitation with the estimated excitation, the reliability of the method is verified. (authors)
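    The inverse step described in this record can be sketched in a few lines (my own illustration, not the published implementation): given measured transfer matrices H(f) and response spectra X(f), the excitation spectrum is recovered frequency by frequency with a pseudo-inverse. The array shapes and the noise-free toy data are assumptions.

```python
# Minimal sketch of the inverse step described above (my own illustration):
# given the transfer matrices H(f) from excitation points to measurement points
# and the measured response spectra X(f), estimate the excitation spectrum F(f)
# by a (pseudo-)inverse, frequency by frequency.
import numpy as np

def estimate_excitation(H, X):
    """H: (n_freq, n_meas, n_exc) transfer matrices, X: (n_freq, n_meas) responses."""
    return np.array([np.linalg.pinv(Hf) @ Xf for Hf, Xf in zip(H, X)])

# Toy check: 3 measurement points, 2 excitation points, 4 frequency lines
rng = np.random.default_rng(0)
H = rng.standard_normal((4, 3, 2)) + 1j * rng.standard_normal((4, 3, 2))
F_true = rng.standard_normal((4, 2)) + 1j * rng.standard_normal((4, 2))
X = np.einsum("fme,fe->fm", H, F_true)
print(np.allclose(estimate_excitation(H, X), F_true))   # True in the noise-free case
```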

  16. A novel method of spectrum stabilization

    International Nuclear Information System (INIS)

    Sidhu, N.P.S.

    1978-01-01

    A new type of spectrum stabilizer for a scintillation spectrometer is described. A pulsed light source (DM 160) is used to introduce an artificial peak into the spectrum at a convenient energy. The centroid of the pulse spectrum corresponding to the artificial peak is compared with that of suitable reference pulses obtained from the DM 160 driver circuit. Any drift of the artificial peak produces a DC voltage at the output of the centroid comparator, and this voltage is used to control the gain of a variable-gain amplifier to counter the drift. With suitable adjustment, the effect of any variation in the height of the pulse driving the DM 160 can be compensated, so that the spectrometer gain is independent of any variation or drift in that driving pulse. The circuit is simple and gives improved performance compared with the two-channel method of obtaining the control voltage for the variable-gain amplifier. (author)
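    A digital analogue of the centroid-comparator idea can be sketched as follows (my own toy model, not the published analogue circuit): compute the centroid of the artificial reference peak in the recorded spectrum and derive a multiplicative gain correction that pulls it back to its nominal channel. The window, peak shape and channel numbers are assumptions.

```python
# Hedged sketch (my own toy model, not the published circuit): a digital
# analogue of the centroid comparator. Compute the centroid of the artificial
# reference peak in the recorded spectrum and derive a multiplicative gain
# correction that pulls it back to its nominal channel.
import numpy as np

def gain_correction(spectrum, window, nominal_channel):
    """Return the factor by which the amplifier gain should be scaled."""
    lo, hi = window
    channels = np.arange(lo, hi)
    counts = spectrum[lo:hi]
    centroid = np.sum(channels * counts) / np.sum(counts)
    return nominal_channel / centroid          # >1 means the gain has drifted low

# Toy spectrum whose reference peak has drifted from channel 200 to ~204
channels = np.arange(512)
spectrum = 50 + 1000 * np.exp(-(channels - 204.0) ** 2 / (2 * 3.0 ** 2))
print(gain_correction(spectrum, (180, 230), 200.0))   # roughly 0.98
```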

  17. A new method for finding vacua in string phenomenology

    Energy Technology Data Exchange (ETDEWEB)

    Gray, James [Institut d' Astrophysique de Paris and APC, Universite de Paris 7, 98 bis, Bd. Arago 75014, Paris (France); He, Yang-Hui [Rudolf Peierls Centre for Theoretical Physics, University of Oxford, 1 Keble Road, Oxford OX1 3NP (United Kingdom)]|[Merton College, Oxford, OX1 4JD and Mathematical Institute, Oxford University, Oxford (United Kingdom); Ilderton, Anton [School of Mathematics and Statistics, University of Plymouth, Drake Circus, Plymouth PL4 8AA (United Kingdom); Lukas, Andre [Rudolf Peierls Centre for Theoretical Physics, University of Oxford, 1 Keble Road, Oxford OX1 3NP (United Kingdom)

    2007-05-15

    One of the central problems of string phenomenology is to find stable vacua in the four-dimensional effective theories which result from compactification. We present an algorithmic method to find all of the vacua of any given string-phenomenological system within a huge class of models. In particular, this paper reviews and then extends hep-th/0606122 to include various nonperturbative effects, such as gaugino condensation and instantonic contributions to the superpotential. (authors)

  18. A comparison of methods for calculating population exposure estimates of daily weather for health research

    Directory of Open Access Journals (Sweden)

    Dear Keith BG

    2006-09-01

    The most appropriate method conceptually is the use of weather data from sites within a 50 kilometre radius of the area, weighted to population centres, but a simpler acceptable option is to weight to the geographic centroid.
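    The two weighting options compared in this record can be illustrated with a short sketch of my own (not the study's code): daily exposure estimated from weather stations either weighted to population centres or evaluated at the single geographic centroid. The inverse-distance interpolation, coordinates and populations are illustrative assumptions.

```python
# Minimal sketch (my own illustration of the two weighting options compared
# above): daily exposure for an area computed from weather stations, weighted
# either to population centres or to the single geographic centroid.
import numpy as np

def idw_value(stations, values, point, power=2.0):
    """Inverse-distance-weighted estimate of a weather variable at one point."""
    d = np.linalg.norm(stations - point, axis=1)
    w = 1.0 / np.maximum(d, 1e-6) ** power
    return np.sum(w * values) / np.sum(w)

def population_weighted_exposure(stations, values, pop_centres, pop_counts):
    est = np.array([idw_value(stations, values, p) for p in pop_centres])
    return np.sum(est * pop_counts) / np.sum(pop_counts)

stations = np.array([[0.0, 0.0], [40.0, 0.0], [20.0, 30.0]])   # km coordinates
tmax = np.array([31.0, 29.5, 27.0])                            # daily maxima, deg C
pop_centres = np.array([[5.0, 5.0], [35.0, 2.0]])
pop_counts = np.array([120000, 30000])
geo_centroid = np.array([20.0, 12.0])

print("population-weighted:", population_weighted_exposure(stations, tmax, pop_centres, pop_counts))
print("geographic centroid:", idw_value(stations, tmax, geo_centroid))
```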

  19. Component optimization of dairy manure vermicompost, straw, and peat in seedling compressed substrates using simplex-centroid design.

    Science.gov (United States)

    Yang, Longyuan; Cao, Hongliang; Yuan, Qiaoxia; Luoa, Shuai; Liu, Zhigang

    2018-03-01

    Vermicomposting is a promising method for disposing of dairy manure, and dairy manure vermicompost (DMV) used in place of expensive peat is of high value in seedling compressed substrates. In this research, three main components, DMV, straw, and peat, are combined in compressed substrates, and the effect of the individual components and the corresponding optimal ratio for seedling production are significant questions. To address these issues, a simplex-centroid experimental mixture design is employed, and a cucumber seedling experiment is conducted to evaluate the compressed substrates. Results demonstrated that the mechanical strength and physicochemical properties of compressed substrates for cucumber seedlings can be well satisfied with a suitable mixture ratio of the components. Moreover, the optimal ratio of DMV, straw, and peat could be determined as 0.5917:0.1608:0.2475 when the weight coefficients of the three response parameters (shoot length, root dry weight, and aboveground dry weight) were 1:1:1. For different purposes, the optimum ratio can be slightly adjusted on the basis of different weight coefficients. A compressed substrate is a lump with a certain mechanical strength, produced by applying mechanical pressure to the seedling substrate. It will not harm seedlings when they are bedded out, since the compressed substrate and seedling are bedded out together. However, vermicompost and agricultural waste components have not previously been used in compressed substrates for vegetable seedling production. Thus, it is important to understand the effect of the individual components on seedling production and to determine the optimal ratio of the components.
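    For reference, the blends prescribed by a simplex-centroid design for three mixture components can be generated mechanically; the sketch below is my own illustration of that design (the centroids of every non-empty subset of components), not the authors' experimental software.

```python
# Hedged sketch (my own illustration): generating the mixture blends of a
# simplex-centroid design for three components (DMV, straw, peat), i.e. the
# centroids of every non-empty subset of components, 2^3 - 1 = 7 blends.
from itertools import combinations

def simplex_centroid_design(components):
    design = []
    for k in range(1, len(components) + 1):
        for subset in combinations(components, k):
            blend = {c: (1.0 / k if c in subset else 0.0) for c in components}
            design.append(blend)
    return design

for blend in simplex_centroid_design(["DMV", "straw", "peat"]):
    print(blend)
```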

  20. Novel Method of Detecting Movement of the Interference Fringes Using One-Dimensional PSD

    Directory of Open Access Journals (Sweden)

    Qi Wang

    2015-06-01

    Full Text Available In this paper, a method of using a one-dimensional position-sensitive detector (PSD) in place of a charge-coupled device (CCD) to measure the movement of interference fringes is presented, and its feasibility is demonstrated with an experimental setup based on the principle of centroid detection. Firstly, the centroid position of the interference fringes in a fiber Mach-Zehnder (M-Z) interferometer is solved in theory, showing that it offers a higher resolution and sensitivity. According to the physical characteristics and principles of the PSD, a simulation of the interference fringe's phase difference in fiber M-Z interferometers and of the PSD output is carried out. Comparing the simulation results with the relationship between phase differences and centroid positions in fiber M-Z interferometers, it is concluded that the output of the interference fringes measured by the PSD is still the centroid position. Based on extensive measurements, the best resolution of the system is achieved with 5.15, 625 μm. Finally, the detection system is evaluated through setup error analysis and an ultra-narrow-band filter structure. The filter structure is configured with a one-dimensional photonic crystal containing positive and negative refraction material, which can eliminate background light in the PSD detection experiment. This detection system has a simple structure, good stability, high precision and easily performs remote measurements, which makes it potentially useful in small material deformation tests, refractivity measurements of optical media and optical wavefront detection.

  1. Finding protein sites using machine learning methods

    Directory of Open Access Journals (Sweden)

    Jaime Leonardo Bobadilla Molina

    2003-07-01

    Full Text Available The increasing number of protein three-dimensional (3D) structures determined by X-ray and NMR technologies, as well as structures predicted by computational methods, results in the need for automated methods to provide initial annotations. We have developed a new method for recognizing sites in three-dimensional protein structures. Our method is based on a previously reported algorithm for creating descriptions of protein microenvironments using physical and chemical properties at multiple levels of detail. The recognition method takes three inputs: 1. A set of control sites that share some structural or functional role. 2. A set of control nonsites that lack this role. 3. A single query site. A support vector machine classifier is built using feature vectors in which each component represents a property in a given volume. Validation against an independent test set shows that this recognition approach has high sensitivity and specificity. We also describe the results of scanning four calcium-binding proteins (with the calcium removed) using a three-dimensional grid of probe points at 1.25 angstrom spacing. The system finds the sites in the proteins, giving points at or near the binding sites. Our results show that property-based descriptions along with support vector machines can be used for recognizing protein sites in unannotated structures.
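    A minimal sketch of the classification step is given below, assuming a generic scikit-learn support vector machine on property-based feature vectors for sites and non-sites; the feature extraction itself (microenvironment properties per shell) is only stubbed with random placeholders, so this is an illustration of the approach rather than the authors' system.

```python
# Minimal sketch under stated assumptions: a generic scikit-learn SVM trained on
# property-based feature vectors for control sites (label 1) and control
# non-sites (label 0), then applied to a query site. The real feature extraction
# (physicochemical properties in shells around each point) is only stubbed here.
import numpy as np
from sklearn.svm import SVC

def train_site_classifier(site_features, nonsite_features):
    X = np.vstack([site_features, nonsite_features])
    y = np.array([1] * len(site_features) + [0] * len(nonsite_features))
    clf = SVC(kernel="rbf", probability=True)
    return clf.fit(X, y)

# Placeholder feature vectors standing in for the microenvironment descriptors
rng = np.random.default_rng(1)
sites = rng.normal(1.0, 0.3, size=(40, 16))
nonsites = rng.normal(0.0, 0.3, size=(40, 16))
clf = train_site_classifier(sites, nonsites)

query = rng.normal(1.0, 0.3, size=(1, 16))
print("P(site):", clf.predict_proba(query)[0, 1])
```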

  2. Methods for synthesizing findings on moderation effects across multiple randomized trials.

    Science.gov (United States)

    Brown, C Hendricks; Sloboda, Zili; Faggiano, Fabrizio; Teasdale, Brent; Keller, Ferdinand; Burkhart, Gregor; Vigna-Taglianti, Federica; Howe, George; Masyn, Katherine; Wang, Wei; Muthén, Bengt; Stephens, Peggy; Grey, Scott; Perrino, Tatiana

    2013-04-01

    This paper presents new methods for synthesizing results from subgroup and moderation analyses across different randomized trials. We demonstrate that such a synthesis generally results in additional power to detect significant moderation findings above what one would find in a single trial. Three general methods for conducting synthesis analyses are discussed, with two methods, integrative data analysis and parallel analyses, sharing a large advantage over traditional methods available in meta-analysis. We present a broad class of analytic models to examine moderation effects across trials that can be used to assess their overall effect and explain sources of heterogeneity, and present ways to disentangle differences across trials due to individual differences, contextual level differences, intervention, and trial design.

  3. Decadal Western Pacific Warm Pool Variability: A Centroid and Heat Content Study.

    Science.gov (United States)

    Kidwell, Autumn; Han, Lu; Jo, Young-Heon; Yan, Xiao-Hai

    2017-10-13

    We examine several characteristics of the Western Pacific Warm Pool (WP) in the past thirty years of mixed interannual variability and climate change. Our study presents the three-dimensional WP centroid (WPC) movement, WP heat content anomaly (HC) and WP volume (WPV) on interannual to decadal time scales. We show the statistically significant correlation between each parameter's interannual anomaly and the NINO 3, NINO 3.4, NINO 4, SOI, and PDO indices. The longitudinal component of the WPC is most strongly correlated with NINO 4 (R = 0.78). The depth component of the WPC has the highest correlation (R = -0.6) with NINO 3.4. The WPV and NINO 4 have an R-value of -0.65. HC has the highest correlation with NINO 3.4 (R = -0.52). During the study period of 1982-2014, the non-linear trends, derived from ensemble empirical mode decomposition (EEMD), show that the WPV, WP depth and HC have all increased. The WPV has increased by 14% since 1982 and the HC has increased from -1 × 10⁸ J/m² in 1993 to 10 × 10⁸ J/m² in 2014. While the largest variances in the latitudinal and longitudinal WPC locations are associated with annual and seasonal timescales, the largest variances in the WPV and HC are due to the multi-decadal non-linear trend.
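    One plausible way to compute such a three-dimensional warm pool centroid is sketched below; this is my own illustration under the assumption that the centroid is the weighted mean position of grid cells warmer than a 28 °C threshold, which may differ from the authors' exact definition. The toy temperature field is fabricated for the demonstration.

```python
# Hedged sketch (one plausible definition, not necessarily the authors'):
# the 3-D centroid of the Western Pacific Warm Pool taken as the
# temperature-excess-weighted mean position of grid cells warmer than 28 C.
import numpy as np

def warm_pool_centroid(T, lon, lat, depth, threshold=28.0):
    LON, LAT, Z = np.meshgrid(lon, lat, depth, indexing="ij")
    w = np.where(T > threshold, T - threshold, 0.0)    # weights inside the pool
    total = w.sum()
    return (np.sum(LON * w) / total,
            np.sum(LAT * w) / total,
            np.sum(Z * w) / total)

# Toy temperature field: a warm lens centred near 160E, 0N, shallow depths
lon = np.linspace(120.0, 280.0, 81)
lat = np.linspace(-20.0, 20.0, 21)
depth = np.linspace(0.0, 400.0, 21)
LON, LAT, Z = np.meshgrid(lon, lat, depth, indexing="ij")
T = 22.0 + 8.0 * np.exp(-((LON - 160) ** 2 / 800 + LAT ** 2 / 200 + Z ** 2 / 8000))
print(warm_pool_centroid(T, lon, lat, depth))
```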

  4. Methods for Synthesizing Findings on Moderation Effects Across Multiple Randomized Trials

    Science.gov (United States)

    Brown, C Hendricks; Sloboda, Zili; Faggiano, Fabrizio; Teasdale, Brent; Keller, Ferdinand; Burkhart, Gregor; Vigna-Taglianti, Federica; Howe, George; Masyn, Katherine; Wang, Wei; Muthén, Bengt; Stephens, Peggy; Grey, Scott; Perrino, Tatiana

    2011-01-01

    This paper presents new methods for synthesizing results from subgroup and moderation analyses across different randomized trials. We demonstrate that such a synthesis generally results in additional power to detect significant moderation findings above what one would find in a single trial. Three general methods for conducting synthesis analyses are discussed, with two methods, integrative data analysis, and parallel analyses, sharing a large advantage over traditional methods available in meta-analysis. We present a broad class of analytic models to examine moderation effects across trials that can be used to assess their overall effect and explain sources of heterogeneity, and present ways to disentangle differences across trials due to individual differences, contextual level differences, intervention, and trial design. PMID:21360061

  5. The Global Optimal Algorithm of Reliable Path Finding Problem Based on Backtracking Method

    Directory of Open Access Journals (Sweden)

    Liang Shen

    2017-01-01

    Full Text Available There is a growing interest in finding a global optimal path in transportation networks, particularly when the network suffers from unexpected disturbances. This paper studies the problem of finding a global optimal path that guarantees a given probability of arriving on time in a network with uncertainty, in which the travel time is stochastic instead of deterministic. Traditional path finding methods based on least expected travel time cannot capture the network user's risk-taking behavior in path finding. To overcome this limitation, reliable path finding algorithms have been proposed, but the convergence to the global optimum is seldom addressed in the literature. This paper integrates the K-shortest-path algorithm into a backtracking method to propose a new path finding algorithm under uncertainty. The global optimality of the proposed method can be guaranteed. Numerical examples are conducted to demonstrate the correctness and efficiency of the proposed algorithm.
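    A toy version of the reliability criterion can be illustrated with brute-force path enumeration (my own sketch, not the paper's K-shortest-path/backtracking algorithm): with independent, normally distributed link travel times, each path's probability of arriving within a time budget follows from the summed means and variances. The small network and its parameters are assumptions.

```python
# Hedged sketch (my own toy illustration, not the paper's algorithm): choose the
# path maximising the probability of arriving within a time budget when link
# travel times are independent normals, by brute-force path enumeration.
import networkx as nx
from scipy.stats import norm

def most_reliable_path(G, source, target, budget):
    best = (None, -1.0)
    for path in nx.all_simple_paths(G, source, target):
        mean = sum(G[u][v]["mean"] for u, v in zip(path, path[1:]))
        var = sum(G[u][v]["var"] for u, v in zip(path, path[1:]))
        p_on_time = norm.cdf(budget, loc=mean, scale=var ** 0.5)
        if p_on_time > best[1]:
            best = (path, p_on_time)
    return best

G = nx.DiGraph()
G.add_edge("A", "B", mean=10, var=1)    # short but risky corridor
G.add_edge("B", "D", mean=10, var=25)
G.add_edge("A", "C", mean=12, var=1)    # longer but reliable corridor
G.add_edge("C", "D", mean=12, var=1)
print(most_reliable_path(G, "A", "D", budget=26))
```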

  6. A Local Weighted Nearest Neighbor Algorithm and a Weighted and Constrained Least-Squared Method for Mixed Odor Analysis by Electronic Nose Systems

    Directory of Open Access Journals (Sweden)

    Jyuo-Min Shyu

    2010-11-01

    Full Text Available A great deal of work has been done to develop techniques for odor analysis by electronic nose systems. These analyses mostly focus on identifying a particular odor by comparison with a known odor dataset. However, in many situations, it would be more practical if each individual odorant could be determined directly. This paper proposes two methods for such odor component analysis for electronic nose systems. First, a K-nearest neighbor (KNN)-based local weighted nearest neighbor (LWNN) algorithm is proposed to determine the components of an odor. According to the component analysis, the odor training data are first categorized into several groups, each of which is represented by its centroid. The examined odor is then classified as the class of the nearest centroid. The distance between the examined odor and a centroid is calculated with a weighting scheme that captures the local structure of each predefined group. To further determine the concentration of each component, odor models are built by regression. Then, a weighted and constrained least-squares (WCLS) method is proposed to estimate the component concentrations. Experiments were carried out to assess the effectiveness of the proposed methods. The LWNN algorithm is able to classify mixed odors with different mixing ratios, while the WCLS method can provide good estimates of component concentrations.
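    A simplified sketch of the centroid-based classification idea follows (my own illustration, not the paper's LWNN/WCLS code): each group is represented by its centroid, and distances are weighted by the inverse within-group variance so that they reflect the local structure of the group. The synthetic training data are assumptions.

```python
# Minimal sketch (my own simplification of the idea, not the paper's LWNN/WCLS
# code): classify an odor sample by the nearest group centroid, with each
# feature weighted by the inverse within-group variance so that the distance
# reflects the local structure of that group.
import numpy as np

def fit_centroids(X, y):
    groups = {}
    for label in np.unique(y):
        Xg = X[y == label]
        groups[label] = (Xg.mean(axis=0), Xg.var(axis=0) + 1e-9)
    return groups

def classify(sample, groups):
    def weighted_dist(label):
        mean, var = groups[label]
        return np.sum((sample - mean) ** 2 / var)
    return min(groups, key=weighted_dist)

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 0.2, (30, 4)), rng.normal(1, 0.5, (30, 4))])
y = np.array([0] * 30 + [1] * 30)
groups = fit_centroids(X, y)
print(classify(np.full(4, 0.9), groups))   # expected: 1
```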

  7. General method to find the attractors of discrete dynamic models of biological systems

    Science.gov (United States)

    Gan, Xiao; Albert, Réka

    2018-04-01

    Analyzing the long-term behaviors (attractors) of dynamic models of biological networks can provide valuable insight. We propose a general method that can find the attractors of multilevel discrete dynamical systems by extending a method that finds the attractors of a Boolean network model. The previous method is based on finding stable motifs, subgraphs whose nodes' states can stabilize on their own. We extend the framework from binary states to any finite discrete levels by creating a virtual node for each level of a multilevel node, and describing each virtual node with a quasi-Boolean function. We then create an expanded representation of the multilevel network, find multilevel stable motifs and oscillating motifs, and identify attractors by successive network reduction. In this way, we find both fixed point attractors and complex attractors. We implemented an algorithm, which we test and validate on representative synthetic networks and on published multilevel models of biological networks. Despite its primary motivation to analyze biological networks, our motif-based method is general and can be applied to any finite discrete dynamical system.
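    As a baseline for comparison, the fixed-point attractors of a small multilevel discrete network can be found by brute-force state enumeration; the sketch below is my own illustration of that baseline, not the stable-motif method of the paper, and the toy network and its update rules are invented for the demonstration.

```python
# Hedged sketch (my own brute-force baseline, not the stable-motif method of the
# paper): enumerate all states of a small multilevel discrete network and report
# the fixed-point attractors, i.e. states mapped to themselves by every update
# function.
from itertools import product

# Toy 3-node network; nodes "a" and "b" are Boolean, node "c" has levels 0-2.
levels = {"a": 2, "b": 2, "c": 3}
update = {
    "a": lambda s: 1 if s["b"] >= 1 else 0,
    "b": lambda s: 1 if s["a"] == 1 or s["c"] == 2 else 0,
    "c": lambda s: min(2, s["a"] + s["b"]),
}

def fixed_points(levels, update):
    names = sorted(levels)
    for values in product(*(range(levels[n]) for n in names)):
        state = dict(zip(names, values))
        if all(update[n](state) == state[n] for n in names):
            yield state

print(list(fixed_points(levels, update)))   # two point attractors: all-off and all-on
```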

  8. General method to find the attractors of discrete dynamic models of biological systems.

    Science.gov (United States)

    Gan, Xiao; Albert, Réka

    2018-04-01

    Analyzing the long-term behaviors (attractors) of dynamic models of biological networks can provide valuable insight. We propose a general method that can find the attractors of multilevel discrete dynamical systems by extending a method that finds the attractors of a Boolean network model. The previous method is based on finding stable motifs, subgraphs whose nodes' states can stabilize on their own. We extend the framework from binary states to any finite discrete levels by creating a virtual node for each level of a multilevel node, and describing each virtual node with a quasi-Boolean function. We then create an expanded representation of the multilevel network, find multilevel stable motifs and oscillating motifs, and identify attractors by successive network reduction. In this way, we find both fixed point attractors and complex attractors. We implemented an algorithm, which we test and validate on representative synthetic networks and on published multilevel models of biological networks. Despite its primary motivation to analyze biological networks, our motif-based method is general and can be applied to any finite discrete dynamical system.

  9. A method to determine the detector locations of the cone-beam projection of the balls’ centers

    International Nuclear Information System (INIS)

    Deng, Lin; Xi, Xiaoqi; Li, Lei; Han, Yu; Yan, Bin

    2015-01-01

    In geometric calibration of cone-beam computed tomography (CBCT), sphere-like objects such as balls are widely imaged, and their positioning information is used to determine the unknown geometric parameters. In this process, the accuracy of the detector location of the cone-beam (CB) projection of the center of the ball, which we call the center projection, is very important, since geometric calibration is sensitive to errors in the positioning information. Currently, in almost all geometric calibrations using balls, the center projection is estimated approximately by the center of the support of the projection or by the centroid of the intensity values inside the support. Clackdoyle's work indicates that the center projection is not always at the center of the support or at the centroid of the intensity values inside, and gives a quantitative analysis of the maximum errors made in evaluating the center projection by the centroid. In this paper, an exact method is proposed to calculate the center projection, utilizing both the detector location of the ellipse center and the two axis lengths of the ellipse. Numerical simulation results demonstrate the precision and robustness of the proposed method. Finally, some comments are given on applying this work to non-uniform-density balls, as well as on the effect of errors in evaluating the location of the orthogonal projection of the cone vertex onto the detector. (paper)

  10. SU-F-J-142: Proposed Method to Broaden Inclusion Potential of Patients Able to Use the Calypso Tracking System in Prostate Radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Fiedler, D; Kuo, H; Bodner, W; Tome, W [Montefiore Medical Center, Bronx, NY (United States)

    2016-06-15

    Purpose: To introduce a non-standard method of patient setup, using BellyBoard immobilization, to better utilize the localization and tracking potential of an RF-beacon system with EBRT for prostate cancer. Methods: An RF-beacon phantom was imaged using a wide bore CT scanner, both in a standard level position and with a known rotation (4° pitch and 7.5° yaw). A commercial treatment planning system (TPS) was used to determine the positional coordinates of each beacon, and the centroid of the three beacons, for both setups. For each setup at the Linac, kV AP and Rt Lateral images were obtained. A full characterization of the RF-beacon system in clinical mode was completed for various beacon array-to-centroid distances, including vertical, lateral, and longitudinal offset data, as well as pitch and yaw offset measurements for the tilted phantom. For the single patient who has been set up using the proposed BellyBoard method, a supine simulation was first obtained. When abdominal protrusion was found to exceed the limits of the RF-beacon system through distance-based analysis in the TPS, the patient was re-simulated prone with the BellyBoard. The array-to-centroid distance was measured again in the TPS and, if found to be within the localization or tracking region, was applied. Results: Characterization of the limitations of the RF-beacon system in clinical mode showed acceptable consistency of offset determination for phantom setup accuracy. The non-standard patient setup method reduced the beacons' centroid-to-array distance by 8.32 cm, from 25.13 cm to 16.81 cm; that is, from completely out of tracking range (greater than 20 cm) to within tracking range (less than 20 cm). Conclusion: Using the RF-beacon system in combination with this novel patient setup can allow patients who would otherwise not be candidates for beacon-enhanced EBRT to benefit from the reduced PTV margins of this treatment method.

  11. SU-F-J-142: Proposed Method to Broaden Inclusion Potential of Patients Able to Use the Calypso Tracking System in Prostate Radiotherapy

    International Nuclear Information System (INIS)

    Fiedler, D; Kuo, H; Bodner, W; Tome, W

    2016-01-01

    Purpose: To introduce a non-standard method of patient setup, using BellyBoard immobilization, to better utilize the localization and tracking potential of an RF-beacon system with EBRT for prostate cancer. Methods: An RF-beacon phantom was imaged using a wide bore CT scanner, both in a standard level position and with a known rotation (4° pitch and 7.5° yaw). A commercial treatment planning system (TPS) was used to determine the positional coordinates of each beacon, and the centroid of the three beacons, for both setups. For each setup at the Linac, kV AP and Rt Lateral images were obtained. A full characterization of the RF-beacon system in clinical mode was completed for various beacon array-to-centroid distances, including vertical, lateral, and longitudinal offset data, as well as pitch and yaw offset measurements for the tilted phantom. For the single patient who has been set up using the proposed BellyBoard method, a supine simulation was first obtained. When abdominal protrusion was found to exceed the limits of the RF-beacon system through distance-based analysis in the TPS, the patient was re-simulated prone with the BellyBoard. The array-to-centroid distance was measured again in the TPS and, if found to be within the localization or tracking region, was applied. Results: Characterization of the limitations of the RF-beacon system in clinical mode showed acceptable consistency of offset determination for phantom setup accuracy. The non-standard patient setup method reduced the beacons' centroid-to-array distance by 8.32 cm, from 25.13 cm to 16.81 cm; that is, from completely out of tracking range (greater than 20 cm) to within tracking range (less than 20 cm). Conclusion: Using the RF-beacon system in combination with this novel patient setup can allow patients who would otherwise not be candidates for beacon-enhanced EBRT to benefit from the reduced PTV margins of this treatment method.

  12. Seismicity in the block mountains between Halle and Leipzig, Central Germany: centroid moment tensors, ground motion simulation, and felt intensities of two M ≈ 3 earthquakes in 2015 and 2017

    Science.gov (United States)

    Dahm, Torsten; Heimann, Sebastian; Funke, Sigward; Wendt, Siegfried; Rappsilber, Ivo; Bindi, Dino; Plenefisch, Thomas; Cotton, Fabrice

    2018-05-01

    On April 29, 2017 at 0:56 UTC (2:56 local time), an M_W = 2.8 earthquake struck the metropolitan area between Leipzig and Halle, Germany, near the small town of Markranstädt. The earthquake was felt within 50 km of the epicenter and reached a local intensity of I_0 = IV. Already in 2015, and only 15 km northwest of the epicenter, an M_W = 3.2 earthquake had struck the area with a similarly large felt radius and I_0 = IV. More than 1.1 million people live in the region, and the unusual occurrence of the two earthquakes led to public attention, because the tectonic activity is unclear and induced earthquakes have occurred in neighboring regions. Historical earthquakes south of Leipzig had estimated magnitudes up to M_W ≈ 5 and coincide with NW-SE striking crustal basement faults. We use different seismological methods to analyze the two recent earthquakes and discuss them in the context of the known tectonic structures and historical seismicity. Novel stochastic full waveform simulation and inversion approaches are adapted for application to weak, local earthquakes, to analyze mechanisms and ground motions and their relation to observed intensities. We find NW-SE striking normal faulting mechanisms for both earthquakes and centroid depths of 26 and 29 km. The earthquakes are located where faults with large vertical offsets of several hundred meters and Hercynian strike have developed since the Mesozoic. We use a stochastic full waveform simulation to explain the local peak ground velocities and calibrate the method to simulate intensities. Since the area is densely populated and has sensitive infrastructure, we simulate scenarios assuming that a 12-km long fault segment between the two recent earthquakes is ruptured and study the impact of rupture parameters on ground motions and expected damage.

  13. Numerical simulation of the electrohydrodynamic effects on bubble rising using the SPH method

    International Nuclear Information System (INIS)

    Rahmat, A.; Tofighi, N.; Yildiz, M.

    2016-01-01

    Highlights: • An oil-water bubble rising system is simulated under the electrohydrodynamic effects using ISPH method. • The bubble aspect ratio increases by incrementing electrical capillary and Reynolds numbers, and decrementing the Bond number. • The centroid velocity increases with increments of electric capillary and Reynolds number. • Negative values of the bottom velocity are observed due to the pulling effect of the bottom boundary. • The distance between the bubble centroids decreases in vertically in-line bubble pairs. - Abstract: In this paper, numerical simulations of two dimensional bubble rising in the presence of electrohydrodynamic forces are presented. The physical properties of the bubble and the background fluid are adjusted to resemble an oil-water system. The numerical technique utilized to discretize the governing equations is the Lagrangian Incompressible Smoothed Particle Hydrodynamics (ISPH) method. A single bubble is subjected to an electric field using a leaky dielectric model under different values of Reynolds, Bond and electrical Capillary numbers. The results show that the bubble elongates in the direction of the electric field forming a prolate shape. The increase in the values of Reynolds and electrical Capillary numbers enhances prolate deformation of the bubble, but raising the Bond number reduces the prolateness of the bubble. The interaction of a bubble pair is also investigated for various configurations. If the bubbles are placed such that their centroids are vertically in-line, they tend to merge due to the initial prolate deformation. However, the bubbles do not merge for off center-oriented cases.

  14. Numerical form-finding method for large mesh reflectors with elastic rim trusses

    Science.gov (United States)

    Yang, Dongwu; Zhang, Yiqun; Li, Peng; Du, Jingli

    2018-06-01

    Traditional methods for designing a mesh reflector usually treat the rim truss as rigid. Because of the large aperture, light weight and high accuracy required of spaceborne reflectors, the rim truss deformation is in fact not negligible. In order to design a cable net with asymmetric boundaries for the front and rear nets, a cable-net form-finding method is first introduced. The form-finding method is then embedded in an iterative approach for designing a mesh reflector that accounts for the elasticity of the supporting rim truss. By iterating the cable-net form-finding with boundary conditions updated for the rim-truss deformation, a mesh reflector with a fairly uniform tension distribution in its equilibrium state can finally be designed. Applications to offset mesh reflectors with both circular and elliptical rim trusses are illustrated. The numerical results show the effectiveness of the proposed approach and that a circular rim truss is more stable than an elliptical one.
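    A single form-finding step of the kind referred to above can be illustrated with the classical force-density method on a toy cable net with a rigid boundary (my own sketch, not the paper's iterative elastic-rim procedure, which would repeat this step with updated boundary node positions). The node layout and force densities are assumptions.

```python
# Hedged sketch: one classical force-density form-finding step on a toy cable
# net with a rigid boundary. The paper's method would re-run such a step after
# updating the boundary (rim-truss) node positions from an elastic analysis.
import numpy as np

# Toy net: 4 fixed corner nodes (0-3) and one free interior node (4)
coords = np.array([[0.0, 0.0, 0.0],
                   [2.0, 0.0, 0.4],
                   [2.0, 2.0, 0.0],
                   [0.0, 2.0, 0.4],
                   [0.0, 0.0, 0.0]])      # free node: placeholder, solved below
fixed = [0, 1, 2, 3]
free = [4]
edges = [(4, 0), (4, 1), (4, 2), (4, 3)]
q = np.array([1.0, 1.0, 1.0, 1.0])        # force densities (tension / length)

# Branch-node connectivity matrix C and its free/fixed partition
C = np.zeros((len(edges), len(coords)))
for k, (i, j) in enumerate(edges):
    C[k, i], C[k, j] = 1.0, -1.0
Cn, Cf = C[:, free], C[:, fixed]
Q = np.diag(q)

# Equilibrium of the free nodes (no external load): (Cn' Q Cn) x_n = -(Cn' Q Cf) x_f
Dn = Cn.T @ Q @ Cn
Df = Cn.T @ Q @ Cf
coords[free] = np.linalg.solve(Dn, -Df @ coords[fixed])
print(coords[free])        # [[1.0, 1.0, 0.2]], the equilibrium saddle point
```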

  15. A new method for ordering triangular fuzzy numbers

    Directory of Open Access Journals (Sweden)

    S.H. Nasseri

    2010-09-01

    Full Text Available Ranking fuzzy numbers plays a very important role in linguistic decision making and other fuzzy application systems. In spite of the many ranking methods available, none can rank fuzzy numbers consistently with human intuition in all cases. Shortcomings are found in some of the convenient methods for ranking triangular fuzzy numbers, such as the coefficient of variation (CV) index, the distance between fuzzy sets, the centroid point and original point, and the weighted mean value. In this paper, we introduce a new method for ranking triangular fuzzy numbers that overcomes the shortcomings of the previous techniques. Finally, we compare our method with some convenient methods for ranking fuzzy numbers to illustrate its advantages.
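    For context, the classical centroid-point ranking that this paper aims to improve on can be stated in a few lines; the sketch below is my own illustration of that baseline, not the proposed new method, and the example fuzzy numbers are arbitrary.

```python
# Minimal sketch of the classical centroid-point ranking that the paper aims to
# improve on (my own illustration, not the proposed new method): a triangular
# fuzzy number (a, b, c) is ranked by the x-coordinate of its centroid.
def centroid_rank(tfn):
    a, b, c = tfn
    return (a + b + c) / 3.0            # centroid abscissa of the triangle

fuzzy_numbers = {"A": (1.0, 2.0, 3.0), "B": (2.0, 2.5, 4.5), "C": (3.0, 4.0, 5.0)}
for name, tfn in sorted(fuzzy_numbers.items(), key=lambda kv: centroid_rank(kv[1])):
    print(name, centroid_rank(tfn))
```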

  16. CompaRNA: a server for continuous benchmarking of automated methods for RNA secondary structure prediction

    Science.gov (United States)

    Puton, Tomasz; Kozlowski, Lukasz P.; Rother, Kristian M.; Bujnicki, Janusz M.

    2013-01-01

    We present a continuous benchmarking approach for the assessment of RNA secondary structure prediction methods implemented in the CompaRNA web server. As of 3 October 2012, the performance of 28 single-sequence and 13 comparative methods has been evaluated on RNA sequences/structures released weekly by the Protein Data Bank. We also provide a static benchmark generated on RNA 2D structures derived from the RNAstrand database. Benchmarks on both data sets offer insight into the relative performance of RNA secondary structure prediction methods on RNAs of different size and with respect to different types of structure. According to our tests, on the average, the most accurate predictions obtained by a comparative approach are generated by CentroidAlifold, MXScarna, RNAalifold and TurboFold. On the average, the most accurate predictions obtained by single-sequence analyses are generated by CentroidFold, ContextFold and IPknot. The best comparative methods typically outperform the best single-sequence methods if an alignment of homologous RNA sequences is available. This article presents the results of our benchmarks as of 3 October 2012, whereas the rankings presented online are continuously updated. We will gladly include new prediction methods and new measures of accuracy in the new editions of CompaRNA benchmarks. PMID:23435231

  17. CompaRNA: a server for continuous benchmarking of automated methods for RNA secondary structure prediction.

    Science.gov (United States)

    Puton, Tomasz; Kozlowski, Lukasz P; Rother, Kristian M; Bujnicki, Janusz M

    2013-04-01

    We present a continuous benchmarking approach for the assessment of RNA secondary structure prediction methods implemented in the CompaRNA web server. As of 3 October 2012, the performance of 28 single-sequence and 13 comparative methods has been evaluated on RNA sequences/structures released weekly by the Protein Data Bank. We also provide a static benchmark generated on RNA 2D structures derived from the RNAstrand database. Benchmarks on both data sets offer insight into the relative performance of RNA secondary structure prediction methods on RNAs of different size and with respect to different types of structure. According to our tests, on the average, the most accurate predictions obtained by a comparative approach are generated by CentroidAlifold, MXScarna, RNAalifold and TurboFold. On the average, the most accurate predictions obtained by single-sequence analyses are generated by CentroidFold, ContextFold and IPknot. The best comparative methods typically outperform the best single-sequence methods if an alignment of homologous RNA sequences is available. This article presents the results of our benchmarks as of 3 October 2012, whereas the rankings presented online are continuously updated. We will gladly include new prediction methods and new measures of accuracy in the new editions of CompaRNA benchmarks.

  18. Search methods that people use to find owners of lost pets.

    Science.gov (United States)

    Lord, Linda K; Wittum, Thomas E; Ferketich, Amy K; Funk, Julie A; Rajala-Schultz, Päivi J

    2007-06-15

    To characterize the process by which people who find lost pets search for the owners. Cross-sectional study. Sample Population-188 individuals who found a lost pet in Dayton, Ohio, between March 1 and June 30, 2006. Procedures-Potential participants were identified as a result of contact with a local animal agency or placement of an advertisement in the local newspaper. A telephone survey was conducted to identify methods participants used to find the pets' owners. 156 of 188 (83%) individuals completed the survey. Fifty-nine of the 156 (38%) pets were reunited with their owners; median time to reunification was 2 days (range, 0.5 to 45 days). Only 1 (3%) cat owner was found, compared with 58 (46%) dog owners. Pet owners were found as a result of information provided by an animal agency (25%), placement of a newspaper advertisement (24%), walking the neighborhood (19%), signs in the neighborhood (15%), information on a pet tag (10%), and other methods (7%). Most finders (87%) considered it extremely important to find the owner, yet only 13 (8%) initially surrendered the found pet to an animal agency. The primary reason people did not surrender found pets was fear of euthanasia (57%). Only 97 (62%) individuals were aware they could run a found-pet advertisement in the newspaper at no charge, and only 1 person who was unaware of the no-charge policy placed an advertisement. Veterinarians and shelters can help educate people who find lost pets about methods to search for the pets' owners.

  19. Tests of the methods of analysis of picosecond lifetimes and measurement of the half-life of the 569.6 keV level in 207Pb

    International Nuclear Information System (INIS)

    Lima, E. de; Kawakami, H.; Lima, A. de; Hichwa, R.; Ramayya, A.V.; Hamilton, J.H.; Dunn, W.; Kim, H.J.

    1978-01-01

    Customarily one extracts the half-life of a nuclear state from a delayed time spectrum by an analysis of the centroid shift, the slope, and lately by the convolution method. Recently there have been two formulas relating the centroid shift to the half-life of the nuclear state. These two procedures can give different results for the half-life when T1/2 is of the same order as or less than the time width of one channel. An extensive investigation of these two formulas and procedures has been made by measuring the half-life of the first excited state in 207Pb at 569.6 keV. This analysis confirms Bay's formula relating the centroid shift to the half-life of the state. The half-life of the 569.6 keV level in 207Pb is measured to be (129 ± 3) ps, in excellent agreement with Weisskopf's single particle estimate of 128 ps for an E2 transition. (Auth.)

  20. Method paper--distance and travel time to casualty clinics in Norway based on crowdsourced postcode coordinates: a comparison with other methods.

    Science.gov (United States)

    Raknes, Guttorm; Hunskaar, Steinar

    2014-01-01

    We describe a method that uses crowdsourced postcode coordinates and Google maps to estimate average distance and travel time for inhabitants of a municipality to a casualty clinic in Norway. The new method was compared with methods based on population centroids, median distance and town hall location, and we used it to examine how distance affects the utilisation of out-of-hours primary care services. At short distances our method showed good correlation with mean travel time and distance. The utilisation of out-of-hours services correlated with postcode based distances similar to previous research. The results show that our method is a reliable and useful tool for estimating average travel distances and travel times.

  1. Method paper--distance and travel time to casualty clinics in Norway based on crowdsourced postcode coordinates: a comparison with other methods.

    Directory of Open Access Journals (Sweden)

    Guttorm Raknes

    Full Text Available We describe a method that uses crowdsourced postcode coordinates and Google maps to estimate average distance and travel time for inhabitants of a municipality to a casualty clinic in Norway. The new method was compared with methods based on population centroids, median distance and town hall location, and we used it to examine how distance affects the utilisation of out-of-hours primary care services. At short distances our method showed good correlation with mean travel time and distance. The utilisation of out-of-hours services correlated with postcode based distances similar to previous research. The results show that our method is a reliable and useful tool for estimating average travel distances and travel times.
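
    To illustrate the distance component of such an approach (the published method also uses Google Maps travel times, which are not reproduced here), the sketch below computes a population-weighted average great-circle distance from postcode centroid coordinates to a casualty clinic. All coordinates and population counts are invented for the example.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two WGS84 points."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def mean_distance_to_clinic(postcodes, clinic):
    """Population-weighted mean distance (km) from postcode centroids to the clinic."""
    total_pop = sum(p["population"] for p in postcodes)
    weighted = sum(p["population"] * haversine_km(p["lat"], p["lon"], *clinic)
                   for p in postcodes)
    return weighted / total_pop

if __name__ == "__main__":
    # Invented postcode centroids and populations for one municipality.
    postcodes = [
        {"lat": 60.39, "lon": 5.32, "population": 1200},
        {"lat": 60.42, "lon": 5.28, "population": 800},
        {"lat": 60.35, "lon": 5.40, "population": 450},
    ]
    clinic = (60.39, 5.33)  # casualty clinic location (invented)
    print(f"mean distance: {mean_distance_to_clinic(postcodes, clinic):.2f} km")
```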

  2. Strength Reduction Method for Stability Analysis of Local Discontinuous Rock Mass with Iterative Method of Partitioned Finite Element and Interface Boundary Element

    Directory of Open Access Journals (Sweden)

    Tongchun Li

    2015-01-01

    Full Text Available An iterative method of partitioned finite element (PFE) and interface boundary element (IBE) is proposed to solve the safety factor of local discontinuous rock mass. The slope system is divided into several continuous bodies and local discontinuous interface boundaries. Each block is treated as a partition of the system and contacted by discontinuous joints. The displacements of blocks are chosen as basic variables and the rigid displacements at the centroid of blocks are chosen as motion variables. The contact forces on interface boundaries and the rigid displacements at the centroid of each body are chosen as mixed variables and solved iteratively using the interface boundary equations. The flexibility matrix is formed through PFE according to the contact states of nodal pairs, and spring flexibility is used to reflect the influence of weak structural planes, so that nonlinear iteration is limited only to the possible contact region. With the cohesion and friction coefficient reduced gradually, the state in which all nodal pairs reach the open or slip condition for the first time is regarded as the failure criterion, which reduces the effect of subjectivity in determining the safety factor. Examples are used to verify the validity of the proposed method.
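
    The strength-reduction idea itself can be separated from the partitioned finite element/interface boundary element machinery. The toy sketch below, which is not the authors' PFE/IBE scheme, reduces the cohesion and the friction coefficient by a trial factor and bisects on the largest factor for which a simple infinite-slope limit-equilibrium check still passes; for this analytic model the resulting factor coincides with the classical factor of safety. All soil and slope parameters are invented.

```python
import math

def infinite_slope_stable(c, phi_deg, gamma, z, beta_deg, reduction):
    """Limit-equilibrium check for a dry infinite slope with strengths reduced by
    the trial factor: available shear resistance >= required shear stress."""
    phi = math.radians(phi_deg)
    beta = math.radians(beta_deg)
    c_r = c / reduction                      # reduced cohesion (kPa)
    tan_phi_r = math.tan(phi) / reduction    # reduced friction coefficient
    tau_required = gamma * z * math.sin(beta) * math.cos(beta)
    tau_available = c_r + gamma * z * math.cos(beta) ** 2 * tan_phi_r
    return tau_available >= tau_required

def strength_reduction_factor(check, lo=0.5, hi=10.0, tol=1e-6):
    """Bisect on the largest reduction factor for which the slope is still stable."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if check(mid):
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

if __name__ == "__main__":
    # Invented parameters: cohesion (kPa), friction angle (deg), unit weight (kN/m^3),
    # failure-plane depth (m), slope angle (deg).
    check = lambda f: infinite_slope_stable(c=10.0, phi_deg=30.0, gamma=18.0,
                                            z=5.0, beta_deg=35.0, reduction=f)
    print(f"safety factor by strength reduction: {strength_reduction_factor(check):.3f}")
```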

  3. Parallel shooting methods for finding steady state solutions to engine simulation models

    DEFF Research Database (Denmark)

    Andersen, Stig Kildegård; Thomsen, Per Grove; Carlsen, Henrik

    2007-01-01

    Parallel single- and multiple shooting methods were tested for finding periodic steady state solutions to a Stirling engine model. The model was used to illustrate features of the methods and possibilities for optimisations. Performance was measured using simulation of an experimental data set...
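
    The single-shooting idea can be illustrated on a toy periodically forced oscillator (not the Stirling engine model of the paper): seek a state x0 such that integrating the system over one forcing period returns to x0, and drive the residual to zero with a Newton iteration using a finite-difference Jacobian.

```python
import numpy as np
from scipy.integrate import solve_ivp

OMEGA = 2.0 * np.pi              # forcing frequency
T = 2.0 * np.pi / OMEGA          # forcing period

def rhs(t, y):
    """Damped, periodically forced oscillator (toy stand-in for an engine model)."""
    x, v = y
    return [v, -0.4 * v - 4.0 * x + np.cos(OMEGA * t)]

def propagate(y0):
    """Integrate one forcing period starting from y0."""
    sol = solve_ivp(rhs, (0.0, T), y0, rtol=1e-9, atol=1e-9)
    return sol.y[:, -1]

def single_shooting(y0, tol=1e-10, max_iter=20):
    """Newton iteration on the periodicity residual r(y0) = propagate(y0) - y0."""
    y0 = np.asarray(y0, dtype=float)
    for _ in range(max_iter):
        r = propagate(y0) - y0
        if np.linalg.norm(r) < tol:
            break
        # Finite-difference Jacobian of the residual.
        J = np.zeros((2, 2))
        h = 1e-6
        for j in range(2):
            yp = y0.copy()
            yp[j] += h
            J[:, j] = (propagate(yp) - yp - r) / h
        y0 = y0 - np.linalg.solve(J, r)
    return y0

if __name__ == "__main__":
    y_star = single_shooting([0.0, 0.0])
    print("periodic state:", y_star,
          "residual:", np.linalg.norm(propagate(y_star) - y_star))
```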

  4. Fault Diagnosis of Rotating Machinery Based on the Multiscale Local Projection Method and Diagonal Slice Spectrum

    Directory of Open Access Journals (Sweden)

    Yong Lv

    2018-04-01

    Full Text Available The vibration signals of bearings and gears measured from rotating machinery usually have nonlinear, nonstationary characteristics. The local projection algorithm can not only reduce the noise of a nonlinear system, but also preserve the nonlinear deterministic structure of the signal. The influence of centroid selection on the performance of noise reduction methods is analyzed, and a multiscale local projection method based on the centroid is proposed in this paper. This method considers both the geometrical shape and the statistical error of the signal in high dimensional phase space, which can effectively eliminate the noise and preserve the complete geometric structure of the attractors. The diagonal slice spectrum can identify the frequency components of quadratic phase coupling and enlarge the coupled frequency components in the nonlinear signal. Therefore, the proposed method based on the above two algorithms can achieve more accurate fault diagnosis results for gears and rolling bearings. A simulated signal is used to verify its effectiveness in a numerical simulation. Then, the proposed method is applied to fault diagnosis of gears and rolling bearings in experimental studies, where the fault characteristics of faulty bearings and gears are extracted successfully. The experimental results indicate the effectiveness of the novel proposed method.

  5. Quick regional centroid moment tensor solutions for the Emilia 2012 (northern Italy) seismic sequence

    Directory of Open Access Journals (Sweden)

    Silvia Pondrelli

    2012-10-01

    Full Text Available In May 2012, a seismic sequence struck the Emilia region (northern Italy). The mainshock, of Ml 5.9, occurred on May 20, 2012, at 02:03 UTC. This was preceded by a smaller Ml 4.1 foreshock some hours before (23:13 UTC on May 19, 2012) and followed by more than 2,500 earthquakes in the magnitude range from Ml 0.7 to 5.2. In addition, on May 29, 2012, three further strong earthquakes occurred, all with magnitude Ml ≥5.2: a Ml 5.8 earthquake in the morning (07:00 UTC), followed by two events within just 5 min of each other, one at 10:55 UTC (Ml 5.3) and the second at 11:00 UTC (Ml 5.2). For all of the Ml ≥4.0 earthquakes in Italy and for all of the Ml ≥4.5 in the Mediterranean area, an automatic procedure for the computation of a regional centroid moment tensor (RCMT) is triggered by an email alert. Within 1 h of the event, a manually revised quick RCMT (QRCMT) can be published on the website if the solution is considered stable. In particular, for the Emilia seismic sequence, 13 QRCMTs were determined and for three of them, those with M >5.5, the automatically computed QRCMTs fitted the criteria for publication without manual revision. Using this seismic sequence as a test, we can then identify the magnitude threshold for automatic publication of our QRCMTs.

  6. Cellular Phone Towers, Cell towers developed for Appraiser's Department in 2003. Location was based upon parcel centroids, and corrected to orthophotography. Probably includes towers other than cell towers (uncertain). Not published., Published in 2003, 1:1200 (1in=100ft) scale, Sedgwick County Government.

    Data.gov (United States)

    NSGIC Local Govt | GIS Inventory — Cellular Phone Towers dataset current as of 2003. Cell towers developed for Appraiser's Department in 2003. Location was based upon parcel centroids, and corrected...

  7. Finding all solutions of nonlinear equations using the dual simplex method

    Science.gov (United States)

    Yamamura, Kiyotaka; Fujioka, Tsuyoshi

    2003-03-01

    Recently, an efficient algorithm has been proposed for finding all solutions of systems of nonlinear equations using linear programming. This algorithm is based on a simple test (termed the LP test) for nonexistence of a solution to a system of nonlinear equations using the dual simplex method. In this letter, an improved version of the LP test algorithm is proposed. By numerical examples, it is shown that the proposed algorithm could find all solutions of a system of 300 nonlinear equations in practical computation time.

  8. A Mixed Method Research for Finding a Model of Administrative Decentralization

    OpenAIRE

    Tahereh Feizy; Alireza Moghali; Masuod Geramipoor; Reza Zare

    2015-01-01

    One of the critical issues of administrative decentralization in translating theory into practice is understanding its meaning. An important method to identify administrative decentralization is to address how it can be planned and implemented, and what are its implications, and how it would overcome challenges. The purpose of this study is finding a model for analyzing and evaluating administrative decentralization, so a mixed method research was used to explore and confirm the model of Admi...

  9. Accurate beacon positioning method for satellite-to-ground optical communication.

    Science.gov (United States)

    Wang, Qiang; Tong, Ling; Yu, Siyuan; Tan, Liying; Ma, Jing

    2017-12-11

    In satellite laser communication systems, accurate positioning of the beacon is essential for establishing a steady laser communication link. For satellite-to-ground optical communication, the main influencing factors on the acquisition of the beacon are background noise and atmospheric turbulence. In this paper, we consider the influence of background noise and atmospheric turbulence on the beacon in satellite-to-ground optical communication, and propose a new locating algorithm for the beacon, which takes the correlation coefficient obtained by curve fitting for image data as weights. By performing a long distance laser communication experiment (11.16 km), we verified the feasibility of this method. Both simulation and experiment showed that the new algorithm can accurately obtain the position of the centroid of the beacon. Furthermore, for the distortion of the light spot through atmospheric turbulence, the locating accuracy of the new algorithm was 50% higher than that of the conventional gray centroid algorithm. This new approach will be beneficial for the design of satellite-to-ground optical communication systems.
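
    For reference, the conventional gray centroid (intensity-weighted center of mass) against which the new algorithm is compared can be written in a few lines. The sketch below is a generic implementation with a simple background threshold, not the correlation-weighted algorithm proposed in the paper; the synthetic spot and the threshold value are illustrative.

```python
import numpy as np

def gray_centroid(image, threshold=0.0):
    """Intensity-weighted centroid (row, col) of a spot image.
    Pixels at or below `threshold` are ignored to suppress background noise."""
    img = np.asarray(image, dtype=float)
    weights = np.where(img > threshold, img - threshold, 0.0)
    total = weights.sum()
    if total == 0.0:
        raise ValueError("no pixel above threshold")
    rows, cols = np.indices(img.shape)
    return (rows * weights).sum() / total, (cols * weights).sum() / total

if __name__ == "__main__":
    # Synthetic Gaussian beacon spot centred at (12.3, 20.7) with additive noise.
    rng = np.random.default_rng(0)
    r, c = np.indices((32, 48))
    spot = np.exp(-((r - 12.3) ** 2 + (c - 20.7) ** 2) / (2 * 2.0 ** 2))
    noisy = spot + 0.02 * rng.standard_normal(spot.shape)
    print(gray_centroid(noisy, threshold=0.05))
```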

  10. Error-finding and error-correcting methods for the start-up of the SLC

    International Nuclear Information System (INIS)

    Lee, M.J.; Clearwater, S.H.; Kleban, S.D.; Selig, L.J.

    1987-02-01

    During the commissioning of an accelerator, storage ring, or beam transfer line, one of the important tasks of an accelerator physicist is to check the first-order optics of the beam line and to look for errors in the system. Conceptually, it is important to distinguish between techniques for finding the machine errors that are the cause of the problem and techniques for correcting the beam errors that are the result of the machine errors. In this paper we will limit our presentation to certain applications of these two methods for finding or correcting beam-focus errors and beam-kick errors that affect the profile and trajectory of the beam, respectively. Many of these methods have been used successfully in the commissioning of SLC systems. In order not to waste expensive beam time we have developed and used a beam-line simulator to test the ideas that have not been tested experimentally. To save valuable physicists' time we have further automated the beam-kick error-finding procedures by adopting methods from the field of artificial intelligence to develop a prototype expert system. Our experience with this prototype has demonstrated the usefulness of expert systems in solving accelerator control problems. The expert system is able to find the same solutions as an expert physicist but in a more systematic fashion. The methods used in these procedures and some of the recent applications will be described in this paper.

  11. An efficient fully-implicit multislope MUSCL method for multiphase flow with gravity in discrete fractured media

    Science.gov (United States)

    Jiang, Jiamin; Younis, Rami M.

    2017-06-01

    The first-order methods commonly employed in reservoir simulation for computing the convective fluxes introduce excessive numerical diffusion leading to severe smoothing of displacement fronts. We present a fully-implicit cell-centered finite-volume (CCFV) framework that can achieve second-order spatial accuracy on smooth solutions, while at the same time maintaining robustness and nonlinear convergence performance. A novel multislope MUSCL method is proposed to construct the required values at edge centroids in a straightforward and effective way by taking advantage of the triangular mesh geometry. In contrast to monoslope methods, in which a unique limited gradient is used, the multislope concept constructs specific scalar slopes for the interpolations on each edge of a given element. Through the edge centroids, the numerical diffusion caused by mesh skewness is reduced, and optimal second-order accuracy can be achieved. Moreover, an improved smooth flux-limiter is introduced to ensure monotonicity on non-uniform meshes. The flux-limiter provides high accuracy without degrading nonlinear convergence performance. The CCFV framework is adapted to accommodate a lower-dimensional discrete fracture-matrix (DFM) model. Several numerical tests with discrete fractured systems are carried out to demonstrate the efficiency and robustness of the numerical model.

  12. Reconciling incongruous qualitative and quantitative findings in mixed methods research: exemplars from research with drug using populations.

    Science.gov (United States)

    Wagner, Karla D; Davidson, Peter J; Pollini, Robin A; Strathdee, Steffanie A; Washburn, Rachel; Palinkas, Lawrence A

    2012-01-01

    Mixed methods research is increasingly being promoted in the health sciences as a way to gain more comprehensive understandings of how social processes and individual behaviours shape human health. Mixed methods research most commonly combines qualitative and quantitative data collection and analysis strategies. Often, integrating findings from multiple methods is assumed to confirm or validate the findings from one method with the findings from another, seeking convergence or agreement between methods. Cases in which findings from different methods are congruous are generally thought of as ideal, whilst conflicting findings may, at first glance, appear problematic. However, the latter situation provides the opportunity for a process through which apparently discordant results are reconciled, potentially leading to new emergent understandings of complex social phenomena. This paper presents three case studies drawn from the authors' research on HIV risk amongst injection drug users in which mixed methods studies yielded apparently discrepant results. We use these case studies (involving injection drug users [IDUs] using a Needle/Syringe Exchange Program in Los Angeles, CA, USA; IDUs seeking to purchase needle/syringes at pharmacies in Tijuana, Mexico; and young street-based IDUs in San Francisco, CA, USA) to identify challenges associated with integrating findings from mixed methods projects, summarize lessons learned, and make recommendations for how to more successfully anticipate and manage the integration of findings. Despite the challenges inherent in reconciling apparently conflicting findings from qualitative and quantitative approaches, in keeping with others who have argued in favour of integrating mixed methods findings, we contend that such an undertaking has the potential to yield benefits that emerge only through the struggle to reconcile discrepant results and may provide a sum that is greater than the individual qualitative and quantitative parts

  13. Automatic detection and quantitative analysis of cells in the mouse primary motor cortex

    Science.gov (United States)

    Meng, Yunlong; He, Yong; Wu, Jingpeng; Chen, Shangbin; Li, Anan; Gong, Hui

    2014-09-01

    Neuronal cells play a very important role in metabolism regulation and mechanism control, so cell number is a fundamental determinant of brain function. By combining suitable cell-labeling approaches with recently proposed three-dimensional optical imaging techniques, whole mouse brain coronal sections can be acquired with 1-μm voxel resolution. We have developed a completely automatic pipeline to perform cell centroid detection, and provide three-dimensional quantitative information on cells in the primary motor cortex of the C57BL/6 mouse. It involves four principal steps: i) preprocessing; ii) image binarization; iii) cell centroid extraction and contour segmentation; iv) laminar density estimation. Investigations of the presented method reveal promising detection accuracy in terms of recall and precision, with an average recall rate of 92.1% and an average precision rate of 86.2%. We also analyze the laminar density distribution of cells from the pial surface to the corpus callosum from the output vectorizations of detected cell centroids in the mouse primary motor cortex, and find significant cellular density distribution variations in different layers. This automatic cell centroid detection approach will be beneficial for fast cell-counting and accurate density estimation, as time-consuming and error-prone manual identification is avoided.
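
    A minimal version of steps ii and iii above (binarization followed by centroid extraction) can be sketched with scipy.ndimage; the threshold and the synthetic test image below are purely illustrative, and the published pipeline is considerably more elaborate.

```python
import numpy as np
from scipy import ndimage

def detect_centroids(image, threshold):
    """Binarize the image, label connected components, and return their centroids."""
    binary = image > threshold                      # step ii: image binarization
    labels, n = ndimage.label(binary)               # connected-component labelling
    centroids = ndimage.center_of_mass(image, labels, range(1, n + 1))  # step iii
    return np.asarray(centroids)

if __name__ == "__main__":
    # Synthetic image with three bright blobs standing in for labelled cells.
    rng = np.random.default_rng(1)
    img = 0.05 * rng.random((128, 128))
    yy, xx = np.indices(img.shape)
    for (cy, cx) in [(30, 40), (64, 90), (100, 20)]:
        img += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * 3.0 ** 2))
    print(detect_centroids(img, threshold=0.3))
```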

  14. An automated method to find transition states using chemical dynamics simulations.

    Science.gov (United States)

    Martínez-Núñez, Emilio

    2015-02-05

    A procedure to automatically find the transition states (TSs) of a molecular system (MS) is proposed. It has two components: high-energy chemical dynamics simulations (CDS), and an algorithm that analyzes the geometries along the trajectories to find reactive pathways. Two levels of electronic structure calculations are involved: a low level (LL) is used to integrate the trajectories and also to optimize the TSs, and a higher level (HL) is used to reoptimize the structures. The method has been tested in three MSs: formaldehyde, formic acid (FA), and vinyl cyanide (VC), using MOPAC2012 and Gaussian09 to run the LL and HL calculations, respectively. Both the efficacy and efficiency of the method are very good, with around 15 TS structures optimized every 10 trajectories, which gives a total of 7, 12, and 83 TSs for formaldehyde, FA, and VC, respectively. The use of CDS makes it a powerful tool to unveil possible nonstatistical behavior of the system under study. © 2014 Wiley Periodicals, Inc.

  15. TOWARDS FINDING A NEW KERNELIZED FUZZY C-MEANS CLUSTERING ALGORITHM

    Directory of Open Access Journals (Sweden)

    Samarjit Das

    2014-04-01

    Full Text Available Kernelized Fuzzy C-Means clustering technique is an attempt to improve the performance of the conventional Fuzzy C-Means clustering technique. Recently this technique, where a kernel-induced distance function is used as a similarity measure instead of the Euclidean distance used in the conventional Fuzzy C-Means clustering technique, has earned popularity among the research community. Like the conventional Fuzzy C-Means clustering technique, this technique also suffers from inconsistency in its performance due to the fact that here also the initial centroids are obtained from the randomly initialized membership values of the objects. Our present work proposes a new method in which we apply the Subtractive clustering technique of Chiu as a preprocessor to the Kernelized Fuzzy C-Means clustering technique. With this new method we try not only to remove the inconsistency of the Kernelized Fuzzy C-Means clustering technique but also to deal with situations where the number of clusters is not predetermined. We also provide a comparison of our method with the Subtractive clustering technique of Chiu and the Kernelized Fuzzy C-Means clustering technique using two validity measures, namely Partition Coefficient and Clustering Entropy.

  16. Swarm: robust and fast clustering method for amplicon-based studies

    Science.gov (United States)

    Rognes, Torbjørn; Quince, Christopher; de Vargas, Colomban; Dunthorn, Micah

    2014-01-01

    Popular de novo amplicon clustering methods suffer from two fundamental flaws: arbitrary global clustering thresholds, and input-order dependency induced by centroid selection. Swarm was developed to address these issues by first clustering nearly identical amplicons iteratively using a local threshold, and then by using clusters’ internal structure and amplicon abundances to refine its results. This fast, scalable, and input-order independent approach reduces the influence of clustering parameters and produces robust operational taxonomic units. PMID:25276506

  17. Swarm: robust and fast clustering method for amplicon-based studies

    Directory of Open Access Journals (Sweden)

    Frédéric Mahé

    2014-09-01

    Full Text Available Popular de novo amplicon clustering methods suffer from two fundamental flaws: arbitrary global clustering thresholds, and input-order dependency induced by centroid selection. Swarm was developed to address these issues by first clustering nearly identical amplicons iteratively using a local threshold, and then by using clusters’ internal structure and amplicon abundances to refine its results. This fast, scalable, and input-order independent approach reduces the influence of clustering parameters and produces robust operational taxonomic units.

  18. A Root-MUSIC-Like Direction Finding Method for Cyclostationary Signals

    Directory of Open Access Journals (Sweden)

    Yide Wang

    2005-01-01

    Full Text Available We propose a new root-MUSIC-like direction finding algorithm that exploits cyclostationarity in order to improve direction-of-arrival estimation. The proposed cyclic method is signal selective, it significantly increases the resolution power and the noise robustness, and it is also able to handle more sources than the number of sensors. Computer simulations are used to show the performance of the algorithm.
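
    For context, the conventional (non-cyclic) root-MUSIC procedure that the proposed algorithm builds on can be sketched as follows for a uniform linear array. This is the standard method, not the signal-selective cyclic variant of the paper; the array geometry, noise level and source angles in the example are invented.

```python
import numpy as np

def root_music(snapshots, n_sources, spacing_wavelengths=0.5):
    """Standard root-MUSIC DOA estimation for a uniform linear array.
    snapshots: complex array of shape (n_sensors, n_snapshots)."""
    m = snapshots.shape[0]
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]   # sample covariance
    eigval, eigvec = np.linalg.eigh(R)                        # ascending eigenvalues
    En = eigvec[:, : m - n_sources]                           # noise subspace
    C = En @ En.conj().T
    # Polynomial coefficients: diagonal sums of C, from lag m-1 down to -(m-1).
    coeffs = np.array([np.trace(C, offset=k) for k in range(m - 1, -m, -1)])
    roots = np.roots(coeffs)
    # Roots come in conjugate-reciprocal pairs; keep those inside the unit circle,
    # closest to it, whose phases encode the directions of arrival.
    roots = roots[np.abs(roots) < 1.0]
    roots = roots[np.argsort(1.0 - np.abs(roots))][:n_sources]
    phases = np.angle(roots)
    return np.degrees(np.arcsin(phases / (2.0 * np.pi * spacing_wavelengths)))

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    m, n, doas = 8, 500, np.radians([-20.0, 35.0])
    A = np.exp(1j * 2 * np.pi * 0.5 * np.outer(np.arange(m), np.sin(doas)))
    S = (rng.standard_normal((2, n)) + 1j * rng.standard_normal((2, n))) / np.sqrt(2)
    X = A @ S + 0.1 * (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n)))
    print(np.sort(root_music(X, n_sources=2)))
```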

  19. Optimization of control parameters of a hot cold controller by means of Simplex type methods

    Science.gov (United States)

    Porte, C.; Caron-Poussin, M.; Carot, S.; Couriol, C.; Moreno, M. Martin; Delacroix, A.

    1997-01-01

    This paper describes a hot/cold controller for regulating crystallization operations. The system was identified with a common method (the Broida method) and the parameters were obtained by the Ziegler-Nichols method. The paper shows that this empirical method only allows a qualitative approach to regulation and that, in some instances, the parameters obtained are unreliable and therefore cannot be used to cancel variations between the set point and the actual values. Optimization methods were used to determine the regulation parameters and solve this identification problem. It was found that the weighted centroid method was the best one. PMID:18924791
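
    The general pattern of tuning controller parameters with a simplex-type search can be sketched with SciPy's Nelder-Mead implementation, which uses the ordinary (unweighted) centroid rather than the weighted-centroid variant found best in the paper. The plant model, delay and cost function below are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def closed_loop_cost(params, dt=1.0, steps=200):
    """Integral squared error of a discrete PI loop around a first-order-plus-delay
    plant. The plant and the tuning setup are invented; the gains are (Kp, Ki)."""
    kp, ki = params
    if kp <= 0.0 or ki <= 0.0:
        return 1e6                       # penalize non-physical gains
    y, integral, cost = 0.0, 0.0, 0.0
    u_delay_line = [0.0] * 5             # 5-sample transport delay
    for _ in range(steps):
        error = 1.0 - y                  # unit set-point step
        integral += error * dt
        u = kp * error + ki * integral
        u_delay_line.append(u)
        y += dt / 20.0 * (-y + 0.8 * u_delay_line.pop(0))   # first-order lag, tau = 20
        cost += error ** 2 * dt
        if not np.isfinite(y) or abs(y) > 1e6:
            return 1e6                   # the loop went unstable for these gains
    return cost

if __name__ == "__main__":
    result = minimize(closed_loop_cost, x0=[1.0, 0.05], method="Nelder-Mead")
    print("tuned (Kp, Ki):", result.x.round(3), "ISE:", round(result.fun, 3))
```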

  20. Kepler Planet Detection Metrics: Automatic Detection of Background Objects Using the Centroid Robovetter

    Science.gov (United States)

    Mullally, Fergal

    2017-01-01

    We present an automated method of identifying background eclipsing binaries masquerading as planet candidates in the Kepler planet candidate catalogs. We codify the manual vetting process for Kepler Objects of Interest (KOIs) described in Bryson et al. (2013) with a series of measurements and tests that can be performed algorithmically. We compare our automated results with a sample of manually vetted KOIs from the catalog of Burke et al. (2014) and find excellent agreement. We test the performance on a set of simulated transits and find our algorithm correctly identifies simulated false positives approximately 50% of the time, and correctly identifies 99% of simulated planet candidates.

  1. Robust EM Continual Reassessment Method in Oncology Dose Finding

    Science.gov (United States)

    Yuan, Ying; Yin, Guosheng

    2012-01-01

    The continual reassessment method (CRM) is a commonly used dose-finding design for phase I clinical trials. Practical applications of this method have been restricted by two limitations: (1) the requirement that the toxicity outcome needs to be observed shortly after the initiation of the treatment; and (2) the potential sensitivity to the prespecified toxicity probability at each dose. To overcome these limitations, we naturally treat the unobserved toxicity outcomes as missing data, and use the expectation-maximization (EM) algorithm to estimate the dose toxicity probabilities based on the incomplete data to direct dose assignment. To enhance the robustness of the design, we propose prespecifying multiple sets of toxicity probabilities, each set corresponding to an individual CRM model. We carry out these multiple CRMs in parallel, across which model selection and model averaging procedures are used to make more robust inference. We evaluate the operating characteristics of the proposed robust EM-CRM designs through simulation studies and show that the proposed methods satisfactorily resolve both limitations of the CRM. Besides improving the MTD selection percentage, the new designs dramatically shorten the duration of the trial, and are robust to the prespecification of the toxicity probabilities. PMID:22375092

  2. Forecasting the Rupture Directivity of Large Earthquakes: Centroid Bias of the Conditional Hypocenter Distribution

    Science.gov (United States)

    Donovan, J.; Jordan, T. H.

    2012-12-01

    Forecasting the rupture directivity of large earthquakes is an important problem in probabilistic seismic hazard analysis (PSHA), because directivity is known to strongly influence ground motions. We describe how rupture directivity can be forecast in terms of the "conditional hypocenter distribution" or CHD, defined to be the probability distribution of a hypocenter given the spatial distribution of moment release (fault slip). The simplest CHD is a uniform distribution, in which the hypocenter probability density equals the moment-release probability density. For rupture models in which the rupture velocity and rise time depend only on the local slip, the CHD completely specifies the distribution of the directivity parameter D, defined in terms of the degree-two polynomial moments of the source space-time function. This parameter, which is zero for a bilateral rupture and unity for a unilateral rupture, can be estimated from finite-source models or by the direct inversion of seismograms (McGuire et al., 2002). We compile D-values from published studies of 65 large earthquakes and show that these data are statistically inconsistent with the uniform CHD advocated by McGuire et al. (2002). Instead, the data indicate a "centroid biased" CHD, in which the expected distance between the hypocenter and the hypocentroid is less than that of a uniform CHD. In other words, the observed directivities appear to be closer to bilateral than predicted by this simple model. We discuss the implications of these results for rupture dynamics and fault-zone heterogeneities. We also explore their PSHA implications by modifying the CyberShake simulation-based hazard model for the Los Angeles region, which assumed a uniform CHD (Graves et al., 2011).

  3. Candidate soil indicators for monitoring the progress of constructed wetlands toward a natural state: a statistical approach

    Science.gov (United States)

    Stapanian, Martin A.; Adams, Jean V.; Fennessy, M. Siobhan; Mack, John; Micacchion, Mick

    2013-01-01

    A persistent question among ecologists and environmental managers is whether constructed wetlands are structurally or functionally equivalent to naturally occurring wetlands. We examined 19 variables collected from 10 constructed and nine natural emergent wetlands in Ohio, USA. Our primary objective was to identify candidate indicators of wetland class (natural or constructed), based on measurements of soil properties and an index of vegetation integrity, that can be used to track the progress of constructed wetlands toward a natural state. The method of nearest shrunken centroids was used to find a subset of variables that would serve as the best classifiers of wetland class, and error rate was calculated using a five-fold cross-validation procedure. The shrunken differences of percent total organic carbon (% TOC) and percent dry weight of the soil exhibited the greatest distances from the overall centroid. Classification based on these two variables yielded a misclassification rate of 11% based on cross-validation. Our results indicate that % TOC and percent dry weight can be used as candidate indicators of the status of emergent, constructed wetlands in Ohio and for assessing the performance of mitigation. The method of nearest shrunken centroids has excellent potential for further applications in ecology.
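
    The nearest shrunken centroids classifier and the five-fold cross-validation described above are available off the shelf; the sketch below shows the pattern on synthetic data, with scikit-learn's NearestCentroid and a shrink_threshold standing in for the shrunken-centroid classifier. The data, the informative features and the threshold value are invented.

```python
import numpy as np
from sklearn.neighbors import NearestCentroid
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for the wetland data: 19 soil/vegetation variables measured on
# 19 wetlands (10 constructed = class 0, 9 natural = class 1); values are invented.
rng = np.random.default_rng(3)
X = rng.normal(size=(19, 19))
y = np.array([0] * 10 + [1] * 9)
X[y == 1, :2] += 1.5   # make two features (e.g. % TOC, % dry weight) informative

clf = make_pipeline(StandardScaler(), NearestCentroid(shrink_threshold=0.5))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv)
print("five-fold CV accuracy:", scores.mean())
```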

  4. Large-scale structure of the Taurus molecular complex. II. Analysis of velocity fluctuations and turbulence. III. Methods for turbulence

    International Nuclear Information System (INIS)

    Kleiner, S.C.; Dickman, R.L.

    1985-01-01

    The velocity autocorrelation function (ACF) of observed spectral line centroid fluctuations is noted to effectively reproduce the actual ACF of turbulent gas motions within an interstellar cloud, thereby furnishing a framework for the study of the large-scale velocity structure of the Taurus dark cloud complex traced by the present 13CO J = 1-0 observations of this region. The results obtained are discussed in the context of recent suggestions that widely observed correlations between molecular cloud line widths and cloud sizes indicate the presence of a continuum of turbulent motions within the dense interstellar medium. Attention is then given to a method for the quantitative study of these turbulent motions, involving the mapping of a source in an optically thin spectral line and studying the spatial correlation properties of the resulting velocity centroid map. 61 references
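
    The basic quantity involved, an intensity-weighted velocity-centroid map computed from a position-position-velocity cube, together with its spatial autocorrelation function, can be sketched as follows. The synthetic cube is purely illustrative and no attempt is made to reproduce the analysis of the paper.

```python
import numpy as np

def velocity_centroid_map(cube, velocities):
    """Intensity-weighted line centroid at each sky position.
    cube has shape (n_v, n_y, n_x); velocities has shape (n_v,)."""
    weights = cube.sum(axis=0)
    return np.tensordot(velocities, cube, axes=(0, 0)) / weights

def autocorrelation(field):
    """Normalized spatial autocorrelation of a 2-D map via FFT (periodic boundaries)."""
    f = field - field.mean()
    power = np.abs(np.fft.fft2(f)) ** 2
    acf = np.fft.ifft2(power).real
    return np.fft.fftshift(acf / acf.flat[0])

if __name__ == "__main__":
    # Synthetic PPV cube: Gaussian line profiles whose centroid drifts across the map.
    rng = np.random.default_rng(4)
    v = np.linspace(-5, 5, 64)
    yy, xx = np.indices((32, 32))
    vc_true = 0.5 * np.sin(2 * np.pi * xx / 32) + 0.1 * rng.standard_normal((32, 32))
    cube = np.exp(-0.5 * ((v[:, None, None] - vc_true) / 1.0) ** 2)
    centroids = velocity_centroid_map(cube, v)
    acf = autocorrelation(centroids)
    print("centroid rms:", centroids.std(), "ACF at zero lag:", acf[16, 16])
```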

  5. A method of retrieving cloud top height and cloud geometrical thickness with oxygen A and B bands for the Deep Space Climate Observatory (DSCOVR) mission: Radiative transfer simulations

    International Nuclear Information System (INIS)

    Yang, Yuekui; Marshak, Alexander; Mao, Jianping; Lyapustin, Alexei; Herman, Jay

    2013-01-01

    The Earth Polychromatic Imaging Camera (EPIC) onboard the Deep Space Climate Observatory (DSCOVR) was designed to measure the atmosphere and surface properties over the whole sunlit half of the Earth from the L1 Lagrangian point. It has 10 spectral channels ranging from the UV to the near-IR, including two pairs of oxygen (O2) A-band (779.5 and 764 nm) and B-band (680 and 687.75 nm) reference and absorption channels selected for cloud height measurements. This paper presents the radiative transfer analysis pertinent to retrieving cloud top height and cloud geometrical thickness with EPIC A- and B-band observations. Due to photon cloud penetration, retrievals from either the O2 A- or B-band channels alone give the corresponding cloud centroid height, which is lower than the cloud top. However, we show that both the sum and the difference of the cloud centroid heights retrieved in the A and B bands are functions of cloud top height and cloud geometrical thickness. Based on this fact, the paper develops a new method to retrieve cloud top height and cloud geometrical thickness simultaneously for fully cloudy scenes over an ocean surface. First, cloud centroid heights are calculated for both the A and B bands using the ratios between the reflectances of the absorbing and reference channels; then the cloud top height and the cloud geometrical thickness are retrieved from two-dimensional look-up tables that relate the sum and the difference of the retrieved centroid heights for the A and B bands to the cloud top height and the cloud geometrical thickness. This method is applicable to clouds thicker than an optical depth of 5. Highlights: EPIC onboard DSCOVR is equipped with O2 A- and B-band channels. Photon cloud penetration depths of the A and B bands contain information on cloud thickness. A method is developed to retrieve cloud top height and cloud geometrical thickness with EPIC O2 A- and B-band observations.

  6. Automated correlation and classification of secondary ion mass spectrometry images using a k-means cluster method.

    Science.gov (United States)

    Konicek, Andrew R; Lefman, Jonathan; Szakal, Christopher

    2012-08-07

    We present a novel method for correlating and classifying ion-specific time-of-flight secondary ion mass spectrometry (ToF-SIMS) images within a multispectral dataset by grouping images with similar pixel intensity distributions. Binary centroid images are created by employing a k-means-based custom algorithm. Centroid images are compared to grayscale SIMS images using a newly developed correlation method that assigns the SIMS images to classes that have similar spatial (rather than spectral) patterns. Image features of both large and small spatial extent are identified without the need for image pre-processing, such as normalization or fixed-range mass-binning. A subsequent classification step tracks the class assignment of SIMS images over multiple iterations of increasing n classes per iteration, providing information about groups of images that have similar chemistry. Details are discussed while presenting data acquired with ToF-SIMS on a model sample of laser-printed inks. This approach can lead to the identification of distinct ion-specific chemistries for mass spectral imaging by ToF-SIMS, as well as matrix-assisted laser desorption ionization (MALDI), and desorption electrospray ionization (DESI).
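
    A simplified version of the grouping step, clustering ion images by the similarity of their pixel patterns and then comparing each image with its cluster centroid image, can be sketched with scikit-learn's KMeans. This is an illustration of the general idea only, not the custom algorithm or the correlation measure of the paper; the synthetic image stack is invented.

```python
import numpy as np
from sklearn.cluster import KMeans

def classify_ion_images(images, n_classes):
    """Cluster a stack of ion images (n_images, ny, nx) by their pixel patterns and
    report the correlation of each image with its class centroid image."""
    n, ny, nx = images.shape
    X = images.reshape(n, ny * nx)
    km = KMeans(n_clusters=n_classes, n_init=10, random_state=0).fit(X)
    labels = km.labels_
    corr = [np.corrcoef(X[i], km.cluster_centers_[labels[i]])[0, 1] for i in range(n)]
    return labels, np.array(corr)

if __name__ == "__main__":
    # Synthetic dataset: two spatial patterns shared by groups of "ion" images plus noise.
    rng = np.random.default_rng(5)
    yy, xx = np.indices((40, 40))
    patterns = [np.exp(-((yy - 10) ** 2 + (xx - 10) ** 2) / 50.0),
                np.exp(-((yy - 30) ** 2 + (xx - 28) ** 2) / 80.0)]
    images = np.array([patterns[i % 2] + 0.1 * rng.standard_normal((40, 40))
                       for i in range(8)])
    labels, corr = classify_ion_images(images, n_classes=2)
    print(labels, corr.round(2))
```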

  7. Hybrid quantum and classical methods for computing kinetic isotope effects of chemical reactions in solutions and in enzymes.

    Science.gov (United States)

    Gao, Jiali; Major, Dan T; Fan, Yao; Lin, Yen-Lin; Ma, Shuhua; Wong, Kin-Yiu

    2008-01-01

    A method for incorporating quantum mechanics into enzyme kinetics modeling is presented. Three aspects are emphasized: 1) combined quantum mechanical and molecular mechanical methods are used to represent the potential energy surface for modeling bond forming and breaking processes, 2) instantaneous normal mode analyses are used to incorporate quantum vibrational free energies to the classical potential of mean force, and 3) multidimensional tunneling methods are used to estimate quantum effects on the reaction coordinate motion. Centroid path integral simulations are described to make quantum corrections to the classical potential of mean force. In this method, the nuclear quantum vibrational and tunneling contributions are not separable. An integrated centroid path integral-free energy perturbation and umbrella sampling (PI-FEP/UM) method along with a bisection sampling procedure was summarized, which provides an accurate, easily convergent method for computing kinetic isotope effects for chemical reactions in solution and in enzymes. In the ensemble-averaged variational transition state theory with multidimensional tunneling (EA-VTST/MT), these three aspects of quantum mechanical effects can be individually treated, providing useful insights into the mechanism of enzymatic reactions. These methods are illustrated by applications to a model process in the gas phase, the decarboxylation reaction of N-methyl picolinate in water, and the proton abstraction and reprotonation process catalyzed by alanine racemase. These examples show that the incorporation of quantum mechanical effects is essential for enzyme kinetics simulations.

  8. The Expanded FindCore Method for Identification of a Core Atom Set for Assessment of Protein Structure Prediction

    Science.gov (United States)

    Snyder, David A.; Grullon, Jennifer; Huang, Yuanpeng J.; Tejero, Roberto; Montelione, Gaetano T.

    2014-01-01

    Maximizing the scientific impact of NMR-based structure determination requires robust and statistically sound methods for assessing the precision of NMR-derived structures. In particular, a method to define a core atom set for calculating superimpositions and validating structure predictions is critical to the use of NMR-derived structures as targets in the CASP competition. FindCore (D.A. Snyder and G.T. Montelione PROTEINS 2005;59:673–686) is a superimposition independent method for identifying a core atom set, and partitioning that set into domains. However, as FindCore optimizes superimposition by sensitively excluding not-well-defined atoms, the FindCore core may not comprise all atoms suitable for use in certain applications of NMR structures, including the CASP assessment process. Adapting the FindCore approach to assess predicted models against experimental NMR structures in CASP10 required modification of the FindCore method. This paper describes conventions and a standard protocol to calculate an “Expanded FindCore” atom set suitable for validation and application in biological and biophysical contexts. A key application of the Expanded FindCore method is to identify a core set of atoms in the experimental NMR structure for which it makes sense to validate predicted protein structure models. We demonstrate the application of this Expanded FindCore method in characterizing well-defined regions of 18 NMR-derived CASP10 target structures. The Expanded FindCore protocol defines “expanded core atom sets” that match an expert’s intuition of which parts of the structure are sufficiently well-defined to use in assessing CASP model predictions. We also illustrate the impact of this analysis on the CASP GDT assessment scores. PMID:24327305

  9. Simple method of obtaining the band strengths in the electronic spectra of diatomic molecules

    International Nuclear Information System (INIS)

    Gowda, L.S.; Balaji, V.N.

    1977-01-01

    It is shown that relative band strengths of diatomic molecules for which the product of the Franck-Condon factor and the r-centroid is approximately equal to 1 for the (0,0) band can be determined by a simple method which is in good agreement with the smoothed array of experimental values. Such values for the Swan bands of the C2 molecule are compared with the band strengths of the simple method. It is noted that the Swan bands are one of the outstanding features of R- and N-type stars and of the heads of comets.

  10. A Review on Hot-IP Finding Methods and Its Application in Early DDoS Target Detection

    Directory of Open Access Journals (Sweden)

    Xuan Dau Hoang

    2016-10-01

    Full Text Available On the high-speed connections of the Internet or computer networks, the IP (Internet Protocol) packet traffic passing through the network is extremely high, and that makes it difficult for network monitoring and attack detection applications. This paper reviews methods to find the high-occurrence-frequency elements in a data stream and applies the most efficient methods to find Hot-IPs, i.e., high-frequency IP addresses of packets passing through the network. Fast finding of Hot-IPs in the IP packet stream can be effectively used in early detection of DDoS (Distributed Denial of Service) attack targets and of spreading sources of network worms. Research results show that the Count-Min method gives the best overall performance for Hot-IP detection thanks to its low computational complexity, low space requirement and fast processing speed. We also propose an early detection model of DDoS attack targets based on Hot-IP finding, which can be deployed on the target network routers.
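
    The Count-Min structure referred to above is compact enough to sketch directly. The width, depth and the "hot" threshold below are arbitrary illustrative choices, and the toy stream merely stands in for an IP packet trace.

```python
import hashlib

class CountMinSketch:
    """Count-Min sketch for approximate frequency counting of items in a stream."""

    def __init__(self, width=1024, depth=4):
        self.width, self.depth = width, depth
        self.table = [[0] * width for _ in range(depth)]

    def _buckets(self, item):
        for row in range(self.depth):
            digest = hashlib.blake2b(item.encode(), salt=bytes([row])).hexdigest()
            yield row, int(digest, 16) % self.width

    def add(self, item, count=1):
        for row, col in self._buckets(item):
            self.table[row][col] += count

    def estimate(self, item):
        """Upper-biased frequency estimate: minimum over the hashed counters."""
        return min(self.table[row][col] for row, col in self._buckets(item))

if __name__ == "__main__":
    # Toy packet stream: one IP dominates, standing in for a DDoS target.
    stream = ["10.0.0.7"] * 5000 + [f"192.168.1.{i % 250}" for i in range(20000)]
    cms = CountMinSketch()
    for ip in stream:
        cms.add(ip)
    hot_threshold = 0.1 * len(stream)
    for ip in ["10.0.0.7", "192.168.1.3"]:
        print(ip, cms.estimate(ip), "HOT" if cms.estimate(ip) > hot_threshold else "normal")
```

    Because every counter only over-counts (hash collisions add, never subtract), the estimates are upper bounds, which is why a simple threshold test never misses a genuinely hot IP.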

  11. Method of the Determination of Exterior Orientation of Sensors in Hilbert Type Space.

    Science.gov (United States)

    Stępień, Grzegorz

    2018-03-17

    The following article presents a new isometric transformation algorithm based on the transformation in the newly normed Hilbert type space. The presented method is based on so-called virtual translations, already known in advance, of two relative oblique orthogonal coordinate systems (the interior and exterior orientation of sensors) to a common point, known in both systems. Each of the systems is translated along its axis (the systems have common origins) and at the same time the angular relative orientation of both coordinate systems is constant. The translation of both coordinate systems is defined by the spatial norm determining the length of vectors in the new Hilbert type space. As such, the displacement of two relative oblique orthogonal systems is reduced to zero. This makes it possible to directly calculate the rotation matrix of the sensor. The next and final step is the return translation of the system along an already known track. The method can be used for large rotation angles. The method was verified in laboratory conditions for the test data set and measurement data (field data). The accuracy of the results in the laboratory test is on the level of 10−6 of the input data. This confirmed the correctness of the assumed calculation method. The method is a further development of the author's 2017 Total Free Station (TFS) transformation to several centroids in Hilbert type space. This is the reason why the method is called Multi-Centroid Isometric Transformation (MCIT). MCIT is very fast and enables, by reducing to zero the translation of two relative oblique orthogonal coordinate systems, direct calculation of the exterior orientation of the sensors.

  12. Method of the Determination of Exterior Orientation of Sensors in Hilbert Type Space

    Directory of Open Access Journals (Sweden)

    Grzegorz Stępień

    2018-03-01

    Full Text Available The following article presents a new isometric transformation algorithm based on the transformation in the newly normed Hilbert type space. The presented method is based on so-called virtual translations, already known in advance, of two relative oblique orthogonal coordinate systems (the interior and exterior orientation of sensors) to a common point, known in both systems. Each of the systems is translated along its axis (the systems have common origins) and at the same time the angular relative orientation of both coordinate systems is constant. The translation of both coordinate systems is defined by the spatial norm determining the length of vectors in the new Hilbert type space. As such, the displacement of two relative oblique orthogonal systems is reduced to zero. This makes it possible to directly calculate the rotation matrix of the sensor. The next and final step is the return translation of the system along an already known track. The method can be used for large rotation angles. The method was verified in laboratory conditions for the test data set and measurement data (field data). The accuracy of the results in the laboratory test is on the level of 10−6 of the input data. This confirmed the correctness of the assumed calculation method. The method is a further development of the author's 2017 Total Free Station (TFS) transformation to several centroids in Hilbert type space. This is the reason why the method is called Multi-Centroid Isometric Transformation (MCIT). MCIT is very fast and enables, by reducing to zero the translation of two relative oblique orthogonal coordinate systems, direct calculation of the exterior orientation of the sensors.
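
    The MCIT algorithm itself is specific to the paper, but the role centroids play in relating two coordinate frames can be illustrated with the standard centroid-plus-SVD (Kabsch) rigid alignment shown below. This is the classical method, not MCIT; the point sets, rotation and translation in the example are invented.

```python
import numpy as np

def rigid_transform(points_a, points_b):
    """Classical centroid + SVD (Kabsch) estimate of rotation R and translation t
    with points_b ≈ R @ points_a + t. Points are given as (n, 3) arrays."""
    ca, cb = points_a.mean(axis=0), points_b.mean(axis=0)        # centroids of both sets
    H = (points_a - ca).T @ (points_b - cb)                      # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = cb - R @ ca
    return R, t

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    A = rng.normal(size=(10, 3))
    angle = np.radians(40.0)
    R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                       [np.sin(angle),  np.cos(angle), 0.0],
                       [0.0, 0.0, 1.0]])
    B = A @ R_true.T + np.array([1.0, -2.0, 0.5])
    R, t = rigid_transform(A, B)
    print("rotation error:", np.abs(R - R_true).max(), "translation:", t.round(3))
```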

  13. An effective method for finding special solutions of nonlinear differential equations with variable coefficients

    International Nuclear Information System (INIS)

    Qin Maochang; Fan Guihong

    2008-01-01

    There are many interesting methods that can be utilized to construct special solutions of nonlinear differential equations with constant coefficients. However, most of these methods are not applicable to nonlinear differential equations with variable coefficients. A new method is presented in this Letter, which can be used to find special solutions of nonlinear differential equations with variable coefficients. This method is based on seeking an appropriate Bernoulli equation corresponding to the equation studied. Many well-known equations are chosen to illustrate the application of this method.

  14. Bubble parameters analysis of gas-liquid two-phase sparse bubbly flow based on image method

    International Nuclear Information System (INIS)

    Zhou Yunlong; Zhou Hongjuan; Song Lianzhuang; Liu Qian

    2012-01-01

    The sparse rising bubbles of gas-liquid two-phase flow in a vertical pipe were measured and studied based on an image method. The bubble images were acquired by high-speed video camera systems, and the characteristic parameters of the bubbles were extracted by using image processing techniques. The velocity variation of the rising bubbles was then plotted, as were the area and centroid variation of a single bubble. The parameters and movement law of the bubbles were then analyzed and studied. The test results showed that the bubble parameters were analyzed well by using the image method. (authors)

  15. BAYESIAN DATA AUGMENTATION DOSE FINDING WITH CONTINUAL REASSESSMENT METHOD AND DELAYED TOXICITY

    Science.gov (United States)

    Liu, Suyu; Yin, Guosheng; Yuan, Ying

    2014-01-01

    A major practical impediment when implementing adaptive dose-finding designs is that the toxicity outcome used by the decision rules may not be observed shortly after the initiation of the treatment. To address this issue, we propose the data augmentation continual re-assessment method (DA-CRM) for dose finding. By naturally treating the unobserved toxicities as missing data, we show that such missing data are nonignorable in the sense that the missingness depends on the unobserved outcomes. The Bayesian data augmentation approach is used to sample both the missing data and model parameters from their posterior full conditional distributions. We evaluate the performance of the DA-CRM through extensive simulation studies, and also compare it with other existing methods. The results show that the proposed design satisfactorily resolves the issues related to late-onset toxicities and possesses desirable operating characteristics: treating patients more safely, and also selecting the maximum tolerated dose with a higher probability. The new DA-CRM is illustrated with two phase I cancer clinical trials. PMID:24707327

  16. AN EFFICIENT DATA MINING METHOD TO FIND FREQUENT ITEM SETS IN LARGE DATABASE USING TR- FCTM

    Directory of Open Access Journals (Sweden)

    Saravanan Suba

    2016-01-01

    Full Text Available Mining association rules in large databases is one of the most popular data mining techniques for business decision makers. Discovering frequent itemsets is the core process in association rule mining. Numerous algorithms are available in the literature to find frequent patterns. Apriori and FP-tree are the most common methods for finding frequent items. Apriori finds significant frequent items using candidate generation with a larger number of database scans. FP-tree uses two database scans to find significant frequent items without using candidate generation. The proposed TR-FCTM (Transaction Reduction-Frequency Count Table Method) discovers significant frequent items by generating full candidates once to form a frequency count table with one database scan. Experimental results of TR-FCTM show that this algorithm outperforms Apriori and FP-tree.
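
    For context, a plain level-wise Apriori search, the baseline that TR-FCTM is compared against rather than TR-FCTM itself, can be written compactly. The toy transactions and the minimum support are illustrative.

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Plain level-wise Apriori: returns {itemset: support_count} for frequent itemsets."""
    transactions = [frozenset(t) for t in transactions]
    # Level 1: frequent single items.
    counts = {}
    for t in transactions:
        for item in t:
            counts[frozenset([item])] = counts.get(frozenset([item]), 0) + 1
    frequent = {s: c for s, c in counts.items() if c >= min_support}
    result, k = dict(frequent), 2
    while frequent:
        # Candidate generation from the items of frequent (k-1)-itemsets, pruned by
        # requiring every (k-1)-subset to be frequent.
        items = sorted({i for s in frequent for i in s})
        candidates = [frozenset(c) for c in combinations(items, k)
                      if all(frozenset(sub) in frequent for sub in combinations(c, k - 1))]
        counts = {c: sum(1 for t in transactions if c <= t) for c in candidates}
        frequent = {s: c for s, c in counts.items() if c >= min_support}
        result.update(frequent)
        k += 1
    return result

if __name__ == "__main__":
    baskets = [{"bread", "milk"}, {"bread", "butter", "milk"},
               {"beer", "bread"}, {"milk", "butter"}, {"bread", "milk", "butter"}]
    for itemset, support in sorted(apriori(baskets, min_support=2).items(),
                                   key=lambda kv: -kv[1]):
        print(set(itemset), support)
```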

  17. Measuring the Alfvénic nature of the interstellar medium: Velocity anisotropy revisited

    International Nuclear Information System (INIS)

    Burkhart, Blakesley; Lazarian, A.; Leão, I. C.; De Medeiros, J. R.; Esquivel, A.

    2014-01-01

    The dynamics of the interstellar medium (ISM) are strongly affected by turbulence, which shows increased anisotropy in the presence of a magnetic field. We expand upon the Esquivel and Lazarian method to estimate the Alfvén Mach number using the structure function anisotropy in velocity centroid data from Position-Position-Velocity maps. We utilize three-dimensional magnetohydrodynamic simulations of fully developed turbulence, with a large range of sonic and Alfvénic Mach numbers, to produce synthetic observations of velocity centroids with observational characteristics such as thermal broadening, cloud boundaries, noise, and radiative transfer effects of carbon monoxide. In addition, we investigate how the resulting anisotropy-Alfvén Mach number dependency found in Esquivel and Lazarian might change when taking the second moment of the Position-Position-Velocity cube or when using different expressions to calculate the velocity centroids. We find that the degree of anisotropy is related primarily to the magnetic field strength (i.e., Alfvén Mach number) and the line-of-sight orientation, with a secondary effect of the sonic Mach number. If the line of sight is parallel to, or up to ≈45 deg off of, the mean field direction, the velocity centroid anisotropy is not prominent enough to distinguish different Alfvénic regimes. The observed anisotropy is not strongly affected by including radiative transfer, although future studies should include additional tests for opacity effects. These results open up the possibility of studying the magnetic nature of the ISM using statistical methods in addition to existing observational techniques.

  18. Using Machine Learning Methods Jointly to Find Better Set of Rules in Data Mining

    Directory of Open Access Journals (Sweden)

    SUG Hyontai

    2017-01-01

    Full Text Available Rough set-based data mining algorithms are among the widely accepted machine learning technologies because of their strong mathematical background and their capability of finding optimal rules based only on the given data sets, without room for prejudiced views to be imposed on the data. But, because the algorithms find rules very precisely, we may confront the overfitting problem. On the other hand, association rule algorithms find rules of association, where the association resides between sets of items in a database. The algorithms find itemsets that occur more than a given minimum support, so that they can find the itemsets in reasonable time in practice, even for very large databases, by supplying the minimum support appropriately. In order to overcome the overfitting problem in rough set-based algorithms, we first find large itemsets, and after that we select attributes that cover the large itemsets. By using the selected attributes only, we may find a better set of rules based on rough set theory. Results from experiments support our suggested method.

  19. Self-consistent study of space-charge-dominated beams in a misaligned transport system

    International Nuclear Information System (INIS)

    Sing Babu, P.; Goswami, A.; Pandit, V.S.

    2013-01-01

    A self-consistent particle-in-cell (PIC) simulation method is developed to investigate the dynamics of space-charge-dominated beams through a misaligned solenoid-based transport system. The evolution of the beam centroid, beam envelope and emittance is studied as a function of the misalignment parameters for various types of beam distributions. Simulation results for proton beam currents up to 40 mA indicate that the centroid oscillations induced by the displacement and rotational misalignments of the solenoids do not depend on the beam distribution. It is shown that the beam envelope around the centroid is independent of the centroid motion for small centroid oscillations. In addition, we have estimated the loss of beam during transport caused by the misalignment for various beam distributions.

  20. Analysis of fatigue resistance of continuous and non-continuous welded rectangular frame intersections by finite element method

    International Nuclear Information System (INIS)

    McCoy, M. L.; Moradi, R.; Lankarani, H. M.

    2011-01-01

    Agricultural and construction equipment are commonly implemented with rectangular tubing in their structural frame designs. A typical joining method to fabricate these frames is by welding, with ancillary structural plating at the connections. This allows two continuous members to pass through an intersection point of the frame with some degree of connectivity, but the connections are highly unbalanced as the tubing centroids exhibit asymmetry. Due to the practice of welding continuous member frame intersections in current agricultural equipment designs, a conviction may exist that welded continuous member frames are superior in structural strength to frame intersections implementing welded non-continuous members in which the tubing centroids lie within two planes of symmetry, a connection design that would likely produce a more fatigue resistant structural frame. Three types of welded continuous tubing frame intersections currently observed in the designs of agricultural equipment were compared to two non-continuous frame intersection designs. Each design was subjected to the same loading condition and then examined for stress levels using the Finite Element Method to predict fatigue life. Results demonstrated that a lighter weight, non-continuous member frame intersection design was two orders of magnitude superior in fatigue resistance to some currently implemented frame designs when using Stress-Life fatigue prediction methods and empirical fatigue strengths for fillet welds. Stress-Life predictions were also made using theoretical fatigue strength calculations at the welds for comparison to the empirically derived weld fatigue strength.

  1. Study of position resolution for cathode readout MWPC with measurement of induced charge distribution

    International Nuclear Information System (INIS)

    Chiba, J.; Iwasaki, H.; Kageyama, T.; Kuribayashi, S.; Nakamura, K.; Sumiyoshi, T.; Takeda, T.

    1983-01-01

    A readout technique for multiwire proportional chambers based on measuring the charges induced on cathode strips orthogonal to the anode wires requires an algorithm to relate the measured charge distribution to the avalanche position. With given chamber parameters and under the influence of noise, resolution limits depend on the chosen algorithm. We have studied the position resolution obtained by the centroid method and by the charge-ratio method, both using three consecutive cathode strips. While the centroid method uses a single number, the center of gravity of the measured charges, the charge-ratio method uses the ratios of the charges Qsub(i-1)/Qsub(i) and Qsub(i+1)/Qsub(i), where Qsub(i) is the largest. To obtain a given resolution, the charge-ratio method generally allows wider cathode strips and therefore a smaller number of readout channels than the centroid method. (orig.)
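
    As a concrete illustration of the two estimators compared above, the following sketch computes the three-strip centre-of-gravity position and the two charge ratios from the charge on the largest strip and its neighbours; the function names, strip pitch and example charges are hypothetical.

```python
def three_strip_centroid(q_prev, q_max, q_next, pitch=1.0):
    """Center-of-gravity position (in strip-pitch units, relative to the centre
    of the largest-charge strip) from three consecutive cathode strip charges."""
    return pitch * (q_next - q_prev) / (q_prev + q_max + q_next)

def charge_ratios(q_prev, q_max, q_next):
    """The two ratios used by the charge-ratio method, Q(i-1)/Q(i) and Q(i+1)/Q(i),
    where Q(i) is the largest of the three charges."""
    return q_prev / q_max, q_next / q_max

# Example: an avalanche slightly to the right of the centre of strip i.
print(three_strip_centroid(30.0, 100.0, 55.0))   # positive offset in pitch units
print(charge_ratios(30.0, 100.0, 55.0))
```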

  2. Measurement of the inertial properties of the Helios F-1 spacecraft

    Science.gov (United States)

    Gayman, W. H.

    1975-01-01

    A gravity pendulum method of measuring lateral moments of inertia of large structures with an error of less than 1% is outlined. The method is based on the fact that, in a physical pendulum with a knife-edge support, the distance from the axis of rotation to the system center of gravity that minimizes the period of oscillation is equal to the system's centroidal radius of gyration. The method is applied to results of a test procedure in which the Helios F-1 spacecraft was placed in a roll fixture with crossed flexure pivots as elastic constraints and system oscillation measurements were made with each of a set of added moment-of-inertia increments. Equations of motion are derived with allowance for the effect of the finite pivot radius and an error analysis is carried out to find the criterion for maximum accuracy in determining the square of the centroidal radius of gyration. The test procedure allows all measurements to be made with the specimen in the upright position.
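
    For reference, the compound-pendulum relation behind this statement can be written out; the following standard expressions (not taken from the paper) are for a rigid body with centroidal radius of gyration k swinging about a knife edge a distance d from its center of gravity:

```latex
% Compound pendulum about a knife edge a distance d from the center of gravity,
% with centroidal radius of gyration k (standard relations, not taken from the paper):
T(d) \;=\; 2\pi\sqrt{\frac{k^{2}+d^{2}}{g\,d}},
\qquad
\frac{dT}{dd}=0 \;\Longrightarrow\; d = k,
\qquad
T_{\min} \;=\; 2\pi\sqrt{\frac{2k}{g}} .
```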

  3. CT findings of pancreatic carcinoma. Evaluation with the combined method of early enhancement CT and high dose enhancement CT

    International Nuclear Information System (INIS)

    Itoh, Shigeki; Endo, Tokiko; Isomura, Takayuki; Ishigaki, Takeo; Ikeda, Mitsuru; Senda, Kouhei.

    1995-01-01

    Computed tomographic (CT) findings of pancreatic ductal adenocarcinoma were studied with the combined method of early enhancement CT and high dose enhancement CT in 72 carcinomas. Common findings were changes in pancreatic contour, abnormal attenuation within the tumor and dilatation of the main pancreatic duct. The incidence of abnormal attenuation and dilatation of the main pancreatic duct and bile duct was constant regardless of tumor size. The finding of hypoattenuation at early enhancement CT was most useful for demonstrating a carcinoma. However, this finding was negative in ten cases, five of which showed inhomogeneous hyperattenuation at high dose enhancement CT. The detection of change in pancreatic contour and dilatation of the main pancreatic duct was most frequent at high dose enhancement CT. The finding of change in pancreatic contour and/or abnormal attenuation in a tumor could be detected in 47 cases at plain CT, 66 at early enhancement CT and 65 at high dose enhancement CT. Since the four cases in which neither finding was detected by any CT method all showed a dilated main pancreatic duct, there was no case without abnormal CT findings. This combined CT method will be a reliable diagnostic technique in the imaging of pancreatic carcinoma. (author)

  4. Suppressing carrier removal error in the Fourier transform method for interferogram analysis

    International Nuclear Information System (INIS)

    Fan, Qi; Yang, Hongru; Li, Gaoping; Zhao, Jianlin

    2010-01-01

    A new carrier removal method for interferogram analysis using the Fourier transform is presented. The proposed method can be used to suppress the carrier removal error as well as the spectral leakage error. First, the carrier frequencies are estimated from the spectral centroid of the upper sidelobe of the apodized interferogram, and then the sidelobe is shifted to the origin in the frequency domain by multiplying the original interferogram by a constructed plane reference wave. Errors caused by carrier frequencies that are not integer multiples of the frequency interval, and by the window function used to apodize the interferogram, are thereby avoided. The simulation and experimental results show that this method is effective for phase measurement with high accuracy from a single interferogram
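
    A one-dimensional toy version of the procedure (the paper treats two-dimensional interferograms) can make the two steps concrete: estimate the carrier from the spectral centroid of the sidelobe of the apodized signal, then multiply by a constructed plane reference wave to shift that sidelobe to the origin. All signal parameters below are invented.

```python
import numpy as np

N = 1024
x = np.arange(N)
f0 = 37.3 / N                                    # carrier: not an integer number of bins
phase = 1.5 * np.sin(2 * np.pi * 3 * x / N)      # slowly varying phase to be recovered
interferogram = 2.0 + np.cos(2 * np.pi * f0 * x + phase)

# Step 1: apodize and estimate the carrier as the spectral centroid of the sidelobe,
# rather than taking the nearest FFT bin.
spectrum = np.fft.fft(interferogram * np.hanning(N))
freqs = np.fft.fftfreq(N)
mag = np.abs(spectrum[: N // 2])
peak = int(np.argmax(mag[1:])) + 1               # skip the DC term
lobe = slice(max(peak - 8, 1), peak + 9)
f0_est = np.sum(freqs[lobe] * mag[lobe]) / np.sum(mag[lobe])

# Step 2: multiply by a constructed plane reference wave to shift the sidelobe
# to the origin, low-pass around DC and take the unwrapped argument as the phase.
shifted = interferogram * np.exp(-1j * 2 * np.pi * f0_est * x)
spec2 = np.fft.fft(shifted)
lowpass = np.abs(freqs) < 10.0 / N
phase_est = np.unwrap(np.angle(np.fft.ifft(spec2 * lowpass)))
print(f0_est, f0)                                # carrier estimate vs. true carrier
```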

  5. Combined effect of carnosol, rosmarinic acid and thymol on the oxidative stability of soybean oil using a simplex centroid mixture design.

    Science.gov (United States)

    Saoudi, Salma; Chammem, Nadia; Sifaoui, Ines; Jiménez, Ignacio A; Lorenzo-Morales, Jacob; Piñero, José E; Bouassida-Beji, Maha; Hamdi, Moktar; L Bazzocchi, Isabel

    2017-08-01

    Oxidation taking place during the use of oil leads to the deterioration of both nutritional and sensorial qualities. Natural antioxidants from herbs and plants are rich in phenolic compounds and could therefore be more efficient than synthetic ones in preventing lipid oxidation reactions. This study was aimed at the valorization of Tunisian aromatic plants and their active compounds as new sources of natural antioxidant preventing oil oxidation. Carnosol, rosmarinic acid and thymol were isolated from Rosmarinus officinalis and Thymus capitatus by column chromatography and were analyzed by nuclear magnetic resonance. Their antioxidant activities were measured by DPPH, ABTS and FRAP assays. These active compounds were added to soybean oil in different proportions using a simplex-centroid mixture design. Antioxidant activity and oxidative stability of oils were determined before and after 20 days of accelerated oxidation at 60 °C. Results showed that bioactive compounds are effective in maintaining oxidative stability of soybean oil. However, the binary interaction of rosmarinic acid and thymol caused a reduction in antioxidant activity and oxidative stability of soybean oil. Optimum conditions for maximum antioxidant activity and oxidative stability were found to be an equal ternary mixture of carnosol, rosmarinic acid and thymol. © 2016 Society of Chemical Industry.

  6. Methods for Finding Legacy Wells in Large Areas

    Energy Technology Data Exchange (ETDEWEB)

    Hammack, Richard W. [National Energy Technology Lab. (NETL), Pittsburgh, PA, (United States); Veloski, Garret A. [National Energy Technology Lab. (NETL), Pittsburgh, PA, (United States); Hodges, D. Greg [Fugro Airborne Surveys, Mississauga, ON (Canada); White, Jr., Curt M. [National Energy Technology Lab. (NETL), Pittsburgh, PA, (United States)

    2016-06-16

    More than 10 million wells have been drilled during 150 years of oil and gas production in the United States. When abandoned, many wells were not adequately sealed and now provide a potential conduit for the vertical movement of liquids and gases. Today, groundwater aquifers can be contaminated by surface pollutants flowing down wells or by deep, saline water diffusing upwards. Likewise, natural gas, carbon dioxide (CO2), or radon can travel upwards via these wells to endanger structures or human health on the surface. Recently, the need to find and plug wells has become critical with the advent of carbon dioxide injection into geologic formations for enhanced oil recovery (EOR) or carbon storage. The potential for natural gas or brine leakage through existing wells has also been raised as a concern in regions where shale resources are hydraulically fractured for hydrocarbon recovery. In this study, the National Energy Technology Laboratory (NETL) updated existing, effective well finding techniques to be able to survey large areas quickly using helicopter or ground-vehicle-mounted magnetometers, combined with mobile methane detection. For this study, magnetic data were collected using airborne and ground vehicles equipped with two boom-mounted magnetometers, or on foot using a hand-held magnetometer with a single sensor. Data processing techniques were employed to accentuate well-casing-type magnetic signatures. To locate wells with no magnetic signature (wells where the steel well casing had been removed), the team monitored for anomalous concentrations of methane, which could indicate migration of volatile compounds from deeper sedimentary strata along a well or fracture pathway. Methane measurements were obtained using the ALPIS DIfferential Absorption Lidar (DIAL) sensor for helicopter surveys and the Apogee leak detection system (LDS) for ground surveys. These methods were evaluated at a 100-year-old oilfield in Wyoming, where a helicopter magnetic survey accurately located 93% of visible wells. In addition, 20% of the wells found by the survey were

  7. Methods for Finding Legacy Wells in Large Areas

    Energy Technology Data Exchange (ETDEWEB)

    Hammack, Richard [National Energy Technology Lab. (NETL), Pittsburgh, PA, (United States); Veloski, Garret [National Energy Technology Lab. (NETL), Pittsburgh, PA, (United States); Hodges, D. Greg [Fugro Airborne Surveys, Mississauga, ON (Canada); White, Jr., Charles E. [National Energy Technology Lab. (NETL), Pittsburgh, PA, (United States)

    2016-06-16

    More than 10 million wells have been drilled during 150 years of oil and gas production in the United States. When abandoned, many wells were not adequately sealed and now provide a potential conduit for the vertical movement of liquids and gases. Today, groundwater aquifers can be contaminated by surface pollutants flowing down wells or by deep, saline water diffusing upwards. Likewise, natural gas, carbon dioxide (CO2), or radon can travel upwards via these wells to endanger structures or human health on the surface. Recently, the need to find and plug wells has become critical with the advent of carbon dioxide injection into geologic formations for enhanced oil recovery (EOR) or carbon storage. The potential for natural gas or brine leakage through existing wells has also been raised as a concern in regions where shale resources are hydraulically fractured for hydrocarbon recovery. In this study, the National Energy Technology Laboratory (NETL) updated existing, effective well finding techniques to be able to survey large areas quickly using helicopter or ground-vehicle-mounted magnetometers, combined with mobile methane detection. For this study, magnetic data were collected using airborne and ground vehicles equipped with two boom-mounted magnetometers, or on foot using a hand-held magnetometer with a single sensor. Data processing techniques were employed to accentuate well-casing-type magnetic signatures. To locate wells with no magnetic signature (wells where the steel well casing had been removed), the team monitored for anomalous concentrations of methane, which could indicate migration of volatile compounds from deeper sedimentary strata along a well or fracture pathway. Methane measurements were obtained using the ALPIS DIfferential Absorption Lidar (DIAL) sensor for helicopter surveys and the Apogee leak detection system (LDS) for ground surveys. These methods were evaluated at a 100-year-old oilfield in Wyoming, where a helicopter magnetic

  8. Description and pilot results from a novel method for evaluating return of incidental findings from next-generation sequencing technologies.

    Science.gov (United States)

    Goddard, Katrina A B; Whitlock, Evelyn P; Berg, Jonathan S; Williams, Marc S; Webber, Elizabeth M; Webster, Jennifer A; Lin, Jennifer S; Schrader, Kasmintan A; Campos-Outcalt, Doug; Offit, Kenneth; Feigelson, Heather Spencer; Hollombe, Celine

    2013-09-01

    The aim of this study was to develop, operationalize, and pilot test a transparent, reproducible, and evidence-informed method to determine when to report incidental findings from next-generation sequencing technologies. Using evidence-based principles, we proposed a three-stage process. Stage I "rules out" incidental findings below a minimal threshold of evidence and is evaluated using inter-rater agreement and comparison with an expert-based approach. Stage II documents criteria for clinical actionability using a standardized approach to allow experts to consistently consider and recommend whether results should be routinely reported (stage III). We used expert opinion to determine the face validity of stages II and III using three case studies. We evaluated the time and effort for stages I and II. For stage I, we assessed 99 conditions and found high inter-rater agreement (89%), and strong agreement with a separate expert-based method. Case studies for familial adenomatous polyposis, hereditary hemochromatosis, and α1-antitrypsin deficiency were all recommended for routine reporting as incidental findings. The method requires a definition of clinically actionable incidental findings, and we provide documentation and pilot testing of a feasible method that is scalable to the whole genome.

  9. A hybrid method for accurate star tracking using star sensor and gyros.

    Science.gov (United States)

    Lu, Jiazhen; Yang, Lie; Zhang, Hao

    2017-10-01

    Star tracking is the primary operating mode of star sensors. To improve tracking accuracy and efficiency, a hybrid method using a star sensor and gyroscopes is proposed in this study. In this method, the dynamic conditions of an aircraft are determined first by the estimated angular acceleration. Under low dynamic conditions, the star sensor is used to measure the star vector and the vector difference method is adopted to estimate the current angular velocity. Under high dynamic conditions, the angular velocity is obtained by the calibrated gyros. The star position is predicted based on the estimated angular velocity and calibrated gyros using the star vector measurements. The results of the semi-physical experiment show that this hybrid method is accurate and feasible. In contrast with the star vector difference and gyro-assisted methods, the star position prediction result of the hybrid method is verified to be more accurate in two different cases under the given random noise of the star centroid.

  10. Hose-Modulation Instability of Laser Pulses in Plasmas

    International Nuclear Information System (INIS)

    Sprangle, P.; Krall, J.; Esarey, E.

    1994-01-01

    A laser pulse propagating in a uniform plasma or a preformed plasma density channel is found to undergo a combination of hose and modulation instabilities, provided the pulse centroid has an initial tilt. Coupled equations for the laser centroid and envelope are derived and solved for a finite-length laser pulse. Significant coupling between the centroid and the envelope, harmonic generation in the envelope, and strong modification of the wake field can occur. Methods to reduce the growth rate of the laser hose instability are demonstrated

  11. Multiresponse optimisation on biodiesel obtained through a ternary mixture of vegetable oil and animal fat: Simplex-centroid mixture design application

    International Nuclear Information System (INIS)

    Orives, Juliane Resges; Galvan, Diego; Coppo, Rodolfo Lopes; Rodrigues, Cezar Henrique Furtoso; Angilelli, Karina Gomes; Borsato, Dionísio

    2014-01-01

    Highlights: • A mixture experimental design was used, which allowed various responses to be evaluated. • A predictive equation is presented that allows the behaviour of the mixtures to be verified. • The results show that the biodiesel obtained did not require any additives. - Abstract: The quality of biodiesel is a determining factor in its commercialisation, and parameters such as the Cold Filter Plugging Point (CFPP) and Induction Period (IP) determine its operability in engines on cold days and its storage time, respectively. These factors are important in the characterisation of the final product. A B100 biodiesel formulation was developed using a multiresponse optimisation in which the CFPP and cost were minimised and the IP and yield were maximised. The experiments were carried out according to a simplex-centroid mixture design using soybean oil, beef tallow, and poultry fat. The optimum formulation consisted of 50% soybean oil, 20% beef tallow, and 30% poultry fat and had a CFPP of 1.92 °C, a raw material cost of US$ 903.87 ton-1, an IP of 8.28 h, and a yield of 95.68%. Validation was performed in triplicate and the t-test indicated no difference between the estimated and experimental values for any of the dependent variables, thus indicating the efficiency of the joint optimisation of the biodiesel production process, which met the criteria for CFPP and IP as well as high yield and low cost
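
    For reference, the runs of a simplex-centroid design are simply all non-empty subsets of the components blended in equal proportions (7 runs for three components: three pure blends, three binary 50:50 blends and the ternary centroid). The sketch below generates them, using the component names from the abstract purely as labels.

```python
from itertools import combinations

def simplex_centroid_design(component_names):
    """All 2^q - 1 blends of a simplex-centroid design: every non-empty subset
    of components mixed in equal proportions."""
    q = len(component_names)
    points = []
    for size in range(1, q + 1):
        for subset in combinations(range(q), size):
            blend = [1.0 / size if i in subset else 0.0 for i in range(q)]
            points.append(dict(zip(component_names, blend)))
    return points

# Component names taken from the abstract above, used only as labels.
for run in simplex_centroid_design(["soybean oil", "beef tallow", "poultry fat"]):
    print(run)
```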

  12. Stochastic sampling of the RNA structural alignment space.

    Science.gov (United States)

    Harmanci, Arif Ozgun; Sharma, Gaurav; Mathews, David H

    2009-07-01

    A novel method is presented for predicting the common secondary structures and alignment of two homologous RNA sequences by sampling the 'structural alignment' space, i.e. the joint space of their alignments and common secondary structures. The structural alignment space is sampled according to a pseudo-Boltzmann distribution based on a pseudo-free energy change that combines base pairing probabilities from a thermodynamic model and alignment probabilities from a hidden Markov model. By virtue of the implicit comparative analysis between the two sequences, the method offers an improvement over single sequence sampling of the Boltzmann ensemble. A cluster analysis shows that the samples obtained from joint sampling of the structural alignment space cluster more closely than samples generated by the single sequence method. On average, the representative (centroid) structure and alignment of the most populated cluster in the sample of structures and alignments generated by joint sampling are more accurate than single sequence sampling and alignment based on sequence alone, respectively. The 'best' centroid structure that is closest to the known structure among all the centroids is, on average, more accurate than structure predictions of other methods. Additionally, cluster analysis identifies, on average, a few clusters, whose centroids can be presented as alternative candidates. The source code for the proposed method can be downloaded at http://rna.urmc.rochester.edu.

  13. APLIKASI METODE-METODE AGGLOMERATIVE DALAM ANALISIS KLASTER PADA DATA TINGKAT POLUSI UDARA = Application of agglomerative methods in cluster analysis of air pollution level data

    Directory of Open Access Journals (Sweden)

    Dewi Rachmatin

    2014-09-01

    Full Text Available Cluster analysis groups data on the basis of information found in the data. The objective of cluster analysis is that objects within one group are similar to one another, while objects in different groups differ. Cluster analysis is divided into two families of methods: hierarchical and non-hierarchical. Hierarchical methods are in turn divided into agglomerative (merging) and divisive (splitting) methods. The agglomerative methods include the Single Linkage Method, Complete Linkage Method, Average Linkage Method, Ward's Method, Centroid Method and Median Method. This article discusses these agglomerative methods as applied to air pollution level data; each method yields a different number of clusters. Keywords: Cluster Analysis, Single Linkage Method, Complete Linkage Method, Average Linkage Method, Ward's Method, Centroid Method, Median Method.
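
    A compact way to try all six agglomerative criteria named above is SciPy's hierarchical clustering routines; the sketch below uses a random stand-in for the air pollution table, so the data and the choice of three clusters are purely illustrative.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# Stand-in for an air-pollution table (rows = monitoring sites, columns = pollutant levels).
data = rng.normal(size=(30, 4))

# The six agglomerative criteria named in the abstract; 'centroid', 'median' and
# 'ward' assume Euclidean distances, which is what SciPy computes from raw observations.
for method in ["single", "complete", "average", "ward", "centroid", "median"]:
    Z = linkage(data, method=method)
    labels = fcluster(Z, t=3, criterion="maxclust")   # cut the tree into 3 clusters
    print(method, np.bincount(labels)[1:])            # cluster sizes per method
```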

  14. The Shortlist Method for fast computation of the Earth Mover's Distance and finding optimal solutions to transportation problems.

    Science.gov (United States)

    Gottschlich, Carsten; Schuhmacher, Dominic

    2014-01-01

    Finding solutions to the classical transportation problem is of great importance, since this optimization problem arises in many engineering and computer science applications. Especially the Earth Mover's Distance is used in a plethora of applications ranging from content-based image retrieval, shape matching, fingerprint recognition, object tracking and phishing web page detection to computing color differences in linguistics and biology. Our starting point is the well-known revised simplex algorithm, which iteratively improves a feasible solution to optimality. The Shortlist Method that we propose substantially reduces the number of candidates inspected for improving the solution, while at the same time balancing the number of pivots required. Tests on simulated benchmarks demonstrate a considerable reduction in computation time for the new method as compared to the usual revised simplex algorithm implemented with state-of-the-art initialization and pivot strategies. As a consequence, the Shortlist Method facilitates the computation of large scale transportation problems in viable time. In addition we describe a novel method for finding an initial feasible solution which we coin Modified Russell's Method.
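
    For orientation, the transportation problem underlying the Earth Mover's Distance can be written as a small linear program; the sketch below solves a toy balanced instance with SciPy's generic LP solver rather than the revised simplex variant or the Shortlist Method described in the paper, and all supplies, demands and costs are invented.

```python
import numpy as np
from scipy.optimize import linprog

# Toy balanced transportation problem: 3 supplies, 4 demands, cost per unit shipped.
supply = np.array([20.0, 30.0, 25.0])
demand = np.array([10.0, 25.0, 20.0, 20.0])
cost = np.array([[8.0, 6.0, 10.0, 9.0],
                 [9.0, 12.0, 13.0, 7.0],
                 [14.0, 9.0, 16.0, 5.0]])
m, n = cost.shape

# Equality constraints: each supply fully shipped, each demand fully met.
A_eq = np.zeros((m + n, m * n))
for i in range(m):
    A_eq[i, i * n:(i + 1) * n] = 1.0          # row sums equal the supplies
for j in range(n):
    A_eq[m + j, j::n] = 1.0                   # column sums equal the demands
b_eq = np.concatenate([supply, demand])

res = linprog(cost.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
print(res.fun)                                # minimal transport cost
print(res.x.reshape(m, n))                    # optimal shipping plan
```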

  15. Toward regional corrections of long period CMT inversions using InSAR

    Science.gov (United States)

    Shakibay Senobari, N.; Funning, G.; Ferreira, A. M.

    2017-12-01

    One of InSAR's main strengths, with respect to other methods of studying earthquakes, is finding the accurate location of the best point source (or 'centroid') for an earthquake. While InSAR data have great advantages for study of shallow earthquakes, the number of earthquakes for which we have InSAR data is low, compared with the number of earthquakes recorded seismically. And though improvements to SAR satellite constellations have enhanced the use of InSAR data during earthquake response, post-event data still have a latency on the order of days. On the other hand, earthquake centroid inversion methods using long period seismic data (e.g. the Global CMT method) are fast but include errors caused by inaccuracies in both the Earth velocity model and in wave propagation assumptions (e.g. Hjörleifsdóttir and Ekström, 2010; Ferreira and Woodhouse, 2006). Here we demonstrate a method that combines the strengths of both methods, calculating regional travel-time corrections for long-period waveforms using accurate centroid locations from InSAR, then applying these to other events that occur in the same region. Our method is based on the observation that synthetic seismograms produced from InSAR source models and locations match the data very well except for some phase shifts (travel time biases) between the two waveforms, likely corresponding to inaccuracies in Earth velocity models (Weston et al., 2014). Our previous work shows that adding such phase shifts to the Green's functions can improve the accuracy of long period seismic CMT inversions by reducing tradeoffs between the moment tensor components and centroid location (e.g. Shakibay Senobari et al., AGU Fall Meeting 2015). Preliminary work on several pairs of neighboring events (e.g. Landers-Hector Mine, the 2000 South Iceland earthquake sequences) shows consistent azimuthal patterns of these phase shifts for nearby events at common stations. These phase shift patterns strongly suggest that it is possible to

  16. A simple method for finding explicit analytic transition densities of diffusion processes with general diploid selection.

    Science.gov (United States)

    Song, Yun S; Steinrücken, Matthias

    2012-03-01

    The transition density function of the Wright-Fisher diffusion describes the evolution of population-wide allele frequencies over time. This function has important practical applications in population genetics, but finding an explicit formula under a general diploid selection model has remained a difficult open problem. In this article, we develop a new computational method to tackle this classic problem. Specifically, our method explicitly finds the eigenvalues and eigenfunctions of the diffusion generator associated with the Wright-Fisher diffusion with recurrent mutation and arbitrary diploid selection, thus allowing one to obtain an accurate spectral representation of the transition density function. Simplicity is one of the appealing features of our approach. Although our derivation involves somewhat advanced mathematical concepts, the resulting algorithm is quite simple and efficient, only involving standard linear algebra. Furthermore, unlike previous approaches based on perturbation, which is applicable only when the population-scaled selection coefficient is small, our method is nonperturbative and is valid for a broad range of parameter values. As a by-product of our work, we obtain the rate of convergence to the stationary distribution under mutation-selection balance.

  17. Ship detection in Sentinel-1 imagery using the h-dome transformation

    CSIR Research Space (South Africa)

    Schwegmann, CP

    2015-07-01

    Full Text Available ... is then processed to detect cluster centroids which indicate the ships' positions. The following sections detail this procedure. 3.1. H-dome transform: The H-dome transform is a method for finding local maxima, often used in the medical field for finding sub-cellular structures ... Comparing it to Fig. 2 (d), we notice that the brightest section of the ship can be seen much more clearly in (d). This is due to the property of the H-dome transform to highlight structures not typically visible (such as sub-cellular structures in [8] ...
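
    A brief sketch of the H-dome transform itself (via grayscale morphological reconstruction, as implemented in scikit-image) applied to a synthetic image with two bright, ship-like blobs; the image, the dome height h and the detection threshold are all illustrative and are not taken from the paper.

```python
import numpy as np
from skimage.morphology import reconstruction
from skimage.measure import label, regionprops

# Synthetic "sea" with two bright ship-like blobs (values are illustrative only).
rng = np.random.default_rng(1)
img = rng.normal(0.1, 0.02, size=(128, 128))
img[40:44, 60:70] += 0.8
img[90:93, 20:26] += 0.6

# H-dome transform: subtract the morphological reconstruction of (image - h)
# under the image, leaving only local maxima that rise more than h above their surroundings.
h = 0.3
dome = img - reconstruction(img - h, img, method="dilation")

# Cluster the remaining bright pixels and report their centroids as candidate ship positions.
mask = dome > 0.1
for region in regionprops(label(mask)):
    print(region.centroid)
```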

  18. Deconvolution of gamma energy spectra from NaI (Tl) detector using the Nelder-Mead zero order optimisation method

    International Nuclear Information System (INIS)

    RAVELONJATO, R.H.M.

    2010-01-01

    The aim of this work is to develop a method for gamma-ray spectrum deconvolution from a NaI(Tl) detector. Deconvolution programs written in Matlab 7.6 using the Nelder-Mead method were developed to determine multiplet shape parameters. The simulation parameters were the centroid distance/FWHM ratio, the Signal/Continuum ratio and the counting rate. The test used a synthetic spectrum built with 3σ uncertainty. The tests gave suitable results for a centroid distance/FWHM ratio ≥2, a Signal/Continuum ratio ≥2 and a counting level of 100 counts. The technique was applied to measure the activity of soil and rock samples from the Anosy region. The rock activity varies from (140±8) Bq.kg-1 to (190±17) Bq.kg-1 for potassium-40; from (343±7) Bq.kg-1 to (881±6) Bq.kg-1 for thorium-232 and from (100±3) Bq.kg-1 to (164±4) Bq.kg-1 for uranium-238. The soil activity varies from (148±1) Bq.kg-1 to (652±31) Bq.kg-1 for potassium-40; from (1100±11) Bq.kg-1 to (5700±40) Bq.kg-1 for thorium-232 and from (190±2) Bq.kg-1 to (779±15) Bq.kg-1 for uranium-238. Among the 11 samples, the activity discrepancies compared to a high-resolution HPGe detector vary from 0.62% to 42.86%. The fitting residuals are between -20% and +20%. The Figure of Merit values are around 5%. These results show that the method developed is reliable for this activity range and that convergence is good. Thus, a NaI(Tl) detector combined with the deconvolution method developed may replace an HPGe detector within acceptable limits, if identification of each nuclide in the radioactive series is not required [fr]
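
    As an illustration of this kind of zero-order deconvolution, the sketch below fits a synthetic two-peak multiplet (Gaussian peaks on a flat continuum, with all parameter values invented) using SciPy's Nelder-Mead simplex minimiser rather than the authors' Matlab code.

```python
import numpy as np
from scipy.optimize import minimize

channels = np.arange(200.0)

def doublet(p, x):
    """Two Gaussian peaks on a flat continuum."""
    a1, c1, s1, a2, c2, s2, bkg = p
    return (a1 * np.exp(-0.5 * ((x - c1) / s1) ** 2)
            + a2 * np.exp(-0.5 * ((x - c2) / s2) ** 2) + bkg)

# Synthetic spectrum with Poisson counting noise (centroid distance ~ 2 FWHM).
rng = np.random.default_rng(2)
true = [900.0, 95.0, 5.0, 600.0, 120.0, 5.0, 40.0]
counts = rng.poisson(doublet(true, channels)).astype(float)

# Least-squares objective minimised with the zero-order Nelder-Mead simplex method.
objective = lambda p: np.sum((counts - doublet(p, channels)) ** 2)
start = [800.0, 93.0, 6.0, 500.0, 122.0, 6.0, 30.0]
fit = minimize(objective, start, method="Nelder-Mead",
               options={"maxiter": 20000, "maxfev": 20000})
print(fit.x)          # fitted heights, centroids, widths and continuum level
```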

  19. Evaluation of hierarchical agglomerative cluster analysis methods for discrimination of primary biological aerosol

    Directory of Open Access Journals (Sweden)

    I. Crawford

    2015-11-01

    underestimation of bacterial aerosol concentration by a factor of 5. We suggest that this is likely due to errors arising from misattribution due to poor centroid definition and failure to assign particles to a cluster as a result of the subsampling and comparative attribution method employed by WASP. The methods used here allow for the entire fluorescent population of particles to be analysed, yielding an explicit cluster attribution for each particle and improving cluster centroid definition and our capacity to discriminate and quantify PBAP meta-classes compared to previous approaches.

  20. Finding Maximal Pairs with Bounded Gap

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Lyngsø, Rune B.; Pedersen, Christian N. S.

    1999-01-01

    In this paper we present methods for finding all maximal pairs under various constraints on the gap. In a string of length n we can find all maximal pairs with gap in an upper- and lower-bounded interval in time O(n log n + z), where z is the number of reported pairs. If the upper bound is removed, the time reduces to O(n + z). Since a tandem repeat is a pair where the gap is zero, our methods can be seen as a generalization of finding tandem repeats. The running time of our methods equals the running time of well-known methods for finding tandem repeats.

  1. The extraction of lifetimes of weakly-populated nuclear levels in recoil distance method experiments

    International Nuclear Information System (INIS)

    Kennedy, D.L.; Stuchbery, A.E.; Bolotin, H.H.

    1979-01-01

    Two analytic techniques are described which extend the conventional analysis of recoil-distance method (RDM) data. The first technique utilizes the enhanced counting statistics of the composite spectrum formed by the addition of all γ-ray spectra recorded at the different target-to-stopper distances employed, in order to extract the lifetimes of levels whose observed depopulating γ-ray transitions have insufficient statistics to permit conventional analysis. The second technique analyses peak centroids rather than peak areas to account for contamination by flight distance dependent background. The results from a recent study of the low-lying excited states in 196,198Pt for those levels whose lifetimes could be extracted by conventional RDM analysis are shown to be in good agreement with those obtained using the new methods of analysis

  2. Uni- and multivariate methods applied to studies of phenotypic adaptability in maize (Zea mays L.) = Métodos uni e multivariados aplicados em estudos de adaptabilidade fenotípica em milho (Zea mays L.).

    Directory of Open Access Journals (Sweden)

    Antonio Carlos Gerage

    2011-10-01

    Full Text Available The objective of this study was to evaluate the performance of 15 maize cultivars in seven locations in Paraná State, Brazil. Towards this aim, grain yield trials were conducted during two crop seasons, and the centroid (multivariate) and bissegmented regression (univariate) methods were used to evaluate possible divergences between the results obtained. The genotypes were evaluated in randomized complete blocks with three replications. The centroid method was effective for indicating the productive potential of genotypes, allowing classification of genotypes for both adaptability and stability. Probability values above 0.40 allowed more reliable genotype classification for both adaptability and stability. The STRIKE genotype presented wide adaptability and stability by both the centroid and bissegmented regression methods. The SHS 4040 and CD 306 genotypes were not indicated for planting, considering the tested environments.

  3. Application of clustering methods: Regularized Markov clustering (R-MCL) for analyzing dengue virus similarity

    Science.gov (United States)

    Lestari, D.; Raharjo, D.; Bustamam, A.; Abdillah, B.; Widhianto, W.

    2017-07-01

    Dengue virus consists of 10 different constituent proteins and is classified into 4 major serotypes (DEN 1 - DEN 4). This study was designed to perform clustering of 30 protein sequences of dengue virus taken from the Virus Pathogen Database and Analysis Resource (VIPR) using the Regularized Markov Clustering (R-MCL) algorithm and then to analyze the result. Implemented in Python 3.4, the R-MCL algorithm produces 8 clusters, with more than one centroid in several clusters. The number of centroids indicates the density level of interaction. Protein interactions that are connected in a tissue form a protein complex that serves as a specific biological process unit. The analysis of the result shows that R-MCL clustering groups the dengue virus family according to the similar roles of their constituent proteins, regardless of serotype.
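
    For readers unfamiliar with this algorithm family, the sketch below implements plain Markov Clustering (expansion and inflation on a column-stochastic matrix) on a toy similarity graph; R-MCL adds a regularization step that is not reproduced here, and the graph is only a stand-in for the protein-similarity data.

```python
import numpy as np

def mcl(adjacency, expansion=2, inflation=2.0, iters=100, tol=1e-6):
    """Plain Markov Clustering (MCL) on a symmetric adjacency matrix.
    R-MCL adds a regularization step; this sketch shows only the standard loop."""
    A = adjacency + np.eye(len(adjacency))          # self-loops for numerical stability
    M = A / A.sum(axis=0, keepdims=True)            # column-stochastic matrix
    for _ in range(iters):
        last = M.copy()
        M = np.linalg.matrix_power(M, expansion)    # expansion: spread random-walk flow
        M = M ** inflation                          # inflation: strengthen strong flows
        M = M / M.sum(axis=0, keepdims=True)
        if np.abs(M - last).max() < tol:
            break
    # Read clusters off the attractor rows (rows that keep mass on the diagonal).
    clusters = []
    for i in range(len(M)):
        if M[i, i] > tol:
            members = set(np.flatnonzero(M[i] > tol))
            if members and members not in clusters:
                clusters.append(members)
    return clusters

# Two obvious groups in a toy graph (stand-in for a protein similarity matrix).
adj = np.zeros((6, 6))
for a, b in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    adj[a, b] = adj[b, a] = 1.0
print(mcl(adj))
```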

  4. "Expectations to Change" ((E2C): A Participatory Method for Facilitating Stakeholder Engagement with Evaluation Findings

    Science.gov (United States)

    Adams, Adrienne E.; Nnawulezi, Nkiru A.; Vandenberg, Lela

    2015-01-01

    From a utilization-focused evaluation perspective, the success of an evaluation is rooted in the extent to which the evaluation was used by stakeholders. This paper details the "Expectations to Change" (E2C) process, an interactive, workshop-based method designed to engage primary users with their evaluation findings as a means of…

  5. Cooling as a method of finding topological dislocations in lattice models

    International Nuclear Information System (INIS)

    Gomberoff, K.

    1989-01-01

    It is well known that the O(3) two-dimensional model has configurations with topological charge Q=1 and action Ssub(min)=6.69. Since the exponent characterizing the renormalization-group behavior of this model is 4π, such configurations invalidate the standard scaling behavior of the topological susceptibility. The analogous exponent for the four-dimensional lattice SU(2) gauge model is 10.77. If configurations with Q=1 and S<10.77 existed in this model, they would invalidate the standard scaling behavior of its topological susceptibility. Kremer et al. have calculated the action of different configurations during cooling runs. They report that they do not find any configuration with S<12.7 and Q=1. I show that in the O(3) two-dimensional model cooling runs fail to uncover the well-known configurations with S<8. We conclude that the cooling method is not effective in uncovering the smallest-action configurations in the Q=1 sector

  6. Radiologic findings of sacroiliitis: emphasis on MR findings

    International Nuclear Information System (INIS)

    Yang, Ik; Park, Hai Jung; Lee, Yul; Chung, Soo Young; Park, Jong Ho

    1997-01-01

    To compare the characteristic MR findings of infectious sacroiliitis (IS) and ankylosing spondylitis (AS), we retrospectively reviewed MR findings in eight patients with IS (pyogenic in six, tuberculosis in two) confirmed by culture and clinical follow-up, and in six patients with AS confirmed by HLA-B27 typing. A control group of 13 asymptomatic volunteers was formed, and they underwent MRI. Findings were analysed for morphology, degree of bone erosion, and adjacent soft tissue change. CT findings of AS in four patients and of IS in four were also compared to the MR findings. MR characteristics of IS included unilaterality (100%), abnormal cartilage signal intensity (100%), bone marrow change (100%), contrast enhancement (100%), erosion (63%), and soft tissue change (63%). MR findings of AS showed bilaterality (67%), abnormal cartilage signal intensity (80%), bone marrow change (80%), erosion (80%), contrast enhancement (44%) and soft tissue change (10%). CT scans showed bony sclerosis and erosion (86%), and abnormal joint space (71%). MR findings of sacroiliitis were loss of the thin zone of cartilage and erosions on T1-weighted images, and increased signal intensity on T2-weighted images. MRI is regarded as a useful diagnostic method where conventional diagnosis is difficult, and is able to image cartilage abnormalities directly and noninvasively. Significant differences in MR findings between IS and AS were not noted, however

  7. Methods to Find the Number of Latent Skills

    Science.gov (United States)

    Beheshti, Behzad; Desmarais, Michel C.; Naceur, Rhouma

    2012-01-01

    Identifying the skills that determine success or failure on exercises and question items is a difficult task. Multiple skills may be involved at various degrees of importance, and skills may overlap and correlate. In an effort towards the goal of finding the skills behind a set of items, we investigate two techniques to determine the number of…

  8. Positron annihilation studies in the field induced depletion regions of metal-oxide-semiconductor structures

    Science.gov (United States)

    Asoka-Kumar, P.; Leung, T. C.; Lynn, K. G.; Nielsen, B.; Forcier, M. P.; Weinberg, Z. A.; Rubloff, G. W.

    1992-06-01

    The centroid shifts of positron annihilation spectra are reported from the depletion regions of metal-oxide-semiconductor (MOS) capacitors at room temperature and at 35 K. The centroid shift measurement can be explained using the variation of the electric field strength and depletion layer thickness as a function of the applied gate bias. An estimate for the relevant MOS quantities is obtained by fitting the centroid shift versus beam energy data with a steady-state diffusion-annihilation equation and a derivative-gaussian positron implantation profile. Inadequacy of the present analysis scheme is evident from the derived quantities and alternate methods are required for better predictions.

  9. Positron annihilation studies in the field induced depletion regions of metal-oxide-semiconductor structures

    International Nuclear Information System (INIS)

    Asoka-Kumar, P.; Leung, T.C.; Lynn, K.G.; Nielsen, B.; Forcier, M.P.; Weinberg, Z.A.; Rubloff, G.W.

    1992-01-01

    The centroid shifts of positron annihilation spectra are reported from the depletion regions of metal-oxide-semiconductor (MOS) capacitors at room temperature and at 35 K. The centroid shift measurement can be explained using the variation of the electric field strength and depletion layer thickness as a function of the applied gate bias. An estimate for the relevant MOS quantities is obtained by fitting the centroid shift versus beam energy data with a steady-state diffusion-annihilation equation and a derivative-gaussian positron implantation profile. Inadequacy of the present analysis scheme is evident from the derived quantities and alternate methods are required for better predictions

  10. A new method of spatio-temporal topographic mapping by correlation coefficient of K-means cluster.

    Science.gov (United States)

    Li, Ling; Yao, Dezhong

    2007-01-01

    It would be of the utmost interest to map correlated sources in the working human brain by Event-Related Potentials (ERPs). This work develops a new method to map correlated neural sources based on the time courses of the scalp ERP waveforms. The ERP data are first classified by k-means cluster analysis, and then the Correlation Coefficients (CC) between the original data of each electrode channel and the time course of each cluster centroid are calculated and utilized as the mapping variable on the scalp surface. With a normalized 4-concentric-sphere head model with radius 1, the performance of the method is evaluated on simulated data. The CC between the four simulated sources (s(1)-s(4)) and the estimated cluster centroids (c(1)-c(4)), and the distances (Ds) between the scalp projection points of s(1)-s(4) and those of c(1)-c(4), are utilized as evaluation indexes. Applied to four sources with two of them partially correlated (with maximum mutual CC = 0.4892), the CC (Ds) between s(1)-s(4) and c(1)-c(4) are larger (smaller) than 0.893 (0.108) for the noise levels tested (NSR), with clusters located at the left and right occipital and frontal areas. The estimated vectors of the contra-occipital area demonstrate that attention to the stimulus location produces increased amplitude of the P1 and N1 components over the contra-occipital scalp. The estimated vector in the frontal area displays two large processing negativity waves around 100 ms and 250 ms when subjects are attentive, and a small negative wave around 140 ms and a P300 when subjects are inattentive. The results on simulated and real Visual Evoked Potential (VEP) data demonstrate the validity of the method in mapping correlated sources. This method may be an objective, heuristic and important tool for studying the properties of cerebral neural networks in cognitive and clinical neurosciences.
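
    To make the two-step mapping above concrete, the sketch below clusters toy channel waveforms with k-means and then computes the correlation coefficient of every channel with every cluster-centroid time course; the synthetic ERP array, channel count and number of clusters are all assumptions made for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

# Toy stand-in for ERP data: 32 channels x 300 time samples.
rng = np.random.default_rng(3)
t = np.linspace(0, 0.6, 300)
source = np.exp(-((t - 0.1) / 0.02) ** 2) - 0.5 * np.exp(-((t - 0.3) / 0.05) ** 2)
erp = rng.normal(0, 0.05, size=(32, 300))
erp[:10] += np.outer(rng.uniform(0.5, 1.0, 10), source)   # 10 channels share a source

# Step 1: k-means on the channel waveforms; each cluster centroid is a time course.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(erp)

# Step 2: correlation coefficient between every channel and every cluster centroid;
# these values would then be mapped onto the scalp surface.
cc = np.array([[np.corrcoef(ch, centroid)[0, 1] for centroid in km.cluster_centers_]
               for ch in erp])
print(cc.shape)        # (n_channels, n_clusters)
```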

  11. Development and optimization of a mixed beverage made of whey and water-soluble soybean extract flavored with chocolate using a simplex-centroid design

    Directory of Open Access Journals (Sweden)

    Dóris Faria de OLIVEIRA

    2017-10-01

    Full Text Available This study aimed to combine the nutritional advantages of whey and soybean by developing a chocolate-flavoured beverage made of water-soluble soybean extract dissolved in whey. Different concentrations of thickeners (carrageenan, pectin and starch – maximum level of 500 mg.100 mL-1) were tested by a simplex-centroid design. Several physicochemical, rheological, and sensory properties of the beverages were measured and a multi-response optimization was conducted aiming to obtain a whey and soybean beverage with increased overall sensory impression and maximum purchase intention. Beverages presented mean protein levels higher than 3.1 g.100 mL-1, a low content of lipids (<2 g.100 mL-1) and total soluble solids ≥20 g.100 mL-1. Response surface methodology was applied, and the proposed models for overall impression and purchase intention presented R2=0.891 and R2=0.966, respectively. The desirability index (d-value=0.92) showed that the best formulation should contain 46% carrageenan and 54% pectin. The formulation manufactured with this combination of thickeners was tested; the overall impression was 7.11±1.09 (on a 9-point hedonic scale) and the purchase intention was 4.0±1.3 (on a 5-point hedonic scale), thus showing that the proposed models were predictive.

  12. Additive non-uniform random sampling in superimposed fiber Bragg grating strain gauge

    Science.gov (United States)

    Ma, Y. C.; Liu, H. Y.; Yan, S. B.; Yang, Y. H.; Yang, M. W.; Li, J. M.; Tang, J.

    2013-05-01

    This paper demonstrates an additive non-uniform random sampling and interrogation method for dynamic and/or static strain gauging using the reflection spectrum from two superimposed fiber Bragg gratings (FBGs). The superimposed FBGs are designed to generate non-equidistant spacing of the sensing pulse train in the time domain during dynamic strain measurement. By combining centroid finding with smooth filtering methods, both the interrogation speed and accuracy are improved. A 1.9 kHz dynamic strain is measured by generating an additive, non-uniform, randomly distributed 2 kHz optical sensing pulse train from a mean 500 Hz triangular periodically changing scanning frequency.

  13. A computer program for automatic gamma-ray spectra analysis

    International Nuclear Information System (INIS)

    Hiromura, Kazuyuki

    1975-01-01

    A computer program for automatic analysis of gamma-ray spectra obtained with a Ge(Li) detector is presented. The program includes a method that compares successive values of the experimental data for automatic peak finding, and a least-squares method for peak fitting. The peak shape in the fitting routine is a 'modified Gaussian', which consists of two different Gaussians with the same height joined at the centroid. A quadratic form is chosen as the function representing the background. A maximum of four peaks can be treated by the fitting routine. Some possible improvements are also discussed. (auth.)
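
    The 'modified Gaussian' peak shape described above is easy to write down explicitly; the sketch below is a hedged reconstruction in which the function names and the quadratic-background parameterisation are ours, not the program's.

```python
import numpy as np

def modified_gaussian(x, height, centroid, sigma_left, sigma_right):
    """Two half-Gaussians of equal height joined at the centroid, as described above."""
    sigma = np.where(x < centroid, sigma_left, sigma_right)
    return height * np.exp(-0.5 * ((x - centroid) / sigma) ** 2)

def peak_plus_background(x, height, centroid, sigma_left, sigma_right, b0, b1, b2):
    """One modified-Gaussian peak on a quadratic continuum."""
    return modified_gaussian(x, height, centroid, sigma_left, sigma_right) \
        + b0 + b1 * x + b2 * x ** 2

# Illustrative parameter values only.
x = np.arange(100.0)
print(peak_plus_background(x, 500.0, 48.0, 3.0, 4.0, 20.0, 0.1, 0.0)[45:52])
```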

  14. Correlation Wave-Front Sensing Algorithms for Shack-Hartmann-Based Adaptive Optics using a Point Source

    International Nuclear Information System (INIS)

    Poynee, L A

    2003-01-01

    Shack-Hartmann-based adaptive optics systems with a point-source reference normally use a wave-front sensing algorithm that estimates the centroid (center of mass) of the point-source image 'spot' to determine the wave-front slope. The centroiding algorithm suffers from several weaknesses. For a small number of pixels, the algorithm gain is dependent on spot size. The use of many pixels on the detector leads to significant propagation of read noise. Finally, background light or spot halo aberrations can skew results. In this paper an alternative algorithm that suffers from none of these problems is proposed: correlation of the spot with an ideal reference spot. The correlation method is derived and a theoretical analysis evaluates its performance in comparison with centroiding. Both simulation and data from real AO systems are used to illustrate the results. The correlation algorithm is more robust than centroiding, but requires more computation
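
    The sketch below contrasts the two estimators discussed above on a toy subaperture image: a plain centre-of-mass centroid, which a uniform background biases toward the frame centre, and the location of the peak of the cross-correlation with an ideal reference spot (integer-pixel precision in this sketch). Spot size, background level and offsets are illustrative.

```python
import numpy as np
from scipy.signal import correlate2d

def centroid(spot):
    """Centre-of-mass estimate of the spot position (row, column)."""
    rows, cols = np.indices(spot.shape)
    total = spot.sum()
    return (rows * spot).sum() / total, (cols * spot).sum() / total

def correlation_position(spot, reference):
    """Position of the peak of the cross-correlation with an ideal reference spot."""
    corr = correlate2d(spot, reference, mode="same")
    return np.unravel_index(np.argmax(corr), corr.shape)

# Toy subaperture: a Gaussian spot plus a uniform background that biases the centroid.
yy, xx = np.mgrid[0:16, 0:16]
ref = np.exp(-((yy - 8.0) ** 2 + (xx - 8.0) ** 2) / 4.0)
spot = np.exp(-((yy - 9.3) ** 2 + (xx - 6.7) ** 2) / 4.0) + 0.05
print(centroid(spot))               # pulled toward the frame centre by the background
print(correlation_position(spot, ref))
```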

  15. Reconstruction of brachytherapy seed positions and orientations from cone-beam CT x-ray projections via a novel iterative forward projection matching method

    Energy Technology Data Exchange (ETDEWEB)

    Pokhrel, Damodar; Murphy, Martin J.; Todor, Dorin A.; Weiss, Elisabeth; Williamson, Jeffrey F. [Department of Radiation Oncology, School of Medicine, Virginia Commonwealth University, Richmond, Virginia 23298 (United States)

    2011-01-15

    Purpose: To generalize and experimentally validate a novel algorithm for reconstructing the 3D pose (position and orientation) of implanted brachytherapy seeds from a set of a few measured 2D cone-beam CT (CBCT) x-ray projections. Methods: The iterative forward projection matching (IFPM) algorithm was generalized to reconstruct the 3D pose, as well as the centroid, of brachytherapy seeds from three to ten measured 2D projections. The gIFPM algorithm finds the set of seed poses that minimizes the sum-of-squared-difference of the pixel-by-pixel intensities between computed and measured autosegmented radiographic projections of the implant. Numerical simulations of clinically realistic brachytherapy seed configurations were performed to demonstrate the proof of principle. An in-house machined brachytherapy phantom, which supports precise specification of seed position and orientation at known values for simulated implant geometries, was used to experimentally validate this algorithm. The phantom was scanned on an ACUITY CBCT digital simulator over a full set of 660 sinogram projections. Three to ten x-ray images were selected from the full set of CBCT sinogram projections and postprocessed to create binary seed-only images. Results: In the numerical simulations, seed reconstruction position and orientation errors were approximately 0.6 mm and 5 deg., respectively. The physical phantom measurements demonstrated an absolute positional accuracy of (0.78±0.57) mm or less. The θ and φ angle errors were found to be (5.7±4.9) deg. and (6.0±4.1) deg., respectively, or less when using three projections; with six projections, results were slightly better. The mean registration error was better than 1 mm/6 deg. compared to the measured seed projections. Each test trial converged in 10-20 iterations with computation time of 12-18 min/iteration on a 1 GHz processor. Conclusions: This work describes a novel, accurate, and completely automatic method for reconstructing

  16. Non-Hierarchical Clustering as a method to analyse an open-ended ...

    African Journals Online (AJOL)

    Apple

    Keywords: algebraic thinking; cluster analysis; mathematics education; quantitative analysis. ... C1, C2 and C3 represent the three centroids of the three clusters formed. ...

  17. A New Soft Computing Method for K-Harmonic Means Clustering.

    Science.gov (United States)

    Yeh, Wei-Chang; Jiang, Yunzhi; Chen, Yee-Fen; Chen, Zhe

    2016-01-01

    The K-harmonic means clustering algorithm (KHM) is a new clustering method used to group data such that the sum of the harmonic averages of the distances between each entity and all cluster centroids is minimized. Because it is less sensitive to initialization than K-means (KM), many researchers have recently been attracted to studying KHM. In this study, the proposed iSSO-KHM is based on an improved simplified swarm optimization (iSSO) and integrates a variable neighborhood search (VNS) for KHM clustering. As evidence of the utility of the proposed iSSO-KHM, we present extensive computational results on eight benchmark problems. From the computational results, the comparison appears to support the superiority of the proposed iSSO-KHM over previously developed algorithms for all experiments in the literature.
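
    A minimal sketch of the KHM performance function described above (the sum over entities of the harmonic average of their distances to all centroids, with distances raised to a power p as in the standard KHM formulation) might look as follows; the data and centroid placements are purely illustrative.

```python
import numpy as np

def khm_objective(points, centroids, p=2):
    """Sum over all points of the harmonic average of the distances to every
    centroid (standard KHM raises the distances to a power p, typically p >= 2)."""
    d = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
    d = np.maximum(d, 1e-12)                 # guard against division by zero
    k = centroids.shape[0]
    return np.sum(k / np.sum(1.0 / d ** p, axis=1))

rng = np.random.default_rng(4)
pts = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(5, 0.5, (50, 2))])
good = np.array([[0.0, 0.0], [5.0, 5.0]])
bad = np.array([[2.0, 2.0], [3.0, 3.0]])
print(khm_objective(pts, good), khm_objective(pts, bad))   # good placement scores lower
```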

  18. Investigation of the Relationship Between Gross Tumor Volume Location and Pneumonitis Rates Using a Large Clinical Database of Non-Small-Cell Lung Cancer Patients

    International Nuclear Information System (INIS)

    Vinogradskiy, Yevgeniy; Tucker, Susan L.; Liao Zhongxing; Martel, Mary K.

    2012-01-01

    Purpose: Studies have suggested that function may vary throughout the lung, and that patients who have tumors located in the base of the lung are more susceptible to radiation pneumonitis. The purpose of our study was to investigate the relationship between gross tumor volume (GTV) location and pneumonitis rates using a large clinical database of 547 patients with non–small-cell lung cancer. Methods and Materials: The GTV centroids of all patients were mapped onto one common coordinate system, in which the boundaries of the coordinate system were defined by the extreme points of each individual patient lung. The data were qualitatively analyzed by graphing all centroids and displaying the data according to the presence of severe pneumonitis, tumor stage, and smoking status. The centroids were grouped according to superior–inferior segments, and the pneumonitis rates were analyzed. In addition, we incorporated the GTV centroid information into a Lyman–Kutcher–Burman normal tissue complication probability model and tested whether adding spatial information significantly improved the fit of the model. Results: Of the 547 patients analyzed, 111 (20.3%) experienced severe radiation pneumonitis. The pneumonitis incidence rates were 16%, 23%, and 21% for the superior, middle, and inferior thirds of the lung, respectively. Qualitatively, the GTV centroids of nonsmokers were notably absent from the superior portion of the lung. In addition, the GTV centroids of patients who had Stage III and IV clinical staging were concentrated toward the medial edge of the lung. The comparison between the GTV centroid model and the conventional dose–volume model did not yield a statistically significant difference in model fit. Conclusions: Lower pneumonitis rates were noted for the superior portion of the lung; however the differences were not statistically significant. For our patient cohort, incorporating GTV centroid information did not lead to a statistically significant

  19. Research on the method of information system risk state estimation based on clustering particle filter

    Directory of Open Access Journals (Sweden)

    Cui Jia

    2017-05-01

    Full Text Available With the purpose of reinforcing the correlation analysis of risk assessment threat factors, a dynamic assessment method for safety risks based on particle filtering is proposed, which takes threat analysis as its core. Based on risk assessment standards, the method selects threat indicators, applies a particle filtering algorithm to calculate the influence weights of the threat indicators, and determines information system risk levels by combining this with state estimation theory. In order to improve the computational efficiency of the particle filtering algorithm, the k-means clustering algorithm is introduced into the particle filter: all particles are clustered, and each cluster centroid is used as the representative in subsequent operations, so as to reduce the amount of computation. Empirical results indicate that the method can reasonably capture the mutual dependence and influence among risk elements. Under circumstances of limited information, it provides a scientific basis for formulating a risk management control strategy.

  20. Research on the method of information system risk state estimation based on clustering particle filter

    Science.gov (United States)

    Cui, Jia; Hong, Bei; Jiang, Xuepeng; Chen, Qinghua

    2017-05-01

    With the purpose of reinforcing the correlation analysis of risk assessment threat factors, a dynamic assessment method for safety risks based on particle filtering is proposed, which takes threat analysis as its core. Based on risk assessment standards, the method selects threat indicators, applies a particle filtering algorithm to calculate the influence weights of the threat indicators, and determines information system risk levels by combining this with state estimation theory. In order to improve the computational efficiency of the particle filtering algorithm, the k-means clustering algorithm is introduced into the particle filter: all particles are clustered, and each cluster centroid is used as the representative in subsequent operations, so as to reduce the amount of computation. Empirical results indicate that the method can reasonably capture the mutual dependence and influence among risk elements. Under circumstances of limited information, it provides a scientific basis for formulating a risk management control strategy.
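
    A hedged sketch of the particle-reduction idea described in the two records above: cluster the particle set with k-means and let each cluster centroid, carrying the summed weight of its members, stand in for the cluster during the filter update. The state dimension, particle count and number of clusters below are arbitrary.

```python
import numpy as np
from sklearn.cluster import KMeans

def reduce_particles(particles, weights, n_clusters=10):
    """Replace a large particle set by the k-means cluster centroids, each carrying
    the summed weight of its members, to cut the cost of the filter update."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(particles)
    reduced_w = np.array([weights[km.labels_ == c].sum() for c in range(n_clusters)])
    return km.cluster_centers_, reduced_w / reduced_w.sum()

# Toy 2-D risk-state particles with uniform weights.
rng = np.random.default_rng(5)
particles = rng.normal(size=(2000, 2))
weights = np.full(2000, 1.0 / 2000)
centers, w = reduce_particles(particles, weights)
print(centers.shape, w.sum())      # (10, 2) representatives, weights renormalised
```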

  1. Statistical properties of many-particle spectra. IV. New ensembles by Stieltjes transform methods

    International Nuclear Information System (INIS)

    Pandey, A.

    1981-01-01

    New Gaussian matrix ensembles, with arbitrary centroids and variances for the matrix elements, are defined as modifications of the three standard ones: GOE, GUE and GSE. The average density and two-point correlation function are given in the general case in terms of the corresponding Stieltjes transforms, first used by Pastur for the density. It is shown for the centroid-modified ensemble K+αH that when the operator K preserves the underlying symmetries of the standard ensemble H, then, as the magnitude of α grows, the transition of the fluctuations to those of H is very rapid and discontinuous in the limit of asymptotic dimensionality. Corresponding results are found for other ensembles. A similar Dyson result for the effects of the breaking of a model symmetry on the fluctuations is generalized to any model symmetry, as well as to the fundamental symmetries such as time-reversal invariance

  2. Tumor and normal tissue motion in the thorax during respiration: Analysis of volumetric and positional variations using 4D CT

    International Nuclear Information System (INIS)

    Weiss, Elisabeth; Wijesooriya, Krishni; Dill, S. Vaughn; Keall, Paul J.

    2007-01-01

    Purpose: To investigate temporospatial variations of tumor and normal tissue during respiration in lung cancer patients. Methods and Materials: In 14 patients, gross tumor volume (GTV) and normal tissue structures were manually contoured on four-dimensional computed tomography (4D-CT) scans. Structures were evaluated for volume changes, centroid (center of mass) motion, and phase dependence of variations relative to inspiration. Only volumetrically complete structures were used for analysis (lung in 2, heart in 8, all other structures in >10 patients). Results: During respiration, the magnitude of contoured volumes varied up to 62.5% for GTVs, 25.5% for lungs, and 12.6% for hearts. The range of maximum three-dimensional centroid movement for individual patients was 1.3-24.0 mm for GTV, 2.4-7.9 mm for heart, 5.2-12.0 mm for lungs, 0.3-5.5 mm for skin markers, 2.9-10.0 mm for trachea, and 6.6-21.7 mm for diaphragm. During respiration, the centroid positions of normal structures varied relative to the centroid position of the respective GTV by 1.5-8.1 mm for heart, 2.9-9.3 mm for lungs, 1.2-9.2 mm for skin markers, 0.9-7.1 mm for trachea, and 2.7-16.4 mm for diaphragm. Conclusion: Using 4D-CT, volumetric changes, positional alterations as well as changes in the position of contoured structures relative to the GTV were observed with large variations between individual patients. Although the interpretation of 4D-CT data has considerable uncertainty because of 4D-CT artifacts, observer variations, and the limited acquisition time, the findings might have a significant impact on treatment planning

  3. Finding function: evaluation methods for functional genomic data

    Directory of Open Access Journals (Sweden)

    Barrett Daniel R

    2006-07-01

    Full Text Available Abstract Background Accurate evaluation of the quality of genomic or proteomic data and computational methods is vital to our ability to use them for formulating novel biological hypotheses and directing further experiments. There is currently no standard approach to evaluation in functional genomics. Our analysis of existing approaches shows that they are inconsistent and contain substantial functional biases that render the resulting evaluations misleading both quantitatively and qualitatively. These problems make it essentially impossible to compare computational methods or large-scale experimental datasets and also result in conclusions that generalize poorly in most biological applications. Results We reveal issues with current evaluation methods here and suggest new approaches to evaluation that facilitate accurate and representative characterization of genomic methods and data. Specifically, we describe a functional genomics gold standard based on curation by expert biologists and demonstrate its use as an effective means of evaluation of genomic approaches. Our evaluation framework and gold standard are freely available to the community through our website. Conclusion Proper methods for evaluating genomic data and computational approaches will determine how much we, as a community, are able to learn from the wealth of available data. We propose one possible solution to this problem here but emphasize that this topic warrants broader community discussion.

  4. Cochrane Qualitative and Implementation Methods Group guidance series-paper 3: methods for assessing methodological limitations, data extraction and synthesis, and confidence in synthesized qualitative findings.

    Science.gov (United States)

    Noyes, Jane; Booth, Andrew; Flemming, Kate; Garside, Ruth; Harden, Angela; Lewin, Simon; Pantoja, Tomas; Hannes, Karin; Cargo, Margaret; Thomas, James

    2018-05-01

    The Cochrane Qualitative and Implementation Methods Group develops and publishes guidance on the synthesis of qualitative and mixed-method implementation evidence. Choice of appropriate methodologies, methods, and tools is essential when developing a rigorous protocol and conducting the synthesis. Cochrane authors who conduct qualitative evidence syntheses have thus far used a small number of relatively simple methods to address similarly written questions. Cochrane has invested in methodological work to develop new tools and to encourage the production of exemplar reviews to show the value of more innovative methods that address a wider range of questions. In this paper, in the series, we report updated guidance on the selection of tools to assess methodological limitations in qualitative studies and methods to extract and synthesize qualitative evidence. We recommend application of Grades of Recommendation, Assessment, Development, and Evaluation-Confidence in the Evidence from Qualitative Reviews to assess confidence in qualitative synthesized findings. This guidance aims to support review authors to undertake a qualitative evidence synthesis that is intended to be integrated subsequently with the findings of one or more Cochrane reviews of the effects of similar interventions. The review of intervention effects may be undertaken concurrently with or separate to the qualitative evidence synthesis. We encourage further development through reflection and formal testing. Copyright © 2017 Elsevier Inc. All rights reserved.

  5. Spatiotemporal Interpolation Methods for Solar Event Trajectories

    Science.gov (United States)

    Filali Boubrahimi, Soukaina; Aydin, Berkay; Schuh, Michael A.; Kempton, Dustin; Angryk, Rafal A.; Ma, Ruizhe

    2018-05-01

    This paper introduces four spatiotemporal interpolation methods that enrich complex, evolving region trajectories that are reported from a variety of ground-based and space-based solar observatories every day. Our interpolation module takes an existing solar event trajectory as its input and generates an enriched trajectory with any number of additional time–geometry pairs created by the most appropriate method. To this end, we designed four different interpolation techniques: MBR-Interpolation (Minimum Bounding Rectangle Interpolation), CP-Interpolation (Complex Polygon Interpolation), FI-Interpolation (Filament Polygon Interpolation), and Areal-Interpolation, which are presented here in detail. These techniques leverage k-means clustering, centroid shape signature representation, dynamic time warping, linear interpolation, and shape buffering to generate the additional polygons of an enriched trajectory. Using ground-truth objects, interpolation effectiveness is evaluated through a variety of measures based on several important characteristics that include spatial distance, area overlap, and shape (boundary) similarity. To our knowledge, this is the first research effort of this kind that attempts to address the broad problem of spatiotemporal interpolation of solar event trajectories. We conclude with a brief outline of future research directions and opportunities for related work in this area.

  6. Ellipsoid analysis of calvarial shape.

    Science.gov (United States)

    Jacobsen, Petra A; Becker, Devra; Govier, Daniel P; Krantz, Steven G; Kane, Alex

    2009-09-01

    The purpose of this research was to develop a novel quantitative method of describing calvarial shape by using ellipsoid geometry. The pilot application of Ellipsoid Analysis was to compare calvarial form among individuals with untreated unilateral coronal synostosis, metopic synostosis, and sagittal synostosis and normal subjects. The frontal, parietal, and occipital bones of 10 preoperative patients for each of the four study groups were bilaterally segmented into six regions using three-dimensional skull reconstructions generated by ANALYZE imaging software from high-resolution computed tomography scans. Points along each segment were extracted and manipulated using a MATLAB-based program. The points were fit to the least-squares nearest ellipsoid. Relationships between the six resultant right and left frontal, parietal, and occipital ellipsoidal centroids (FR, FL, PR, PL, OR, and OL, respectively) were tested for association with a synostotic group. Results from the pilot study showed meaningful differences between length ratio, angular, and centroid distance relationships among synostotic groups. The most substantial difference was exhibited in the centroid distance PL-PR between patients with sagittal synostosis and metopic synostosis. The measures most commonly significant were centroid distances FL-PR and FL-PL and the angle OR-FR-PR. Derived centroid relationships were reproducible. Ellipsoid Analysis may offer a more refined approach to quantitative analysis of cranial shape. Symmetric and asymmetric forms can be compared directly. Relevant shape information between traditional landmarks is characterized. These techniques may have wider applicability in quantifying craniofacial morphology with increase in both specificity and general applicability over current methods.
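
    The centroid-based measures quoted above, a centroid distance such as PL-PR and an angle such as OR-FR-PR, reduce to elementary vector operations once the six regional centroids are available. The sketch below assumes the ellipsoid fits have already produced those centroids; the coordinates are illustrative values, not data from the study.

```python
import numpy as np

def centroid_distance(a, b):
    """Euclidean distance between two region centroids, e.g. PL-PR."""
    return float(np.linalg.norm(np.asarray(a, float) - np.asarray(b, float)))

def centroid_angle(p, vertex, q):
    """Angle in degrees at `vertex` formed by centroids p and q, e.g. OR-FR-PR."""
    u = np.asarray(p, float) - np.asarray(vertex, float)
    v = np.asarray(q, float) - np.asarray(vertex, float)
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))

# illustrative centroid coordinates in mm (not patient data)
PL, PR = (-30.0, -20.0, 55.0), (32.0, -18.0, 54.0)
FR, OR = (28.0, 40.0, 50.0), (30.0, -70.0, 40.0)
print(centroid_distance(PL, PR))       # PL-PR distance
print(centroid_angle(OR, FR, PR))      # angle OR-FR-PR
```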

  7. Electrocution of Raptors on Power Lines: A Review of Necropsy Methods and Findings.

    Science.gov (United States)

    Kagan, R A

    2016-09-01

    Decades after the problem was first identified, power line electrocution continues to be a cause of avian mortality. Currently, several federal laws protect eagles and other migratory birds, meaning that utility companies may be liable for electrocution-related deaths. Veterinarians and veterinary pathologists called upon to diagnose and treat electrocuted birds should keep this in mind when conducting clinical and postmortem examinations. This review details necropsy findings and methods used to diagnose electrocution. A combination of gross, subgross, and radiographic examinations can aid in identification of subtle injury. Diagnosis is made based on the presence of skin and/or feather burns. Other necropsy findings may include skin lacerations, subcutaneous burns, bruising, limb avulsion, hemopericardium, and vascular rupture. At the US Fish and Wildlife Service's National Forensics Laboratory, from 2000 to 2015, 417 raptor deaths were determined to have been caused by electrocution. Bald eagles and golden eagles were the most commonly submitted species. In a retrospective review of 377 cases, for which whole bodies were submitted, 18% of the electrocuted birds had only a single, small (less than 3 cm in diameter) external burn. Small, isolated burns tended to occur on the undersides of the wings at and distal to the elbow and on the lower legs and feet. These areas should be most carefully examined in cases where electrocution injury is not immediately apparent. © The Author(s) 2016.

  8. Estimating the accuracy of geographical imputation

    Directory of Open Access Journals (Sweden)

    Boscoe Francis P

    2008-01-01

    Full Text Available Abstract Background To reduce the number of non-geocoded cases researchers and organizations sometimes include cases geocoded to postal code centroids along with cases geocoded with the greater precision of a full street address. Some analysts then use the postal code to assign information to the cases from finer-level geographies such as a census tract. Assignment is commonly completed using either a postal centroid or by a geographical imputation method which assigns a location by using both the demographic characteristics of the case and the population characteristics of the postal delivery area. To date no systematic evaluation of geographical imputation methods ("geo-imputation") has been completed. The objective of this study was to determine the accuracy of census tract assignment using geo-imputation. Methods Using a large dataset of breast, prostate and colorectal cancer cases reported to the New Jersey Cancer Registry, we determined how often cases were assigned to the correct census tract using alternate strategies of demographic based geo-imputation, and using assignments obtained from postal code centroids. Assignment accuracy was measured by comparing the tract assigned with the tract originally identified from the full street address. Results Assigning cases to census tracts using the race/ethnicity population distribution within a postal code resulted in more correctly assigned cases than when using postal code centroids. The addition of age characteristics increased the match rates even further. Match rates were highly dependent on both the geographic distribution of race/ethnicity groups and population density. Conclusion Geo-imputation appears to offer some advantages and no serious drawbacks as compared with the alternative of assigning cases to census tracts based on postal code centroids. For a specific analysis, researchers will still need to consider the potential impact of geocoding quality on their results and evaluate
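
    As a rough illustration of the demographic-based assignment compared above, the sketch below draws a census tract for a case with probability proportional to the population of the case's demographic group in each tract overlapping its postal code. The function name, data layout and fallback rule are assumptions made for illustration, not the study's implementation.

```python
import numpy as np

def impute_tract(case_group, tract_pop_by_group, rng=None):
    """Draw a census tract for one case, with probability proportional to the
    population of the case's demographic group in each candidate tract.

    case_group         : demographic group of the case, e.g. "hispanic"
    tract_pop_by_group : dict of tract id -> {group: population} for the
                         tracts overlapping the case's postal code
    """
    rng = rng or np.random.default_rng()
    tracts = list(tract_pop_by_group)
    pops = np.array(
        [tract_pop_by_group[t].get(case_group, 0) for t in tracts], dtype=float
    )
    if pops.sum() == 0:  # assumed fallback: use total population if group absent
        pops = np.array(
            [sum(tract_pop_by_group[t].values()) for t in tracts], dtype=float
        )
    return rng.choice(tracts, p=pops / pops.sum())

# usage with made-up populations for three tracts in one postal code
candidates = {
    "tract_A": {"hispanic": 1200, "white": 300},
    "tract_B": {"hispanic": 200, "white": 2500},
    "tract_C": {"hispanic": 600, "white": 900},
}
print(impute_tract("hispanic", candidates))
```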

  9. Fabrication Aware Form-finding

    DEFF Research Database (Denmark)

    Egholm Pedersen, Ole; Larsen, Niels Martin; Pigram, Dave

    2014-01-01

    This paper describes a design and construction method that combines two distinct material systems with fabrication aware form-finding and file-to-factory workflows. The method enables the fluent creation of complex materially efficient structures comprising high populations of geometrically uniqu...

  10. Impact of Optics on CSR-Related Emittance Growth in Bunch Compressor Chicanes

    CERN Document Server

    Limberg, Torsten

    2005-01-01

    The dependence of emittance growth due to Coherent Synchrotron Radiation (CSR) in bunch compressor chicanes on optics has been noticed and empirically studied in the past. We revisit the subject, suggesting a model to explain slice emittance growth dependence on chicane optics. A simplified model to calculate projected emittance growth when it is mainly caused by transverse slice centroid offsets is presented. It is then used to find optimal compensation of centroid kicks in the single chicanes of a two-stage compression system by adjusting the phase advance of the transport in between and the ratio of the compression factors.

  11. Transition from weak to strong measurements by nonlinear quantum feedback control

    International Nuclear Information System (INIS)

    Zhang Jing; Liu Yuxi; Wu Rebing; Li Chunwen; Tarn, Tzyh-Jong

    2010-01-01

    We find that feedback control may induce 'pseudo'-nonlinear dynamics in a damped harmonic oscillator, whose centroid trajectory in the phase space behaves like a classical nonlinear system. Thus, similar to nonlinear amplifiers (e.g., rf-driven Josephson junctions), feedback control on the harmonic oscillator can induce nonlinear bifurcation, which can be used to amplify small signals and further to measure quantum states of qubits. Using the cavity QED and the circuit QED systems as examples, we show how to apply our method to measuring the states of two-level atoms and superconducting charge qubits.

  12. Additive non-uniform random sampling in superimposed fiber Bragg grating strain gauge

    International Nuclear Information System (INIS)

    Ma, Y C; Liu, H Y; Yan, S B; Li, J M; Tang, J; Yang, Y H; Yang, M W

    2013-01-01

    This paper demonstrates an additive non-uniform random sampling and interrogation method for dynamic and/or static strain gauging using a reflection spectrum from two superimposed fiber Bragg gratings (FBGs). The superimposed FBGs are designed to generate non-equidistant spacing of the sensing pulse train in the time domain during dynamic strain gauging. By combining centroid finding with smooth filtering methods, both the interrogation speed and accuracy are improved. A 1.9 kHz dynamic strain is measured by generating an additive non-uniform randomly distributed 2 kHz optical sensing pulse train from a mean 500 Hz triangular periodically changing scanning frequency. (paper)
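
    The combination of centroid finding with smoothing mentioned above can be sketched as: smooth the reflection spectrum, keep the main lobe of the peak, and take its centre of mass as the Bragg wavelength. The moving-average smoother, threshold and synthetic spectrum below are illustrative choices, not the interrogation scheme of the paper.

```python
import numpy as np

def peak_centroid(wavelengths, intensities, window=11, rel_threshold=0.5):
    """Estimate the Bragg wavelength as the centre of mass of the smoothed
    reflection peak (moving-average smoother and threshold are illustrative).
    """
    kernel = np.ones(window) / window
    smooth = np.convolve(intensities, kernel, mode="same")
    mask = smooth > rel_threshold * smooth.max()   # keep the main lobe only
    return np.sum(wavelengths[mask] * smooth[mask]) / np.sum(smooth[mask])

# usage with a synthetic, noisy Gaussian reflection peak near 1550.1 nm
rng = np.random.default_rng(0)
wl = np.linspace(1549.0, 1551.0, 2000)
spectrum = np.exp(-0.5 * ((wl - 1550.1) / 0.05) ** 2) + 0.02 * rng.normal(size=wl.size)
print(peak_centroid(wl, spectrum))   # approximately 1550.1
```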

  13. A genetic-algorithm-based method to find unitary transformations for any desired quantum computation and application to a one-bit oracle decision problem

    Energy Technology Data Exchange (ETDEWEB)

    Bang, Jeongho [Seoul National University, Seoul (Korea, Republic of); Hanyang University, Seoul (Korea, Republic of); Yoo, Seokwon [Hanyang University, Seoul (Korea, Republic of)

    2014-12-15

    We propose a genetic-algorithm-based method to find the unitary transformations for any desired quantum computation. We formulate a simple genetic algorithm by introducing the 'genetic parameter vector' of the unitary transformations to be found. In the genetic algorithm process, all components of the genetic parameter vectors are supposed to evolve to the solution parameters of the unitary transformations. We apply our method to find the optimal unitary transformations and to generalize the corresponding quantum algorithms for a realistic problem, the one-bit oracle decision problem, often called the Deutsch problem. By numerical simulations, we can faithfully find the appropriate unitary transformations to solve the problem by using our method. We analyze the quantum algorithms identified by the found unitary transformations and generalize the variant models of the original Deutsch algorithm.

  14. A new generalized exponential rational function method to find exact special solutions for the resonance nonlinear Schrödinger equation

    Science.gov (United States)

    Ghanbari, Behzad; Inc, Mustafa

    2018-04-01

    The present paper suggests a novel technique to acquire exact solutions of nonlinear partial differential equations. The main idea of the method is to generalize the exponential rational function method. In order to examine the ability of the method, we consider the resonant nonlinear Schrödinger equation (R-NLSE). Many variants of exact soliton solutions for the equation are derived by the proposed method. Physical interpretations of some of the obtained solutions are also included. One can easily conclude that the new proposed method is very efficient and finds the exact solutions of the equation in a relatively easy way.

  15. An integrating factor matrix method to find first integrals

    International Nuclear Information System (INIS)

    Saputra, K V I; Quispel, G R W; Van Veen, L

    2010-01-01

    In this paper we develop an integrating factor matrix method to derive conditions for the existence of first integrals. We use this novel method to obtain first integrals, along with the conditions for their existence, for two- and three-dimensional Lotka-Volterra systems with constant terms. The results are compared to previous results obtained by other methods.

  16. EPA flow reference method testing and analysis: Findings report

    International Nuclear Information System (INIS)

    1999-06-01

    This report describes an experimental program sponsored by the US Environmental Protection Agency (EPA) to evaluate potential improvements to the Agency's current reference method for measuring volumetric flow (Method 2, 40 CFR Part 60, Appendix B). Method 2 (Determination of Stack Gas Velocity and Volumetric Flow Rate (Type S Pitot Tube)) specifies measurements to determine volumetric flow, but does not prescribe specific procedures to account for yaw or pitch angles of flow when the flow in the stack is not axial. Method 2 also allows the use of only two probe types, the Type S and the Prandtl

  17. The Teaching of General Solution Methods to Pattern Finding Problems through Focusing on an Evaluation and Improvement Process.

    Science.gov (United States)

    Ishida, Junichi

    1997-01-01

    Examines the effects of a teaching strategy in which fifth-grade students evaluated the strengths or weaknesses of solution methods to pattern finding problems, including an experimental and control group each consisting of 34 elementary students, in Japan. The experimental group showed a significantly better performance on the retention test…

  18. A simple method for finding the scattering coefficients of quantum graphs

    International Nuclear Information System (INIS)

    Cottrell, Seth S.

    2015-01-01

    Quantum walks are roughly analogous to classical random walks, and similar to classical walks they have been used to find new (quantum) algorithms. When studying the behavior of large graphs or combinations of graphs, it is useful to find the response of a subgraph to signals of different frequencies. In doing so, we can replace an entire subgraph with a single vertex with variable scattering coefficients. In this paper, a simple technique for quickly finding the scattering coefficients of any discrete-time quantum graph will be presented. These scattering coefficients can be expressed entirely in terms of the characteristic polynomial of the graph’s time step operator. This is a marked improvement over previous techniques which have traditionally required finding eigenstates for a given eigenvalue, which is far more computationally costly. With the scattering coefficients we can easily derive the “impulse response” which is the key to predicting the response of a graph to any signal. This gives us a powerful set of tools for rapidly understanding the behavior of graphs or for reducing a large graph into its constituent subgraphs regardless of how they are connected

  19. A new method for finding the minimum free energy pathway of ions and small molecule transportation through protein based on 3D-RISM theory and the string method

    Science.gov (United States)

    Yoshida, Norio

    2018-05-01

    A new method for finding the minimum free energy pathway (MFEP) of ions and small molecule transportation through a protein based on the three-dimensional reference interaction site model (3D-RISM) theory combined with the string method has been proposed. The 3D-RISM theory produces the distribution function, or the potential of mean force (PMF), for transporting substances around the given protein structures. By applying the string method to the PMF surface, one can readily determine the MFEP on the PMF surface. The method has been applied to consider the Na+ conduction pathway of channelrhodopsin as an example.

  20. SU-F-J-109: Generate Synthetic CT From Cone Beam CT for CBCT-Based Dose Calculation

    Energy Technology Data Exchange (ETDEWEB)

    Wang, H; Barbee, D; Wang, W; Pennell, R; Hu, K; Osterman, K [Department of Radiation Oncology, NYU Langone Medical Center, New York, NY (United States)

    2016-06-15

    Purpose: The use of CBCT for dose calculation is limited by its HU inaccuracy from increased scatter. This study presents a method to generate synthetic CT images from CBCT data by a probabilistic classification that may be robust to CBCT noise. The feasibility of using the synthetic CT for dose calculation is evaluated in IMRT for unilateral H&N cancer. Methods: In the training phase, a fuzzy c-means classification was performed on HU vectors (CBCT, CT) of planning CT and registered day-1 CBCT image pair. Using the resulting centroid CBCT and CT values for five classified “tissue” types, a synthetic CT for a daily CBCT was created by classifying each CBCT voxel to obtain its probability belonging to each tissue class, then assigning a CT HU with a probability-weighted summation of the classes’ CT centroids. Two synthetic CTs from a CBCT were generated: s-CT using the centroids from classification of individual patient CBCT/CT data; s2-CT using the same centroids for all patients to investigate the applicability of group-based centroids. IMRT dose calculations for five patients were performed on the synthetic CTs and compared with CT-planning doses by dose-volume statistics. Results: DVH curves of PTVs and critical organs calculated on s-CT and s2-CT agree with those from planning-CT within 3%, while doses calculated with heterogeneity off or on raw CBCT show DVH differences up to 15%. The differences in PTV D95% and spinal cord max are 0.6±0.6% and 0.6±0.3% for s-CT, and 1.6±1.7% and 1.9±1.7% for s2-CT. Gamma analysis (2%/2mm) shows 97.5±1.6% and 97.6±1.6% pass rates for using s-CTs and s2-CTs compared with CT-based doses, respectively. Conclusion: CBCT-synthesized CTs using individual or group-based centroids resulted in dose calculations that are comparable to CT-planning dose for unilateral H&N cancer. The method may provide a tool for accurate dose calculation based on daily CBCT.
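
    The probability-weighted summation described above can be illustrated with standard fuzzy c-means memberships: each CBCT voxel receives a membership in every tissue class from its distance to the class CBCT centroids, and its synthetic-CT value is the membership-weighted sum of the corresponding CT centroids. The sketch below assumes the centroids are already known; the centroid values, fuzzifier and example inputs are illustrative, not the study's trained values.

```python
import numpy as np

def synthetic_ct(cbct_hu, cbct_centroids, ct_centroids, m=2.0):
    """Map CBCT HU values to synthetic-CT HU values.

    Each voxel gets a standard fuzzy c-means membership in every tissue class
    from its distance to the class CBCT centroids, and its output value is the
    membership-weighted sum of the corresponding CT centroids.
    """
    cbct_hu = np.asarray(cbct_hu, dtype=float)
    d = np.abs(cbct_hu[..., None] - cbct_centroids) + 1e-6     # (..., K) distances
    ratio = d[..., :, None] / d[..., None, :]                  # (..., K, K): d_i / d_j
    u = 1.0 / np.sum(ratio ** (2.0 / (m - 1.0)), axis=-1)      # memberships (..., K)
    return np.sum(u * ct_centroids, axis=-1)

# illustrative class centroids (HU) for five "tissue" types -- not trained values
cbct_c = np.array([-900.0, -150.0, 0.0, 250.0, 1100.0])
ct_c = np.array([-980.0, -120.0, 20.0, 300.0, 1250.0])
print(synthetic_ct([-500.0, 40.0, 800.0], cbct_c, ct_c))
```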

  1. SU-F-J-109: Generate Synthetic CT From Cone Beam CT for CBCT-Based Dose Calculation

    International Nuclear Information System (INIS)

    Wang, H; Barbee, D; Wang, W; Pennell, R; Hu, K; Osterman, K

    2016-01-01

    Purpose: The use of CBCT for dose calculation is limited by its HU inaccuracy from increased scatter. This study presents a method to generate synthetic CT images from CBCT data by a probabilistic classification that may be robust to CBCT noise. The feasibility of using the synthetic CT for dose calculation is evaluated in IMRT for unilateral H&N cancer. Methods: In the training phase, a fuzzy c-means classification was performed on HU vectors (CBCT, CT) of planning CT and registered day-1 CBCT image pair. Using the resulting centroid CBCT and CT values for five classified “tissue” types, a synthetic CT for a daily CBCT was created by classifying each CBCT voxel to obtain its probability belonging to each tissue class, then assigning a CT HU with a probability-weighted summation of the classes’ CT centroids. Two synthetic CTs from a CBCT were generated: s-CT using the centroids from classification of individual patient CBCT/CT data; s2-CT using the same centroids for all patients to investigate the applicability of group-based centroids. IMRT dose calculations for five patients were performed on the synthetic CTs and compared with CT-planning doses by dose-volume statistics. Results: DVH curves of PTVs and critical organs calculated on s-CT and s2-CT agree with those from planning-CT within 3%, while doses calculated with heterogeneity off or on raw CBCT show DVH differences up to 15%. The differences in PTV D95% and spinal cord max are 0.6±0.6% and 0.6±0.3% for s-CT, and 1.6±1.7% and 1.9±1.7% for s2-CT. Gamma analysis (2%/2mm) shows 97.5±1.6% and 97.6±1.6% pass rates for using s-CTs and s2-CTs compared with CT-based doses, respectively. Conclusion: CBCT-synthesized CTs using individual or group-based centroids resulted in dose calculations that are comparable to CT-planning dose for unilateral H&N cancer. The method may provide a tool for accurate dose calculation based on daily CBCT.

  2. Numerical methods for finding periodic points in discrete maps. High order islands chains and noble barriers in a toroidal magnetic configuration

    Energy Technology Data Exchange (ETDEWEB)

    Steinbrecher, G. [Association Euratom-Nasti Romania, Dept. of Theoretical Physics, Physics Faculty, University of Craiova (Romania); Reuss, J.D.; Misguich, J.H. [Association Euratom-CEA Cadarache, 13 - Saint-Paul-lez-Durance (France). Dept. de Recherches sur la Fusion Controlee

    2001-11-01

    We first recall the usual physical and mathematical concepts involved in the dynamics of Hamiltonian systems, in particular chaotic systems described by discrete 2D maps (representing the intersection points of toroidal magnetic lines in a poloidal plane in situations of incomplete magnetic chaos in Tokamaks). Finding the periodic points characterizing chains of magnetic islands is an essential step not only to determine the skeleton of the phase space picture, but also to determine the flux of magnetic lines across semi-permeable barriers like Cantori. We discuss here several computational methods used to determine periodic points in N dimensions, which amounts to solving a set of N nonlinear coupled equations: the Newton method, minimization techniques, the Laplace or steepest descent method, the conjugate direction method and the Fletcher-Reeves method. We have succeeded in improving this last method in an important way, without modifying its useful double-exponential convergence. This improved method has been tested and applied to finding periodic points of high order m in the 2D 'Tokamap' mapping, for values of m along rational chains of winding number n/m converging towards a noble value where a Cantorus exists. Such precise positions of periodic points have been used in the calculation of the flux across this Cantorus. (authors)

  3. Common Subcluster Mining in Microarray Data for Molecular Biomarker Discovery.

    Science.gov (United States)

    Sadhu, Arnab; Bhattacharyya, Balaram

    2017-10-11

    Molecular biomarkers can be potential facilitators for detection of cancer at an early stage, which is otherwise difficult with conventional biomarkers. Gene expression data from microarray experiments on both normal and diseased cell samples provide enormous scope to explore genetic relations of disease using computational techniques. Varied patterns of expression of thousands of genes at different cell conditions, along with inherent experimental error, make the task of isolating disease related genes challenging. In this paper, we present a data mining method, common subcluster mining (CSM), to discover highly perturbed genes under diseased condition from differential expression patterns. The method builds a heap by superposing near-centroid clusters from gene expression data of normal samples and extracts its core part. It thus isolates genes exhibiting the most stable state across normal samples, which constitute a reference set for each centroid. It performs the same operation on datasets from corresponding diseased samples and isolates the genes showing drastic changes in their expression patterns. The method thus finds the disease-sensitive genesets when applied to datasets of lung cancer, prostate cancer, pancreatic cancer, breast cancer, leukemia and pulmonary arterial hypertension. In the majority of cases, a few new genes are found over and above some previously reported ones. Genes with distinct deviations in diseased samples are prospective candidates for molecular biomarkers of the respective disease.

  4. Dermoscopic findings in cicatricial alopecia

    Directory of Open Access Journals (Sweden)

    Seher Arı

    2013-12-01

    Full Text Available Background: Dermoscopy is an important tool for the diagnosis of pigmented skin lesions. Recently, this method has also been used in the diagnosis and follow-up of hair and scalp disorders. Objective: The objective of this study was to investigate dermoscopic findings in a sample of patients with clinical and histopathological findings compatible with cicatricial alopecia. Methods: Twenty nine patients with cicatricial alopecia diagnosed by clinical and histological findings were examined by dermoscopy. Results: The diagnoses included folliculitis decalvans (n=8), pseudopelade of Brocq (n=7), lichen planopilaris (n=6), discoid lupus erythematosus (n=2), dissecting cellulitis (n=1), and secondary cicatricial alopecia (n=5). Structures previously examined with the naked eye were seen in much greater detail with dermoscopy. The loss of follicular orifices was seen in all patients with cicatricial alopecia. Perifollicular scaling, arborizing red lines, a honeycomb pigment pattern, white dots and tufted hairs were the other most obvious findings. Conclusion: Use of dermoscopy in the clinical evaluation of cicatricial alopecia improves diagnostic capability beyond simple clinical inspection, but larger studies correlating dermoscopic findings with histopathology are needed to improve understanding of this method.

  5. Reconstruction of brachytherapy seed positions and orientations from cone-beam CT x-ray projections via a novel iterative forward projection matching method.

    Science.gov (United States)

    Pokhrel, Damodar; Murphy, Martin J; Todor, Dorin A; Weiss, Elisabeth; Williamson, Jeffrey F

    2011-01-01

    To generalize and experimentally validate a novel algorithm for reconstructing the 3D pose (position and orientation) of implanted brachytherapy seeds from a set of a few measured 2D cone-beam CT (CBCT) x-ray projections. The iterative forward projection matching (IFPM) algorithm was generalized to reconstruct the 3D pose, as well as the centroid, of brachytherapy seeds from three to ten measured 2D projections. The gIFPM algorithm finds the set of seed poses that minimizes the sum-of-squared-difference of the pixel-by-pixel intensities between computed and measured autosegmented radiographic projections of the implant. Numerical simulations of clinically realistic brachytherapy seed configurations were performed to demonstrate the proof of principle. An in-house machined brachytherapy phantom, which supports precise specification of seed position and orientation at known values for simulated implant geometries, was used to experimentally validate this algorithm. The phantom was scanned on an ACUITY CBCT digital simulator over a full set of 660 sinogram projections. Three to ten x-ray images were selected from the full set of CBCT sinogram projections and postprocessed to create binary seed-only images. In the numerical simulations, seed reconstruction position and orientation errors were approximately 0.6 mm and 5 degrees, respectively. The physical phantom measurements demonstrated an absolute positional accuracy of (0.78 +/- 0.57) mm or less. The theta and phi angle errors were found to be (5.7 +/- 4.9) degrees and (6.0 +/- 4.1) degrees, respectively, or less when using three projections; with six projections, results were slightly better. The mean registration error was better than 1 mm/6 degrees compared to the measured seed projections. Each test trial converged in 10-20 iterations with computation time of 12-18 min/iteration on a 1 GHz processor. This work describes a novel, accurate, and completely automatic method for reconstructing seed orientations, as well as

  6. Reducing false-positive incidental findings with ensemble genotyping and logistic regression based variant filtering methods.

    Science.gov (United States)

    Hwang, Kyu-Baek; Lee, In-Hee; Park, Jin-Ho; Hambuch, Tina; Choe, Yongjoon; Kim, MinHyeok; Lee, Kyungjoon; Song, Taemin; Neu, Matthew B; Gupta, Neha; Kohane, Isaac S; Green, Robert C; Kong, Sek Won

    2014-08-01

    As whole genome sequencing (WGS) uncovers variants associated with rare and common diseases, an immediate challenge is to minimize false-positive findings due to sequencing and variant calling errors. False positives can be reduced by combining results from orthogonal sequencing methods, but this is costly. Here, we present variant filtering approaches using logistic regression (LR) and ensemble genotyping to minimize false positives without sacrificing sensitivity. We evaluated the methods using paired WGS datasets of an extended family prepared using two sequencing platforms and a validated set of variants in NA12878. Using LR or ensemble genotyping based filtering, false-negative rates were significantly reduced by 1.1- to 17.8-fold at the same levels of false discovery rates (5.4% for heterozygous and 4.5% for homozygous single nucleotide variants (SNVs); 30.0% for heterozygous and 18.7% for homozygous insertions; 25.2% for heterozygous and 16.6% for homozygous deletions) compared to the filtering based on genotype quality scores. Moreover, ensemble genotyping excluded > 98% (105,080 of 107,167) of false positives while retaining > 95% (897 of 937) of true positives in de novo mutation (DNM) discovery in NA12878, and performed better than a consensus method using two sequencing platforms. Our proposed methods were effective in prioritizing phenotype-associated variants, and an ensemble genotyping would be essential to minimize false-positive DNM candidates. © 2014 WILEY PERIODICALS, INC.
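
    As a loose sketch of the logistic-regression filtering idea above (not the authors' trained model), the example below fits a classifier on per-variant features and keeps only calls with a high predicted probability of being true. The feature names, training data and probability cutoff are synthetic placeholders.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# synthetic per-variant features: [genotype quality, depth, allele balance]
# (placeholder data; a real model would be trained on validated calls)
rng = np.random.default_rng(0)
X_train = rng.normal(size=(5000, 3))
y_train = (X_train[:, 0] + 0.5 * X_train[:, 1]
           + rng.normal(scale=0.5, size=5000) > 0).astype(int)  # 1 = true variant

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

X_new = rng.normal(size=(10, 3))
keep = clf.predict_proba(X_new)[:, 1] > 0.9   # retain only high-confidence calls
print(keep)
```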

  7. Use of traditional and modern contraceptives among childbearing women: findings from a mixed methods study in two southwestern Nigerian states.

    Science.gov (United States)

    Ajayi, Anthony Idowu; Adeniyi, Oladele Vincent; Akpan, Wilson

    2018-05-09

    Contraceptive use has numerous health benefits such as preventing unplanned pregnancies, ensuring optimum spacing between births, reducing maternal and child mortality, and improving the lives of women and children in general. This study examines the level of contraceptive use, its determinants, and reasons for non-use of contraception among women in the reproductive age group (18-49 years) in two southwestern Nigerian states. The study adopted an interviewer-administered questionnaire to collect data from 809 participants selected using a 3-stage cluster random sampling technique. We also conducted 46 in-depth interviews. In order to investigate the association between the socio-demographic variables and use of contraceptive methods, we estimated binary logistic regression models. The findings indicated that knowledge of any methods of contraception was almost universal among the participants. The rates of ever use and current use of contraception were 80 and 66.6%, respectively. However, only 43.9% of the participants had ever used any modern contraceptive methods, considered to be more reliable. The fear of side effects of modern contraceptive methods drove women to rely on less effective traditional methods (withdrawal and rhythm methods). Some women employed crude and unproven contraceptive methods to prevent pregnancies. Our findings show that the rate of contraceptive use was high in the study setting. However, many women chose less effective traditional contraceptive methods over more effective modern contraceptive methods due to fear of side effects of the latter. Patient education on the various options of modern contraceptives, their side effects and management would be crucial towards expanding the family planning services in the study setting.

  8. Comparison of preoperative neuroradiographic findings and surgical findings in lumbar disc herniation

    International Nuclear Information System (INIS)

    Takahara, Kazuhiro; Sera, Keisuke; Nakamura, Masakazu; Uchida, Takeshi; Ito, Nobuyuki.

    1997-01-01

    Surgical findings in lumbar disc hernia were compared to pre-operative MRI, CTM and myelogram findings. Ninety-one cases were studied using Love's method. The accuracy of hernia diagnosis in MRI was 59.3%, 41.2% in CTM, and 35.2% in myelogram. At the L5/S1 disc level, the accuracy of hernia diagnosis by CTM and myelogram was decreased. MRI was useful for the diagnosis and cure of lumbar disc herniation. (author)

  9. Trajectory data privacy protection based on differential privacy mechanism

    Science.gov (United States)

    Gu, Ke; Yang, Lihao; Liu, Yongzhi; Liao, Niandong

    2018-05-01

    In this paper, we propose a trajectory data privacy protection scheme based on a differential privacy mechanism. In the proposed scheme, the algorithm first selects the protected points from the user’s trajectory data; secondly, it forms a polygon from each protected point and the adjacent, frequently accessed points selected from the access point database, and then calculates the polygon centroids; finally, noise is added to the polygon centroids by the differential privacy method, the noisy centroids replace the protected points, and the algorithm constructs and publishes the new trajectory data. The experiments show that the proposed algorithm runs quickly, the privacy protection of the scheme is effective and the data usability of the scheme is high.
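
    The noise-addition step above is essentially the Laplace mechanism applied to a centroid. The sketch below perturbs a polygon centroid with Laplace noise of scale sensitivity/epsilon; the vertex-mean centroid, the sensitivity value and the coordinates are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

def private_centroid(polygon_points, epsilon, sensitivity, rng=None):
    """Perturb a polygon centroid with Laplace noise of scale sensitivity/epsilon.

    The vertex mean is used as the centroid for simplicity, and `sensitivity`
    must be derived from the application's own coordinate bounds.
    """
    rng = rng or np.random.default_rng()
    centroid = np.mean(np.asarray(polygon_points, dtype=float), axis=0)
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon, size=centroid.shape)
    return centroid + noise

# usage: a protected point is replaced by the noisy centroid of its polygon
polygon = [(116.30, 39.98), (116.32, 39.99), (116.31, 40.01), (116.29, 40.00)]
print(private_centroid(polygon, epsilon=0.5, sensitivity=0.01))
```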

  10. A study on high speed wavefront control algorithm for an adaptive optics system

    International Nuclear Information System (INIS)

    Park, Seung Kyu; Baik, Sung Hoon; Kim, Cheol Jung; Seo, Young Seok

    2000-01-01

    We developed a high speed control algorithm and system for measuring and correcting wavefront distortions based on the Windows operating system. To quickly obtain the wavefront distortion information from the Hartmann spot image, we preprocessed the image to remove background noise and extracted the centroid position by computing the center of weights. We repeatedly refined the centroid position with sub-pixel resolution to obtain the wavefront information with enhanced resolution. We designed a differential data communication driver and an isolated analog driver to achieve robust system control. In the experimental results, the measurement resolution of the wavefront was 0.05 pixels and the correction speed was 5 Hz
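
    The background-removal plus centre-of-weights step described above can be sketched as a thresholded first-moment centroid, which naturally yields sub-pixel coordinates. The fixed threshold and the synthetic spot below are illustrative choices, not the system's calibration.

```python
import numpy as np

def spot_centroid(image, threshold):
    """First-moment (centre-of-weights) centroid of a spot image after
    background removal; returns sub-pixel (x, y) coordinates.
    """
    img = np.asarray(image, dtype=float)
    img = np.where(img > threshold, img - threshold, 0.0)   # strip background
    ys, xs = np.indices(img.shape)
    total = img.sum()
    return (xs * img).sum() / total, (ys * img).sum() / total

# usage with a synthetic Hartmann spot centred near (12.3, 7.8)
yy, xx = np.indices((32, 32))
spot = 100.0 * np.exp(-((xx - 12.3) ** 2 + (yy - 7.8) ** 2) / 8.0) + 5.0
print(spot_centroid(spot, threshold=10.0))
```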

  11. Accuracy of Shack-Hartmann wavefront sensor using a coherent wound fibre image bundle

    Science.gov (United States)

    Zheng, Jessica R.; Goodwin, Michael; Lawrence, Jon

    2018-03-01

    Shack-Hartmann wavefront sensors using wound fibre image bundles are desired for multi-object adaptive optics systems to provide a large multiplex of sensors positioned by Starbugs. The use of a large-sized wound fibre image bundle provides the flexibility to use more sub-apertures per wavefront sensor for ELTs. These compact wavefront sensors take advantage of large focal surfaces such as that of the Giant Magellan Telescope. The focus of this paper is to study the effect of wound fibre image bundle structure defects on the centroid measurement accuracy of a Shack-Hartmann wavefront sensor. We use the first moment centroid method to estimate the centroid of a focused Gaussian beam sampled by a simulated bundle. Spot estimation accuracy with a wound fibre image bundle and the impact of its structure on wavefront measurement accuracy statistics are addressed. Our results show that when the measurement signal-to-noise ratio is high, the centroid measurement accuracy is dominated by the wound fibre image bundle structure, e.g. tile angle and gap spacing. For measurements with low signal-to-noise ratio, the accuracy is influenced by the read noise of the detector instead of the wound fibre image bundle structure defects. We demonstrate this both with simulation and experimentally. We provide a statistical model of the centroid and wavefront error of a wound fibre image bundle found through experiment.

  12. An automated three-dimensional detection and segmentation method for touching cells by integrating concave points clustering and random walker algorithm.

    Directory of Open Access Journals (Sweden)

    Yong He

    Full Text Available Characterizing cytoarchitecture is crucial for understanding brain functions and neural diseases. In neuroanatomy, it is an important task to accurately extract cell populations' centroids and contours. Recent advances have permitted imaging at single cell resolution for an entire mouse brain using the Nissl staining method. However, it is difficult to precisely segment numerous cells, especially those cells touching each other. As presented herein, we have developed an automated three-dimensional detection and segmentation method applied to the Nissl staining data, with the following two key steps: (1) concave points clustering to determine the seed points of touching cells; and (2) random walker segmentation to obtain cell contours. Also, we have evaluated the performance of our proposed method with several mouse brain datasets, which were captured with the micro-optical sectioning tomography imaging system, and the datasets include closely touching cells. Compared with traditional detection and segmentation methods, our approach shows promising detection accuracy and high robustness.
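
    The second step above, seeded segmentation of touching cells, can be illustrated with the random walker implementation in scikit-image. In the sketch below the seed markers are placed by hand on a synthetic image of two touching blobs rather than derived from concave-point clustering, so it is only a minimal stand-in for the paper's pipeline.

```python
import numpy as np
from skimage.segmentation import random_walker

rng = np.random.default_rng(0)
yy, xx = np.indices((64, 64))
image = (np.exp(-((xx - 24) ** 2 + (yy - 32) ** 2) / 60.0)    # two touching blobs
         + np.exp(-((xx - 40) ** 2 + (yy - 32) ** 2) / 60.0)
         + 0.05 * rng.normal(size=(64, 64)))

markers = np.zeros(image.shape, dtype=int)
markers[image < 0.1] = 1        # background seed
markers[32, 24] = 2             # seed for the first cell (hand-placed here)
markers[32, 40] = 3             # seed for the second cell

labels = random_walker(image, markers, beta=130, mode="bf")
print(np.unique(labels))        # 1 = background, 2 and 3 = separated cells
```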

  13. Comparison of preoperative neuroradiographic findings and surgical findings in lumbar disc herniation

    Energy Technology Data Exchange (ETDEWEB)

    Takahara, Kazuhiro; Sera, Keisuke; Nakamura, Masakazu; Uchida, Takeshi [Nagasaki Mitsubishi Hospital (Japan); Ito, Nobuyuki

    1997-09-01

    Surgical findings in lumbar disc hernia were compared to pre-operative MRI, CTM and myelogram findings. Ninety-one cases were studied using Love's method. The accuracy of hernia diagnosis in MRI was 59.3%, 41.2% in CTM, and 35.2% in myelogram. At the L5/S1 disc level, the accuracy of hernia diagnosis by CTM and myelogram was decreased. MRI was useful for the diagnosis and cure of lumbar disc herniation. (author)

  14. Methods of developing core collections based on the predicted genotypic value of rice ( Oryza sativa L.).

    Science.gov (United States)

    Li, C T; Shi, C H; Wu, J G; Xu, H M; Zhang, H Z; Ren, Y L

    2004-04-01

    The selection of an appropriate sampling strategy and a clustering method is important in the construction of core collections based on predicted genotypic values in order to retain the greatest degree of genetic diversity of the initial collection. In this study, methods of developing rice core collections were evaluated based on the predicted genotypic values for 992 rice varieties with 13 quantitative traits. The genotypic values of the traits were predicted by the adjusted unbiased prediction (AUP) method. Based on the predicted genotypic values, Mahalanobis distances were calculated and employed to measure the genetic similarities among the rice varieties. Six hierarchical clustering methods, including the single linkage, median linkage, centroid, unweighted pair-group average, weighted pair-group average and flexible-beta methods, were combined with random, preferred and deviation sampling to develop 18 core collections of rice germplasm. The results show that the deviation sampling strategy in combination with the unweighted pair-group average method of hierarchical clustering retains the greatest degree of genetic diversity of the initial collection. The core collections sampled using predicted genotypic values had more genetic diversity than those based on phenotypic values.
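
    A minimal sketch of the distance and clustering combination reported above, Mahalanobis distances with unweighted pair-group average (UPGMA) linkage, is given below using SciPy. The synthetic trait matrix and group count are placeholders, and the random, preferred and deviation sampling strategies themselves are not shown.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

def upgma_groups(genotypic_values, n_groups):
    """Cluster accessions with Mahalanobis distances and UPGMA (unweighted
    pair-group average) linkage; returns a group label per accession.
    """
    X = np.asarray(genotypic_values, dtype=float)
    VI = np.linalg.inv(np.cov(X, rowvar=False))        # inverse covariance matrix
    d = pdist(X, metric="mahalanobis", VI=VI)
    Z = linkage(d, method="average")                   # UPGMA
    return fcluster(Z, t=n_groups, criterion="maxclust")

# usage: 50 accessions x 13 predicted trait values (synthetic placeholders)
rng = np.random.default_rng(1)
labels = upgma_groups(rng.normal(size=(50, 13)), n_groups=8)
print(np.bincount(labels)[1:])   # accessions per group
```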

  15. Old Wine in New Skins: The Sensitivity of Established Findings to New Methods

    Science.gov (United States)

    Foster, E. Michael; Wiley-Exley, Elizabeth; Bickman, Leonard

    2009-01-01

    Findings from an evaluation of a model system for delivering mental health services to youth were reassessed to determine the robustness of key findings to the use of methodologies unavailable to the original analysts. These analyses address a key concern about earlier findings--that the quasi-experimental design involved the comparison of two…

  16. Investigation of source location determination from Magsat magnetic anomalies: The Euler method approach

    Science.gov (United States)

    Ravat, Dhananjay

    1996-01-01

    The applicability of the Euler method of source location determination was investigated on several model situations pertinent to satellite-data scale situations as well as Magsat data of Europe. Our investigations enabled us to understand the end-member cases for which the Euler method will work with the present satellite magnetic data and also the cases for which the assumptions implicit in the Euler method will not be met by the present satellite magnetic data. These results have been presented in one invited lecture at the Indo-US workshop on Geomagnetism in Studies of the Earth's Interior in August 1994 in Pune, India, and at one presentation at the 21st General Assembly of the IUGG in July 1995 in Boulder, CO. A new method, called Anomaly Attenuation Rate (AAR) Method (based on the Euler method), was developed during this study. This method is scale-independent and is appropriate to locate centroids of semi-compact three dimensional sources of gravity and magnetic anomalies. The method was presented during 1996 Spring AGU meeting and a manuscript describing this method is being prepared for its submission to a high-ranking journal. The grant has resulted in 3 papers and presentations at national and international meetings and one manuscript of a paper (to be submitted shortly to a reputable journal).

  17. Validity studies among hierarchical methods of cluster analysis using cophenetic correlation coefficient

    International Nuclear Information System (INIS)

    Carvalho, Priscilla R.; Munita, Casimiro S.; Lapolli, André L.

    2017-01-01

    The literature presents many methods for partitioning a data base, and it is difficult to choose which is the most suitable, since the various combinations of methods based on different measures of dissimilarity can lead to different patterns of grouping and false interpretations. Nevertheless, little effort has been expended in evaluating these methods empirically using an archaeological data base. In this way, the objective of this work is to make a comparative study of the different cluster analysis methods and identify which is the most appropriate. For this, the study was carried out using a data base of the Archaeometric Studies Group from IPEN-CNEN/SP, in which 45 samples of ceramic fragments from three archaeological sites were analyzed by instrumental neutron activation analysis (INAA) to determine the mass fractions of 13 elements (As, Ce, Cr, Eu, Fe, Hf, La, Na, Nd, Sc, Sm, Th, U). The methods used for this study were: single linkage, complete linkage, average linkage, centroid and Ward. The validation was done using the cophenetic correlation coefficient, and on comparing these values the average linkage method obtained the best results. A script with some functions was written in the statistical program R to obtain the cophenetic correlation. By means of these values it was possible to choose the most appropriate method to be used in the data base. (author)
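
    The validation described above can be reproduced in outline with SciPy, which provides both the linkage methods and the cophenetic correlation coefficient. The element concentrations below are synthetic stand-ins for the INAA mass fractions, and the log10 transform is an assumed preprocessing step, not necessarily the one used in the study.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, cophenet

rng = np.random.default_rng(0)
X = rng.lognormal(size=(45, 13))        # 45 sherds x 13 elements (synthetic)
d = pdist(np.log10(X))                  # Euclidean distances on log10 values

for method in ["single", "complete", "average", "centroid", "ward"]:
    c, _ = cophenet(linkage(d, method=method), d)
    print(f"{method:9s} cophenetic correlation = {c:.3f}")
```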

  18. Validity studies among hierarchical methods of cluster analysis using cophenetic correlation coefficient

    Energy Technology Data Exchange (ETDEWEB)

    Carvalho, Priscilla R.; Munita, Casimiro S.; Lapolli, André L., E-mail: prii.ramos@gmail.com, E-mail: camunita@ipen.br, E-mail: alapolli@ipen.br [Instituto de Pesquisas Energéticas e Nucleares (IPEN/CNEN-SP), São Paulo, SP (Brazil)

    2017-07-01

    The literature presents many methods for partitioning a data base, and it is difficult to choose which is the most suitable, since the various combinations of methods based on different measures of dissimilarity can lead to different patterns of grouping and false interpretations. Nevertheless, little effort has been expended in evaluating these methods empirically using an archaeological data base. In this way, the objective of this work is to make a comparative study of the different cluster analysis methods and identify which is the most appropriate. For this, the study was carried out using a data base of the Archaeometric Studies Group from IPEN-CNEN/SP, in which 45 samples of ceramic fragments from three archaeological sites were analyzed by instrumental neutron activation analysis (INAA) to determine the mass fractions of 13 elements (As, Ce, Cr, Eu, Fe, Hf, La, Na, Nd, Sc, Sm, Th, U). The methods used for this study were: single linkage, complete linkage, average linkage, centroid and Ward. The validation was done using the cophenetic correlation coefficient, and on comparing these values the average linkage method obtained the best results. A script with some functions was written in the statistical program R to obtain the cophenetic correlation. By means of these values it was possible to choose the most appropriate method to be used in the data base. (author)

  19. An accurate method for quantifying and analyzing copy number variation in porcine KIT by an oligonucleotide ligation assay

    Directory of Open Access Journals (Sweden)

    Cho In-Cheol

    2007-11-01

    Full Text Available Abstract Background Aside from single nucleotide polymorphisms, copy number variations (CNVs) are the most important factors in susceptibility to genetic disorders because they affect expression levels of genes. In previous studies, pyrosequencing, mini-sequencing, real-time PCR, invader assays and other techniques have been used to detect CNVs. However, the higher the copy number in a genome, the more difficult it is to resolve the copies, so a more accurate method for measuring CNVs and assigning genotype is needed. Results PCR followed by a quantitative oligonucleotide ligation assay (qOLA) was developed for quantifying CNVs. The accuracy and precision of the assay were evaluated for porcine KIT, which was selected as a model locus. Overall, the root mean squares of bias and standard deviation of qOLA were 2.09 and 0.45, respectively. These values are less than half of those in the published pyrosequencing assay for analyzing CNV in porcine KIT. Using a combined method of qOLA and another pyrosequencing for quantitative analysis of KIT copies with spliced forms, we confirmed the segregation of KIT alleles in 145 F1 animals with pedigree information and verified the correct assignment of genotypes. In a diagnostic test on 100 randomly sampled commercial pigs, there was perfect agreement between the genotypes obtained by grouping observations on a scatter plot and by clustering using the nearest centroid sorting method implemented in PROC FASTCLUS of the SAS package. In a test on 159 Large White pigs, there were only two discrepancies between genotypes assigned by the two clustering methods (98.7% agreement), confirming that the quantitative ligation assay established here makes genotyping possible through the accurate measurement of high KIT copy numbers (>4 per diploid genome). Moreover, the assay is sensitive enough for use on DNA from hair follicles, indicating that DNA from various sources could be used. Conclusion We have established a high

  20. Automated computer analysis of plasma-streak traces from SCYLLAC

    International Nuclear Information System (INIS)

    Whitman, R.L.; Jahoda, F.C.; Kruger, R.P.

    1977-01-01

    An automated computer analysis technique that locates and references the approximate centroid of single- or dual-streak traces from the Los Alamos Scientific Laboratory SCYLLAC facility is described. The technique also determines the plasma-trace width over a limited self-adjusting region. The plasma traces are recorded with streak cameras on Polaroid film, then scanned and digitized for processing. The analysis technique uses scene segmentation to separate the plasma trace from a reference fiducial trace. The technique employs two methods of peak detection; one for the plasma trace and one for the fiducial trace. The width is obtained using an edge-detection, or slope, method. Timing data are derived from the intensity modulation of the fiducial trace. To smooth (despike) the output graphs showing the plasma-trace centroid and width, a technique of "twicing" developed by Tukey was employed. In addition, an interactive sorting algorithm allows retrieval of the centroid, width, and fiducial data from any test shot plasma for post analysis. As yet, only a limited set of sixteen plasma traces has been processed using this technique
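
    The 'twicing' smoothing mentioned above applies a base smoother to the data, then smooths the residuals and adds them back. The sketch below uses a running median as the base smoother, which is an illustrative choice rather than the smoother used in the SCYLLAC analysis.

```python
import numpy as np

def running_median(y, window=7):
    """Simple running-median smoother (illustrative choice of base smoother)."""
    pad = window // 2
    ypad = np.pad(y, pad, mode="edge")
    return np.array([np.median(ypad[i:i + window]) for i in range(len(y))])

def twice(y, window=7):
    """Tukey 'twicing': smooth the data, then smooth the residuals and add
    them back to recover detail lost by the first pass."""
    s = running_median(y, window)
    return s + running_median(y - s, window)

# usage on a spiky synthetic centroid trace
t = np.linspace(0.0, 1.0, 200)
trace = 10.0 * np.sin(2.0 * np.pi * t)
trace[::23] += 8.0                     # artificial spikes
smoothed = twice(trace)
```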

  1. Automated computer analysis of plasma-streak traces from SCYLLAC

    International Nuclear Information System (INIS)

    Whiteman, R.L.; Jahoda, F.C.; Kruger, R.P.

    1977-11-01

    An automated computer analysis technique that locates and references the approximate centroid of single- or dual-streak traces from the Los Alamos Scientific Laboratory SCYLLAC facility is described. The technique also determines the plasma-trace width over a limited self-adjusting region. The plasma traces are recorded with streak cameras on Polaroid film, then scanned and digitized for processing. The analysis technique uses scene segmentation to separate the plasma trace from a reference fiducial trace. The technique employs two methods of peak detection; one for the plasma trace and one for the fiducial trace. The width is obtained using an edge-detection, or slope, method. Timing data are derived from the intensity modulation of the fiducial trace. To smooth (despike) the output graphs showing the plasma-trace centroid and width, a technique of "twicing" developed by Tukey was employed. In addition, an interactive sorting algorithm allows retrieval of the centroid, width, and fiducial data from any test shot plasma for post analysis. As yet, only a limited set of the plasma traces has been processed with this technique

  2. A new hierarchical method to find community structure in networks

    Science.gov (United States)

    Saoud, Bilal; Moussaoui, Abdelouahab

    2018-04-01

    Community structure is very important for understanding a network which represents a context. Many community detection methods have been proposed, such as hierarchical methods. In our study, we propose a new hierarchical method for community detection in networks based on a genetic algorithm. In this method we use a genetic algorithm to split a network into two subnetworks so as to maximize the modularity. Each new subnetwork represents a cluster (community). Then we repeat the splitting process until each cluster contains a single node. We use the modularity function to measure the strength of the community structure found by our method, which gives us an objective metric for choosing the number of communities into which a network should be divided. We demonstrate that our method is highly effective at discovering community structure in both computer-generated and real-world network data.
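
    In the recursive-splitting scheme above, the quantity the genetic algorithm maximizes at each split is the modularity of the candidate bipartition. The sketch below shows only that fitness evaluation with NetworkX, the bipartition being encoded as a 0/1 vector over the nodes; the genetic loop (selection, crossover, mutation) is omitted and the example graph is the standard karate club network, not data from the paper.

```python
import numpy as np
import networkx as nx

def bipartition_fitness(G, bits):
    """Modularity of a candidate two-way split of G, encoded as a 0/1 vector
    over G's nodes; this is the fitness a genetic algorithm would maximize.
    """
    nodes = list(G.nodes())
    groups = [{n for n, b in zip(nodes, bits) if b == 0},
              {n for n, b in zip(nodes, bits) if b == 1}]
    if not groups[0] or not groups[1]:
        return -1.0                       # penalize splits with an empty side
    return nx.algorithms.community.modularity(G, groups)

# usage on a standard test network with a random candidate split
G = nx.karate_club_graph()
rng = np.random.default_rng(0)
bits = rng.integers(0, 2, G.number_of_nodes())
print(bipartition_fitness(G, bits))
```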

  3. THE SLOAN DIGITAL SKY SURVEY REVERBERATION MAPPING PROJECT: BIASES IN z  > 1.46 REDSHIFTS DUE TO QUASAR DIVERSITY

    Energy Technology Data Exchange (ETDEWEB)

    Denney, K. D.; Peterson, B. M. [Department of Astronomy, The Ohio State University, 140 West 18th Avenue, Columbus, OH 43210 (United States); Horne, Keith [SUPA Physics and Astronomy, University of St. Andrews, St. Andrews KY16 9SS (United Kingdom); Brandt, W. N.; Grier, C. J.; Trump, J. R. [Department of Astronomy and Astrophysics, 525 Davey Lab, The Pennsylvania State University, University Park, PA 16802 (United States); Ho, Luis C. [Kavli Institute for Astronomy and Astrophysics, Peking University, Beijing 100871 (China); Ge, J., E-mail: denney@astronomy.ohio-state.edu [Astronomy Department University of Florida 211 Bryant Space Science Center P.O. Box 112055 Gainesville, FL 32611-2055 (United States)

    2016-12-10

    We use the coadded spectra of 32 epochs of Sloan Digital Sky Survey (SDSS) Reverberation Mapping Project observations of 482 quasars with z  > 1.46 to highlight systematic biases in the SDSS- and Baryon Oscillation Spectroscopic Survey (BOSS)-pipeline redshifts due to the natural diversity of quasar properties. We investigate the characteristics of this bias by comparing the BOSS-pipeline redshifts to an estimate from the centroid of He ii λ 1640. He ii has a low equivalent width but is often well-defined in high-S/N spectra, does not suffer from self-absorption, and has a narrow component which, when present (the case for about half of our sources), produces a redshift estimate that, on average, is consistent with that determined from [O ii] to within the He ii and [O ii] centroid measurement uncertainties. The large redshift differences of ∼1000 km s{sup −1}, on average, between the BOSS-pipeline and He ii-centroid redshifts, suggest there are significant biases in a portion of BOSS quasar redshift measurements. Adopting the He ii-based redshifts shows that C iv does not exhibit a ubiquitous blueshift for all quasars, given the precision probed by our measurements. Instead, we find a distribution of C iv-centroid blueshifts across our sample, with a dynamic range that (i) is wider than that previously reported for this line, and (ii) spans C iv centroids from those consistent with the systemic redshift to those with significant blueshifts of thousands of kilometers per second. These results have significant implications for measurement and use of high-redshift quasar properties and redshifts, and studies based thereon.

  4. THE SLOAN DIGITAL SKY SURVEY REVERBERATION MAPPING PROJECT: BIASES IN z  > 1.46 REDSHIFTS DUE TO QUASAR DIVERSITY

    International Nuclear Information System (INIS)

    Denney, K. D.; Peterson, B. M.; Horne, Keith; Brandt, W. N.; Grier, C. J.; Trump, J. R.; Ho, Luis C.; Ge, J.

    2016-01-01

    We use the coadded spectra of 32 epochs of Sloan Digital Sky Survey (SDSS) Reverberation Mapping Project observations of 482 quasars with z  > 1.46 to highlight systematic biases in the SDSS- and Baryon Oscillation Spectroscopic Survey (BOSS)-pipeline redshifts due to the natural diversity of quasar properties. We investigate the characteristics of this bias by comparing the BOSS-pipeline redshifts to an estimate from the centroid of He ii λ 1640. He ii has a low equivalent width but is often well-defined in high-S/N spectra, does not suffer from self-absorption, and has a narrow component which, when present (the case for about half of our sources), produces a redshift estimate that, on average, is consistent with that determined from [O ii] to within the He ii and [O ii] centroid measurement uncertainties. The large redshift differences of ∼1000 km s −1 , on average, between the BOSS-pipeline and He ii-centroid redshifts, suggest there are significant biases in a portion of BOSS quasar redshift measurements. Adopting the He ii-based redshifts shows that C iv does not exhibit a ubiquitous blueshift for all quasars, given the precision probed by our measurements. Instead, we find a distribution of C iv-centroid blueshifts across our sample, with a dynamic range that (i) is wider than that previously reported for this line, and (ii) spans C iv centroids from those consistent with the systemic redshift to those with significant blueshifts of thousands of kilometers per second. These results have significant implications for measurement and use of high-redshift quasar properties and redshifts, and studies based thereon.

  5. Metasynthesis findings: potential versus reality.

    Science.gov (United States)

    Finfgeld-Connett, Deborah

    2014-11-01

    Early on, qualitative researchers predicted that metasynthesis research had the potential to significantly push knowledge development forward. More recently, scholars have questioned whether this is actually occurring. To examine this concern, a randomly selected sample of metasynthesis articles was systematically reviewed to identify the types of findings that have been produced. Based on this systematic examination, it appears that findings from metasynthesis investigations might not be reaching their full potential. Metasynthesis investigations frequently result in isolated findings rather than findings in relationship, and opportunities to generate research hypotheses and theoretical models are not always fully realized. With this in mind, methods for moving metasynthesis findings into relationship are discussed. © The Author(s) 2014.

  6. Methodically finding solutions of equipments for carrying out experiments in materials testing and research. Pt. 2

    International Nuclear Information System (INIS)

    Findeisen, D.; Nachtweide, D.; Kuntze, G.

    1983-01-01

    Compared with the development of industrial products, the development of test equipment is of a special kind, as demonstrated by the methodical procedure for finding solutions and by the possibilities for technical design and production in test equipment engineering. Some general principles are worked out and explained with several realized design examples from the field of materials testing at the Federal Institute of Materials Testing (BAM), representative of other problems. Users are large scientific institutes independent of universities, university institutes, as well as test stands and quality control offices of industrial works. (orig.) [de

  7. Channeling and stability of laser pulses in plasmas

    International Nuclear Information System (INIS)

    Sprangle, P.; Krall, J.; Esarey, E.

    1995-01-01

    A laser pulse propagating in a plasma is found to undergo a combination of hose and modulation instabilities. The coupled equations for the laser beam envelope and centroid are derived and solved for a laser pulse of finite length propagating through either a uniform plasma or preformed plasma density channel. The laser envelope equation describes the pulse self-focusing and optical guiding in plasmas and is used to analyze the self-modulation instability. The laser centroid equation describes the transverse motion of the laser pulse (hosing) in plasmas. Significant coupling between the centroid and envelope motion as well as harmonic generation in the envelope can occur. In addition, the transverse profile of the generated wake field is strongly affected by the laser hose instability. Methods to reduce the laser hose instability are demonstrated. copyright 1995 American Institute of Physics

  8. Using network metrics to investigate football team players' connections: A pilot study

    Directory of Open Access Journals (Sweden)

    Filipe Manuel Clemente

    2014-09-01

    Full Text Available The aim of this pilot study was to propose a set of network methods to measure the specific properties of football teams. These metrics were organized on "meso" and "micro" analysis levels. Five official matches of the same team in the First Portuguese Football League were analyzed. A total of 577 offensive plays were analyzed from the five matches. From the adjacency matrices developed for each offensive play, the scaled connectivity, the clustering coefficient, and the centroid significance and centroid conformity were computed. Results showed that the highest values of scaled connectivity were found in lateral defenders and central and midfielder players, and the lowest values were found in the striker and goalkeeper. The highest values of clustering coefficient were generally found in midfielders and forwards. In addition, the centroid results showed that lateral and central defenders tend to be the centroid players in the attacking process. In sum, this study showed that network metrics can be a powerful tool to help coaches understand a team's specific properties, thus supporting decision-making and improving sports training based on match analysis.

  9. Selection of common bean genotypes for the Cerrado/Pantanal ecotone via mixed models and multivariate analysis.

    Science.gov (United States)

    Corrêa, A M; Pereira, M I S; de Abreu, H K A; Sharon, T; de Melo, C L P; Ito, M A; Teodoro, P E; Bhering, L L

    2016-10-17

    The common bean, Phaseolus vulgaris, is predominantly grown on small farms and lacks accurate genotype recommendations for specific micro-regions in Brazil. This contributes to a low national average yield. The aim of this study was to use the harmonic mean of the relative performance of genetic values (HMRPGV) and the centroid method for selecting common bean genotypes with high yield, adaptability, and stability for the Cerrado/Pantanal ecotone region in Brazil. We evaluated 11 common bean genotypes in three trials carried out in the dry season in Aquidauana in 2013, 2014, and 2015. A likelihood ratio test detected a significant genotype x year interaction, contributing 54% to the total phenotypic variation in grain yield. The three genotypes selected by the joint analysis of genotypic values in all years (Carioca Precoce, BRS Notável, and CNFC 15875) were the same as those recommended by the HMRPGV method. Using the centroid method, genotypes BRS Notável and CNFC 15875 were considered ideal genotypes based on their high stability in unfavorable environments and high responsiveness to environmental improvement. We identified a high association between the methods of adaptability and stability used in this study. However, the use of the centroid method provided a more accurate and precise recommendation of the behavior of the evaluated genotypes.

  10. Finding possible transition states of defects in silicon-carbide and alpha-iron using the dimer method

    CERN Document Server

    Gao Fei; Weber, W J; Corrales, L R; Jonsson, H

    2003-01-01

    Energetic primary recoil atoms from ion implantation or fast neutron irradiation produce isolated point defects and clusters of both vacancies and interstitials. The migration energies and mechanisms for these defects are crucial to successful multiscale modeling of microstructural evolution during ion-implantation, thermal annealing, or under irradiation over long periods of time. The dimer method is employed to search for possible transition states of interstitials and small interstitial clusters in SiC and alpha-Fe. The method uses only the first derivatives of the potential energy to find saddle points without knowledge of the final state of the transition. In SiC, the possible migration pathway for the C interstitial is found to consist of the first neighbor jump via a Si site or second neighbor jump, but the relative probability for the second neighbor jump is very low. In alpha-Fe, the possible transition states are studied as a function of interstitial cluster size, and the lowest energy barriers corr...

  11. SU-G-BRA-07: An Innovative Fiducial-Less Tracking Method for Radiation Treatment of Abdominal Tumors by Diaphragm Disparity Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Dick, D; Zhao, W [University of Miami, Coral Gables, Florida (United States); Wu, X [Biophysics Research Institute of America, Miami, Florida (United States)

    2016-06-15

    Purpose: To investigate the feasibility of tracking abdominal tumors without the use of gold fiducial markers. Methods: In this simulation study, an abdominal 4DCT dataset, acquired previously and containing 8 phases of the breathing cycle, was used as the testing data. Two sets of digitally reconstructed radiograph (DRR) images (45 and 135 degrees) were generated for each phase. Three anatomical points along the lung-diaphragm interface on each of the DRR images were identified by cross-correlation. The gallbladder, which simulates the tumor, was contoured for each phase of the breathing cycle, and the corresponding centroid values serve as the measured center of the tumor. A linear model was created to correlate the diaphragm's disparity at the three identified anatomical points with the center of the tumor. To verify the established linear model, we sequentially removed one phase of the data (i.e., 3 anatomical points and the corresponding tumor center) and created new linear models with the remaining 7 phases. We then substituted the eliminated phase data (disparities of the 3 anatomical points) into the corresponding model to compare the model-generated tumor center with the measured tumor center. Results: The maximum differences between the modeled and the measured centroid values across the 8 phases were 0.72, 0.29 and 0.30 pixels in the x, y and z directions respectively, which yielded a maximum mean-squared-error value of 0.75 pixels. The outcomes of the verification process, by eliminating each phase, produced mean-squared errors ranging from 0.41 to 1.28 pixels. Conclusion: Gold fiducial markers, requiring surgical procedures to be implanted, are conventionally used in radiation therapy. The present work shows the feasibility of a fiducial-less tracking method for localizing abdominal tumors. Through the developed diaphragm disparity analysis, the established linear model was verified with clinically accepted errors. The tracking method in real time under different
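
    The verification scheme in this abstract — fit a linear model on seven phases and predict the held-out phase — can be sketched as an ordinary least-squares fit. The sketch below uses synthetic numbers only (not data from the study); the 8 x 3 disparity array and the tumor-centroid coordinate are invented placeholders.

```python
# Sketch of leave-one-phase-out validation of a linear model mapping
# diaphragm disparities (3 anatomical points) to a tumor-centroid coordinate.
# All values are synthetic placeholders, not data from the study.
import numpy as np

rng = np.random.default_rng(0)
disparity = rng.normal(size=(8, 3))                 # 8 phases x 3 anatomical points
centroid_x = disparity @ np.array([0.5, 1.0, -0.3]) + 2.0 + rng.normal(0, 0.05, 8)

errors = []
for held_out in range(8):
    train = [p for p in range(8) if p != held_out]
    A = np.column_stack([disparity[train], np.ones(len(train))])   # add intercept
    coeffs, *_ = np.linalg.lstsq(A, centroid_x[train], rcond=None)
    pred = np.append(disparity[held_out], 1.0) @ coeffs
    errors.append((pred - centroid_x[held_out]) ** 2)

print("leave-one-phase-out MSE:", np.mean(errors))
```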

  12. Aeroelastically coupled blades for vertical axis wind turbines

    Science.gov (United States)

    Paquette, Joshua; Barone, Matthew F.

    2016-02-23

    Various technologies described herein pertain to a vertical axis wind turbine blade configured to rotate about a rotation axis. The vertical axis wind turbine blade includes at least an attachment segment, a rear swept segment, and optionally, a forward swept segment. The attachment segment is contiguous with the forward swept segment, and the forward swept segment is contiguous with the rear swept segment. The attachment segment includes a first portion of a centroid axis, the forward swept segment includes a second portion of the centroid axis, and the rear swept segment includes a third portion of the centroid axis. The second portion of the centroid axis is angularly displaced ahead of the first portion of the centroid axis and the third portion of the centroid axis is angularly displaced behind the first portion of the centroid axis in the direction of rotation about the rotation axis.

  13. MR findings of chondromalacia Patella : correlation of the grade and associated lesions with arthroscopic findings

    International Nuclear Information System (INIS)

    Chung, Yon Su; Kwon, Soon Tae; Lee, Hwan Do; Kang, Yong Soo; Byun, Ki Yong; Rhee, Kwang Jin

    1998-01-01

    To assess the MR findings of chondromalacia patella and correlate the grade and associated lesions with the arthroscopic findings. Twenty-five patients with pain in the anterior part of the knee underwent fat-suppressed axial and coronal T2-weighted and T2-weighted imaging, using a 10-cm field of view, and a 5-inch general purpose coil. We retrospectively assessed these findings, and the locations, grades and associated lesions, and correlated these with arthroscopic findings. We evaluated the exact location and grade of chondromalacia patella and associated lesions, as seen on MR images. These and the arthroscopic findings showed close correlation, and in cases involving this condition, MRI is thus a useful indicator of an appropriate surgical method and plan. (author). 18 refs., 5 figs

  14. MR findings of chondromalacia Patella : correlation of the grade and associated lesions with arthroscopic findings

    Energy Technology Data Exchange (ETDEWEB)

    Chung, Yon Su; Kwon, Soon Tae; Lee, Hwan Do; Kang, Yong Soo; Byun, Ki Yong; Rhee, Kwang Jin [Chungnam National Univ., Taejon (Korea, Republic of). Coll. of Medicine

    1998-02-01

    To assess the MR findings of chondromalacia patella and correlate the grade and associated lesions with the arthroscopic findings. Twenty-five patients with pain in the anterior part of the knee underwent fat-suppressed axial and coronal T2-weighted and T2-weighted imaging, using a 10-cm field of view, and a 5-inch general purpose coil. We retrospectively assessed these findings, and the locations, grades and associated lesions, and correlated these with arthroscopic findings. We evaluated the exact location and grade of chondromalacia patella and associated lesions, as seen on MR images. These and the arthroscopic findings showed close correlation, and in cases involving this condition, MRI is thus a useful indicator of an appropriate surgical method and plan. (author). 18 refs., 5 figs.

  15. A comparison of three speaker-intrinsic vowel formant frequency normalization algorithms for sociophonetics

    DEFF Research Database (Denmark)

    Fabricius, Anne; Watt, Dominic; Johnson, Daniel Ezra

    2009-01-01

    This paper evaluates a speaker-intrinsic vowel formant frequency normalization algorithm initially proposed in Watt & Fabricius (2002). We compare how well this routine, known as the S-centroid procedure, performs as a sociophonetic research tool in three ways: reducing variance in area ratios ... from RP and Aberdeen English (northeast Scotland). We conclude that, for the data examined here, the S-centroid W&F procedure performs at least as well as the two most recognized speaker-intrinsic, vowel-extrinsic, formant-intrinsic normalization methods, Lobanov's (1971) z-score procedure and Nearey's ...
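
    As a rough illustration of the centroid ("S") idea referred to above — not the exact Watt & Fabricius (2002) formulation, whose corner-vowel definitions are not reproduced here — a speaker's formant values can be expressed relative to the centroid of their vowel space:

```python
# Generic centroid ("S") normalization sketch: divide each formant value by
# the centroid of the speaker's vowel space. Illustration only; the corner-
# vowel construction of Watt & Fabricius (2002) is not reproduced, and the
# values below are invented placeholders.
import numpy as np

# rows: vowel tokens for one speaker; columns: F1, F2 in Hz (placeholders)
formants = np.array([
    [300.0, 2300.0],   # /i/-like token
    [700.0, 1200.0],   # /a/-like token
    [320.0,  800.0],   # /u/-like token
])

S = formants.mean(axis=0)     # centroid of the vowel space (S1, S2)
normalized = formants / S     # each formant expressed relative to S

print("centroid (S1, S2):", S)
print("normalized formants:\n", normalized)
```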

  16. Lifetimes in {sup 94}Zr extracted via the doppler-shift attenuation method using pγ coincidences

    Energy Technology Data Exchange (ETDEWEB)

    Prill, Sarah; Derya, Vera; Hennig, Andreas; Pickstone, Simon G.; Spieker, Mark; Vielmetter, Vera; Wilhelmy, Julius; Zilges, Andreas [Institute for Nuclear Physics, University of Cologne (Germany); Petkov, Pavel [Institute for Nuclear Physics, University of Cologne (Germany); INRNE, Bulgarian Academy of Sciences, Sofia (Bulgaria); National Institute for Physics and Nuclear Engineering, Bucharest (Romania)

    2016-07-01

    Lifetimes of excited states in {sup 94}Zr were previously measured applying the Doppler-shift attenuation method (DSAM) following the (n,n'γ) reaction. Since the two measurements were in conflict with each other, we remeasured 14 lifetimes of excited states in {sup 94}Zr in a (p,p'γ) experiment utilizing the DSAM technique. Centroid-energy shifts were extracted from proton-gated γ-ray spectra, yielding lifetime values that are independent of feeding contributions. The results were compared to the previously measured lifetimes and found to be in good agreement with the values reported, thus confirming the correction procedure introduced for the (n,n'γ) data. This contribution features our new results and introduces the (p,p'γ) DSAM technique, which is now available in Cologne.

  17. Dynamic Fuzzy Clustering Method for Decision Support in Electricity Markets Negotiation

    Directory of Open Access Journals (Sweden)

    Ricardo FAIA

    2016-10-01

    Full Text Available Artificial Intelligence (AI) methods contribute to the construction of systems where there is a need to automate tasks. They are typically used for problems that have a large response time, or when a mathematical method cannot be used to solve the problem. However, the application of AI brings an added complexity to the development of such applications. AI has been frequently applied in the power systems field, namely in Electricity Markets (EM). In this area, AI applications are essentially used to forecast/estimate the prices of electricity or to search for the best opportunity to sell the product. This paper proposes a clustering methodology that is combined with fuzzy logic in order to perform the estimation of EM prices. The proposed method is based on the application of a clustering methodology that groups historic energy contracts according to their prices’ similarity. The optimal number of groups is automatically calculated taking into account the preference for the balance between the estimation error and the number of groups. The centroids of each cluster are used to define a dynamic fuzzy variable that approximates the tendency of contracts’ history. The resulting fuzzy variable allows estimating expected prices for contracts instantaneously and approximating missing values in the historic contracts.

  18. Finding out who is nesting where: a method for locating nest sites of hole-nesting species prior to egg laying

    NARCIS (Netherlands)

    Grieco, F.

    2000-01-01

    A method to find out which species is more likely to start egg laying in a certain nestbox is described. Nestboxes were visited daily and the behaviour of the birds (Great, Blue and Coal Tits) that appeared around the nestbox was observed. The birds' response consisted mainly of giving alarm calls

  19. An Investigation of the Academic Information Finding and Re-finding Behavior on the Web

    Directory of Open Access Journals (Sweden)

    Hsiao-Tieh Pu

    2014-12-01

    Full Text Available Academic researchers often need to re-use relevant information that they found some time earlier. This preliminary study used various methods, including experiments, interviews, search log analysis, sequential analysis, and observation, to investigate characteristics of academic information finding and re-finding behavior. Overall, the participants in this study entered short queries in both the finding and re-finding phases. Comparatively speaking, the participants entered a greater number of queries, modified more queries, browsed more web pages, and stayed longer on web pages in the finding phase. In the re-finding phase, on the other hand, they utilized personal information management tools to re-find instead of searching again with a search engine, for example by checking their browsing history; moreover, they tended to enter fewer queries and stayed for shorter periods on web pages. In short, the participants interacted more with the retrieval system during the finding phase, while they increased the use of personal information management tools in the re-finding phase. As for the contextual clues used in the re-finding phase, the participants used fewer clues from the target itself; instead, they used indirect clues more often, especially location-related information. Based on the results of the sequential analysis, the transition states in the re-finding phase were found to be more complex than those in the finding phase. Web information finding and re-finding behavior is an important and novel area of research. The preliminary results would benefit research on Web information re-finding behavior and provide useful suggestions for developing personal academic information management systems. [Article content in Chinese]

  20. The problem of assessing landmark error in geometric morphometrics: theory, methods, and modifications.

    Science.gov (United States)

    von Cramon-Taubadel, Noreen; Frazier, Brenda C; Lahr, Marta Mirazón

    2007-09-01

    Geometric morphometric methods rely on the accurate identification and quantification of landmarks on biological specimens. As in any empirical analysis, the assessment of inter- and intra-observer error is desirable. A review of methods currently being employed to assess measurement error in geometric morphometrics was conducted and three general approaches to the problem were identified. One such approach employs Generalized Procrustes Analysis to superimpose repeatedly digitized landmark configurations, thereby establishing whether repeat measures fall within an acceptable range of variation. The potential problem of this error assessment method (the "Pinocchio effect") is demonstrated and its effect on error studies discussed. An alternative approach involves employing Euclidean distances between the configuration centroid and repeat measures of a landmark to assess the relative repeatability of individual landmarks. This method is also potentially problematic as the inherent geometric properties of the specimen can result in misleading estimates of measurement error. A third approach involved the repeated digitization of landmarks with the specimen held in a constant orientation to assess individual landmark precision. This latter approach is an ideal method for assessing individual landmark precision, but is restrictive in that it does not allow for the incorporation of instrumentally defined or Type III landmarks. Hence, a revised method for assessing landmark error is proposed and described with the aid of worked empirical examples. (c) 2007 Wiley-Liss, Inc.
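
    The second approach reviewed above — Euclidean distances between the configuration centroid and repeat measures of a landmark — is easy to sketch; the landmark coordinates below are invented placeholders, and the spread of those distances across repeats serves as a rough per-landmark repeatability figure.

```python
# Sketch: assess per-landmark repeatability via distances between repeat
# digitizations and the configuration centroid. Coordinates are invented.
import numpy as np

# repeats x landmarks x 2D coordinates (placeholder digitizations)
repeats = np.array([
    [[0.0,  0.0], [10.1, 0.2], [5.0, 8.0]],
    [[0.1, -0.1], [ 9.9, 0.0], [5.2, 8.1]],
    [[-0.1, 0.1], [10.0, 0.1], [4.9, 7.9]],
])

centroids = repeats.mean(axis=1, keepdims=True)          # centroid per repeat
dist_to_centroid = np.linalg.norm(repeats - centroids, axis=2)

# variability of each landmark's centroid distance across repeats
print("per-landmark spread:", dist_to_centroid.std(axis=0))
```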

  1. Automatic NC-Data generation method for 5-axis cutting of turbine-blades by finding Safe heel-angles and adaptive path-intervals

    International Nuclear Information System (INIS)

    Piao, Cheng Dao; Lee, Cheol Soo; Cho, Kyu Zong; Park, Gwang Ryeol

    2004-01-01

    In this paper, an efficient method for generating 5-axis cutting data for a turbine blade is presented. Interference elimination in 5-axis cutting is currently very complicated and time-consuming. The proposed method can generate an interference-free tool path within an allowance range. Generating the cutting data at each point of the cutting process and using it to obtain NC data by calculating the feed rate allows us to maintain the proper feed rate of the 5-axis machine. This paper includes the algorithms for: (1) CL data generation by detecting an interference-free heel angle, (2) finding the optimal tool path interval considering the cusp-height, (3) finding the adaptive feed rate values for each cutter path, and (4) the inverse kinematics depending on the structure of the 5-axis machine, for generating the NC data

  2. An Efficient Mesh Generation Method for Fractured Network System Based on Dynamic Grid Deformation

    Directory of Open Access Journals (Sweden)

    Shuli Sun

    2013-01-01

    Full Text Available The meshing quality of the discrete model influences the accuracy, convergence, and efficiency of the solution for a fractured network system in geological problems. However, modeling and meshing of such a fractured network system are usually tedious and difficult due to the geometric complexity of the computational domain induced by the existence and extension of fractures. The traditional meshing method to deal with fractures usually involves a boundary recovery operation based on topological transformation, which relies on many complicated techniques and skills. This paper presents an alternative and efficient approach for meshing a fractured network system. The method first presets points on fractures and then performs Delaunay triangulation to obtain a preliminary mesh by a point-by-point centroid insertion algorithm. Then the fractures are exactly recovered by local correction with a revised dynamic grid deformation approach. A smoothing algorithm is finally applied to improve the quality of the mesh. The proposed approach is efficient, easy to implement, and applicable to the cases of initially existing fractures and extension of fractures. The method is successfully applied to modeling of two- and three-dimensional discrete fractured network (DFN) systems in geological problems to demonstrate its effectiveness and high efficiency.
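
    Two ingredients named in this abstract — a Delaunay triangulation of preset points and point-by-point centroid insertion — can be sketched with scipy; the fracture-recovery and grid-deformation steps are not shown, and the refinement criterion (splitting above-average-area triangles) is an assumption for illustration.

```python
# Sketch: Delaunay triangulation of preset points, refined by inserting the
# centroids of large triangles point by point. Fracture recovery and dynamic
# grid deformation (the paper's later steps) are not shown.
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(1)
points = rng.random((30, 2))                 # preset points (placeholders)
tri = Delaunay(points, incremental=True)

def triangle_areas(tri):
    p = tri.points[tri.simplices]            # shape (ntri, 3, 2)
    a, b, c = p[:, 0], p[:, 1], p[:, 2]
    ab, ac = b - a, c - a
    return 0.5 * np.abs(ab[:, 0] * ac[:, 1] - ab[:, 1] * ac[:, 0])

for _ in range(3):                           # a few refinement passes
    areas = triangle_areas(tri)
    large = tri.simplices[areas > areas.mean()]
    centroids = tri.points[large].mean(axis=1)   # centroid of each large triangle
    tri.add_points(centroids)

tri.close()
print("triangles after refinement:", len(tri.simplices))
```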

  3. Dynamic and static strain gauge using superimposed fiber Bragg gratings

    International Nuclear Information System (INIS)

    Ma, Y C; Yang, Y H; Yang, M W; Li, J M; Tang, J; Liang, T

    2012-01-01

    This paper demonstrates a simple and fast interrogation method for dynamic and/or static strain gauging using the reflection spectrum from two superimposed fiber Bragg gratings (FBGs). The superimposed FBGs are designed to decrease the nonequidistant spacing of the generated sensing pulse train in the time domain during dynamic strain gauging. By combining centroid finding with smoothing filter methods, both the interrogation speed and accuracy are improved. A four-fold increase in the interrogation speed of dynamic strain, by generating a 2 kHz optical sensing pulse train from a 500 Hz scanning frequency, is demonstrated experimentally. The interrogation uncertainty and total harmonic distortion of the superimposed FBGs are characterized, and a standard deviation of less than 4 pm is obtained. (paper)
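
    The combination of smoothing and centroid finding mentioned above can be illustrated on a synthetic reflection peak (the spectrum, noise level, and window sizes below are invented, not the authors' interrogation parameters):

```python
# Illustration of centroid finding combined with smoothing on a synthetic
# FBG reflection peak (not the authors' interrogation system).
import numpy as np

wavelength = np.linspace(1549.0, 1551.0, 2001)                 # nm
spectrum = np.exp(-((wavelength - 1550.02) / 0.05) ** 2)       # synthetic peak
spectrum += np.random.default_rng(2).normal(0, 0.02, spectrum.size)

# smooth with a simple moving average before locating the peak
kernel = np.ones(15) / 15
smooth = np.convolve(spectrum, kernel, mode="same")

# centroid (center of mass) over a window around the maximum
i = smooth.argmax()
win = slice(max(i - 100, 0), i + 100)
weights = np.clip(smooth[win], 0, None)
centroid = np.sum(wavelength[win] * weights) / np.sum(weights)

print(f"centroid wavelength: {centroid:.4f} nm")
```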

  4. The functional variable method for finding exact solutions of some ...

    Indian Academy of Sciences (India)

    Abstract. In this paper, we implemented the functional variable method and the modified Riemann–Liouville derivative for the exact solitary wave solutions and periodic wave solutions of the time-fractional Klein–Gordon equation, and the time-fractional Hirota–Satsuma coupled KdV system. This method is extremely simple ...

  5. Dynamics of the off axis intense beam propagation in a spiral inflector

    Energy Technology Data Exchange (ETDEWEB)

    Goswami, A., E-mail: animesh@vecc.gov.in; Sing Babu, P., E-mail: psb@vecc.gov.in; Pandit, V.S., E-mail: pandit@vecc.gov.in

    2017-01-01

    In this paper the dynamics of space charge dominated beam in a spiral inflector is discussed by developing equations of motion for centroid and beam envelope for the off axis beam propagation. Evolution of the beam centroid and beam envelope is studied as a function of the beam current for various input beam parameters. The transmission of beam through the inflector is also estimated as a function of the beam current for an on axis and off axis beam by tracking a large number of particles. Simulation studies show that shift of the centroid from the axis at the inflector entrance affects the centroid location at the exit of the inflector and causes reduction in the beam transmission. The centroid shift at the entrance in the horizontal plane (h plane) is more critical as it affects the centroid shift in the vertical plane (u plane) by a large amount near the inflector exit where the available aperture is small. The beam transmission is found to reduce with increase in the centroid shift as well as with the beam current.

  6. Literature Shows Recurring Efforts at Finding a Method of Teaching Ethics.

    Science.gov (United States)

    Knudtson, Judy

    1994-01-01

    Reviews classroom journalism texts and articles published in "Quill and Scroll" and this journal since the early 1970s. Notes that the topic of ethics has been addressed recurrently. Finds a wealth of commitment to and concern for ethical issues for high school journalists. (RS)

  7. How processing digital elevation models can affect simulated water budgets

    Science.gov (United States)

    Kuniansky, E.L.; Lowery, M.A.; Campbell, B.G.

    2009-01-01

    For regional models, the shallow water table surface is often used as a source/sink boundary condition, as model grid scale precludes simulation of the water table aquifer. This approach is appropriate when the water table surface is relatively stationary. Since water table surface maps are not readily available, the elevation of the water table used in model cells is estimated via a two-step process. First, a regression equation is developed using existing land and water table elevations from wells in the area. This equation is then used to predict the water table surface for each model cell using land surface elevation available from digital elevation models (DEM). Two methods of processing DEM for estimating the land surface for each cell are commonly used (value nearest the cell centroid or mean value in the cell). This article demonstrates how these two methods of DEM processing can affect the simulated water budget. For the example presented, approximately 20% more total flow through the aquifer system is simulated if the centroid value rather than the mean value is used. This is due to the one-third greater average ground water gradients associated with the centroid value than the mean value. The results will vary depending on the particular model area topography and cell size. The use of the mean DEM value in each model cell will result in a more conservative water budget and is more appropriate because the model cell water table value should be representative of the entire cell area, not the centroid of the model cell.
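
    A toy numerical version of the comparison described above — the DEM value nearest the cell centroid versus the mean of all DEM values in the cell — might look as follows (synthetic elevations for a single model cell; not the study's data):

```python
# Toy comparison of the two DEM-processing choices: value nearest the cell
# centroid versus the mean of all DEM values in the cell. Synthetic 10 x 10
# DEM pixels for one model cell; elevations are placeholders.
import numpy as np

rng = np.random.default_rng(3)
x = y = np.arange(10) + 0.5                      # DEM pixel centers in the cell
X, Y = np.meshgrid(x, y)
dem = 100 + 0.8 * X + 0.3 * Y + rng.normal(0, 0.5, X.shape)   # sloping surface

cell_centroid = (5.0, 5.0)
nearest = np.unravel_index(((X - cell_centroid[0]) ** 2 +
                            (Y - cell_centroid[1]) ** 2).argmin(), X.shape)

print("centroid-value method:", dem[nearest])
print("mean-value method:   ", dem.mean())
```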

  8. Improving experimental phases for strong reflections prior to density modification

    International Nuclear Information System (INIS)

    Uervirojnangkoorn, Monarin; Hilgenfeld, Rolf; Terwilliger, Thomas C.; Read, Randy J.

    2013-01-01

    A genetic algorithm has been developed to optimize the phases of the strongest reflections in SIR/SAD data. This is shown to facilitate density modification and model building in several test cases. Experimental phasing of diffraction data from macromolecular crystals involves deriving phase probability distributions. These distributions are often bimodal, making their weighted average, the centroid phase, improbable, so that electron-density maps computed using centroid phases are often non-interpretable. Density modification brings in information about the characteristics of electron density in protein crystals. In successful cases, this allows a choice between the modes in the phase probability distributions, and the maps can cross the borderline between non-interpretable and interpretable. Based on the suggestions by Vekhter [Vekhter (2005), Acta Cryst. D61, 899–902], the impact of identifying optimized phases for a small number of strong reflections prior to the density-modification process was investigated while using the centroid phase as a starting point for the remaining reflections. A genetic algorithm was developed that optimizes the quality of such phases using the skewness of the density map as a target function. Phases optimized in this way are then used in density modification. In most of the tests, the resulting maps were of higher quality than maps generated from the original centroid phases. In one of the test cases, the new method sufficiently improved a marginal set of experimental SAD phases to enable successful map interpretation. A computer program, SISA, has been developed to apply this method for phase improvement in macromolecular crystallography

  9. Finding beam focus errors automatically

    International Nuclear Information System (INIS)

    Lee, M.J.; Clearwater, S.H.; Kleban, S.D.

    1987-01-01

    An automated method for finding beam focus errors using an optimization program called COMFORT-PLUS is described. The procedure of finding the correction factors with COMFORT-PLUS has been used to find the beam focus errors for two damping rings at the SLAC Linear Collider. The program is to be used as an off-line program to analyze actual measured data for any SLC system. A limitation on the application of this procedure is found to be that it depends on the magnitude of the machine errors. Another is that the program is not totally automated, since the user must decide a priori where to look for errors

  10. Finding Maximal Quasiperiodicities in Strings

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Pedersen, Christian N. S.

    2000-01-01

    Apostolico and Ehrenfeucht defined the notion of a maximal quasiperiodic substring and gave an algorithm that finds all maximal quasiperiodic substrings in a string of length n in time O(n log^2 n). In this paper we give an algorithm that finds all maximal quasiperiodic substrings in a string of length n in time O(n log n) and space O(n). Our algorithm uses the suffix tree as the fundamental data structure combined with efficient methods for merging and performing multiple searches in search trees. Besides finding all maximal quasiperiodic substrings, our algorithm also marks the nodes in the suffix tree that have a superprimitive path-label.

  11. Comparison Of Keyword Based Clustering Of Web Documents By Using Openstack 4j And By Traditional Method

    Directory of Open Access Journals (Sweden)

    Shiza Anand

    2015-08-01

    Full Text Available As the number of hypertext documents on the World Wide Web increases continuously day by day, clustering methods are required to group documents into cluster repositories according to the similarity between them. Various clustering methods exist, such as hierarchical, k-means, fuzzy-logic-based, and centroid-based methods. These keyword-based clustering methods take a considerable amount of time to create containers and put documents into their respective containers. These traditional methods use the file handling techniques of different programming languages for creating repositories and transferring web documents into these containers. In contrast, the openstack4j SDK is a new technique for creating containers and moving web documents into these containers according to similarity in much less time than the traditional methods. Another benefit of this technique is that the SDK understands and reads all types of files, such as jpg, html, pdf, and doc. This paper compares the time required for clustering documents using openstack4j and traditional methods and suggests that search engines adopt this technique for clustering so that they can return results to user queries in less time.

  12. Hartmann Testing of X-Ray Telescopes

    Science.gov (United States)

    Saha, Timo T.; Biskasch, Michael; Zhang, William W.

    2013-01-01

    Hartmann testing of x-ray telescopes is a simple test method to retrieve and analyze alignment errors and low-order circumferential errors of x-ray telescopes and their components. A narrow slit is scanned along the circumference of the telescope in front of the mirror and the centroids of the images are calculated. From the centroid data, alignment errors, radius variation errors, and cone-angle variation errors can be calculated. Mean cone angle, mean radial height (average radius), and the focal length of the telescope can also be estimated if the centroid data is measured at multiple focal plane locations. In this paper we present the basic equations that are used in the analysis process. These equations can be applied to full circumference or segmented x-ray telescopes. We use the Optical Surface Analysis Code (OSAC) to model a segmented x-ray telescope and show that the derived equations and accompanying analysis retrieves the alignment errors and low order circumferential errors accurately.

  13. Maximum likelihood positioning for gamma-ray imaging detectors with depth of interaction measurement

    International Nuclear Information System (INIS)

    Lerche, Ch.W.; Ros, A.; Monzo, J.M.; Aliaga, R.J.; Ferrando, N.; Martinez, J.D.; Herrero, V.; Esteve, R.; Gadea, R.; Colom, R.J.; Toledo, J.; Mateo, F.; Sebastia, A.; Sanchez, F.; Benlloch, J.M.

    2009-01-01

    The center of gravity algorithm leads to strong artifacts for gamma-ray imaging detectors that are based on monolithic scintillation crystals and position sensitive photo-detectors. This is a consequence of using the centroids as position estimates. The fact that charge division circuits can also be used to compute the standard deviation of the scintillation light distribution opens a way out of this drawback. We studied the feasibility of maximum likelihood estimation for computing the true gamma-ray photo-conversion position from the centroids and the standard deviation of the light distribution. The method was evaluated on a test detector that consists of the position sensitive photomultiplier tube H8500 and a monolithic LSO crystal (42mmx42mmx10mm). Spatial resolution was measured for the centroids and the maximum likelihood estimates. The results suggest that the maximum likelihood positioning is feasible and partially removes the strong artifacts of the center of gravity algorithm.
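
    For a one-dimensional slice of the light distribution, the two quantities used above — the centroid (center of gravity) and the standard deviation of the light distribution — follow directly from the per-anode charges; the charge values and anode pitch below are invented placeholders:

```python
# Sketch: centroid (center of gravity) and standard deviation of a 1-D light
# distribution, computed from per-anode charge signals (placeholder values).
import numpy as np

positions = np.arange(8) * 6.0        # anode positions in mm (placeholder pitch)
charges = np.array([1.0, 3.0, 9.0, 20.0, 18.0, 8.0, 2.5, 0.8])

total = charges.sum()
centroid = np.sum(positions * charges) / total
variance = np.sum(charges * (positions - centroid) ** 2) / total

print(f"centroid: {centroid:.2f} mm, std dev: {np.sqrt(variance):.2f} mm")
```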

  14. Maximum likelihood positioning for gamma-ray imaging detectors with depth of interaction measurement

    Energy Technology Data Exchange (ETDEWEB)

    Lerche, Ch.W. [Grupo de Sistemas Digitales, ITACA, Universidad Politecnica de Valencia, 46022 Valencia (Spain)], E-mail: lerche@ific.uv.es; Ros, A. [Grupo de Fisica Medica Nuclear, IFIC, Universidad de Valencia-Consejo Superior de Investigaciones Cientificas, 46980 Paterna (Spain); Monzo, J.M.; Aliaga, R.J.; Ferrando, N.; Martinez, J.D.; Herrero, V.; Esteve, R.; Gadea, R.; Colom, R.J.; Toledo, J.; Mateo, F.; Sebastia, A. [Grupo de Sistemas Digitales, ITACA, Universidad Politecnica de Valencia, 46022 Valencia (Spain); Sanchez, F.; Benlloch, J.M. [Grupo de Fisica Medica Nuclear, IFIC, Universidad de Valencia-Consejo Superior de Investigaciones Cientificas, 46980 Paterna (Spain)

    2009-06-01

    The center of gravity algorithm leads to strong artifacts for gamma-ray imaging detectors that are based on monolithic scintillation crystals and position sensitive photo-detectors. This is a consequence of using the centroids as position estimates. The fact that charge division circuits can also be used to compute the standard deviation of the scintillation light distribution opens a way out of this drawback. We studied the feasibility of maximum likelihood estimation for computing the true gamma-ray photo-conversion position from the centroids and the standard deviation of the light distribution. The method was evaluated on a test detector that consists of the position sensitive photomultiplier tube H8500 and a monolithic LSO crystal (42mmx42mmx10mm). Spatial resolution was measured for the centroids and the maximum likelihood estimates. The results suggest that the maximum likelihood positioning is feasible and partially removes the strong artifacts of the center of gravity algorithm.

  15. A new efficient mixture screening design for optimization of media.

    Science.gov (United States)

    Rispoli, Fred; Shah, Vishal

    2009-01-01

    Screening ingredients for the optimization of media is an important first step to reduce the many potential ingredients down to the vital few components. In this study, we propose a new method of screening for mixture experiments called the centroid screening design. Comparison of the proposed design with Plackett-Burman, fractional factorial, simplex lattice design, and modified mixture design shows that the centroid screening design is the most efficient of all the designs in terms of the small number of experimental runs needed and for detecting high-order interaction among ingredients. (c) 2009 American Institute of Chemical Engineers Biotechnol. Prog., 2009.
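
    For reference, the classical simplex-centroid design (which the proposed screening design relates to; the paper's exact construction is not reproduced here) contains, for every non-empty subset of the q mixture components, the blend with those components in equal proportions and all others at zero; a minimal generator is sketched below.

```python
# Generator for the classical simplex-centroid design: 2**q - 1 runs, one per
# non-empty subset of components, each with equal proportions in that subset.
from itertools import combinations

def simplex_centroid(q):
    """Yield the design points of a q-component simplex-centroid design."""
    for k in range(1, q + 1):
        for subset in combinations(range(q), k):
            point = [0.0] * q
            for j in subset:
                point[j] = 1.0 / k
            yield tuple(point)

for run in simplex_centroid(3):
    print(run)
# pure blends (1,0,0) ..., binary blends (1/2,1/2,0) ..., overall centroid (1/3,1/3,1/3)
```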

  16. k-Means: Random Sampling Procedure

    Indian Academy of Sciences (India)

    k-Means: Random Sampling Procedure. Optimal 1-Mean is approximation of centroid (Inaba et al). S = random sample of size O(1/ε); the centroid of S is a (1+ε)-approximate centroid of P with constant probability.
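
    The statement attributed to Inaba et al can be checked numerically: the centroid of a small random sample is, with good probability, a near-optimal center for the 1-mean cost of the whole point set (the sample size of 20 and the data below are arbitrary choices for illustration).

```python
# Numerical check: the centroid of a small random sample approximates the
# optimal 1-mean center of the full point set (sample size chosen arbitrarily).
import numpy as np

rng = np.random.default_rng(4)
P = rng.normal(size=(10_000, 2))

def one_mean_cost(P, c):
    return np.sum(np.linalg.norm(P - c, axis=1) ** 2)

opt = one_mean_cost(P, P.mean(axis=0))              # optimum: centroid of P
sample = P[rng.choice(len(P), size=20, replace=False)]
approx = one_mean_cost(P, sample.mean(axis=0))      # centroid of the sample

print(f"cost ratio (sample centroid / optimum): {approx / opt:.4f}")
```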

  17. Evidence-informed health policy 1 - synthesis of findings from a multi-method study of organizations that support the use of research evidence.

    Science.gov (United States)

    Lavis, John N; Oxman, Andrew D; Moynihan, Ray; Paulsen, Elizabeth J

    2008-12-17

    Organizations have been established in many countries and internationally to support the use of research evidence by producing clinical practice guidelines, undertaking health technology assessments, and/or directly supporting the use of research evidence in developing health policy on an international, national, and state or provincial level. Learning from these organizations can reduce the need to 'reinvent the wheel' and inform decisions about how best to organize support for such organizations, particularly in low- and middle-income countries (LMICs). We undertook a multi-method study in three phases - a survey, interviews, and case descriptions that drew on site visits - and in each of the second and third phases we focused on a purposive sample of those involved in the previous phase. We used the seven main recommendations that emerged from the advice offered in the interviews to organize much of the synthesis of findings across phases and methods. We used a constant comparative method to identify themes from across phases and methods. Seven recommendations emerged for those involved in establishing or leading organizations that support the use of research evidence in developing health policy: 1) collaborate with other organizations; 2) establish strong links with policymakers and involve stakeholders in the work; 3) be independent and manage conflicts of interest among those involved in the work; 4) build capacity among those working in the organization; 5) use good methods and be transparent in the work; 6) start small, have a clear audience and scope, and address important questions; and 7) be attentive to implementation considerations, even if implementation is not a remit. Four recommendations emerged for the World Health Organization (WHO) and other international organizations and networks: 1) support collaborations among organizations; 2) support local adaptation efforts; 3) mobilize support; and 4) create global public goods. This synthesis of

  18. Establishment and evaluation of sex determination method from 12th thoracic vertebrae based on three-dimensional reconstructed models

    International Nuclear Information System (INIS)

    Guo Ming; Hou Weibin; Jing Yue; Li Youqiong; Zhou Jianying

    2011-01-01

    Objective: To establish a method of using the 12th thoracic vertebra for sex determination of adult Chinese and to evaluate its performance. Methods: The 12th thoracic vertebrae were reconstructed from clinical abdominal CT images. A total of 25 linear measurements on 7 aspects of the vertebrae were taken and 4 ratios were calculated. The items showing significant differences were selected to establish the sex determination equation, and its performance was evaluated. Results: Of the 29 traits in total, 27 were sexually dimorphic (P<0.05), with accuracies of 56.3% - 89.2%; 8 traits had accuracies of 80.0% or more. The trait iVL had the highest accuracy, 89.2%. A function with four variables predicting sex with 90.8% accuracy was derived using the stepwise method of discriminant function analysis: Y = 2.98 x iBDsm + 1.97 x PH + 3.37 x BHp + 3.27 x sVL/BHa - 32.80 (mean of centroids = -7.69). Conclusion: The method of using the selected traits for sex determination of adult Chinese is practicable and has a relatively high accuracy. (authors)
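
    Applying the reported discriminant function amounts to evaluating Y for a set of measurements and comparing it with the sectioning point (the mean of the group centroids, -7.69). The sketch below only restates that arithmetic; the input values are invented placeholders, and the abstract does not state which side of the sectioning point corresponds to which sex, so the groups are left unnamed.

```python
# Evaluate the abstract's discriminant function and compare with the
# sectioning point. Input values are arbitrary placeholders (the units and
# measurement definitions belong to the original study and are not shown).
def discriminant_Y(iBDsm, PH, BHp, sVL_over_BHa):
    return 2.98 * iBDsm + 1.97 * PH + 3.37 * BHp + 3.27 * sVL_over_BHa - 32.80

SECTIONING_POINT = -7.69   # mean of the group centroids reported in the abstract

y = discriminant_Y(iBDsm=2.0, PH=20.0, BHp=20.0, sVL_over_BHa=1.0)  # placeholders
side = "above" if y > SECTIONING_POINT else "below"
print(f"Y = {y:.2f} ({side} the sectioning point)")
```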

  19. School Locations and Traffic Emissions — Environmental (In)Justice Findings Using a New Screening Method

    Directory of Open Access Journals (Sweden)

    Philine Gaffron

    2015-02-01

    Full Text Available It has been shown that the location of schools near heavily trafficked roads can have detrimental effects on the health of children attending those schools. It is therefore desirable to screen both existing school locations and potential new school sites to assess either the need for remedial measures or suitability for the intended use. Current screening tools and public guidance on school siting are either too coarse in their spatial resolution for assessing individual sites or are highly resource intensive in their execution (e.g., through dispersion modeling. We propose a new method to help bridge the gap between these two approaches. Using this method, we also examine the public K-12 schools in the Sacramento Area Council of Governments Region, California (USA from an environmental justice perspective. We find that PM2.5 emissions from road traffic affecting a school site are significantly positively correlated with the following metrics: percent share of Black, Hispanic and multi-ethnic students, percent share of students eligible for subsidized meals. The emissions metric correlates negatively with the schools’ Academic Performance Index, the share of White students and average parental education levels. Our PM2.5 metric also correlates with the traffic related, census tract level screening indicators from the California Communities Environmental Health Screening Tool and the tool’s tract level rate of asthma related emergency department visits.

  20. THE USE OF THE PATENT ANALYSIS METHOD FOR FINDING ANALOGUES AND PROTOTYPES OF RECEIVED TECHNICAL SOLUTIONS

    Directory of Open Access Journals (Sweden)

    Irina Petrova

    2016-03-01

    Full Text Available The research deals with the issue of patent analysis efficiency, which is a necessary stage of searching for analogues and prototypes to obtain technical solutions. The article presents the results of analyzing the existing automation systems for finding the necessary information in patent databases and identifies their advantages and disadvantages. It gives a description of the “Intellect” system, which is an example of software systems supporting the conceptual design stage. Materials and Methods: The article presents some of the possible ways to organize the patent-analogue search process and specific features of searching for analogues and prototypes for the generated parametric structure scheme of the technical solution, which is the result of the synthesis of a new information-measuring and control system element in the “Intellect” system. A description of the proposed search query forming method is given. The article gives the structure of the patent passport, which must be stored in a database to organize the process of searching for analogues and prototypes. A description is given of algorithms for automatically adding a patent to the database, recalculating the weights when a patent is added by experts, and identifying the use of different physical and technical effects in a patent. Results: The final part of the article contains an example of the results of testing the developed subsystem implementing the proposed method. According to the test results it is concluded that the selected software and algorithmic solutions are effective.

  1. Performance Analysis of Entropy Methods on K Means in Clustering Process

    Science.gov (United States)

    Dicky Syahputra Lubis, Mhd.; Mawengkang, Herman; Suwilo, Saib

    2017-12-01

    K Means is a non-hierarchical data clustering method that attempts to partition existing data into one or more clusters/groups. This method partitions the data into clusters/groups so that data that have the same characteristics are grouped into the same cluster and data that have different characteristics are grouped into other groups. The purpose of this data clustering is to minimize the objective function set in the clustering process, which generally attempts to minimize variation within a cluster and maximize the variation between clusters. However, the main disadvantage of this method is that the number k is often not known beforehand. Furthermore, randomly chosen starting points may cause two points that lie close to each other to be selected as two centroids. Therefore, the entropy method is used to determine the starting points in K Means; this method can be used to determine weights and make a decision from a set of alternatives. Entropy is able to investigate the degree of discrimination among a multitude of data sets. Under the entropy criterion, the attributes with the highest value variation receive the highest weight. The entropy method can thus help the K Means process in determining the starting points, which are usually chosen at random. In this way, clustering with K Means converges more quickly with the help of the entropy method, and the iteration process is faster than the standard K Means process. Using the postoperative patient dataset from the UCI Machine Learning Repository, with only 12 records as a worked example of the calculations, the entropy method reaches the desired end result in only 2 iterations.
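
    The abstract does not spell out the entropy formulas it uses, so the sketch below shows one generic entropy-weighting scheme: attributes whose values vary more across records receive higher weights, and the weighted scores are then used to pick starting centroids for K Means. All of this is an assumption for illustration, not the authors' exact procedure.

```python
# Generic entropy-weight sketch: attributes with more variation get higher
# weights; weighted scores then pick K Means starting centroids.
# Assumption for illustration only, not the authors' exact procedure.
import numpy as np

rng = np.random.default_rng(5)
X = rng.random((12, 4))                                  # 12 records x 4 attributes

P = X / X.sum(axis=0)                                    # column-wise proportions
k = 1.0 / np.log(X.shape[0])
entropy = -k * np.sum(P * np.log(P + 1e-12), axis=0)     # entropy per attribute
weights = (1 - entropy) / np.sum(1 - entropy)            # more variation -> more weight

print("attribute weights:", np.round(weights, 3))

# one plausible use: take the records with the largest weighted scores as seeds
scores = X @ weights
initial_centroids = X[np.argsort(scores)[-2:]]           # k = 2 starting centroids
print("initial centroids:\n", initial_centroids)
```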

  2. Precise Composition Tailoring of Mixed-Cation Hybrid Perovskites for Efficient Solar Cells by Mixture Design Methods.

    Science.gov (United States)

    Li, Liang; Liu, Na; Xu, Ziqi; Chen, Qi; Wang, Xindong; Zhou, Huanping

    2017-09-26

    Mixed anion/cation perovskite absorbers have recently been implemented to construct highly efficient single-junction solar cells and tandem devices. However, considerable efforts are still required to map the composition-property relationship of the mixed perovskite absorber, which is essential to facilitate device design. Here we report the intensive exploration of mixed-cation perovskites in their compositional space with the assistance of rational mixture design (MD) methods. In contrast to the previous linear search of cation ratios, it is found that by employing the MD methods, the ternary composition can be tuned simultaneously following simplex-lattice or simplex-centroid designs, which enable a significantly reduced experiment/sampling size to unveil the composition-property relationship for mixed perovskite materials and to boost the resultant device efficiency. We illustrate the composition-property relationship of the mixed perovskites in multiple dimensions and achieve an optimized power conversion efficiency of 20.99% in the corresponding device. Moreover, the method is demonstrated to be feasible for helping to adjust the bandgap through rational materials design, and it can be further extended to other materials systems, not limited to polycrystalline perovskite films for photovoltaic applications.

  3. Generalized Centroid Estimators in Bioinformatics

    Science.gov (United States)

    Hamada, Michiaki; Kiryu, Hisanori; Iwasaki, Wataru; Asai, Kiyoshi

    2011-01-01

    In a number of estimation problems in bioinformatics, accuracy measures of the target problem are usually given, and it is important to design estimators that are suitable to those accuracy measures. However, there is often a discrepancy between an employed estimator and a given accuracy measure of the problem. In this study, we introduce a general class of efficient estimators for estimation problems on high-dimensional binary spaces, which represent many fundamental problems in bioinformatics. Theoretical analysis reveals that the proposed estimators generally fit commonly used accuracy measures (e.g. sensitivity, PPV, MCC and F-score), can be computed efficiently in many cases, and cover a wide range of problems in bioinformatics from the viewpoint of the principle of maximum expected accuracy (MEA). It is also shown that some important algorithms in bioinformatics can be interpreted in a unified manner. Not only does the concept presented in this paper give a useful framework for designing MEA-based estimators, but it is also highly extendable and sheds new light on many problems in bioinformatics. PMID:21365017

  4. Functional Imaging of Autonomic Regulation: Methods and Key Findings

    Directory of Open Access Journals (Sweden)

    Paul M Macey

    2016-01-01

    Full Text Available Central nervous system processing of autonomic function involves a network of regions throughout the brain which can be visualized and measured with neuroimaging techniques, notably functional magnetic resonance imaging (fMRI). The development of fMRI procedures has both confirmed and extended earlier findings from animal models, and human stroke and lesion studies. Assessments with fMRI can elucidate interactions between different central sites in regulating normal autonomic patterning, and demonstrate how disturbed systems can interact to produce aberrant regulation during autonomic challenges. Understanding autonomic dysfunction in various illnesses reveals mechanisms that potentially lead to interventions in the impairments. The objectives here are to: 1) describe the fMRI neuroimaging methodology for assessment of autonomic neural control, 2) outline the widespread, lateralized distribution of function in autonomic sites in the normal brain, which includes structures from the neocortex through the medulla and cerebellum, 3) illustrate the importance of the time course of neural changes when coordinating responses, and how those patterns are impacted in conditions of sleep-disordered breathing, and 4) highlight opportunities for future research studies with emerging methodologies. Methodological considerations specific to autonomic testing include timing of challenges relative to the underlying fMRI signal, spatial resolution sufficient to identify autonomic brainstem nuclei, blood pressure and blood oxygenation influences on the fMRI signal, and the sustained timing, often measured in minutes, of challenge periods and recovery. Key findings include the lateralized nature of autonomic organization, which is reminiscent of asymmetric motor, sensory and language pathways. Testing brain function during autonomic challenges demonstrates closely integrated timing of responses in connected brain areas during autonomic challenges, and the involvement with

  5. Methods and findings of the SNR study

    International Nuclear Information System (INIS)

    Koeberlein, K.; Schaefer, H.; Spindler, H.

    1983-01-01

    A fact-finding committee of the German Federal Parliament in July 1980 recommended performing a 'risk-oriented study' of the SNR-300, the German 300 MW fast breeder prototype reactor under construction in Kalkar. The main aim of this study was to allow a comparative safety evaluation between the SNR-300 and a modern PWR, and thus to prepare a basis for a political decision on the SNR-300. Methods and main results of the study are presented in this paper. In the first step of the risk analysis, six groups of accidents were identified which may initiate core destruction. These groups comprise all conceivable courses potentially leading to core destruction. The expected frequency of each group was calculated by reliability analyses. In the accident analysis, potential failure modes of the reactor tank were investigated. Core destruction may be accompanied by the release of significant amounts of mechanical energy. The primary coolant system of the SNR-300 is designed to withstand mechanical energy releases up to 370 MJ. Design features make it possible to cool the molten core inside the reactor tank. (orig./RW) [de

  6. Neuroimaging findings in movement disorders

    International Nuclear Information System (INIS)

    Topalov, N.

    2015-01-01

    Neuroimaging methods are of great importance for the differential diagnostic delimitation of movement disorders associated with structural damage (neoplasms, ischemic lesions, neuroinfections) from those associated with specific pathophysiological mechanisms (dysmetabolic disorders, neurotransmitter disorders). Learning objective: presentation of typical imaging findings contributing to nosological differentiation in groups of movement disorders with similar clinical signs. This presentation discusses neuroimaging findings in Parkinson's disease, atypical parkinsonian syndromes (multiple system atrophy, progressive supranuclear palsy, corticobasal degeneration), parkinsonism in genetically mediated diseases (Wilson's disease, pantothenate kinase-associated neurodegeneration - PKAN), vascular parkinsonism, and hyperkinetic movement disorders (palatal tremor, Huntington's chorea, symptomatic chorea in ischemic stroke and diabetes, rubral tremor, ballismus, hemifacial spasm). Contemporary neuroimaging methods support diagnostic and differential diagnostic precision for a number of hypo- and hyperkinetic movement disorders, which is essential in clinical neurological practice

  7. Publication Voting Power (PVP): method of finding Evidence-Support

    African Journals Online (AJOL)

    Background: Extracting the best evidence that supports a procedure is a difficult, time-consuming task that requires expert statistical knowledge. A simpler and more straightforward way for busy clinicians to weight evidence is needed. Methods: The publications about the procedure under question are lined in an ascending ...

  8. Quality of life in child and adolescent illness: concepts, methods, and findings

    National Research Council Canada - National Science Library

    Koot, Hans M; Wallander, Jan Lance

    2001-01-01

    ... and Findings. Edited by Hans M. Koot, Erasmus University Rotterdam, The Netherlands, and Jan L. Wallander, University of Alabama, Birmingham, USA. LONDON...

  9. Findings, theories and methods in the study of children's national identifications and national attitudes

    NARCIS (Netherlands)

    Barrett, M.; Oppenheimer, L.

    2011-01-01

    This paper reviews some of the relevant background findings against which the empirical studies reported in this special issue were designed. Particular attention is given to previous findings on the development of children’s national knowledge, national attitudes and national identifications. The

  10. CT findings of colonic diverticulitis

    International Nuclear Information System (INIS)

    Sasaki, Shigeru; Ohba, Satoru; Mizutani, Masaru

    1998-01-01

    Although colonic diverticulitis itself is not an indication for operation, some cases are mistakenly operated on under a diagnosis of acute appendicitis. We retrospectively evaluated the CT findings of colonic diverticulitis in 19 cases and of asymptomatic colonic diverticula in 15 cases. The diagnosis was confirmed by barium enema and operation. CT is a complementary examination method that can delineate the extent of colonic wall thickening and of inflammatory changes around the colon. We also believe that the CT findings of colonic diverticulitis are useful for differentiating it from appendicitis. (author)

  11. Autonomous celestial navigation based on Earth ultraviolet radiance and fast gradient statistic feature extraction

    Science.gov (United States)

    Lu, Shan; Zhang, Hanmo

    2016-01-01

    To meet the requirement of autonomous orbit determination, this paper proposes a fast curve-fitting method based on earth ultraviolet features to obtain an accurate earth vector direction and thus achieve high-precision autonomous navigation. First, exploiting the stability of earth ultraviolet radiance and using atmospheric radiation transmission modelling software, the paper simulates the earth ultraviolet radiation model at different times and chooses a suitable observation band. Then a fast, improved edge-extraction method combining the Sobel operator and local binary patterns (LBP) is used, which both suppresses noise efficiently and extracts the earth ultraviolet limb features accurately. The earth's centroid location in each simulated image is estimated via least-squares fitting using part of the limb edge. Taking advantage of the estimated earth vector direction and earth distance, an Extended Kalman Filter (EKF) is finally applied to realize autonomous navigation. Experimental results indicate the proposed method achieves sub-pixel earth centroid location estimation and greatly enhances autonomous celestial navigation precision.
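
    As a hedged illustration of the least-squares limb fit described above (the paper's exact weighting and camera model are not given, so the names and numbers below are hypothetical), an algebraic circle fit recovers a sub-pixel centroid estimate from extracted limb-edge pixels:

        import numpy as np

        def fit_limb_centroid(x, y):
            # Algebraic (Kasa-style) least-squares circle fit: solve
            #   x^2 + y^2 = 2*a*x + 2*b*y + c
            # for (a, b, c); (a, b) is the centroid and sqrt(c + a^2 + b^2) the limb radius.
            A = np.column_stack([2.0 * x, 2.0 * y, np.ones_like(x)])
            (a, b, c), *_ = np.linalg.lstsq(A, x**2 + y**2, rcond=None)
            return (a, b), np.sqrt(c + a**2 + b**2)

        # Synthetic partial limb arc with pixel noise (only part of the limb is visible).
        rng = np.random.default_rng(0)
        t = np.linspace(0.2, 1.4, 200)
        x = 512.0 + 300.0 * np.cos(t) + rng.normal(0, 0.5, t.size)
        y = 480.0 + 300.0 * np.sin(t) + rng.normal(0, 0.5, t.size)
        center, radius = fit_limb_centroid(x, y)  # centroid estimate in image coordinates

    The fitted centre, combined with the camera geometry and the estimated earth distance, would then supply the earth vector direction fed to the EKF.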

  12. When are night shifts effective for nursing student clinical learning? Findings from a mixed-method study design.

    Science.gov (United States)

    Palese, Alvisa; Basso, Felix; Del Negro, Elena; Achil, Illarj; Ferraresi, Annamaria; Morandini, Marzia; Moreale, Renzo; Mansutti, Irene

    2017-05-01

    Some nursing programmes offer night shifts for students while others do not, mainly due to the lack of evidence regarding their effectiveness for clinical learning. The principal aims of the study were to describe nursing students' perceptions and to explore conditions influencing the effectiveness of night shifts for learning. An explanatory mixed-method study design composed of a cross-sectional study (primary method, first phase) followed by a descriptive phenomenological study design (secondary method, second phase) was used in 2015. Two bachelor of nursing degree programmes located in Northern Italy, three years in length and requiring night shifts for students starting in the second semester of the 1st year, were involved. First phase: all nursing students ending the last clinical placement of the academic year they attended were eligible; 352 out of the 370 participated. Second phase: a purposeful sample of nine students among those included in the first phase who had attended the highest number of night shifts were interviewed. First phase: a questionnaire composed of closed and open-ended questions was adopted; data were analyzed through descriptive statistical methods. Second phase: open-ended, face-to-face, audio-recorded interviews were adopted and data were analyzed through content analysis. Findings from the quantitative phase showed that students who attended night shifts reported satisfaction (44.7%) less frequently than those who attended only day shifts (55.9%). They also reported boredom (23.5%) significantly more often compared to day shift students (p=0001). Understanding of the nursing role and learning competence were significantly inferior among night shift students as compared to day shift students, while the perception of wasting time was significantly higher among night shift students compared to their counterparts. Night shift students performed nursing rounds (288; 98.2%), non-nursing tasks (247; 84.3%) and/or less often managed clinical problems

  13. Leucosome distribution in migmatitic paragneisses and orthogneisses: A record of self-organized melt migration and entrapment in a heterogeneous partially-molten crust

    Science.gov (United States)

    Yakymchuk, C.; Brown, M.; Ivanic, T. J.; Korhonen, F. J.

    2013-09-01

    The depth to the bottom of the magnetic sources (DBMS) has been estimated from aeromagnetic data of Central India. The conventional centroid method of DBMS estimation assumes a random, uniform, uncorrelated distribution of sources; to overcome this limitation, a modified centroid method based on scaling distribution has been proposed. Shallower values of the DBMS are found for the south-western region. The DBMS values are as shallow as 22 km in the south-western Deccan-trap-covered regions and as deep as 43 km in the Chhattisgarh Basin. In most places the DBMS is much shallower than the Moho depth found earlier from seismic studies and may represent thermal/compositional/petrological boundaries. The large variation in the DBMS indicates the complex nature of the Indian crust.
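
    For orientation, the conventional centroid method referred to above is usually written in terms of the radially averaged power spectrum Phi(|k|) of the magnetic anomaly; conventions vary with the wavenumber units, and the record's modified scaling-distribution variant alters the low-wavenumber treatment, so the standard form is quoted here only as background:

        \ln\!\left[\Phi(|k|)^{1/2}/|k|\right] \approx A - |k|\,Z_0 \quad (\text{low } |k|), \qquad
        \ln\!\left[\Phi(|k|)^{1/2}\right] \approx B - |k|\,Z_t \quad (\text{higher } |k|), \qquad
        Z_b = 2 Z_0 - Z_t,

    where Z_0 is the centroid depth, Z_t the depth to the top of the magnetic sources, and Z_b the resulting DBMS.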

  14. Novel method of finding extreme edges in a convex set of N-dimension vectors

    Science.gov (United States)

    Hu, Chia-Lun J.

    2001-11-01

    As we published in the last few years, for a binary neural network pattern recognition system to learn a given mapping {U_m → V_m, m = 1 to M}, where U_m is an N-dimensional analog (pattern) vector and V_m is a P-bit binary (classification) vector, the if-and-only-if (IFF) condition that this network can learn the mapping is that each i-set in {Y_mi, m = 1 to M} (where Y_mi ≡ V_mi·U_m and V_mi = +1 or -1 is the i-th bit of V_m; i = 1 to P, so there are P such sets) is POSITIVELY, LINEARLY, INDEPENDENT, or PLI. We have shown that this PLI condition is MORE GENERAL than the convexity condition applied to a set of N-vectors. In the design of older learning machines, we know that if a set of N-dimensional analog vectors forms a convex set, and if the machine can learn the boundary vectors (or extreme edges) of this set, then it can definitely learn the inside vectors contained in this POLYHEDRON CONE. This paper reports a new method and new algorithm to find the boundary vectors of a convex set of N-D analog vectors.

  15. Are judgments a form of data clustering? Reexamining contrast effects with the k-means algorithm.

    Science.gov (United States)

    Boillaud, Eric; Molina, Guylaine

    2015-04-01

    A number of theories have been proposed to explain in precise mathematical terms how statistical parameters and sequential properties of stimulus distributions affect category ratings. Various contextual factors such as the mean, the midrange, and the median of the stimuli; the stimulus range; the percentile rank of each stimulus; and the order of appearance have been assumed to influence judgmental contrast. A data clustering reinterpretation of judgmental relativity is offered wherein the influence of the initial choice of centroids on judgmental contrast involves 2 combined frequency and consistency tendencies. Accounts of the k-means algorithm are provided, showing good agreement with effects observed on multiple distribution shapes and with a variety of interaction effects relating to the number of stimuli, the number of response categories, and the method of skewing. Experiment 1 demonstrates that centroid initialization accounts for contrast effects obtained with stretched distributions. Experiment 2 demonstrates that the iterative convergence inherent to the k-means algorithm accounts for the contrast reduction observed across repeated blocks of trials. The concept of within-cluster variance minimization is discussed, as is the applicability of a backward k-means calculation method for inferring, from empirical data, the values of the centroids that would serve as a representation of the judgmental context. (c) 2015 APA, all rights reserved.

  16. Asymmetry in some common assignment algorithms: the dispersion factor solution

    OpenAIRE

    T de la Barra; B Pérez

    1986-01-01

    Many common assignment algorithms are based on Dial's original design to determine the paths that trip makers will follow from a given origin to destination centroids. The purpose of this paper is to show that the rules that have to be applied result in two unwanted properties. The first is that trips assigned from an origin centroid i to a destination j can be dramatically different from those resulting from centroid j to centroid i, even if the number of trips is the same and the network is ...

  17. Is the Number of Different MRI Findings More Strongly Associated with Low Back Pain Than Single MRI Findings?

    DEFF Research Database (Denmark)

    Hancock, Mark J; Kjaer, Per; Kent, Peter

    2017-01-01

    STUDY DESIGN: A cross-sectional and longitudinal analysis using 2 different data sets. OBJECTIVE: To investigate if the number of different MRI findings present is more strongly associated with low back pain (LBP) than single MRI findings. SUMMARY OF BACKGROUND DATA: Most previous studies have investigated the associations between single MRI findings and back pain rather than investigating combinations of MRI findings. If different individuals have different pathoanatomic sources contributing to their pain, then combinations of MRI findings may be more strongly associated with LBP. METHODS: The outcome for the cross-sectional study was presence of LBP during the last year. The outcome for the longitudinal study was days to recurrence of activity-limiting LBP. In both data sets we created an aggregate score of the number of different MRI findings present in each individual and assessed...

  18. Skeletonized wave-equation inversion for Q

    KAUST Repository

    Dutta, Gaurav

    2016-09-06

    A wave-equation gradient optimization method is presented that inverts for the subsurface Q distribution by minimizing a skeletonized misfit function ε. Here, ε is the sum of the squared differences between the observed and the predicted peak/centroid frequency shifts of the early-arrivals. The gradient is computed by migrating the observed traces weighted by the frequency-shift residuals. The background Q model is perturbed until the predicted and the observed traces have the same peak frequencies or the same centroid frequencies. Numerical tests show that an improved accuracy of the inverted Q model by wave-equation Q tomography (WQ) leads to a noticeable improvement in the migration image quality.
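
    A minimal sketch of the centroid-frequency skeleton entering the misfit above (not the authors' implementation; the windowing of the early arrivals and the choice between amplitude and power weighting are omitted):

        import numpy as np

        def centroid_frequency(trace, dt):
            # Centroid frequency of a windowed trace: sum(f * A(f)) / sum(A(f)),
            # with A(f) the single-sided amplitude spectrum.
            spec = np.abs(np.fft.rfft(trace))
            freqs = np.fft.rfftfreq(trace.size, d=dt)
            return np.sum(freqs * spec) / np.sum(spec)

        # Schematic skeletonized residual for one early-arrival window:
        #   residual = centroid_frequency(observed, dt) - centroid_frequency(predicted, dt)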

  19. Skeletonized wave-equation inversion for Q

    KAUST Repository

    Dutta, Gaurav; Schuster, Gerard T.

    2016-01-01

    A wave-equation gradient optimization method is presented that inverts for the subsurface Q distribution by minimizing a skeletonized misfit function ε. Here, ε is the sum of the squared differences between the observed and the predicted peak/centroid frequency shifts of the early-arrivals. The gradient is computed by migrating the observed traces weighted by the frequency-shift residuals. The background Q model is perturbed until the predicted and the observed traces have the same peak frequencies or the same centroid frequencies. Numerical tests show that an improved accuracy of the inverted Q model by wave-equation Q tomography (WQ) leads to a noticeable improvement in the migration image quality.

  20. Adaptability and stability of transgenic soybean lines and cultivars in ...

    African Journals Online (AJOL)

    Subsequently, genotypic adaptability and stability were evaluated by the methods of Eberhart and Russell (1966), Lin and Binns modified by Carneiro, Annicchiarico, and the Centroid method. All methods showed partial agreement in classifying the best genotypes and allowed the identification of the transgenic lines L1 and L4, and ...

  1. The Findings from the OECD/NEA/CSNI UMS (Uncertainty Method Study)

    International Nuclear Information System (INIS)

    D'Auria, F.; Glaeser, H.

    2013-01-01

    Within licensing procedures there is an incentive to replace the conservative requirements for code application with a 'best estimate' concept supplemented by an uncertainty analysis to account for predictive uncertainties of code results. Methods have been developed to quantify these uncertainties. The Uncertainty Methods Study (UMS) Group, following a mandate from the CSNI (Committee on the Safety of Nuclear Installations) of the OECD/NEA (Organization for Economic Cooperation and Development / Nuclear Energy Agency), has compared five methods for calculating the uncertainty in the predictions of advanced 'best estimate' thermal-hydraulic codes. Most of the methods identify and combine input uncertainties. The major differences between the predictions of the methods came from the choice of uncertain parameters and the quantification of the input uncertainties, i.e. the width of the uncertainty ranges. Therefore, suitable experimental and analytical information has to be selected to specify these uncertainty ranges or distributions. After the closure of the UMS and after the report was issued, comparison calculations of experiment LSTF-SB-CL-18 were performed by the University of Pisa using different versions of the RELAP5 code. It turned out that the version used by two of the participants calculated a 170 K higher peak clad temperature compared with other versions using the same input deck. This may contribute to the differences in the upper limit of the uncertainty ranges. A 'bifurcation' analysis performed by the same research group provides another way of interpreting the high temperature peak calculated by two of the participants. (authors)

  2. Evidence-informed health policy 1 – Synthesis of findings from a multi-method study of organizations that support the use of research evidence

    Directory of Open Access Journals (Sweden)

    Moynihan Ray

    2008-12-01

    Background: Organizations have been established in many countries and internationally to support the use of research evidence by producing clinical practice guidelines, undertaking health technology assessments, and/or directly supporting the use of research evidence in developing health policy on an international, national, and state or provincial level. Learning from these organizations can reduce the need to 'reinvent the wheel' and inform decisions about how best to organize support for such organizations, particularly in low- and middle-income countries (LMICs). Methods: We undertook a multi-method study in three phases - a survey, interviews, and case descriptions that drew on site visits - and in each of the second and third phases we focused on a purposive sample of those involved in the previous phase. We used the seven main recommendations that emerged from the advice offered in the interviews to organize much of the synthesis of findings across phases and methods. We used a constant comparative method to identify themes from across phases and methods. Results: Seven recommendations emerged for those involved in establishing or leading organizations that support the use of research evidence in developing health policy: (1) collaborate with other organizations; (2) establish strong links with policymakers and involve stakeholders in the work; (3) be independent and manage conflicts of interest among those involved in the work; (4) build capacity among those working in the organization; (5) use good methods and be transparent in the work; (6) start small, have a clear audience and scope, and address important questions; and (7) be attentive to implementation considerations, even if implementation is not a remit. Four recommendations emerged for the World Health Organization (WHO) and other international organizations and networks: (1) support collaborations among organizations; (2) support local adaptation efforts; (3) mobilize support; and (4) create

  3. A probabilistic sampling method (PSM) for estimating geographic distance to health services when only the region of residence is known

    Directory of Open Access Journals (Sweden)

    Peek-Asa Corinne

    2011-01-01

    Background: The need to estimate the distance from an individual to a service provider is common in public health research. However, estimated distances are often imprecise and, we suspect, biased due to a lack of specific residential location data. In many cases, to protect subject confidentiality, data sets contain only a ZIP Code or a county. Results: This paper describes an algorithm, known as "the probabilistic sampling method" (PSM), which was used to create a distribution of estimated distances to a health facility for a person whose region of residence was known, but for which demographic details and centroids were known for smaller areas within the region. From this distribution, the median distance is the most likely distance to the facility. The algorithm, using Monte Carlo sampling methods, drew a probabilistic sample of all the smaller areas (Census blocks) within each participant's reported region (ZIP Code), weighting these areas by the number of residents in the same age group as the participant. To test the PSM, we used data from a large cross-sectional study that screened women at a clinic for intimate partner violence (IPV). We had data on each woman's age and ZIP Code, but no precise residential address. We used the PSM to select a sample of census blocks, then calculated network distances from each census block's centroid to the closest IPV facility, resulting in a distribution of distances from these locations to the geocoded locations of known IPV services. We selected the median distance as the most likely distance traveled and computed confidence intervals that describe the shortest and longest distance within which any given percent of the distance estimates lie. We compared our results to those obtained using two other geocoding approaches. We show that one method overestimated the most likely distance and the other underestimated it. Neither of the alternative methods produced confidence intervals for the distance
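
    A hedged sketch of the PSM's core loop, assuming the census-block centroids, age-group population weights and facility coordinates are already in hand (straight-line distance stands in for the network distance used in the paper, and all names are hypothetical):

        import numpy as np

        def psm_distance_estimate(block_centroids, block_weights, facility_xy,
                                  n_draws=1000, seed=None):
            # Draw census blocks within the participant's ZIP Code with probability
            # proportional to the number of residents in the participant's age group,
            # compute the distance from each sampled block centroid to the nearest
            # facility, and summarize the distribution (median and a 90% interval).
            rng = np.random.default_rng(seed)
            w = np.asarray(block_weights, dtype=float)
            idx = rng.choice(len(block_centroids), size=n_draws, p=w / w.sum())
            sampled = np.asarray(block_centroids, dtype=float)[idx]            # (n_draws, 2)
            d = np.linalg.norm(sampled[:, None, :] -
                               np.asarray(facility_xy, dtype=float)[None, :, :], axis=2)
            nearest = d.min(axis=1)                                            # closest facility
            return np.median(nearest), np.percentile(nearest, [5, 95])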

  4. A probabilistic sampling method (PSM) for estimating geographic distance to health services when only the region of residence is known

    Science.gov (United States)

    2011-01-01

    Background The need to estimate the distance from an individual to a service provider is common in public health research. However, estimated distances are often imprecise and, we suspect, biased due to a lack of specific residential location data. In many cases, to protect subject confidentiality, data sets contain only a ZIP Code or a county. Results This paper describes an algorithm, known as "the probabilistic sampling method" (PSM), which was used to create a distribution of estimated distances to a health facility for a person whose region of residence was known, but for which demographic details and centroids were known for smaller areas within the region. From this distribution, the median distance is the most likely distance to the facility. The algorithm, using Monte Carlo sampling methods, drew a probabilistic sample of all the smaller areas (Census blocks) within each participant's reported region (ZIP Code), weighting these areas by the number of residents in the same age group as the participant. To test the PSM, we used data from a large cross-sectional study that screened women at a clinic for intimate partner violence (IPV). We had data on each woman's age and ZIP Code, but no precise residential address. We used the PSM to select a sample of census blocks, then calculated network distances from each census block's centroid to the closest IPV facility, resulting in a distribution of distances from these locations to the geocoded locations of known IPV services. We selected the median distance as the most likely distance traveled and computed confidence intervals that describe the shortest and longest distance within which any given percent of the distance estimates lie. We compared our results to those obtained using two other geocoding approaches. We show that one method overestimated the most likely distance and the other underestimated it. Neither of the alternative methods produced confidence intervals for the distance estimates. The algorithm

  5. Finding Rising and Falling Words

    NARCIS (Netherlands)

    Tjong Kim Sang, E.

    2016-01-01

    We examine two different methods for finding rising words (among which neologisms) and falling words (among which archaisms) in decades of magazine texts (millions of words) and in years of tweets (billions of words): one based on correlation coefficients of relative frequencies and time, and one
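
    A minimal sketch of the correlation-based variant (the corpus preprocessing and frequency normalization used in the study are omitted; the inputs are hypothetical):

        import numpy as np

        def rank_rising_falling(freq_table, years):
            # freq_table: dict word -> per-year relative frequencies; years: matching year list.
            # Rank words by the Pearson correlation of frequency with time:
            # strongly positive = rising, strongly negative = falling.
            t = np.asarray(years, dtype=float)
            scores = {}
            for word, f in freq_table.items():
                f = np.asarray(f, dtype=float)
                if f.std() == 0:          # a constant frequency carries no trend signal
                    continue
                scores[word] = np.corrcoef(t, f)[0, 1]
            return sorted(scores, key=scores.get, reverse=True)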

  6. The radiographic findings of adult congenital megacolon disease

    International Nuclear Information System (INIS)

    Deng Xiaotao; Yu Jingying; Zhang Yongchun

    2000-01-01

    Objective: To describe the radiographic findings of adult megacolon. Methods: Barium enema examination was performed in 6 patients with megacolon proved at operation. Results: The principal radiographic findings were a markedly dilated colon (largest diameter 22 cm) and a narrowed rectum 3-7 cm in length, with a cone- or funnel-shaped transitional segment about 2-6 cm long. Conclusion: Barium enema examination is the most reliable and simple method for diagnosing adult congenital megacolon

  7. Double Gamow-Teller Transitions and its Relation to Neutrinoless ββ Decay

    Science.gov (United States)

    Shimizu, Noritaka; Menéndez, Javier; Yako, Kentaro

    2018-04-01

    We study the double Gamow-Teller (DGT) strength distribution of 48Ca with state-of-the-art large-scale nuclear shell model calculations. Our analysis shows that the centroid energy of the DGT giant resonance depends mostly on the isovector pairing interaction, while the resonance width is more sensitive to isoscalar pairing. Pairing correlations are also key in neutrinoless ββ (0νββ) decay. We find a simple relation between the centroid energy of the 48Ca DGT giant resonance and the 0νββ decay nuclear matrix element. More generally, we observe a very good linear correlation between the DGT transition to the ground state of the final nucleus and the 0νββ decay matrix element. The correlation, which originates in the dominant short-range character of both transitions, extends to heavier systems including several ββ emitters and also holds in energy-density functional results. Our findings suggest that DGT experiments can be a very valuable tool to obtain information on the value of 0νββ decay nuclear matrix elements.

  8. Electroencephalographic findings in panic disorder

    Directory of Open Access Journals (Sweden)

    Marcele Regine de Carvalho

    2013-12-01

    Some studies have reported the importance of electroencephalography (EEG) as a method for investigating abnormal parameters in psychiatric disorders. Different findings in time- and frequency-domain analysis with regard to central nervous system arousal during acute panic states have already been obtained. This study aimed to systematically review the EEG findings in panic disorder (PD), discuss them in light of a currently accepted neuroanatomical hypothesis for this pathology, and identify limitations in the selected studies. The literature search was conducted in the databases PubMed and ISI Web of Knowledge, using the keywords electroencephalography and panic disorder; 16 articles were selected. Despite the inconsistency of EEG findings in PD, the major conclusions about the absolute power of the alpha and beta bands point to a decreased alpha power, while beta power tends to increase. Different asymmetry patterns were found between studies. Coherence studies pointed to a lower degree of inter-hemispheric functional connectivity at the frontal region and intra-hemispheric at the bilateral temporal region. Studies on possible related events showed changes in memory processing in PD patients when exposed to aversive stimuli. It was noticed that most findings reflect the current neurobiological hypothesis of PD, whereby inhibitory deficits of the prefrontal cortex related to the modulation of amygdala activity, and the subsequent activation of subcortical regions, may be responsible for triggering anxiety responses. We approach some important issues that need to be considered in further research, especially the use of different methods for analyzing EEG signals. Keywords: Electroencephalography, panic disorder, neurobiology, brain mapping.

  9. A comparison of three clustering methods for finding subgroups in MRI, SMS or clinical data: SPSS TwoStep Cluster analysis, Latent Gold and SNOB.

    Science.gov (United States)

    Kent, Peter; Jensen, Rikke K; Kongsted, Alice

    2014-10-02

    There are various methodological approaches to identifying clinically important subgroups, and one method is to identify clusters of characteristics that differentiate people in cross-sectional and/or longitudinal data using Cluster Analysis (CA) or Latent Class Analysis (LCA). There is a scarcity of head-to-head comparisons that can inform the choice of which clustering method might be suitable for particular clinical datasets and research questions. Therefore, the aim of this study was to perform a head-to-head comparison of three commonly available methods (SPSS TwoStep CA, Latent Gold LCA and SNOB LCA). The performance of these three methods was compared: (i) quantitatively using the number of subgroups detected, the classification probability of individuals into subgroups, the reproducibility of results, and (ii) qualitatively using subjective judgments about each program's ease of use and interpretability of the presentation of results. We analysed five real datasets of varying complexity in a secondary analysis of data from other research projects. Three datasets contained only MRI findings (n = 2,060 to 20,810 vertebral disc levels), one dataset contained only pain intensity data collected for 52 weeks by text (SMS) messaging (n = 1,121 people), and the last dataset contained a range of clinical variables measured in low back pain patients (n = 543 people). Four artificial datasets (n = 1,000 each) containing subgroups of varying complexity were also analysed, testing the ability of these clustering methods to detect subgroups and correctly classify individuals when subgroup membership was known. The results from the real clinical datasets indicated that the number of subgroups detected varied, the certainty of classifying individuals into those subgroups varied, the findings had perfect reproducibility, some programs were easier to use and the interpretability of the presentation of their findings also varied. The results from the artificial datasets

  10. Radiological findings after gastrectomy

    Energy Technology Data Exchange (ETDEWEB)

    Riedl, P.; Polterauer, P.; Funovics, J.

    1980-06-01

    In 63 patients after total gastrectomy and reconstruction of the small bowel as described by Beal-Longmire, Roux and Tomoda, radiological findings were correlated with clinical symptoms. No correlation could be found between the clinical symptoms of dumping and reflux oesophagitis on the one hand and increased intestinal transit time, increased diameter of intestinal loops and gastro-oesophageal reflux on the other. Enlarged blind loops after termino-lateral oesophago-jejunostomy and insufficient ligations (operation technique of Tomoda) were correlated with a higher incidence of pain. Patients operated on by the methods of Beal-Longmire and Roux showed better results than those operated on by the method of Tomoda.

  11. Spitzer Secondary Eclipse Depths with Multiple Intrapixel Sensitivity Correction Methods Observations of WASP-13b, WASP-15b, WASP-16b, WASP-62b, and HAT-P-22b

    Science.gov (United States)

    Kilpatrick, Brian M.; Lewis, Nikole K.; Kataria, Tiffany; Deming, Drake; Ingalls, James G.; Krick, Jessica E.; Tucker, Gregory S.

    2017-01-01

    We measure the 4.5 μm thermal emission of five transiting hot Jupiters, WASP-13b, WASP-15b, WASP-16b, WASP-62b, and HAT-P-22b using channel 2 of the Infrared Array Camera (IRAC) on the Spitzer Space Telescope. Significant intrapixel sensitivity variations in Spitzer IRAC data require careful correction in order to achieve precision on the order of several hundred parts per million (ppm) for the measurement of exoplanet secondary eclipses. We determine eclipse depths by first correcting the raw data using three independent data reduction methods. The Pixel Gain Map (PMAP), Nearest Neighbors (NNBR), and Pixel Level Decorrelation (PLD) each correct for the intrapixel sensitivity effect in Spitzer photometric time-series observations. The results from each methodology are compared against each other to establish if they reach a statistically equivalent result in every case and to evaluate their ability to minimize uncertainty in the measurement. We find that all three methods produce reliable results. For every planet examined here NNBR and PLD produce results that are in statistical agreement. However, the PMAP method appears to produce results in slight disagreement in cases where the stellar centroid is not kept consistently on the most well characterized area of the detector. We evaluate the ability of each method to reduce the scatter in the residuals as well as in the correlated noise in the corrected data. The NNBR and PLD methods consistently minimize both white and red noise levels and should be considered reliable and consistent. The planets in this study span equilibrium temperatures from 1100 to 2000 K and have brightness temperatures that require either high albedo or efficient recirculation. However, it is possible that other processes such as clouds or disequilibrium chemistry may also be responsible for producing these brightness temperatures.
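
    As a rough, hedged sketch of the pixel-level decorrelation idea only (the published analyses fit the eclipse model jointly with the pixel coefficients and bin the data in time, none of which is reproduced here):

        import numpy as np

        def pld_detrend(flux, pixel_counts):
            # flux: (n_times,) aperture photometry; pixel_counts: (n_times, n_pixels) raw
            # counts of the pixels covering the stellar PSF. Regress the flux on the
            # normalized per-pixel light curves to remove intrapixel sensitivity systematics.
            P = pixel_counts / pixel_counts.sum(axis=1, keepdims=True)
            A = np.column_stack([P, np.ones_like(flux)])
            coeffs, *_ = np.linalg.lstsq(A, flux, rcond=None)
            return flux - A @ coeffs + flux.mean()   # first-order systematics-removed light curve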

  12. SPITZER SECONDARY ECLIPSE DEPTHS WITH MULTIPLE INTRAPIXEL SENSITIVITY CORRECTION METHODS OBSERVATIONS OF WASP-13b, WASP-15b, WASP-16b, WASP-62b, AND HAT-P-22b

    Energy Technology Data Exchange (ETDEWEB)

    Kilpatrick, Brian M.; Tucker, Gregory S. [Department of Physics, Box 1843, Brown University, Providence, RI 02904 (United States); Lewis, Nikole K. [Space Telescope Science Institute, Baltimore, MD 21218 (United States); Kataria, Tiffany [Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Drive, Pasadena, CA 91109 (United States); Deming, Drake [Department of Astronomy, University of Maryland, College Park, MD 20742 (United States); Ingalls, James G.; Krick, Jessica E., E-mail: brian_kilpatrick@brown.edu, E-mail: nlewis@stsci.org, E-mail: tiffany.kataria@jpl.nasa.gov, E-mail: ddeming@astro.umd.edu, E-mail: krick@ipac.caltech.edu [Spitzer Science Center, Infrared Processing and Analysis Center, California Institute of Technology, Mail Code 220-6, Pasadena, CA 91125 (United States)

    2017-01-01

    We measure the 4.5 μ m thermal emission of five transiting hot Jupiters, WASP-13b, WASP-15b, WASP-16b, WASP-62b, and HAT-P-22b using channel 2 of the Infrared Array Camera (IRAC) on the Spitzer Space Telescope . Significant intrapixel sensitivity variations in Spitzer IRAC data require careful correction in order to achieve precision on the order of several hundred parts per million (ppm) for the measurement of exoplanet secondary eclipses. We determine eclipse depths by first correcting the raw data using three independent data reduction methods. The Pixel Gain Map (PMAP), Nearest Neighbors (NNBR), and Pixel Level Decorrelation (PLD) each correct for the intrapixel sensitivity effect in Spitzer photometric time-series observations. The results from each methodology are compared against each other to establish if they reach a statistically equivalent result in every case and to evaluate their ability to minimize uncertainty in the measurement. We find that all three methods produce reliable results. For every planet examined here NNBR and PLD produce results that are in statistical agreement. However, the PMAP method appears to produce results in slight disagreement in cases where the stellar centroid is not kept consistently on the most well characterized area of the detector. We evaluate the ability of each method to reduce the scatter in the residuals as well as in the correlated noise in the corrected data. The NNBR and PLD methods consistently minimize both white and red noise levels and should be considered reliable and consistent. The planets in this study span equilibrium temperatures from 1100 to 2000 K and have brightness temperatures that require either high albedo or efficient recirculation. However, it is possible that other processes such as clouds or disequilibrium chemistry may also be responsible for producing these brightness temperatures.

  13. Sonography findings in tears of the extensor pollicis longus tendon and correlation with CT, MRI and surgical findings

    International Nuclear Information System (INIS)

    Ruiz Santiago, Fernando; Garofano Plazas, Pilar; Fernandez, Juan Miguel Tristan

    2008-01-01

    We present our experience in the diagnosis of extensor pollicis longus tendon tears using different imaging methods. Over the past 2 years, 12 patients (7 males, 5 females) with an extension deficit of the distal phalanx of the thumb were diagnosed with extensor pollicis longus tendon (EPL) rupture by means of different imaging methods. The ultrasound pattern consisted of a gap between the tendon stumps occupied by a continuous (eight cases) or discontinuous (four cases) attenuated hypoechoic string. In nine cases, the tendon ends were identified as a thickened stump-like structure. In the other three cases, the tendon stumps were attenuated and blended with atrophic muscle or wrist subcutaneous fat. All ultrasound findings were confirmed by CT, MR and/or surgical findings

  14. An evaluation of centrality measures used in cluster analysis

    Science.gov (United States)

    Engström, Christopher; Silvestrov, Sergei

    2014-12-01

    Clustering of data into groups of similar objects plays an important part in analysing many types of data, especially when the datasets are large, as they often are in, for example, bioinformatics, social networks and computational linguistics. Many clustering algorithms, such as K-means and some types of hierarchical clustering, need a number of centroids representing the 'center' of the clusters. The choice of centroids for the initial clusters often plays an important role in the quality of the resulting clusters. Since a data point with high centrality presumably lies close to the 'center' of some cluster, centrality can be used to assign centroids rather than some other method such as picking them at random. Some work has been done to evaluate the use of centrality measures such as degree, betweenness and eigenvector centrality in clustering algorithms. The aim of this article is to compare and evaluate the usefulness of a number of common centrality measures, such as those mentioned above and others such as PageRank and related measures.
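
    A hedged sketch of the idea under evaluation, seeding k-means with high-centrality points of a Gaussian similarity graph (weighted degree centrality is used here; the article also considers betweenness, eigenvector centrality, PageRank and related measures, and the separation heuristic and sigma below are arbitrary choices):

        import numpy as np
        from sklearn.cluster import KMeans

        def centrality_seeded_kmeans(X, k, sigma=1.0):
            # Weighted degree centrality on a Gaussian similarity graph.
            d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
            W = np.exp(-d2 / (2.0 * sigma ** 2))
            np.fill_diagonal(W, 0.0)
            centrality = W.sum(axis=1)
            # Greedily keep well-separated, high-centrality points as initial centroids.
            seeds = []
            for i in np.argsort(centrality)[::-1]:
                if all(np.linalg.norm(X[i] - s) > sigma for s in seeds):
                    seeds.append(X[i])
                if len(seeds) == k:
                    break
            init = np.array(seeds) if len(seeds) == k else "k-means++"  # fall back if too few seeds
            return KMeans(n_clusters=k, init=init, n_init=1).fit(X)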

  15. Cylinder gauge measurement using a position sensitive detector

    International Nuclear Information System (INIS)

    St John, W. Doyle

    2007-01-01

    A position sensitive detector (PSD) has been used to determine the diameter of cylindrical pins based on the shift in a laser beam's centroid. The centroid of the light beam is defined here as the weighted average of position by the local intensity. A shift can be observed in the centroid of an otherwise axially symmetric light beam, which is partially obstructed. Additionally, the maximum shift in the centroid is a unique function of the obstructing cylinder diameter. Thus to determine the cylinder diameter, one only needs to detect this maximum shift as the cylinder is swept across the beam
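
    The centroid definition used above is simply an intensity-weighted mean of position; for a sampled profile (hypothetical names; a lateral-effect PSD reports this quantity directly from its photocurrents) a minimal sketch is:

        import numpy as np

        def beam_centroid(positions, intensity):
            # Weighted average of position by the local intensity across the beam profile.
            positions = np.asarray(positions, dtype=float)
            intensity = np.asarray(intensity, dtype=float)
            return np.sum(positions * intensity) / np.sum(intensity)

        # Tracking beam_centroid(...) while the cylinder sweeps across the beam and taking
        # the maximum shift gives the quantity that maps to the cylinder diameter.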

  16. Adaptive mixed-hybrid and penalty discontinuous Galerkin method for two-phase flow in heterogeneous media

    KAUST Repository

    Hou, Jiangyong

    2016-02-05

    In this paper, we present a hybrid method, which consists of a mixed-hybrid finite element method and a penalty discontinuous Galerkin method, for the approximation of a fractional flow formulation of a two-phase flow problem in heterogeneous media with discontinuous capillary pressure. The fractional flow formulation comprises a wetting phase pressure equation and a wetting phase saturation equation which are coupled through a total velocity and the saturation affected coefficients. For the wetting phase pressure equation, the continuous mixed-hybrid finite element method space can be utilized due to a fundamental property that the wetting phase pressure is continuous. While it can reduce the computational cost by using fewer degrees of freedom and avoiding the post-processing of velocity reconstruction, this method can also keep several good properties of the discontinuous Galerkin method, which are important to the fractional flow formulation, such as the local mass balance, continuous normal flux and capability of handling the discontinuous capillary pressure. For the wetting phase saturation equation, the penalty discontinuous Galerkin method is utilized due to its capability of handling the discontinuous jump of the wetting phase saturation. Furthermore, an adaptive algorithm for the hybrid method together with the centroidal Voronoi Delaunay triangulation technique is proposed. Five numerical examples are presented to illustrate the features of the proposed numerical method, such as the optimal convergence order, the accurate and efficient velocity approximation, and the applicability to the simulation of water flooding in oil field and the oil-trapping or barrier effect phenomena.

  17. Adaptive mixed-hybrid and penalty discontinuous Galerkin method for two-phase flow in heterogeneous media

    KAUST Repository

    Hou, Jiangyong; Chen, Jie; Sun, Shuyu; Chen, Zhangxin

    2016-01-01

    In this paper, we present a hybrid method, which consists of a mixed-hybrid finite element method and a penalty discontinuous Galerkin method, for the approximation of a fractional flow formulation of a two-phase flow problem in heterogeneous media with discontinuous capillary pressure. The fractional flow formulation comprises a wetting phase pressure equation and a wetting phase saturation equation which are coupled through a total velocity and the saturation affected coefficients. For the wetting phase pressure equation, the continuous mixed-hybrid finite element method space can be utilized due to a fundamental property that the wetting phase pressure is continuous. While it can reduce the computational cost by using fewer degrees of freedom and avoiding the post-processing of velocity reconstruction, this method can also keep several good properties of the discontinuous Galerkin method, which are important to the fractional flow formulation, such as the local mass balance, continuous normal flux and capability of handling the discontinuous capillary pressure. For the wetting phase saturation equation, the penalty discontinuous Galerkin method is utilized due to its capability of handling the discontinuous jump of the wetting phase saturation. Furthermore, an adaptive algorithm for the hybrid method together with the centroidal Voronoi Delaunay triangulation technique is proposed. Five numerical examples are presented to illustrate the features of the proposed numerical method, such as the optimal convergence order, the accurate and efficient velocity approximation, and the applicability to the simulation of water flooding in oil field and the oil-trapping or barrier effect phenomena.

  18. A method for normalizing pathology images to improve feature extraction for quantitative pathology

    International Nuclear Information System (INIS)

    Tam, Allison; Barker, Jocelyn; Rubin, Daniel

    2016-01-01

    Purpose: With the advent of digital slide scanning technologies and the potential proliferation of large repositories of digital pathology images, many research studies can leverage these data for biomedical discovery and to develop clinical applications. However, quantitative analysis of digital pathology images is impeded by batch effects generated by varied staining protocols and staining conditions of pathological slides. Methods: To overcome this problem, this paper proposes a novel, fully automated stain normalization method to reduce batch effects and thus aid research in digital pathology applications. Their method, intensity centering and histogram equalization (ICHE), normalizes a diverse set of pathology images by first scaling the centroids of the intensity histograms to a common point and then applying a modified version of contrast-limited adaptive histogram equalization. Normalization was performed on two datasets of digitized hematoxylin and eosin (H&E) slides of different tissue slices from the same lung tumor, and one immunohistochemistry dataset of digitized slides created by restaining one of the H&E datasets. Results: The ICHE method was evaluated based on image intensity values, quantitative features, and the effect on downstream applications, such as a computer aided diagnosis. For comparison, three methods from the literature were reimplemented and evaluated using the same criteria. The authors found that ICHE not only improved performance compared with un-normalized images, but in most cases showed improvement compared with previous methods for correcting batch effects in the literature. Conclusions: ICHE may be a useful preprocessing step in a digital pathology image processing pipeline
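
    A hedged, grayscale approximation of the two ICHE steps described above (the authors scale the per-channel intensity-histogram centroids of H&E images to a common point, whereas this sketch simply shifts the mean; it is not their implementation):

        import numpy as np
        from skimage import exposure

        def iche_like_normalize(image, target_mean=0.5):
            # 1) rescale to [0, 1] and move the histogram centroid (mean) to a common point;
            # 2) apply contrast-limited adaptive histogram equalization (CLAHE).
            img = image.astype(float)
            img = (img - img.min()) / (img.max() - img.min() + 1e-12)
            img = np.clip(img + (target_mean - img.mean()), 0.0, 1.0)
            return exposure.equalize_adapthist(img, clip_limit=0.03)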

  19. A method for normalizing pathology images to improve feature extraction for quantitative pathology

    Energy Technology Data Exchange (ETDEWEB)

    Tam, Allison [Stanford Institutes of Medical Research Program, Stanford University School of Medicine, Stanford, California 94305 (United States); Barker, Jocelyn [Department of Radiology, Stanford University School of Medicine, Stanford, California 94305 (United States); Rubin, Daniel [Department of Radiology, Stanford University School of Medicine, Stanford, California 94305 and Department of Medicine (Biomedical Informatics Research), Stanford University School of Medicine, Stanford, California 94305 (United States)

    2016-01-15

    Purpose: With the advent of digital slide scanning technologies and the potential proliferation of large repositories of digital pathology images, many research studies can leverage these data for biomedical discovery and to develop clinical applications. However, quantitative analysis of digital pathology images is impeded by batch effects generated by varied staining protocols and staining conditions of pathological slides. Methods: To overcome this problem, this paper proposes a novel, fully automated stain normalization method to reduce batch effects and thus aid research in digital pathology applications. Their method, intensity centering and histogram equalization (ICHE), normalizes a diverse set of pathology images by first scaling the centroids of the intensity histograms to a common point and then applying a modified version of contrast-limited adaptive histogram equalization. Normalization was performed on two datasets of digitized hematoxylin and eosin (H&E) slides of different tissue slices from the same lung tumor, and one immunohistochemistry dataset of digitized slides created by restaining one of the H&E datasets. Results: The ICHE method was evaluated based on image intensity values, quantitative features, and the effect on downstream applications, such as a computer aided diagnosis. For comparison, three methods from the literature were reimplemented and evaluated using the same criteria. The authors found that ICHE not only improved performance compared with un-normalized images, but in most cases showed improvement compared with previous methods for correcting batch effects in the literature. Conclusions: ICHE may be a useful preprocessing step in a digital pathology image processing pipeline.

  20. Transition sum rules in the shell model

    Science.gov (United States)

    Lu, Yi; Johnson, Calvin W.

    2018-03-01

    An important characterization of electromagnetic and weak transitions in atomic nuclei is given by sum rules. We focus on the non-energy-weighted sum rule (NEWSR), or total strength, and the energy-weighted sum rule (EWSR); the ratio of the EWSR to the NEWSR is the centroid or average energy of transition strengths from a nuclear initial state to all allowed final states. These sum rules can be expressed as expectation values of operators, which in the case of the EWSR is a double commutator. While most prior applications of the double commutator have been to special cases, we derive general formulas for matrix elements of both operators in a shell-model framework (occupation space), given the input matrix elements for the nuclear Hamiltonian and for the transition operator. With these new formulas, we easily evaluate centroids of transition strength functions, with no need to calculate daughter states. We apply this simple tool to a number of nuclides and demonstrate that the sum rules follow smooth secular behavior as a function of initial energy, as well as compare the electric dipole (E1) sum rule against the famous Thomas-Reiche-Kuhn version. We also find surprising systematic behaviors for ground-state electric quadrupole (E2) centroids in the sd shell.
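
    In the notation of the abstract, for a Hermitian transition operator \hat{O} acting on an initial state |i\rangle of energy E_i (the paper's general formulas also cover non-Hermitian operators; only the standard Hermitian case is quoted here), the two sum rules and the centroid they define are

        \mathrm{NEWSR} = \sum_f |\langle f|\hat{O}|i\rangle|^2 = \langle i|\hat{O}^2|i\rangle, \qquad
        \mathrm{EWSR} = \sum_f (E_f - E_i)\,|\langle f|\hat{O}|i\rangle|^2
                      = \tfrac{1}{2}\,\langle i|[\hat{O},[\hat{H},\hat{O}]]|i\rangle, \qquad
        \bar{E} = \frac{\mathrm{EWSR}}{\mathrm{NEWSR}},

    which is why both sum rules, and hence the centroid, can be evaluated as expectation values in the initial state without constructing the daughter states.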

  1. Noncardiac findings on cardiac CT. Part II: spectrum of imaging findings.

    LENUS (Irish Health Repository)

    Killeen, Ronan P

    2012-02-01

    Cardiac computed tomography (CT) has evolved into an effective imaging technique for the evaluation of coronary artery disease in selected patients. Two distinct advantages over other noninvasive cardiac imaging methods include its ability to directly evaluate the coronary arteries and to provide a unique opportunity to evaluate for alternative diagnoses by assessing the extracardiac structures, such as the lungs and mediastinum, particularly in patients presenting with the chief symptom of acute chest pain. Some centers reconstruct a small field of view (FOV) cropped around the heart, but a full FOV (from skin to skin in the area irradiated) is obtainable in the raw data of every scan so that clinically relevant noncardiac findings are identifiable. Debate in the scientific community has centered on the necessity for this large FOV. A review of noncardiac structures provides the opportunity to make alternative diagnoses that may account for the patient's presentation or to detect important but clinically silent problems such as lung cancer. Critics argue that the yield of biopsy-proven cancers is low and that the follow-up of incidental noncardiac findings is expensive, resulting in increased radiation exposure and possibly unnecessary further testing. In this 2-part review we outline the issues surrounding the concept of the noncardiac read, looking for noncardiac findings on cardiac CT. Part I focused on the arguments for and against the practice of identifying noncardiac findings on cardiac CT. Part II illustrates the imaging spectrum of cardiac CT appearances of benign and malignant noncardiac pathology.

  2. Correlation-maximizing surrogate gene space for visual mining of gene expression patterns in developing barley endosperm tissue

    Directory of Open Access Journals (Sweden)

    Usadel Björn

    2007-05-01

    Background: Micro- and macroarray technologies help acquire thousands of gene expression patterns covering important biological processes during plant ontogeny. Particularly, faithful visualization methods are beneficial for revealing interesting gene expression patterns and functional relationships of coexpressed genes. Such screening helps to gain deeper insights into regulatory behavior and cellular responses, as will be discussed for expression data of developing barley endosperm tissue. For that purpose, high-throughput multidimensional scaling (HiT-MDS), a recent method for similarity-preserving data embedding, is substantially refined and used for (a) assessing the quality and reliability of centroid gene expression patterns, and (b) deriving functional relationships of coexpressed genes of endosperm tissue during barley grain development (0-26 days after flowering). Results: Temporal expression profiles of 4824 genes at 14 time points are faithfully embedded into two-dimensional displays. Thereby, similar shapes of coexpressed genes get closely grouped by a correlation-based similarity measure. As a main result, by using power transformation of correlation terms, a characteristic cloud of points with a bipolar sandglass shape is obtained that is inherently connected to expression patterns of the pre-storage, intermediate and storage phases of endosperm development. Conclusion: The new HiT-MDS-2 method helps to create global views of expression patterns and to validate centroids obtained from clustering programs. Furthermore, functional gene annotation for developing barley endosperm tissue is successfully mapped to the visualization, making easy localization of major centroids of enriched functional categories possible.

  3. Finding biomedical categories in Medline®

    Directory of Open Access Journals (Sweden)

    Yeganova Lana

    2012-10-01

    Background: There are several humanly defined ontologies relevant to Medline. However, Medline is a fast-growing collection of biomedical documents, which creates difficulties in updating and expanding these humanly defined ontologies. Automatically identifying meaningful categories of entities in a large text corpus is useful for information extraction, construction of machine learning features, and development of semantic representations. In this paper we describe and compare two methods for automatically learning meaningful biomedical categories in Medline. The first approach is a simple statistical method that uses part-of-speech and frequency information to extract a list of frequent nouns from Medline. The second method implements an alignment-based technique to learn frequent generic patterns that indicate a hyponymy/hypernymy relationship between a pair of noun phrases. We then apply these patterns to Medline to collect frequent hypernyms as potential biomedical categories. Results: We study and compare these two alternative sets of terms to identify semantic categories in Medline. We find that both approaches produce reasonable terms as potential categories. We also find that there is significant agreement between the two sets of terms. The overlap between the two methods improves our confidence regarding categories predicted by these independent methods. Conclusions: This study is an initial attempt to extract categories that are discussed in Medline. Rather than imposing external ontologies on Medline, our methods allow categories to emerge from the text.

  4. A Simple but Powerful Heuristic Method for Accelerating k-Means Clustering of Large-Scale Data in Life Science.

    Science.gov (United States)

    Ichikawa, Kazuki; Morishita, Shinichi

    2014-01-01

    K-means clustering has been widely used to gain insight into biological systems from large-scale life science data. To quantify the similarities among biological data sets, Pearson correlation distance and standardized Euclidean distance are used most frequently; however, optimization methods have been largely unexplored. These two distance measurements are equivalent in the sense that they yield the same k-means clustering result for identical sets of k initial centroids. Thus, an efficient algorithm used for one is applicable to the other. Several optimization methods are available for the Euclidean distance and can be used for processing the standardized Euclidean distance; however, they are not customized for this context. We instead approached the problem by studying the properties of the Pearson correlation distance, and we invented a simple but powerful heuristic method for markedly pruning unnecessary computation while retaining the final solution. Tests using real biological data sets with 50-60K vectors of dimensions 10-2001 (~400 MB in size) demonstrated marked reduction in computation time for k = 10-500 in comparison with other state-of-the-art pruning methods such as Elkan's and Hamerly's algorithms. The BoostKCP software is available at http://mlab.cb.k.u-tokyo.ac.jp/~ichikawa/boostKCP/.
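
    The equivalence the authors exploit can be checked numerically: if every vector is z-scored along its own dimensions, the squared Euclidean distance equals 2d(1 - r), an affine function of the Pearson correlation r, so both distances induce the same nearest-centroid assignments for identical initial centroids ('standardized' is taken here as per-vector z-scoring, an assumption made for this sketch):

        import numpy as np

        rng = np.random.default_rng(0)
        d = 50
        x, y = rng.normal(size=d), rng.normal(size=d)

        def zscore(v):
            return (v - v.mean()) / v.std()        # population standard deviation (ddof=0)

        zx, zy = zscore(x), zscore(y)
        r = np.corrcoef(x, y)[0, 1]
        lhs = np.sum((zx - zy) ** 2)               # squared Euclidean distance after z-scoring
        rhs = 2 * d * (1 - r)                      # affine function of the Pearson correlation
        print(np.allclose(lhs, rhs))               # True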

  5. Automatic detection of multiple UXO-like targets using magnetic anomaly inversion and self-adaptive fuzzy c-means clustering

    Science.gov (United States)

    Yin, Gang; Zhang, Yingtang; Fan, Hongbo; Ren, Guoquan; Li, Zhining

    2017-12-01

    We have developed a method for automatically detecting UXO-like targets based on magnetic anomaly inversion and self-adaptive fuzzy c-means clustering. Magnetic anomaly inversion methods are used to estimate the initial locations of multiple UXO-like sources. Although these initial locations have some errors with respect to the real positions, they form dense clouds around the actual positions of the magnetic sources. Then we use the self-adaptive fuzzy c-means clustering algorithm to cluster these initial locations. The estimated number of cluster centroids represents the number of targets, and the cluster centroids are regarded as the locations of the magnetic targets. The effectiveness of the method has been demonstrated using synthetic datasets. Computational results show that the proposed method can be applied to the case of several UXO-like targets randomly scattered within a confined, shallow subsurface volume. A field test was carried out to test the validity of the proposed method, and the experimental results show that the prearranged magnets can be detected unambiguously and located precisely.
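
    A minimal sketch of the clustering stage described above, under assumptions: the self-adaptive selection of the number of clusters is omitted and a plain fuzzy c-means pass is run on the estimated source locations; all names and parameter values are illustrative.

        import numpy as np

        def fuzzy_c_means(points, c, m=2.0, iters=100, tol=1e-6, seed=0):
            # points: (N, 3) estimated source locations from the anomaly inversions
            rng = np.random.default_rng(seed)
            u = rng.random((c, len(points)))
            u /= u.sum(axis=0)                         # membership columns sum to 1
            for _ in range(iters):
                um = u ** m
                centroids = um @ points / um.sum(axis=1, keepdims=True)
                d = np.linalg.norm(points[None, :, :] - centroids[:, None, :], axis=2)
                d = np.fmax(d, 1e-12)
                u_new = d ** (-2 / (m - 1)) / np.sum(d ** (-2 / (m - 1)), axis=0)
                if np.max(np.abs(u_new - u)) < tol:
                    return centroids, u_new            # cluster centroids ~ target locations
                u = u_new
            return centroids, u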

  6. Connecting optical and X-ray tracers of galaxy cluster relaxation

    Science.gov (United States)

    Roberts, Ian D.; Parker, Laura C.; Hlavacek-Larrondo, Julie

    2018-04-01

    Substantial effort has been devoted to determining the ideal proxy for quantifying the morphology of the hot intracluster medium in clusters of galaxies. These proxies, based on X-ray emission, typically require expensive, high-quality X-ray observations, making them difficult to apply to large surveys of groups and clusters. Here, we compare optical relaxation proxies with X-ray asymmetries and centroid shifts for a sample of Sloan Digital Sky Survey clusters with high-quality, archival X-ray data from Chandra and XMM-Newton. The three optical relaxation measures considered are the shape of the member-galaxy projected velocity distribution - measured by the Anderson-Darling (AD) statistic, the stellar mass gap between the most-massive and second-most-massive cluster galaxy, and the offset between the most-massive galaxy (MMG) position and the luminosity-weighted cluster centre. The AD statistic and stellar mass gap correlate significantly with X-ray relaxation proxies, with the AD statistic being the stronger correlator. Conversely, we find no evidence for a correlation between X-ray asymmetry or centroid shift and the MMG offset. High-mass clusters (Mhalo > 10^14.5 M⊙) in this sample have X-ray asymmetries, centroid shifts, and Anderson-Darling statistics which are systematically larger than for low-mass systems. Finally, considering the dichotomy of Gaussian and non-Gaussian clusters (measured by the AD test), we show that the probability of being a non-Gaussian cluster correlates significantly with X-ray asymmetry but only shows a marginal correlation with centroid shift. These results confirm the shape of the radial velocity distribution as a useful proxy for cluster relaxation, which can then be applied to large redshift surveys lacking extensive X-ray coverage.
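
    A hedged sketch of how a member-galaxy velocity distribution can be tested for Gaussianity with the Anderson-Darling statistic; scipy's generic implementation stands in for whatever the authors used, and the 5% critical value is an illustrative choice.

        import numpy as np
        from scipy import stats

        def is_gaussian_cluster(velocities):
            # velocities: line-of-sight velocities of member galaxies (km/s)
            result = stats.anderson(np.asarray(velocities, dtype=float), dist="norm")
            # critical_values correspond to significance levels [15, 10, 5, 2.5, 1] %
            return result.statistic < result.critical_values[2]   # 5% level

        rng = np.random.default_rng(1)
        print(is_gaussian_cluster(rng.normal(0.0, 800.0, size=60)))   # roughly Gaussian -> True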

  7. Finger pressure adjustments to various object configurations during precision grip in humans and monkeys.

    Science.gov (United States)

    Viaro, Riccardo; Tia, Banty; Coudé, Gino; Canto, Rosario; Oliynyk, Andriy; Salmas, Paola; Masia, Lorenzo; Sandini, Giulio; Fadiga, Luciano

    2017-06-01

    In this study, we recorded the pressure exerted onto an object by the index finger and the thumb of the preferred hand of 18 human subjects and either hand of two macaque monkeys during a precision grasping task. The to-be-grasped object was a custom-made device composed of two plates which could be variably oriented by a motorized system while keeping the size, and thus the grip dimension, constant. The to-be-grasped plates were covered by an array of capacitive sensors to measure specific features of finger adaptation, namely pressure intensity and centroid location and displacement. Kinematic measurements demonstrated that, for human subjects and for monkeys, different plate configurations did not affect wrist velocity and grip aperture during the reaching phase. Consistently, at the instant of finger-plate contact, pressure centroids were clustered around the same point for all handle configurations. However, small pressure centroid displacements were specifically adopted for each configuration, indicating that both humans and monkeys can display finger adaptation during precision grip. Moreover, humans applied stronger thumb pressure intensity, performed less centroid displacement and required reduced adjustment time, as compared to monkeys. These pressure patterns remained similar when different load forces were required to pull the handle, as ascertained by additional measurements in humans. The present findings indicate that, although humans and monkeys share common features in motor control of grasping, they differ in the adjustment of fingertip pressure, probably because of skill and/or morphology divergences. Such a precision grip device may form the groundwork for future studies on prehension mechanisms. © 2017 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
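
    The pressure centroid reported above can be computed as an intensity-weighted mean position over the sensor array; a minimal sketch, in which the sensor geometry, frame names and sizes are assumptions for illustration only:

        import numpy as np

        def pressure_centroid(pressure_map):
            # pressure_map: 2-D array of readings from the capacitive sensor array
            p = np.asarray(pressure_map, dtype=float)
            total = p.sum()
            if total <= 0:
                return None                              # no contact detected
            rows, cols = np.indices(p.shape)
            return np.sum(rows * p) / total, np.sum(cols * p) / total

        rng = np.random.default_rng(0)
        frame_contact, frame_later = rng.random((8, 8)), rng.random((8, 8))
        c0, c1 = pressure_centroid(frame_contact), pressure_centroid(frame_later)
        displacement = np.hypot(c1[0] - c0[0], c1[1] - c0[1])   # centroid shift in sensor units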

  8. Macro-level safety analysis of pedestrian crashes in Shanghai, China.

    Science.gov (United States)

    Wang, Xuesong; Yang, Junguang; Lee, Chris; Ji, Zhuoran; You, Shikai

    2016-11-01

    Pedestrian safety has become one of the most important issues in the field of traffic safety. This study aims at investigating the association between pedestrian crash frequency and various predictor variables including roadway, socio-economic, and land-use features. The relationships were modeled using the data from 263 Traffic Analysis Zones (TAZs) within the urban area of Shanghai - the largest city in China. Since spatial correlation exists among the zonal-level data, Bayesian Conditional Autoregressive (CAR) models with seven different spatial weight features (i.e. (a) 0-1 first order, adjacency-based, (b) common boundary-length-based, (c) geometric centroid-distance-based, (d) crash-weighted centroid-distance-based, (e) land use type, adjacency-based, (f) land use intensity, adjacency-based, and (g) geometric centroid-distance-order) were developed to characterize the spatial correlations among TAZs. Model results indicated that the geometric centroid-distance-order spatial weight feature, which was introduced in macro-level safety analysis for the first time, outperformed all the other spatial weight features. Population was used as the surrogate for pedestrian exposure, and had a positive effect on pedestrian crashes. Other significant factors included length of major arterials, length of minor arterials, road density, average intersection spacing, percentage of 3-legged intersections, and area of TAZ. Pedestrian crashes were higher in TAZs with medium land use intensity than in TAZs with low and high land use intensity. Thus, higher priority should be given to TAZs with medium land use intensity to improve pedestrian safety. Overall, these findings can help transportation planners and managers understand the characteristics of pedestrian crashes and improve pedestrian safety. Copyright © 2016 Elsevier Ltd. All rights reserved.
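
    For illustration only, one of the simpler spatial weight choices listed above - a geometric centroid-distance-based weight matrix, row-standardised as CAR models typically require - could be built as follows; the inverse-distance form is an assumption and the actual Bayesian CAR model fitting is not shown.

        import numpy as np

        def centroid_distance_weights(centroids, power=1.0):
            # centroids: (n_zones, 2) projected x, y coordinates of TAZ geometric centroids
            c = np.asarray(centroids, dtype=float)
            d = np.linalg.norm(c[:, None, :] - c[None, :, :], axis=2)
            with np.errstate(divide="ignore"):
                w = 1.0 / d ** power                     # nearer zones get larger weights
            np.fill_diagonal(w, 0.0)                     # a zone is not its own neighbour
            return w / w.sum(axis=1, keepdims=True)      # row-standardised weight matrix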

  9. 3D Building Models Segmentation Based on K-Means++ Cluster Analysis

    Science.gov (United States)

    Zhang, C.; Mao, B.

    2016-10-01

    3D mesh model segmentation is drawing increasing attention from the digital geometry processing field in recent years. The original 3D mesh model needs to be divided into separate meaningful parts or surface patches based on certain standards to support reconstruction, compressing, texture mapping, model retrieval, etc. Therefore, segmentation is a key problem in 3D mesh model processing. In this paper, we propose a method to segment Collada (a type of mesh model) 3D building models into meaningful parts using cluster analysis. Common clustering methods segment 3D mesh models by K-means, whose performance heavily depends on the randomized initial seed points (i.e., centroids), and different randomized centroids can give quite different results. Therefore, we improved the existing method and used the K-means++ clustering algorithm to solve this problem. Our experiments show that K-means++ improves both the speed and the accuracy of K-means, and achieves good and meaningful results.
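
    The K-means++ seeding step referred to above spreads the initial centroids out by sampling each new seed with probability proportional to its squared distance from the seeds already chosen. A minimal sketch follows; the mesh feature vectors used here are an assumption, and libraries such as scikit-learn also provide this seeding via KMeans(init="k-means++").

        import numpy as np

        def kmeans_pp_init(points, k, seed=0):
            # points: (N, d) feature vectors of the mesh elements to be clustered
            rng = np.random.default_rng(seed)
            centroids = [points[rng.integers(len(points))]]
            for _ in range(k - 1):
                d2 = np.min([np.sum((points - c) ** 2, axis=1) for c in centroids], axis=0)
                probs = d2 / d2.sum()                    # far-away points are favoured
                centroids.append(points[rng.choice(len(points), p=probs)])
            return np.array(centroids)                   # k well-spread initial centroids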

  10. 3D BUILDING MODELS SEGMENTATION BASED ON K-MEANS++ CLUSTER ANALYSIS

    Directory of Open Access Journals (Sweden)

    C. Zhang

    2016-10-01

    Full Text Available 3D mesh model segmentation is drawing increasing attention from the digital geometry processing field in recent years. The original 3D mesh model needs to be divided into separate meaningful parts or surface patches based on certain standards to support reconstruction, compressing, texture mapping, model retrieval, etc. Therefore, segmentation is a key problem in 3D mesh model processing. In this paper, we propose a method to segment Collada (a type of mesh model) 3D building models into meaningful parts using cluster analysis. Common clustering methods segment 3D mesh models by K-means, whose performance heavily depends on the randomized initial seed points (i.e., centroids), and different randomized centroids can give quite different results. Therefore, we improved the existing method and used the K-means++ clustering algorithm to solve this problem. Our experiments show that K-means++ improves both the speed and the accuracy of K-means, and achieves good and meaningful results.

  11. Finding mesoscale ocean structures with mathematical morphology

    International Nuclear Information System (INIS)

    Lea, S.M.; Lybanon, M.

    1993-01-01

    The authors introduce a technique to aid in interpreting infrared satellite images of the North Atlantic Ocean Gulf Stream region. Present interpretive methods are largely manual, require significant effort, and are highly dependent on the interpreter's skill. The quasiautomated technique is based on mathematical morphology, specifically the image transformations of opening and closing, which are defined in terms of erosion and dilation. The implementation performs successive openings and closings at increasing thresholds until a stable division into objects and background is found. This method finds the North Wall of the Gulf Stream in approximately the same place as human analysts and another automated procedure, and does less smoothing of small irregularities than the other two methods. The North Wall is continuous and sharp except where obscured by clouds. Performance in locating warm-core eddies is also comparable to the other methods. However, the present procedure does not find cold-core rings well. The authors are presently investigating ways to reduce the effects of clouds and delete the unwanted water areas found by the method. They expect to be able to improve the cold-core eddy performance
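
    A hedged sketch of the morphological machinery described above; scipy.ndimage stands in for whatever implementation the authors used, and the stopping rule shown is a simplification of their stability criterion.

        import numpy as np
        from scipy import ndimage

        def stable_segmentation(ir_image, thresholds, structure=np.ones((3, 3), dtype=bool)):
            # ir_image: 2-D infrared brightness image; thresholds: increasing threshold values
            previous = None
            for t in thresholds:
                mask = ir_image > t
                mask = ndimage.binary_opening(mask, structure=structure)   # erosion, then dilation
                mask = ndimage.binary_closing(mask, structure=structure)   # dilation, then erosion
                if previous is not None and np.array_equal(mask, previous):
                    return mask                        # object/background division has stabilised
                previous = mask
            return previous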

  12. Research on Language Learning Strategies: Methods, Findings, and Instructional Issues.

    Science.gov (United States)

    Oxford, Rebecca; Crookall, David

    1989-01-01

    Surveys research on formal and informal second-language learning strategies, covering the effectiveness of research methods involving making lists, interviews and thinking aloud, note-taking, diaries, surveys, and training. Suggestions for future and improved research are presented. (131 references) (CB)

  13. Beam-Based Alignment of Magnetic Field in the Fermilab Electron Cooler Cooling Section

    International Nuclear Information System (INIS)

    Seletskiy, S. M.; Tupikov, V.

    2006-01-01

    The Fermilab Electron Cooling Project requires a low effective angular spread of electrons in the cooling section. One of the main components of the effective electron angles is the angle of the electron beam centroid with respect to the antiproton beam. This angle is caused by the poor quality of the magnetic field in the 20 m long cooling section solenoid and by the mismatch of the beam centroid to the entrance of the cooling section. This paper focuses on the beam-based procedure for aligning the cooling section field and matching the beam centroid. The discussed procedure allows the beam centroid angles to be suppressed below the critical value of 0.1 mrad

  14. Wave-equation Q tomography and least-squares migration

    KAUST Repository

    Dutta, Gaurav

    2016-01-01

    optimization method that inverts for the subsurface Q distribution by minimizing a skeletonized misfit function ε. Here, ε is the sum of the squared differences between the observed and the predicted peak/centroid-frequency shifts of the early-arrivals. Through
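
    The fragment above defines the skeletonized misfit ε as a sum of squared differences between observed and predicted centroid-frequency shifts of the early arrivals. A hedged sketch of the underlying quantity - the amplitude-weighted centroid frequency of a windowed trace - is given below; the windowing, the shift reference and all names are assumptions, not the paper's implementation.

        import numpy as np

        def centroid_frequency(trace, dt):
            # trace: windowed early-arrival waveform; dt: sample interval in seconds
            spectrum = np.abs(np.fft.rfft(trace))
            freqs = np.fft.rfftfreq(len(trace), dt)
            return np.sum(freqs * spectrum) / np.sum(spectrum)   # amplitude-weighted mean frequency

        def skeletonized_misfit(observed_traces, predicted_traces, dt):
            f_obs = np.array([centroid_frequency(tr, dt) for tr in observed_traces])
            f_pred = np.array([centroid_frequency(tr, dt) for tr in predicted_traces])
            return np.sum((f_obs - f_pred) ** 2)                 # epsilon, summed over traces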

  15. Polygons of differential equations for finding exact solutions

    International Nuclear Information System (INIS)

    Kudryashov, Nikolai A.; Demina, Maria V.

    2007-01-01

    A method for finding exact solutions of nonlinear differential equations is presented. Our method is based on the application of polygons corresponding to nonlinear differential equations. It allows one to express exact solutions of the equation studied through solutions of another equation using properties of the basic equation itself. The ideas of power geometry are used and developed. Our approach has a pictorial interpretation, which is illustrative and effective. The method can also be applied for finding transformations between solutions of differential equations. To demonstrate the application of the method, exact solutions of several equations are found. These equations are: the Korteweg-de Vries-Burgers equation, the generalized Kuramoto-Sivashinsky equation, the fourth-order nonlinear evolution equation, the fifth-order Korteweg-de Vries equation, the fifth-order modified Korteweg-de Vries equation and the sixth-order nonlinear evolution equation describing turbulent processes. Some new exact solutions of nonlinear evolution equations are given

  16. Global pathways to men's caregiving: mixed methods findings from the International Men and Gender Equality Survey and the Men Who Care study.

    Science.gov (United States)

    Kato-Wallace, Jane; Barker, Gary; Eads, Marci; Levtov, Ruti

    2014-01-01

    Promoting men's participation in unpaid care work is part of the Programme of Action for the International Conference on Population and Development. However, men's involvement in care work does not mirror the advances women have made in paid work outside the home. This mixed-methods study explores which men are more involved in caregiving, and what childhood and adulthood factors influence their level of involvement. Quantitative research presents findings from 1169 men across six countries with children aged 0-4, and a qualitative study presents findings from in-depth interviews with 83 men engaged in atypical caregiving practices. Survey research finds that being taught to care for children, witnessing one's father take care of one's siblings, respondents' present attitudes about gender equality and having outside help (or none, in some cases) were all associated with men's higher level of involvement. Qualitative research reveals that men's experiences of violence, the normalisation of domestic work as children and life circumstances, rather than greater-than-average beliefs in gender equality, all propelled them into care work. Findings suggest that engaging more men in care work implies changes to policies and structural realities in the workplace coupled with changing gender attitudes. These insights inform policy and practice aimed at promoting greater involvement in care work by men.

  17. NCC-AUC: an AUC optimization method to identify multi-biomarker panel for cancer prognosis from genomic and clinical data.

    Science.gov (United States)

    Zou, Meng; Liu, Zhaoqi; Zhang, Xiang-Sun; Wang, Yong

    2015-10-15

    In prognosis and survival studies, an important goal is to identify multi-biomarker panels with predictive power using molecular characteristics or clinical observations. Such analysis is often challenged by censored, small-sample-size, but high-dimensional genomic profiles or clinical data. Therefore, sophisticated models and algorithms are in pressing need. In this study, we propose a novel Area Under Curve (AUC) optimization method for multi-biomarker panel identification named Nearest Centroid Classifier for AUC optimization (NCC-AUC). Our method is motivated by the connection between the AUC score for classification accuracy evaluation and Harrell's concordance index in survival analysis. This connection allows us to convert the survival time regression problem to a binary classification problem. Then an optimization model is formulated to directly maximize AUC and meanwhile minimize the number of selected features to construct a predictor in the nearest centroid classifier framework. NCC-AUC shows its great performance by validation in both genomic data of breast cancer and clinical data of stage IB Non-Small-Cell Lung Cancer (NSCLC). For the genomic data, NCC-AUC outperforms Support Vector Machine (SVM) and Support Vector Machine-based Recursive Feature Elimination (SVM-RFE) in classification accuracy. It tends to select a multi-biomarker panel with low average redundancy and enriched biological meanings. Also, NCC-AUC is more significant in the separation of low- and high-risk cohorts than the widely used Cox model (Cox proportional-hazards regression model) and L1-Cox model (L1-penalized Cox model). These performance gains of NCC-AUC are quite robust across 5 subtypes of breast cancer. Further, on an independent clinical data set, NCC-AUC outperforms SVM and SVM-RFE in predictive accuracy and is consistently better than the Cox model and L1-Cox model in grouping patients into high- and low-risk categories. In summary, NCC-AUC provides a rigorous optimization framework to
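
    For orientation only, the sketch below shows the building blocks named above - a nearest centroid score and its AUC - on toy data; the actual NCC-AUC optimization (jointly maximizing AUC and selecting features) is not reproduced and all names are illustrative.

        import numpy as np
        from sklearn.metrics import roc_auc_score

        def nearest_centroid_scores(X_train, y_train, X_test):
            # Binary labels y in {0, 1}, e.g. low- vs high-risk cohorts.
            c0 = X_train[y_train == 0].mean(axis=0)
            c1 = X_train[y_train == 1].mean(axis=0)
            # Larger score = closer to the class-1 centroid than to the class-0 centroid.
            return (np.linalg.norm(X_test - c0, axis=1)
                    - np.linalg.norm(X_test - c1, axis=1))

        rng = np.random.default_rng(0)
        X = np.vstack([rng.normal(0, 1, (40, 5)), rng.normal(1, 1, (40, 5))])
        y = np.repeat([0, 1], 40)
        print(roc_auc_score(y, nearest_centroid_scores(X, y, X)))   # toy resubstitution AUC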

  18. Estimation Of Young's Modulus Of Elasticity By The Form Finding Of Grid Shell Structures By The Dynamic Relaxation Method

    Directory of Open Access Journals (Sweden)

    Grančičová Ivana

    2015-12-01

    Full Text Available The paper focuses on the process of form finding by the dynamic relaxation method (DRM) with the aid of computational tools that enable us to make many calculations with different inputs. There are many important input values with a significant impact on the course of the calculations and the resulting displacement of a structure. One of these values is Young's modulus of elasticity. This value has a considerable impact on the final displacement of a grid shell structure and the resulting internal forces.

  19. A Method to Automate the Segmentation of the GTV and ITV for Lung Tumors

    International Nuclear Information System (INIS)

    Ehler, Eric D.; Bzdusek, Karl; Tome, Wolfgang A.

    2009-01-01

    Four-dimensional computed tomography (4D-CT) is a useful tool in the treatment of tumors that undergo significant motion. To fully utilize 4D-CT motion information in the treatment of mobile tumors such as lung cancer, autosegmentation methods will need to be developed. Using autosegmentation tools in the Pinnacle 3 v8.1t treatment planning system, 6 anonymized 4D-CT data sets were contoured. Two test indices were developed that can be used to evaluate which autosegmentation tools to apply to a given gross tumor volume (GTV) region of interest (ROI). The 4D-CT data sets had various phase binning error levels ranging from 3% to 29%. The appropriate autosegmentation method (rigid translational image registration and deformable surface mesh) was determined to properly delineate the GTV in all of the 4D-CT phases for the 4D-CT data sets with binning errors of up to 15%. The ITV was defined by 2 methods: a mask of the GTV in all 4D-CT phases and the maximum intensity projection. The differences in centroid position and volume were compared with manual segmentation studies in literature. The indices developed in this study, along with the autosegmentation tools in the treatment planning system, were able to automatically segment the GTV in the four 4D-CTs with phase binning errors of up to 15%.

  20. Transient diagnosis system using quantum-inspired computing and Minkowski distance

    Energy Technology Data Exchange (ETDEWEB)

    Nicolau, Andressa dos Santos; Schirru, Roberto, E-mail: andressa@lmp.ufrj.b, E-mail: schirru@lmp.ufrj.b [Federal University of Rio de Janeiro (PEN/COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Nuclear Engineering Program

    2011-07-01

    This paper proposes a diagnosis system model for the identification of transients in a PWR nuclear power plant, optimized by the Quantum Inspired Evolutionary Algorithm - QEA, in order to help the nuclear power plant operator reduce his cognitive load and increase his available time to keep the plant operating in a safe condition. This method was developed in order to be able to recognize the normal condition and three accidents of the design basis list of the nuclear power plant Angra 2, postulated in the Final Safety Analysis Report (FSAR). This system compares the similarity distance between the set of variables of the anomalous event, in a given time t, and the centroids of the design-basis transient variables. The lowest similarity distance indicates the class of the transient to which the anomalous event belongs. The QEA was then used to find the best positions of the centroids of each class of the selected transients. Such positions maximize the number of correct classifications. Unlike the diagnosis systems proposed in the literature, Minkowski distance was employed to calculate the similarity distance. The signatures of the four transients were subjected to 1% and 2% noise, and tested with the prototype vectors found by QEA. The results showed that the present transient diagnostic system was successfully implemented in the nuclear accident identification problem and was compatible with the techniques presented in the literature. (author)
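
    A hedged sketch of the classification step described above: an anomalous state vector at time t is assigned to the transient class whose centroid is nearest in Minkowski distance. The centroid positions and the order p are exactly what the QEA would optimize; the values below are placeholders.

        import numpy as np

        def minkowski_distance(x, centroid, p=1.5):
            return np.sum(np.abs(np.asarray(x) - np.asarray(centroid)) ** p) ** (1.0 / p)

        def classify_transient(state_vector, class_centroids, p=1.5):
            # class_centroids: dict mapping event class -> prototype vector for this time step
            distances = {name: minkowski_distance(state_vector, c, p)
                         for name, c in class_centroids.items()}
            return min(distances, key=distances.get)     # smallest similarity distance wins

        centroids = {"normal": [0.1, 0.2, 0.0], "accident_A": [0.9, 0.4, 0.7]}   # placeholders
        print(classify_transient([0.8, 0.5, 0.6], centroids))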

  1. Transient diagnosis system using quantum-inspired computing and Minkowski distance

    International Nuclear Information System (INIS)

    Nicolau, Andressa dos Santos; Schirru, Roberto

    2011-01-01

    This paper proposes a diagnosis system model for the identification of transients in a PWR nuclear power plant, optimized by the Quantum Inspired Evolutionary Algorithm - QEA, in order to help the nuclear power plant operator reduce his cognitive load and increase his available time to keep the plant operating in a safe condition. This method was developed in order to be able to recognize the normal condition and three accidents of the design basis list of the nuclear power plant Angra 2, postulated in the Final Safety Analysis Report (FSAR). This system compares the similarity distance between the set of variables of the anomalous event, in a given time t, and the centroids of the design-basis transient variables. The lowest similarity distance indicates the class of the transient to which the anomalous event belongs. The QEA was then used to find the best positions of the centroids of each class of the selected transients. Such positions maximize the number of correct classifications. Unlike the diagnosis systems proposed in the literature, Minkowski distance was employed to calculate the similarity distance. The signatures of the four transients were subjected to 1% and 2% noise, and tested with the prototype vectors found by QEA. The results showed that the present transient diagnostic system was successfully implemented in the nuclear accident identification problem and was compatible with the techniques presented in the literature. (author)

  2. Radiological findings in NAO syndrome

    Energy Technology Data Exchange (ETDEWEB)

    Al-Otaibi, Leftan; Hugosson, Claes O. [Department of Radiology, King Faisal Specialist Hospital and Research Center, Riyadh (Saudi Arabia); Al-Mayouf, Sulalman M.; Majeed, Mahmoud; Al-Eid, Wea'am; Bahabri, Sultan [Department of Paediatrics, King Faisal Specialist Hospital and Research Center, Riyadh (Saudi Arabia)

    2002-07-01

    Background: Diseases exhibiting osteolysis in children are rare hereditary conditions. Several types have been recognised with different clinical manifestations. One type includes subcutaneous nodules, arthropathy and osteolysis and has been termed NAO syndrome. Previous radiological reports have described the affected bones, usually the carpal and tarsal regions, but a detailed analysis of the radiological findings of both the axial as well as the appendicular skeleton has not been reported. Objectives: To describe the radiological findings in a large group of children with an autosomal recessive disease characterized by nodules, familial arthropathy and osteolysis. Materials and methods: The study comprises 14 patients from 9 families and all patients had the triad of nodulosis, arthropathy and osteolysis (NAO). Results: The most common radiological manifestations were osteopenia, undertubulation of long bones, arthritic changes, sclerotic sutures of the calvaria, osteolysis and muscle contractures. Other common findings were squared vertebrae, broad medial clavicles and brachycephaly. Progress of disease was documented in more than half of the patients. Conclusions: Our study is the first report of the detailed radiological findings of NAO syndrome. In NAO syndrome, both the axial and appendicular skeleton are involved (orig.)

  3. Radiological findings in NAO syndrome

    International Nuclear Information System (INIS)

    Al-Otaibi, Leftan; Hugosson, Claes O.; Al-Mayouf, Sulalman M.; Majeed, Mahmoud; Al-Eid, Wea'am; Bahabri, Sultan

    2002-01-01

    Background: Diseases exhibiting osteolysis in children are rare hereditary conditions. Several types have been recognised with different clinical manifestations. One type includes subcutaneous nodules, arthropathy and osteolysis and has been termed NAO syndrome. Previous radiological reports have described the affected bones, usually the carpal and tarsal regions, but a detailed analysis of the radiological findings of both the axial as well as the appendicular skeleton has not been reported. Objectives: To describe the radiological findings in a large group of children with an autosomal recessive disease characterized by nodules, familial arthropathy and osteolysis. Materials and methods: The study comprises 14 patients from 9 families and all patients had the triad of nodulosis, arthropathy and osteolysis (NAO). Results: The most common radiological manifestations were osteopenia, undertubulation of long bones, arthritic changes, sclerotic sutures of the calvaria, osteolysis and muscle contractures. Other common findings were squared vertebrae, broad medial clavicles and brachycephaly. Progress of disease was documented in more than half of the patients. Conclusions: Our study is the first report of the detailed radiological findings of NAO syndrome. In NAO syndrome, both the axial and appendicular skeleton are involved (orig.)

  4. Reliability analysis for manual radiographic measures of rotatory subluxation or lateral listhesis in adult scoliosis.

    Science.gov (United States)

    Freedman, Brett A; Horton, William C; Rhee, John M; Edwards, Charles C; Kuklo, Timothy R

    2009-03-15

    Retrospective observational study. To define the inter- and intraobserver reliability of 3 measures of rotatory subluxation (RS) in adult scoliosis (AS). RS is a hallmark of AS. To accurately track this measure, one must know its reliability. Reliability testing has not been performed. PA 36" films of 29 AS patients were collected from one surgeon's practice. Three observers on 2 separate occasions measured all levels with ≥3-mm RS (60 levels, 360 measurements) on the convexity of the involved segment using 3 different techniques: midbody (MB), endplate (EP), and centroid (C). These data were then analyzed to determine the intraclass correlation coefficient (ICC) for inter- and intraobserver reliability. The thoracolumbar/lumbar curve (average 58°) was the major curve for the majority (62%) of patients. RS at L3/4 was most common (35%). The overall inter- and intraobserver reliability was good-excellent for all methods, but the centroid method consistently had the highest ICC. ICC correlated with observer experience. Moderate-severe arthritic change (present in 55%) and poor image quality (52%) decreased ICC, but it still remained good-excellent for each measure. The reproducibility coefficient for each measure was 4 mm for MB and 2.8 mm for C and EP. MB, EP, and C are reliable techniques to measure RS even in elderly arthritic spines, but the methods inherently produce different values for a given level. The centroid method is most reliable and least influenced by experience. The EP method is easy to perform and very reliable. Spine surgeons should pick their preferred method and apply it consistently. Changes >3 mm suggest RS progression. RS may be a useful measure in addition to Cobb angle in AS. Having defined measurement reliability, the role of RS progression in surgical indications and patient outcomes can be evaluated.

  5. Comparison of arthroscopic findings and high-resolution MRI findings using a microscopy coil for triangular fibrocartilage complex injury

    International Nuclear Information System (INIS)

    Satomi, Yoshiaki; Shimizu, Hiroyuki; Arai, Takeshi; Izumiyama, Kou; Beppu, Moroe

    2008-01-01

    The triangular fibrocartilage complex (TFCC) is very small but can be visualized on MRI. We compared findings acquired by high-resolution MRI using a 47-mm-diameter microscopy coil with arthroscopic findings and reviewed the usefulness and possible applications of both techniques. The subjects were 16 patients who underwent arthroscopy of the radiocarpal joint and MRI for the diagnosis of pain in the ulnar wrist joint. Based on image evaluation, the impaired site was categorized as follows: radius attachment, disc proper, triangular ligament (upper lamina), triangular ligament (lower lamina), lunate bone cartilage face, and triquetral bone cartilage face; the findings of both techniques for these six sites were compared. Joint morphology was assessed by the gradient-recalled echo (GRE) method with T2-weighted images, and the cartilage side was analyzed by the fast spin-echo (FSE) method with proton density-weighted images. Three orthopedic surgeons and 1 radiologist interpreted the results. The impaired site was verified in all 16 patients by high-resolution MRI using a microscopy coil. The MRI findings were as follows: radius attachment in 2 patients, disc proper in 4, upper lamina in 7, lower lamina in 5, lunate bone cartilage face in 3, and triquetral bone cartilage face in 0. The frequency of injury according to arthroscopic findings was as follows: radius attachment in 2 patients, disc proper in 4, lunate bone cartilage face in 6, and triquetral bone cartilage face in 0. The sensitivity/specificity of arthroscopic findings in comparison with MRI findings was as follows: radius attachment 100%/100%, disc proper 75%/91.7%, lunate bone cartilage face 50%/100%, and triquetral bone cartilage face 0%/100%. Eight of 16 patients had depression of TFCC tone, and the sensitivity/specificity of arthroscopic findings in comparison with MRI findings for the depressed site and TFCC tone was as follows: upper lamina 75%/87.5% and lower lamina 50%/87.5%. High

  6. Abdominal alterations in disseminated paracoccidioidomycosis: computed tomography findings

    Energy Technology Data Exchange (ETDEWEB)

    Vermelho, Marli Batista Fernandes; Correia, Ademir Silva; Michailowsky, Tania Cibele de Almeida; Suzart, Elizete Kazumi Kuniyoshi; Ibanes, Aline Santos; Almeida, Lanamar Aparecida; Khoury, Zarifa; Barba, Mario Flores, E-mail: marlivermelho@globo.com [Instituto de Infectologia Emilio Ribas (IIER), Sao Paulo, SP (Brazil)

    2015-03-15

    Objective: to evaluate the incidence and spectrum of abdominal computed tomography imaging findings in patients with paracoccidioidomycosis. Materials and methods: retrospective analysis of abdominal computed tomography images of 26 patients with disseminated paracoccidioidomycosis. Results: abnormal abdominal tomographic findings were observed in 18 patients (69.2%), while no significant finding was observed in the other 8 (30.8%) patients. Conclusion: computed tomography has demonstrated to play a relevant role in the screening and detection of abdominal abnormalities in patients with disseminated paracoccidioidomycosis. (author)

  7. Clinical findings just after return to play predict hamstring re-injury, but baseline MRI findings do not

    NARCIS (Netherlands)

    R.J. de Vos (Robert-Jan); G. Reurink (Gustaaf); G.J. Goudswaard (Gert Jan); M.H. Moen (Maaike); A. Weir (Adam); J.L. Tol (Johannes)

    2014-01-01

    Abstract Background Acute hamstring re-injuries are common and hard to predict. The aim of this study was to investigate the association between clinical and imaging findings and the occurrence of hamstring re-injuries. Methods We obtained baseline data (clinical and MRI

  8. GALAXIES IN X-RAY GROUPS. II. A WEAK LENSING STUDY OF HALO CENTERING

    Energy Technology Data Exchange (ETDEWEB)

    George, Matthew R.; Ma, Chung-Pei [Department of Astronomy, University of California, Berkeley, CA 94720 (United States); Leauthaud, Alexie; Bundy, Kevin [Kavli Institute for the Physics and Mathematics of the Universe (Kavli IPMU, WPI), Todai Institutes for Advanced Study, University of Tokyo, Kashiwa 277-8583 (Japan); Finoguenov, Alexis [Max-Planck-Institut fuer Extraterrestrische Physik, Giessenbachstrasse, D-85748 Garching (Germany); Rykoff, Eli S. [Lawrence Berkeley National Laboratory, 1 Cyclotron Road, Berkeley, CA 94720 (United States); Tinker, Jeremy L. [Center for Cosmology and Particle Physics, Department of Physics, New York University, 4 Washington Place, New York, NY 10003 (United States); Wechsler, Risa H. [Kavli Institute for Particle Astrophysics and Cosmology, SLAC National Accelerator Laboratory, 2575 Sand Hill Road, Menlo Park, CA 94025 (United States); Massey, Richard [Department of Physics, University of Durham, South Road, Durham DH1 3LE (United Kingdom); Mei, Simona, E-mail: mgeorge@astro.berkeley.edu [Bureau des Galaxies, Etoiles, Physique, Instrumentation (GEPI), University of Paris Denis Diderot, F-75205 Paris Cedex 13 (France)

    2012-09-20

    Locating the centers of dark matter halos is critical for understanding the mass profiles of halos, as well as the formation and evolution of the massive galaxies that they host. The task is observationally challenging because we cannot observe halos directly, and tracers such as bright galaxies or X-ray emission from hot plasma are imperfect. In this paper, we quantify the consequences of miscentering on the weak lensing signal from a sample of 129 X-ray-selected galaxy groups in the COSMOS field with redshifts 0 < z < 1 and halo masses in the range 10^13-10^14 M⊙. By measuring the stacked lensing signal around eight different candidate centers (such as the brightest member galaxy, the mean position of all member galaxies, or the X-ray centroid), we determine which candidates best trace the center of mass in halos. In this sample of groups, we find that massive galaxies near the X-ray centroids trace the center of mass to ≲75 kpc, while the X-ray position and centroids based on the mean position of member galaxies have larger offsets primarily due to the statistical uncertainties in their positions (typically ~50-150 kpc). Approximately 30% of groups in our sample have ambiguous centers with multiple bright or massive galaxies, and some of these groups show disturbed mass profiles that are not well fit by standard models, suggesting that they are merging systems. We find that halo mass estimates from stacked weak lensing can be biased low by 5%-30% if inaccurate centers are used and the issue of miscentering is not addressed.

  9. Granulomatous mastitis: radiological findings

    International Nuclear Information System (INIS)

    Ozturk, M.; Mavili, E.; Kahriman, G.; Akcan, A.C.; Ozturk, F.

    2007-01-01

    Purpose: To evaluate the radiological, ultrasonographic, and magnetic resonance imaging (MRI) findings of idiopathic granulomatous mastitis. Material and Methods: Between April 2002 and June 2005, the mammography, ultrasound, color Doppler ultrasound, non-enhanced MR, and dynamic MR findings of nine patients with the preliminary clinical diagnosis of malignancy and the final diagnosis of granulomatous mastitis were evaluated. Results: On mammography, asymmetrical focal densities with no distinct margins, ill-defined masses with spiculated contours, and bilateral multiple ill-defined nodules were seen. On ultrasound, in four patients a discrete, heterogeneous hypoechoic mass, in two patients multiple abscesses, in one patient bilateral multiple centrally hypoechoic, peripherally hyperechoic lesions, in two patients heterogeneous hypo- and hyperechoic areas together with parenchymal distortion, and in one patient irregular hypoechoic masses with tubular extensions and abscess cavities were seen. Five of the lesions were vascular on color Doppler ultrasound. On MR mammography, the most frequent finding was focal or diffuse asymmetrical signal intensity changes that were hypointense on T1W images and hyperintense on T2W images, without significant mass effect. Nodular lesions were also seen. On dynamic contrast-enhanced mammography, mass-like enhancement, ring-like enhancement, and nodular enhancement were seen. The time-intensity curves differed from patient to patient and from lesion to lesion. Conclusion: The imaging findings of idiopathic granulomatous mastitis have a wide spectrum, and they are inconclusive for differentiating malignant and benign lesions.

  10. Granulomatous mastitis: radiological findings

    Energy Technology Data Exchange (ETDEWEB)

    Ozturk, M.; Mavili, E.; Kahriman, G.; Akcan, A.C.; Ozturk, F. [Depts. of Radiology, Surgery, and Pathology, Erciyes Univ. Medical Faculty, Kayseri (Turkey)

    2007-02-15

    Purpose: To evaluate the radiological, ultrasonographic, and magnetic resonance imaging (MRI) findings of idiopathic granulomatous mastitis. Material and Methods: Between April 2002 and June 2005, the mammography, ultrasound, color Doppler ultrasound, non-enhanced MR, and dynamic MR findings of nine patients with the preliminary clinical diagnosis of malignancy and the final diagnosis of granulomatous mastitis were evaluated. Results: On mammography, asymmetrical focal densities with no distinct margins, ill-defined masses with spiculated contours, and bilateral multiple ill-defined nodules were seen. On ultrasound, in four patients a discrete, heterogeneous hypoechoic mass, in two patients multiple abscesses, in one patient bilateral multiple centrally hypoechoic, peripherally hyperechoic lesions, in two patients heterogeneous hypo- and hyperechoic areas together with parenchymal distortion, and in one patient irregular hypoechoic masses with tubular extensions and abscess cavities were seen. Five of the lesions were vascular on color Doppler ultrasound. On MR mammography, the most frequent finding was focal or diffuse asymmetrical signal intensity changes that were hypointense on T1W images and hyperintense on T2W images, without significant mass effect. Nodular lesions were also seen. On dynamic contrast-enhanced mammography, mass-like enhancement, ring-like enhancement, and nodular enhancement were seen. The time-intensity curves differed from patient to patient and from lesion to lesion. Conclusion: The imaging findings of idiopathic granulomatous mastitis have a wide spectrum, and they are inconclusive for differentiating malignant and benign lesions.

  11. The Influence Of Learning Model Guided Findings Of Student Learning Outcomes

    Directory of Open Access Journals (Sweden)

    A. SaefulBahri

    2015-03-01

    Full Text Available Abstract This study examines the influence of the guided discovery learning model on student learning outcomes in PAI subjects for eighth grade students of SMP Plus al Masoem. The research method used in this study is a quantitative method in the form of a quasi-experimental design. The findings of the study are expected to demonstrate (1) a significant difference in the increase in learning outcomes between the experimental class, which used the guided discovery method, and the control class, which used a discussion learning model, and (2) the constraints on guided discovery activities: the limited ability of educators in the experimental class to implement the guided discovery method, and the constraints faced by students while digging for the information they need, so that special strategies are needed to motivate students in the experimental class to creatively find the right way to gather information that supports PAI learning.

  12. Study of coupled nonlinear partial differential equations for finding exact analytical solutions.

    Science.gov (United States)

    Khan, Kamruzzaman; Akbar, M Ali; Koppelaar, H

    2015-07-01

    Exact solutions of nonlinear partial differential equations (NPDEs) are obtained via the enhanced (G'/G)-expansion method. The method is subsequently applied to find exact solutions of the Drinfel'd-Sokolov-Wilson (DSW) equation and the (2+1)-dimensional Painlevé integrable Burgers (PIB) equation. The efficiency of this method for finding these exact solutions is demonstrated. The method is effective and applicable for many other NPDEs in mathematical physics.

  13. Equilibrium and stability of off-axis periodically focused particle beams

    International Nuclear Information System (INIS)

    Moraes, J.S.; Pakter, R.; Rizzato, F.B.

    2004-01-01

    A general equation for the centroid motion of free, continuous, intense beams propagating off axis in solenoidal periodic focusing fields is derived. The centroid equation is found to be independent of the specific beam distribution and may exhibit unstable solutions. A new Vlasov equilibrium for off-axis beam propagation is also obtained. The properties of the equilibrium and the relevance of centroid motion to beam confinement are discussed

  14. Investigation of the relationship between gross tumor volume location and pneumonitis rates using a large clinical database of non-small-cell lung cancer patients.

    Science.gov (United States)

    Vinogradskiy, Yevgeniy; Tucker, Susan L; Liao, Zhongxing; Martel, Mary K

    2012-04-01

    Studies have suggested that function may vary throughout the lung, and that patients who have tumors located in the base of the lung are more susceptible to radiation pneumonitis. The purpose of our study was to investigate the relationship between gross tumor volume (GTV) location and pneumonitis rates using a large clinical database of 547 patients with non-small-cell lung cancer. The GTV centroids of all patients were mapped onto one common coordinate system, in which the boundaries of the coordinate system were defined by the extreme points of each individual patient lung. The data were qualitatively analyzed by graphing all centroids and displaying the data according to the presence of severe pneumonitis, tumor stage, and smoking status. The centroids were grouped according to superior-inferior segments, and the pneumonitis rates were analyzed. In addition, we incorporated the GTV centroid information into a Lyman-Kutcher-Burman normal tissue complication probability model and tested whether adding spatial information significantly improved the fit of the model. Of the 547 patients analyzed, 111 (20.3%) experienced severe radiation pneumonitis. The pneumonitis incidence rates were 16%, 23%, and 21% for the superior, middle, and inferior thirds of the lung, respectively. Qualitatively, the GTV centroids of nonsmokers were notably absent from the superior portion of the lung. In addition, the GTV centroids of patients who had Stage III and IV clinical staging were concentrated toward the medial edge of the lung. The comparison between the GTV centroid model and the conventional dose-volume model did not yield a statistically significant difference in model fit. Lower pneumonitis rates were noted for the superior portion of the lung; however the differences were not statistically significant. For our patient cohort, incorporating GTV centroid information did not lead to a statistically significant improvement in the fit of the pneumonitis model. Copyright
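
    A hedged sketch of the centroid-mapping step described above: each GTV centroid is rescaled into a common unit cube defined by the extreme points of that patient's lung, after which centroids can be grouped into superior/middle/inferior thirds. The axis orientation (z increasing toward the superior direction) and all names are assumptions.

        import numpy as np

        def normalized_centroid(gtv_centroid, lung_min, lung_max):
            # Map a GTV centroid into [0, 1]^3 using the extreme points of the patient's lung,
            # so that centroids from different patients share one coordinate system.
            c = np.asarray(gtv_centroid, dtype=float)
            lo, hi = np.asarray(lung_min, dtype=float), np.asarray(lung_max, dtype=float)
            return (c - lo) / (hi - lo)

        def superior_inferior_third(norm_centroid):
            z = norm_centroid[2]                         # assumes z grows toward the superior lung
            return "inferior" if z < 1/3 else ("middle" if z < 2/3 else "superior")

        print(superior_inferior_third(normalized_centroid([12.0, 40.0, 95.0],
                                                          [0.0, 0.0, 0.0],
                                                          [300.0, 300.0, 250.0])))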

  15. Method to Find Recovery Event Combinations in Probabilistic Safety Assessment

    International Nuclear Information System (INIS)

    Jung, Woo Sik; Riley, Jeff

    2016-01-01

    These research activities may develop mathematical methods, engineering analyses, and business processes. The research activities of the project covered by this scope are directed toward the specific issues of implementing the methods and strategies on a computational platform, identifying the features and enhancements to EPRI tools that would be necessary to realize significant improvements to the risk assessments performed by the end user. Fault tree analysis is extensively and successfully applied to the risk assessment of safety-critical systems such as nuclear, chemical and aerospace systems. Fault tree analysis is used together with event tree analysis in the PSA of nuclear power plants. Fault tree solvers for a PSA are mostly based on the cutset-based algorithm. They generate minimal cut sets (MCSs) from a fault tree. The most popular fault tree solver in the PSA industry is FTREX. During the course of this project, certain technical issues (see Sections 2 to 5) have been identified that need to be addressed regarding how minimal cut sets are generated and quantified. The objective of this scope of the work was to develop new methods or techniques to address these technical limitations. By turning on all the cutset initiators (%1, %2, %3, %), all the possible minimal cut sets can be calculated more easily than with the original fault tree. This is made possible by the fact that the number of events in the minimal cut sets is significantly reduced by using cutset initiators instead of random failure events. By turning on a few chosen cutset initiators and turning off the other cutset initiators, minimal cut sets of the selected cutset initiator(s) can be easily calculated. As explained in the previous Sections, there is no way to calculate these minimal cut sets by turning off/on the random failure events in the original fault tree

  16. Method to Find Recovery Event Combinations in Probabilistic Safety Assessment

    Energy Technology Data Exchange (ETDEWEB)

    Jung, Woo Sik [Sejong University, Seoul (Korea, Republic of); Riley, Jeff [Electric Power Research, Palo Alto (United States)

    2016-05-15

    These research activities may develop mathematical methods, engineering analyses, and business processes. The research activities of the project covered by this scope are directed toward the specific issues of implementing the methods and strategies on a computational platform, identifying the features and enhancements to EPRI tools that would be necessary to realize significant improvements to the risk assessments performed by the end user. Fault tree analysis is extensively and successfully applied to the risk assessment of safety-critical systems such as nuclear, chemical and aerospace systems. Fault tree analysis is used together with event tree analysis in the PSA of nuclear power plants. Fault tree solvers for a PSA are mostly based on the cutset-based algorithm. They generate minimal cut sets (MCSs) from a fault tree. The most popular fault tree solver in the PSA industry is FTREX. During the course of this project, certain technical issues (see Sections 2 to 5) have been identified that need to be addressed regarding how minimal cut sets are generated and quantified. The objective of this scope of the work was to develop new methods or techniques to address these technical limitations. By turning on all the cutset initiators (%1, %2, %3, %), all the possible minimal cut sets can be calculated more easily than with the original fault tree. This is made possible by the fact that the number of events in the minimal cut sets is significantly reduced by using cutset initiators instead of random failure events. By turning on a few chosen cutset initiators and turning off the other cutset initiators, minimal cut sets of the selected cutset initiator(s) can be easily calculated. As explained in the previous Sections, there is no way to calculate these minimal cut sets by turning off/on the random failure events in the original fault tree.

  17. Breakdown Breakthrough: NREL Finds Easier Ways to Deconstruct Biomass

    Science.gov (United States)

    News item, May 22, 2018. Photo caption: Roman Brunecky (left) and Yannick Bomble soften biomass; photo by Dennis Schroeder, NREL. If there's an easier, more efficient method, science will

  18. MRI findings in the painful hemiplegic shoulder

    International Nuclear Information System (INIS)

    Tavora, D.G.F.; Gama, R.L.; Bomfim, R.C.; Nakayama, M.; Silva, C.E.P.

    2010-01-01

    Aim: To evaluate the magnetic resonance imaging (MRI) findings in painful hemiplegic shoulder (PHS) in hemiplegic post-stroke patients. Materials and methods: Patients with hemiplegia following their first cerebrovascular accident who were admitted to the Sarah Network of Hospitals for Rehabilitation were studied. Forty-five patients with pain in the hemiplegic shoulder and 23 post-stroke patients without shoulder pain were investigated. MRI and radiographic findings of the hemiplegic and contralateral asymptomatic shoulders were evaluated. Results: Some MRI findings were more frequent in the PHS group, including synovial capsule thickening, synovial capsule enhancement, and enhancement in the rotator cuff interval. Conclusions: Adhesive capsulitis was found to be a possible cause of PHS.

  19. MRI findings in the painful hemiplegic shoulder

    Energy Technology Data Exchange (ETDEWEB)

    Tavora, D.G.F., E-mail: danielgurgel@sarah.b [Department of Radiology, Sarah Network of Hospitals for Rehabilitation, Fortaleza (Brazil); Gama, R.L.; Bomfim, R.C. [Department of Radiology, Sarah Network of Hospitals for Rehabilitation, Fortaleza (Brazil); Nakayama, M. [Department of Radiology, Federal University of Grande Dourados, Dourados (Brazil); Silva, C.E.P. [Department of Statistics, Sarah Network of Hospitals for Rehabilitation, Fortaleza (Brazil)

    2010-10-15

    Aim: To evaluate the magnetic resonance imaging (MRI) findings in painful hemiplegic shoulder (PHS) in hemiplegic post-stroke patients. Materials and methods: Patients with hemiplegia following their first cerebrovascular accident who were admitted to the Sarah Network of Hospitals for Rehabilitation were studied. Forty-five patients with pain in the hemiplegic shoulder and 23 post-stroke patients without shoulder pain were investigated. MRI and radiographic findings of the hemiplegic and contralateral asymptomatic shoulders were evaluated. Results: Some MRI findings were more frequent in the PHS group, including synovial capsule thickening, synovial capsule enhancement, and enhancement in the rotator cuff interval. Conclusions: Adhesive capsulitis was found to be a possible cause of PHS.

  20. Improvements to the Root Finding Algorithm in VBBinaryLensing

    Science.gov (United States)

    Heintz, Tyler; Hoag, Ava; Bozza, Valerio; Oberst, Thomas

    2018-01-01

    VBBinaryLensing is the leading code for fitting magnification maps to microlensing events by binary lenses, most especially for exoplanet discoveries. The majority of the code’s runtime is devoted to finding roots of a fifth order complex polynomial arising from the binary lens equation. This is currently accomplished via the ZROOTS subroutine, which employs Laguerre’s method. Skowron and Gould’s (2012) algorithm, which employs a combination of Laguerre’s and Newton’s methods, presents the possibility of faster runtimes. We introduce a C++ translation of Skowron and Gould’s algorithm and test it as a replacement for ZROOTS in VBBinaryLensing. We find that this new implementation both reduces the time spent finding roots and makes the code more robust in extreme regimes.
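
    The sketch below illustrates the general idea on which such root finders rest - obtain all five complex roots, then polish them with a few Newton steps - using numpy's companion-matrix solver as a stand-in; it is not the ZROOTS routine or Skowron and Gould's Laguerre/Newton hybrid.

        import numpy as np

        def lens_polynomial_roots(coeffs, newton_iters=2):
            # coeffs: complex coefficients [c5, ..., c0] of the fifth-order binary-lens polynomial
            roots = np.roots(coeffs)                     # eigenvalues of the companion matrix
            p = np.poly1d(coeffs)
            dp = p.deriv()
            for _ in range(newton_iters):                # cheap Newton polishing of each root
                roots = roots - p(roots) / dp(roots)
            return roots

        coeffs = np.array([1.0, -2.0 + 1j, 0.5, 0.0, 1.5j, -1.0])   # arbitrary example polynomial
        print(np.allclose(np.polyval(coeffs, lens_polynomial_roots(coeffs)), 0.0, atol=1e-8))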

  1. The influence of image sensor irradiation damage on the tracking and pointing accuracy of optical communication system

    Science.gov (United States)

    Li, Xiaoliang; Luo, Lei; Li, Pengwei; Yu, Qingkui

    2018-03-01

    The image sensor in a satellite optical communication system may generate noise due to space irradiation damage, leading to deviations in the determination of the light spot centroid. Based on irradiation test data for CMOS devices, simulated defect spots of different sizes have been used to calculate the centroid deviation with the grey-level centroid algorithm. The impact on the tracking and pointing accuracy of the system has been analyzed. The results show that both the number and the positions of irradiation-induced defect pixels contribute to the spot centroid deviation, and that larger spots show less deviation. Finally, considering space radiation damage, suggestions are made for constraints on spot size selection.
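
    A minimal sketch of the grey-level centroid algorithm named above - an intensity-weighted mean pixel position, here with a simple background threshold; the thresholding and all names are illustrative assumptions rather than the paper's exact procedure.

        import numpy as np

        def grey_level_centroid(spot, threshold=0.0):
            # spot: 2-D array of pixel grey levels around the beacon spot image
            img = np.asarray(spot, dtype=float)
            img = np.where(img > threshold, img - threshold, 0.0)   # suppress background noise
            total = img.sum()
            rows, cols = np.indices(img.shape)
            return np.sum(rows * img) / total, np.sum(cols * img) / total

        spot = np.zeros((16, 16))
        spot[6:10, 7:11] = 1.0                           # ideal square spot
        spot[2, 3] = 0.8                                 # an irradiation-induced hot pixel
        print(grey_level_centroid(spot))                 # centroid pulled slightly toward the defect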

  2. THE CORES OF THE Fe Kα LINES IN ACTIVE GALACTIC NUCLEI: AN EXTENDED CHANDRA HIGH ENERGY GRATING SAMPLE

    International Nuclear Information System (INIS)

    Shu, X. W.; Wang, J. X.; Yaqoob, T.

    2010-01-01

    We extend the study of the core of the Fe Kα emission line at ∼6.4 keV in Seyfert galaxies reported by Yaqoob and Padmanabhan using a larger sample observed by the Chandra high-energy grating (HEG). The sample consists of 82 observations of 36 unique sources at low redshift and is the largest sample of low column density (N_H < 10^23 cm^-2) Seyfert galaxies studied in this way to date. From an empirical and uniform analysis, we present measurements of the Fe Kα line centroid energy, flux, equivalent width (EW), and intrinsic width (FWHM). The Fe Kα line is detected in 33 sources, and its centroid energy is constrained in 32 sources. In 27 sources, the statistical quality of the data is good enough to yield measurements of the FWHM. We find that the distribution in the line centroid energy is strongly peaked around the value for neutral Fe, with over 80% of the observations giving values in the range 6.38-6.43 keV. Including statistical errors, 30 out of 32 sources (∼94%) have a line centroid energy in the range 6.35-6.47 keV. The mean EW, among the observations in which a non-zero lower limit could be measured, was 53 ± 3 eV. The mean FWHM from the subsample of 27 sources was 2060 ± 230 km s^-1. The mean EW and FWHM are somewhat higher when multiple observations for a given source are averaged. From a comparison with the Hβ optical emission-line widths (or, for one source, Brα), we find that there is no universal location of the Fe Kα line-emitting region relative to the optical broad-line region (BLR). In general, a given source may have contributions to the Fe Kα line flux from parsec-scale distances from the putative black hole, down to matter a factor of ∼2 closer to the black hole than the BLR. We confirm the presence of the X-ray Baldwin effect, an anti-correlation between the Fe Kα line EW and X-ray continuum luminosity. The HEG data have enabled isolation of this effect to the narrow core of the Fe Kα line.

  3. The Dietary Patterns Methods Project: Synthesis of Findings across Cohorts and Relevance to Dietary Guidance

    Science.gov (United States)

    Liese, Angela D; Krebs-Smith, Susan M; Subar, Amy F; George, Stephanie M; Harmon, Brook E; Neuhouser, Marian L; Boushey, Carol J; Schap, TusaRebecca E; Reedy, Jill

    2015-01-01

    The Dietary Patterns Methods Project (DPMP) was initiated in 2012 to strengthen research evidence on dietary indices, dietary patterns, and health for upcoming revisions of the Dietary Guidelines for Americans, given that the lack of consistent methodology has impeded development of consistent and reliable conclusions. DPMP investigators developed research questions and a standardized approach to index-based dietary analysis. This article presents a synthesis of findings across the cohorts. Standardized analyses were conducted in the NIH-AARP Diet and Health Study, the Multiethnic Cohort, and the Women’s Health Initiative Observational Study (WHI-OS). Healthy Eating Index 2010, Alternative Healthy Eating Index 2010 (AHEI-2010), alternate Mediterranean Diet, and Dietary Approaches to Stop Hypertension (DASH) scores were examined across cohorts for correlations between pairs of indices; concordant classifications into index score quintiles; associations with all-cause, cardiovascular disease (CVD), and cancer mortality with the use of Cox proportional hazards models; and dietary intake of foods and nutrients corresponding to index quintiles. Across all cohorts in women and men, there was a high degree of correlation and consistent classifications between index pairs. Higher diet quality (top quintile) was significantly and consistently associated with an 11–28% reduced risk of death due to all causes, CVD, and cancer compared with the lowest quintile, independent of known confounders. This was true for all diet index–mortality associations, with the exception of AHEI-2010 and cancer mortality in WHI-OS women. In all cohorts, survival benefit was greater with a higher-quality diet, and relatively small intake differences distinguished the index quintiles. The reductions in mortality risk started at relatively lower levels of diet quality. Higher scores on each of the indices, signifying higher diet quality, were associated with marked reductions in mortality

  4. Modal mass estimation from ambient vibrations measurement: A method for civil buildings

    Science.gov (United States)

    Acunzo, G.; Fiorini, N.; Mori, F.; Spina, D.

    2018-01-01

    A new method for estimating the modal mass ratios of buildings from unscaled mode shapes identified from ambient vibrations is presented. The method is based on the Multi Rigid Polygons (MRP) model, in which each floor of the building is ideally divided into several non-deformable polygons that move independently of each other. The whole mass of the building is concentrated in the centroids of the polygons, and the experimental mode shapes are expressed in terms of rigid translations and rotations. In this way, the mass matrix of the building can be easily computed on the basis of simple information about the geometry and the materials of the structure. The modal mass ratios can then be obtained through the classical equation of structural dynamics. Ambient vibration measurements must be performed consistently with this MRP model, using at least two biaxial accelerometers per polygon. After a brief illustration of the theoretical background of the method, numerical validations are presented, analysing the sensitivity of the method to different possible sources of error. Quality indexes are defined for evaluating the approximation of the modal mass ratios obtained from a certain MRP model. The capability of the proposed model to be applied to real buildings is illustrated through two experimental applications. In the first one, a geometrically irregular reinforced concrete building is considered, using a calibrated Finite Element Model for validating the results of the method. The second application refers to a historical monumental masonry building, with a more complex geometry and with less information available. In both cases, MRP models with a different number of rigid polygons per floor are compared.
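    A minimal sketch of the underlying structural-dynamics relation (a generic textbook formula, not the authors' MRP code): with a lumped mass matrix M, an unscaled mode shape phi and a rigid-body influence vector r for the measured direction, the effective modal mass is (phi^T M r)^2 / (phi^T M phi), and dividing by the total mass r^T M r gives a modal mass ratio that is invariant to the scaling of phi.

      import numpy as np

      def modal_mass_ratio(M, phi, r):
          # Effective modal mass of mode phi for excitation direction r,
          # normalised by the total mass; independent of how phi is scaled.
          L = phi @ M @ r
          return (L ** 2 / (phi @ M @ phi)) / (r @ M @ r)

      M = np.diag([2.0e5, 2.0e5, 1.5e5])   # hypothetical floor masses (kg)
      phi = np.array([0.4, 0.8, 1.0])      # unscaled experimental mode shape
      r = np.ones(3)                       # uniform translation of all floors
      print(modal_mass_ratio(M, phi, r))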

  5. Study of coupled nonlinear partial differential equations for finding exact analytical solutions

    Science.gov (United States)

    Khan, Kamruzzaman; Akbar, M. Ali; Koppelaar, H.

    2015-01-01

    Exact solutions of nonlinear partial differential equations (NPDEs) are obtained via the enhanced (G′/G)-expansion method. The method is subsequently applied to find exact solutions of the Drinfel'd–Sokolov–Wilson (DSW) equation and the (2+1)-dimensional Painlevé integrable Burgers (PIB) equation. The efficiency of this method for finding these exact solutions is demonstrated. The method is effective and applicable for many other NPDEs in mathematical physics. PMID:26587256

  6. MR findings of bowlegs in toddlers

    International Nuclear Information System (INIS)

    Iwasawa, Tae; Inaba, Yutaka; Kameshita, Kikuo; Nishimura, Gen; Aida, Noriko; Matsubara, Sho

    1999-01-01

    Background. Toddlers with severe physiologic tibial bowing are considered to be at risk for the development of Blount's disease. Objective. To correlate MR findings of the knee with the clinical outcome in toddlers with severe physiological tibial bowing. Materials and methods. MR findings were evaluated in 22 affected legs of 14 children with severe tibial bowing (mean age 1.9 years). In 18 affected legs, MR findings were compared with the transition of the tibial metaphyseal-diaphyseal angle (MDA) and tibiofemoral angle (TFA) measured serially between 2 and 3 years of age. Results. MR findings of severe tibial bowing comprised undulation of the posteromedial physis of the tibia (3/22), signal alterations in the medial tibial metaphysis (10/22), T2 prolongation in the posteromedial tibial epiphyseal cartilage (14/22) and signal changes in the medial menisci (18/22). The decrease in the TFA was different in the legs with and without increased signal in the epiphyseal cartilage, and the decrease in the MDA was different in the legs with and without physeal undulation. Conclusion. MR imaging findings can predict the retarded resolution of tibial bowing, which may be a risk factor for the development of Blount's disease. (orig.)

  7. Effects of deformable registration algorithms on the creation of statistical maps for preoperative targeting in deep brain stimulation procedures

    Science.gov (United States)

    Liu, Yuan; D'Haese, Pierre-Francois; Dawant, Benoit M.

    2014-03-01

    Deep brain stimulation, which is used to treat various neurological disorders, involves implanting a permanent electrode into precise targets deep in the brain. Accurate pre-operative localization of the targets on MRI sequences is challenging, as these are typically located in homogeneous regions with poor contrast. Population-based statistical atlases can assist with this process. Such atlases are created by acquiring the location of efficacious regions from numerous subjects and projecting them onto a common reference image volume using some normalization method. In previous work, we presented results concluding that non-rigid registration provided the best result for such normalization. However, this process could be biased by the choice of the reference image and/or registration approach. In this paper, we have qualitatively and quantitatively compared the performance of six recognized deformable registration methods at normalizing such data in poorly contrasted regions onto three different reference volumes using a unique set of data from 100 patients. We study various metrics designed to measure the centroid, spread, and shape of the normalized data. This study leads to a total of 1800 deformable registrations, and results show that statistical atlases constructed using different deformable registration methods share comparable centroids and spreads, with marginal differences in their shape. Among the six methods being studied, Diffeomorphic Demons produces the largest spreads and centroids that are in general the furthest apart from the others. Among the three atlases, one atlas consistently outperforms the other two with smaller spreads for each algorithm. However, none of the differences in the spreads were found to be statistically significant across different algorithms or across different atlases.

  8. A mixed methods investigation of dropout among talented young dancers: findings from the UK Centres for Advanced Training.

    Science.gov (United States)

    Walker, Imogen J; Nordin-Bates, Sanna M; Redding, Emma

    2012-01-01

    The aim of this study was to understand reasons for dropout from a dance-talent program in the UK, using a mixed methods design. In-depth interviews were conducted with ten dropout students to explore the influencing factors in their decision to leave the program. In order to triangulate these findings, reasons for dropout were then examined from descriptive records of 147 young dancers who had withdrawn from the talent program over a four-year period. Overall, the most frequently cited reasons for dropping out were conflicting demands, change in aspirations, course content, difficulty making friends, and lost passion. Injury, financial factors, low perceived competence, and teacher behavior emerged as minor reasons. Intervention strategies that focus on changes in course content may be the easiest to implement and most effective means to enhance student retention.

  9. K-means-clustering-based fiber nonlinearity equalization techniques for 64-QAM coherent optical communication system.

    Science.gov (United States)

    Zhang, Junfeng; Chen, Wei; Gao, Mingyi; Shen, Gangxiang

    2017-10-30

    In this work, we proposed two k-means-clustering-based algorithms to mitigate the fiber nonlinearity for the 64-quadrature amplitude modulation (64-QAM) signal: the training-sequence-assisted k-means algorithm and the blind k-means algorithm. We experimentally demonstrated the proposed k-means-clustering-based fiber nonlinearity mitigation techniques in a 75-Gb/s 64-QAM coherent optical communication system. The proposed algorithms have reduced clustering complexity and low data redundancy, and they are able to quickly find appropriate initial centroids and correctly select the cluster centroids, obtaining globally optimal solutions for large values of k. We measured the bit-error-ratio (BER) performance of the 64-QAM signal with different launch powers into the 50-km single-mode fiber; the proposed techniques can greatly mitigate the signal impairments caused by amplified spontaneous emission noise and fiber Kerr nonlinearity and improve the BER performance.
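    A rough Python sketch of the training-assisted idea (an assumption-laden illustration, not the authors' implementation): seed k-means with the ideal 64-QAM constellation, fit it to distorted received symbols, and decide each symbol by its nearest learned centroid.

      import numpy as np
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(0)
      levels = np.arange(-7, 8, 2)
      ideal = np.array([x + 1j * y for x in levels for y in levels])   # 64 ideal points

      # Hypothetical received symbols: additive noise plus a power-dependent phase
      # rotation standing in for Kerr-induced nonlinear distortion.
      tx = rng.choice(ideal, size=20000)
      rot = np.exp(1j * 0.02 * np.abs(tx) ** 2 / np.mean(np.abs(ideal) ** 2))
      rx = tx * rot + 0.3 * (rng.normal(size=tx.size) + 1j * rng.normal(size=tx.size))

      X = np.column_stack([rx.real, rx.imag])
      init = np.column_stack([ideal.real, ideal.imag])       # centroids seeded from the ideal grid
      km = KMeans(n_clusters=64, init=init, n_init=1).fit(X)
      decided = km.cluster_centers_[km.labels_]               # nearest-centroid decisions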

  10. RADIO ASTROMETRY OF THE CLOSE ACTIVE BINARY HR 5110

    Energy Technology Data Exchange (ETDEWEB)

    Abbuhl, E.; Mutel, R. L.; Lynch, C. [Department of Physics and Astronomy, University of Iowa, Van Allen Hall, Iowa City, Iowa 52242 (United States); Güedel, M. [Department of Astronomy, University of Vienna, Vienna (Austria)

    2015-09-20

    The close active binary HR 5110 was observed at six epochs over 26 days using a global very long baseline interferometry array at 15.4 GHz. We used phase referencing to determine the position of the radio centroid at each epoch with an uncertainty significantly smaller than the component separation. After correcting for proper motion and parallax, we find that the centroid locations of all six epochs have barycenter separations consistent with an emission source located on the KIV secondary, and not in an interaction region between the stars or on the F primary. We used a homogeneous power-law gyrosynchrotron emission model to reproduce the observed flux densities and fractional circular polarization. The resulting ranges of mean magnetic field strength and relativistic electron densities are of the order of 10 G and 10⁵ cm⁻³, respectively, in the source region.

  11. Collective circular motion in synchronized and balanced formations with second-order rotational dynamics

    Science.gov (United States)

    Jain, Anoop; Ghose, Debasish

    2018-01-01

    This paper considers collective circular motion of multi-agent systems in which all the agents are required to traverse different circles or a common circle at a prescribed angular velocity. It is required to achieve these collective motions with the heading angles of the agents synchronized or balanced. In synchronization, the agents and their centroid have a common velocity direction, while in balancing, the movement of agents causes the location of the centroid to become stationary. The agents are initially considered to move at unit speed around individual circles at different angular velocities. It is assumed that the agents are subjected to limited communication constraints, and exchange relative information according to a time-invariant undirected graph. We present suitable feedback control laws for each of these motion coordination tasks by considering a second-order rotational dynamics of the agent. Simulations are given to illustrate the theoretical findings.
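    A small illustrative check (not the paper's control law): the centroid of the unit heading vectors distinguishes the two collective states, its magnitude being close to 1 for synchronized headings and close to 0 for balanced headings.

      import numpy as np

      def heading_centroid(theta):
          # Centroid of the unit vectors e^{i*theta_k}; |.| ~ 1 means synchronized,
          # |.| ~ 0 means balanced (stationary group centroid).
          return np.mean(np.exp(1j * theta))

      rng = np.random.default_rng(1)
      theta_sync = 0.3 + 0.01 * rng.normal(size=6)       # nearly common heading
      theta_bal = np.arange(6) * 2 * np.pi / 6           # evenly spread headings
      print(abs(heading_centroid(theta_sync)))           # ~ 1
      print(abs(heading_centroid(theta_bal)))            # ~ 0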

  12. Findings from analysing and quantifying human error using current methods

    International Nuclear Information System (INIS)

    Dang, V.N.; Reer, B.

    1999-01-01

    In human reliability analysis (HRA), the scarcity of data means that, at best, judgement must be applied to transfer to the domain of the analysis what data are available for similar tasks. In particular for the quantification of tasks involving decisions, the analyst has to choose among quantification approaches that all depend to a significant degree on expert judgement. The use of expert judgement can be made more reliable by eliciting relative judgements rather than absolute judgements. These approaches, which are based on multiple criterion decision theory, focus on ranking the tasks to be analysed by difficulty. While these approaches remedy at least partially the poor performance of experts in the estimation of probabilities, they nevertheless require the calibration of the relative scale on which the actions are ranked in order to obtain the probabilities of interest. This paper presents some results from a comparison of some current HRA methods performed in the frame of a study of SLIM calibration options. The HRA quantification methods THERP, HEART, and INTENT were applied to derive calibration human error probabilities for two groups of operator actions. (author)

  13. Depth of interaction detection for {gamma}-ray imaging

    Energy Technology Data Exchange (ETDEWEB)

    Lerche, Ch.W. [Instituto de Aplicaciones de las Tecnologias de la Informacion y de las Comunicaciones Avanzadas, (UPV) Camino de Vera s/n, E46022 (Spain)], E-mail: lerche@ific.uv.es; Doering, M. [Institut fuer Kernphysik, Forschungszentrum Juelich GmbH, D52425 Juelich (Germany); Ros, A. [Institute de Fisica Corpuscular (CSIC-UV), 22085, Valencia E46071 (Spain); Herrero, V.; Gadea, R.; Aliaga, R.J.; Colom, R.; Mateo, F.; Monzo, J.M.; Ferrando, N.; Toledo, J.F.; Martinez, J.D.; Sebastia, A. [Instituto de Aplicaciones de las Tecnologias de la Informacion y de las Comunicaciones Avanzadas, (UPV) Camino de Vera s/n, E46022 (Spain); Sanchez, F.; Benlloch, J.M. [Institute de Fisica Corpuscular (CSIC-UV), 22085, Valencia E46071 (Spain)

    2009-03-11

    A novel design for an inexpensive depth of interaction capable detector for γ-ray imaging has been developed. The design takes advantage of the strong correlation between the width of the scintillation light distribution in monolithic crystals and the interaction depth of γ-rays. We present in this work an inexpensive modification of the commonly used charge dividing circuits which enables the instantaneous and simultaneous computation of the second order moment of the light distribution. This measure provides a good estimate for the depth of interaction and does not affect the determination of the position centroids and the energy release of the γ-ray impact. The method has been tested with a detector consisting of a monolithic LSO block sized 42×42×10 mm³ and a position-sensitive photomultiplier tube H8500 from Hamamatsu. The mean spatial resolution of the detector was found to be 3.4 mm for the position centroids and 4.9 mm for the DOI. The best spatial resolutions were observed at the center of the detector and yielded 1.4 mm for the position centroids and 1.9 mm for the DOI.
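    An illustrative computation (in software, not the charge-divider electronics described above) of the position centroid and the second-order moment of a sampled light distribution; the square root of the central second moment is the width that correlates with the interaction depth.

      import numpy as np

      x = np.arange(8) * 6.0                        # hypothetical anode positions (mm)
      q = np.exp(-0.5 * ((x - 20.0) / 5.0) ** 2)    # sampled light distribution (a.u.)

      centroid = np.sum(x * q) / np.sum(q)                          # position estimate (mm)
      second_moment = np.sum((x - centroid) ** 2 * q) / np.sum(q)
      width = np.sqrt(second_moment)                                # depth-of-interaction estimator
      print(centroid, width)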

  14. Depth of interaction detection for γ-ray imaging

    International Nuclear Information System (INIS)

    Lerche, Ch.W.; Doering, M.; Ros, A.; Herrero, V.; Gadea, R.; Aliaga, R.J.; Colom, R.; Mateo, F.; Monzo, J.M.; Ferrando, N.; Toledo, J.F.; Martinez, J.D.; Sebastia, A.; Sanchez, F.; Benlloch, J.M.

    2009-01-01

    A novel design for an inexpensive depth of interaction capable detector for γ-ray imaging has been developed. The design takes advantage of the strong correlation between the width of the scintillation light distribution in monolithic crystals and the interaction depth of γ-rays. We present in this work an inexpensive modification of the commonly used charge dividing circuits which enables the instantaneous and simultaneous computation of the second order moment of the light distribution. This measure provides a good estimate for the depth of interaction and does not affect the determination of the position centroids and the energy release of the γ-ray impact. The method has been tested with a detector consisting of a monolithic LSO block sized 42×42×10 mm³ and a position-sensitive photomultiplier tube H8500 from Hamamatsu. The mean spatial resolution of the detector was found to be 3.4 mm for the position centroids and 4.9 mm for the DOI. The best spatial resolutions were observed at the center of the detector and yielded 1.4 mm for the position centroids and 1.9 mm for the DOI.

  15. Finding of region of interest in radioisotope scintigraphy's images

    International Nuclear Information System (INIS)

    Glazs, A.; Lubans, A.

    2003-01-01

    The paper addresses problems that arise when physicians try to make a diagnosis using information from images obtained by radioisotope scintigraphy. The algorithm for obtaining the image sets (called GFR) is described, together with the possible sources of diagnostic error. One source of error is incorrect detection of the investigated organ's location. A new method is suggested for detecting the organ's location in radioisotope scintigraphy image sets, based on the dynamic curves of pixel intensities. It is shown why the maxima of such curves cannot be used to find the investigated organ's location, and the use of an integral expression is suggested to solve the problem. The suggested method allows the investigated organ's location to be found and selected in image sequences (a correction not available in the existing methods). Results of using this method are presented. The method can work fully automatically or with manual setting of a threshold. (authors)

  16. Spectroscopy of 169Ta

    International Nuclear Information System (INIS)

    Kshetri, Ritesh; Ray, I.; Ganguly, S.; Pradhan, M.K.; Raut, R.; Goswami, A.; Banerjee, P.; Mukherjee, A.; Datta Pramanik, U.; Bhattacharya, S.; Dasmahapatra, B.; Saha Sarkar, M.; Dey, G.; Ray Basu, M.; Krishichayan; Chakraborty, A.; Ghugre, S.S.; Ray, M.; Sarkar, S.

    2006-01-01

    The lifetimes of the isomeric levels of 169,171 Ta have been re-measured using the centroid shift method of the electronic timing technique. Preliminary results for the lifetimes are in good agreement with the adopted values. Investigations are being carried out to identify other isomeric levels.

  17. On König's root finding algorithms

    DEFF Research Database (Denmark)

    Buff, Xavier; Henriksen, Christian

    2003-01-01

    In this paper, we first recall the definition of a family of root-finding algorithms known as König's algorithms. We establish some local and some global properties of those algorithms. We give a characterization of rational maps which arise as König's methods of polynomials with simple roots. We...

  18. Interprofessional teamwork in comprehensive primary healthcare services: Findings from a mixed methods study.

    Science.gov (United States)

    Bentley, Michael; Freeman, Toby; Baum, Fran; Javanparast, Sara

    2018-05-01

    This article draws on data from a 5-year project that examined the effectiveness of comprehensive primary healthcare (CPHC) in local communities. A hallmark of CPHC services is interprofessional teamwork. Drawing from this study, our article presents factors that enabled, or hindered, healthcare teams working interprofessionally in Australian primary healthcare (PHC) services. The article reports on the experiences of teams working in six Australian PHC services (four managed by state governments, one non-government sexual health organisation, and one Aboriginal community-controlled health service) during a time of significant health sector restructure. Findings are drawn from two key methods: an online survey of practitioners and managers (n = 154), and interviews with managers and practitioners (n = 60) from the six study sites. The majority of survey respondents worked with other health professionals in their service to provide interprofessional care to clients. Processes included formal team meetings, case conferencing, referring clients to other health professionals if needed, informal communication with other health professionals about clients, and team-based delivery of care. A range of interrelated factors affected interprofessional work at the services, from contextual, organisational, processual, and relational domains. Funding cuts and policy changes that saw a reorientation and re-medicalisation of South Australian services undermined interprofessional work, while a shared CPHC culture and commitment among some staff was helpful in resisting some of these effects. Co-location of services influenced interprofessional work: it enabled some PHC teams to work more interprofessionally, but it also created barriers to interprofessional teamwork through the disruption resulting from restructuring of services. Our study indicates the importance of decision makers taking into account the potential effects of policy and structural

  19. Accuracy of MRI findings in chronic lateral ankle ligament injury: Comparison with surgical findings

    Energy Technology Data Exchange (ETDEWEB)

    Park, H.-J. [Department of Radiology, Kangbuk Samsung Hospital, Sungkyunkwan University School of Medicine, Seoul (Korea, Republic of); Department of Radiology, Kangwon National University, School of Medicine, Chuncheon (Korea, Republic of); Cha, S.-D. [Department of Orthopedic Surgery, Myongji Hospital, Kwandong University, College of Medicine, Koyang (Korea, Republic of); Kim, S.S. [Department of Radiology, Kangwon National University, School of Medicine, Chuncheon (Korea, Republic of); Rho, M.-H., E-mail: parkhiji@kangwon.ac.kr [Department of Radiology, Kangbuk Samsung Hospital, Sungkyunkwan University School of Medicine, Seoul (Korea, Republic of); Kwag, H.-J. [Department of Radiology, Kangbuk Samsung Hospital, Sungkyunkwan University School of Medicine, Seoul (Korea, Republic of); Park, N.-H. [Department of Radiology, Myongji Hospital, Kwandong University, College of Medicine, Koyang (Korea, Republic of); Lee, S.-Y. [Department of Radiology, Kangbuk Samsung Hospital, Sungkyunkwan University School of Medicine, Seoul (Korea, Republic of)

    2012-04-15

    Aim: To evaluate the accuracy of magnetic resonance imaging (MRI) findings in chronic lateral ankle ligament injury in comparison with that of surgical findings. Materials and methods: Forty-eight cases (25 men, 23 women, mean age 36 years) of clinically suspected chronic ankle ligament injury underwent MRI studies and surgery. Sagittal, coronal, and axial, T1-weighted, spin-echo, proton density and T2-weighted, fast spin-echo images with fat saturation were obtained in all patients. MRI examinations were read in consensus by two fellowship-trained academic musculoskeletal radiologists who evaluated the lateral ankle ligaments, including the anterior talofibular ligament (ATFL) and calcaneofibular ligament (CFL) without clinical information. The results of the MRI studies were then compared with the surgical findings. Results: The MRI findings of ATFL injury showed a sensitivity of detection of complete tears of 75% and specificity of 86%. The sensitivity of detection of partial tears was 75% and the specificity was 78%. The sensitivity of detection of sprains was 44% and the specificity was 88%. Regarding the MRI findings of CFL injury, the sensitivity of detection of complete tears was 50% and the specificity was 98%. The sensitivity of detection of partial tear was 83% and the specificity was 93%. The sensitivity of detection of sprains was 100% and the specificity was 90%. Regarding the ATFL, the accuracies of detection were 88, 58, 77, and 85% for no injury, sprain, partial tear, and complete tear, respectively, and for the CFL the accuracies of detection were 90, 90, 92, and 96% for no injury, sprain, partial tear, and complete tear, respectively. Conclusions: The diagnosis of a complete tear of the ATFL on MRI is more sensitive than the diagnosis of a complete tear of the CFL. MRI findings of CFL injury are diagnostically specific but are not sensitive. However, only normal findings and complete tears were statistically significant between ATFL and CFL (p

  20. Accuracy of MRI findings in chronic lateral ankle ligament injury: Comparison with surgical findings

    International Nuclear Information System (INIS)

    Park, H.-J.; Cha, S.-D.; Kim, S.S.; Rho, M.-H.; Kwag, H.-J.; Park, N.-H.; Lee, S.-Y.

    2012-01-01

    Aim: To evaluate the accuracy of magnetic resonance imaging (MRI) findings in chronic lateral ankle ligament injury in comparison with that of surgical findings. Materials and methods: Forty-eight cases (25 men, 23 women, mean age 36 years) of clinically suspected chronic ankle ligament injury underwent MRI studies and surgery. Sagittal, coronal, and axial, T1-weighted, spin-echo, proton density and T2-weighted, fast spin-echo images with fat saturation were obtained in all patients. MRI examinations were read in consensus by two fellowship-trained academic musculoskeletal radiologists who evaluated the lateral ankle ligaments, including the anterior talofibular ligament (ATFL) and calcaneofibular ligament (CFL) without clinical information. The results of the MRI studies were then compared with the surgical findings. Results: The MRI findings of ATFL injury showed a sensitivity of detection of complete tears of 75% and specificity of 86%. The sensitivity of detection of partial tears was 75% and the specificity was 78%. The sensitivity of detection of sprains was 44% and the specificity was 88%. Regarding the MRI findings of CFL injury, the sensitivity of detection of complete tears was 50% and the specificity was 98%. The sensitivity of detection of partial tear was 83% and the specificity was 93%. The sensitivity of detection of sprains was 100% and the specificity was 90%. Regarding the ATFL, the accuracies of detection were 88, 58, 77, and 85% for no injury, sprain, partial tear, and complete tear, respectively, and for the CFL the accuracies of detection were 90, 90, 92, and 96% for no injury, sprain, partial tear, and complete tear, respectively. Conclusions: The diagnosis of a complete tear of the ATFL on MRI is more sensitive than the diagnosis of a complete tear of the CFL. MRI findings of CFL injury are diagnostically specific but are not sensitive. However, only normal findings and complete tears were statistically significant between ATFL and CFL (p

  1. Mixed methods evaluation of targeted case finding for cardiovascular disease prevention using a stepped wedged cluster RCT

    Directory of Open Access Journals (Sweden)

    Marshall Tom

    2012-10-01

    Full Text Available Abstract Background A pilot cardiovascular prevention project was implemented in Sandwell (West Midlands, UK). This used electronic primary care records to identify untreated patients at high risk of cardiovascular disease and then invited these high-risk patients for assessment by a nurse in their own general practice. Those found to be eligible for treatment were offered treatment. During the pilot, a higher proportion of high-risk patients were started on treatment in the intervention practices than in control practices. Following the apparent success of the prevention project, it was intended to extend the service to all practices across the Sandwell area. However, the pilot project was not a robust evaluation. There was a need for an efficient evaluation that would not disrupt the planned rollout of the project. Methods/design Project nurses will sequentially implement targeted cardiovascular case finding in a phased way across all general practices, with the sequence of general practices determined randomly. This is a stepped wedge randomised controlled trial design. The target population is patients aged 35 to 74, without diabetes or cardiovascular disease, whose ten-year cardiovascular risk (determined from data in their electronic records) is ≥20%. The primary outcome is the number of high-risk patients started on treatment, because these data could be efficiently obtained from electronic primary care records. From this we can determine the effects of the case finding programme on the proportion of high-risk patients started on treatment in practices before and after implementation of targeted case finding. Cost-effectiveness will be modelled from the predicted effects of treatments on cardiovascular events and associated health service costs. Alongside the implementation it is intended to interview clinical staff and patients who participated in the programme in order to determine acceptability to patients and clinicians. Practical

  2. Assessing Backwards Integration as a Method of KBO Family Finding

    Science.gov (United States)

    Benfell, Nathan; Ragozzine, Darin

    2018-04-01

    The age of young asteroid collisional families can sometimes be determined by using backwards n-body integrations of the solar system. This method is not used for discovering young asteroid families and is limited by the unpredictable influence of the Yarkovsky effect on individual specific asteroids over time. Since these limitations are not as important for objects in the Kuiper belt, Marcus et al. 2011 suggested that backwards integration could be used to discover and characterize collisional families in the outer solar system. But various challenges present themselves when running precise and accurate 4+ Gyr integrations of Kuiper Belt objects. We have created simulated families of Kuiper Belt Objects with identical starting locations and velocity distributions, based on the Haumea Family. We then ran several long-term test integrations to observe the effect of various simulation parameters on integration results. These integrations were then used to investigate which parameters are of enough significance to require inclusion in the integration. Thereby we determined how to construct long-term integrations that both yield significant results and require manageable processing power. Additionally, we have tested the use of backwards integration as a method of discovery of potential young families in the Kuiper Belt.
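    A toy sketch of the time-reversibility that backwards integration relies on (pure NumPy, not the survey's n-body code): a test particle is integrated forward around a central mass with a symplectic leapfrog scheme and then integrated backwards, recovering its initial state to round-off accuracy.

      import numpy as np

      GM = 1.0  # central mass in units where G*M = 1

      def accel(r):
          return -GM * r / np.linalg.norm(r) ** 3

      def leapfrog(r, v, dt, nsteps):
          # Kick-drift-kick leapfrog: time-reversible, which is what makes
          # integrating a known orbit backwards meaningful.
          for _ in range(nsteps):
              v = v + 0.5 * dt * accel(r)
              r = r + dt * v
              v = v + 0.5 * dt * accel(r)
          return r, v

      r0, v0 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])  # circular orbit
      r1, v1 = leapfrog(r0, v0, +1e-3, 100000)   # forward in time
      r2, v2 = leapfrog(r1, v1, -1e-3, 100000)   # backwards to the start
      print(np.linalg.norm(r2 - r0))             # ~ round-off level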

  3. Theoretical Predictions of Giant Resonances in 94Mo

    Science.gov (United States)

    Golden, Matthew; Bonasera, Giacomo; Shlomo, Shalom

    2016-09-01

    We perform Hartree-Fock-based random phase approximation calculations for 94Mo using thirty-three common Skyrme interactions found in the literature. We calculate the strength functions and the centroid energies of the isoscalar giant resonances for all multipolarities L = 0, 1, 2, and 3. We compare the calculated centroid energies with the experimental values, and we study any correlation the centroid energy may have with the nuclear matter properties of each interaction.
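    For reference, the centroid energy quoted in such calculations is a ratio of moments of the strength function, E_cen = m1/m0 with m_k = ∫ E^k S(E) dE. A minimal sketch with a hypothetical strength function:

      import numpy as np

      E = np.linspace(5.0, 35.0, 601)                     # excitation energy (MeV)
      S = np.exp(-0.5 * ((E - 16.5) / 2.5) ** 2)          # hypothetical strength function
      dE = E[1] - E[0]

      m0 = np.sum(S) * dE                                 # non-energy-weighted moment
      m1 = np.sum(E * S) * dE                             # energy-weighted moment
      print(m1 / m0)                                      # centroid energy (MeV)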

  4. Reliability analysis for radiographic measures of lumbar lordosis in adult scoliosis: a case–control study comparing 6 methods

    Science.gov (United States)

    Hong, Jae Young; Modi, Hitesh N.; Hur, Chang Yong; Song, Hae Ryong; Park, Jong Hoon

    2010-01-01

    Several methods are used to measure lumbar lordosis. In adult scoliosis patients, the measurement is difficult due to degenerative changes in the vertebral endplate as well as the coronal and sagittal deformity. We performed an observational study with three examiners to determine the reliability of six methods for measuring global lumbar lordosis in adult scoliosis patients. Ninety lateral lumbar radiographs were collected for the study and were divided, according to the Cobb angle, into a normal group and scoliosis groups of increasing severity; the reliability of the lordosis measurement decreased with increasing severity of scoliosis. In the Cobb L1–S1, centroid, and posterior tangent L1–S1 methods, the ICCs were relatively lower in the high-grade scoliosis group (≥0.60), and the mean absolute difference (MAD) of these methods was high in the high-grade scoliosis group (≤7.17°). However, in the Cobb L1–L5 and posterior tangent L1–L5 methods, the ICCs were ≥0.86 in all groups, and in the TRALL method the ICCs were ≥0.76 in all groups. In addition, in the Cobb L1–L5 and posterior tangent L1–L5 methods the MAD was ≤3.63°, and in the TRALL method the MAD was ≤3.84° in all groups. We concluded that the Cobb L1–L5 and posterior tangent L1–L5 methods are reliable for measuring global lumbar lordosis in adult scoliosis, and that the TRALL method is more reliable than the other methods that include the L5–S1 joint in the lordosis measurement. PMID:20437183

  5. Finding joy in social work. II: Intrapersonal sources.

    Science.gov (United States)

    Pooler, David Kenneth; Wolfer, Terry; Freeman, Miriam

    2014-07-01

    Despite the social work profession's strengths orientation, research on its workforce tends to focus on problems (for example, depression, problem drinking, compassion fatigue, burnout). In contrast, this study explored ways in which social workers find joy in their work. The authors used an appreciative inquiry approach, semistructured interviews (N = 26), and a collaborative grounded theory method of analysis. Participants identified interpersonal (making connections and making a difference) and intrapersonal (making meaning and making a life) sources of joy and reflected significant personal initiative in the process of finding joy. The authors present findings regarding these intrapersonal sources of joy.

  6. The TIMSS Videotape Classroom Study: Methods and Findings from an Exploratory Research Project on Eighth-Grade Mathematics Instruction in Germany, Japan, and the United States.

    Science.gov (United States)

    Stigler, James W.; Gonzales, Patrick; Kawanaka, Takako; Knoll, Steffen; Serrano, Ana

    1999-01-01

    Describes the methods and preliminary findings of the Videotape Classroom Study, a video survey of eighth-grade mathematics lessons in Germany, Japan, and the United States. Part of the Third International Mathematics and Science study, this research project is the first study of videotaped records from national probability samples. (SLD)

  7. Meeting report: discussions and preliminary findings on extracellular RNA measurement methods from laboratories in the NIH Extracellular RNA Communication Consortium

    Directory of Open Access Journals (Sweden)

    Louise C. Laurent

    2015-08-01

    Full Text Available Extracellular RNAs (exRNAs have been identified in all tested biofluids and have been associated with a variety of extracellular vesicles, ribonucleoprotein complexes and lipoprotein complexes. Much of the interest in exRNAs lies in the fact that they may serve as signalling molecules between cells, their potential to serve as biomarkers for prediction and diagnosis of disease and the possibility that exRNAs or the extracellular particles that carry them might be used for therapeutic purposes. Among the most significant bottlenecks to progress in this field is the lack of robust and standardized methods for collection and processing of biofluids, separation of different types of exRNA-containing particles and isolation and analysis of exRNAs. The Sample and Assay Standards Working Group of the Extracellular RNA Communication Consortium is a group of laboratories funded by the U.S. National Institutes of Health to develop such methods. In our first joint endeavour, we held a series of conference calls and in-person meetings to survey the methods used among our members, placed them in the context of the current literature and used our findings to identify areas in which the identification of robust methodologies would promote rapid advancements in the exRNA field.

  8. A colorimetric method for highly sensitive and accurate detection of iodide by finding the critical color in a color change process using silver triangular nanoplates

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Xiu-Hua; Ling, Jian, E-mail: lingjian@ynu.edu.cn; Peng, Jun; Cao, Qiu-E., E-mail: qecao@ynu.edu.cn; Ding, Zhong-Tao; Bian, Long-Chun

    2013-10-10

    Graphical abstract: -- Highlights: •Demonstrated a new colorimetric strategy for iodide detection by silver nanoplates. •The colorimetric strategy is to find the critical color in a color change process. •The colorimetric strategy is more accurate and sensitive than common colorimetry. •Discovered a new morphological transformation phenomenon of silver nanoplates. -- Abstract: In this contribution, we demonstrated a novel colorimetric method for highly sensitive and accurate detection of iodide using citrate-stabilized silver triangular nanoplates (silver TNPs). Very low concentrations of iodide can induce an appreciable color change of the silver TNP solution from blue to yellow through the fusion of silver TNPs into nanoparticles, as confirmed by UV–vis absorption spectroscopy and transmission electron microscopy (TEM). The principle of this assay is not ordinary colorimetry, but a new strategy of finding the critical color in a color change process. With this strategy, 0.1 μM iodide can be recognized within 30 min by naked-eye observation, and concentrations down to 8.8 nM can be detected using a spectrophotometer. Furthermore, this highly sensitive colorimetric assay has good accuracy, stability and reproducibility compared with ordinary colorimetry. We believe this new colorimetric method opens up a fresh route to simple, rapid and reliable detection of iodide and can find future application in biochemical analysis or clinical diagnosis.

  9. A colorimetric method for highly sensitive and accurate detection of iodide by finding the critical color in a color change process using silver triangular nanoplates

    International Nuclear Information System (INIS)

    Yang, Xiu-Hua; Ling, Jian; Peng, Jun; Cao, Qiu-E.; Ding, Zhong-Tao; Bian, Long-Chun

    2013-01-01

    Graphical abstract: -- Highlights: •Demonstrated a new colorimetric strategy for iodide detection by silver nanoplates. •The colorimetric strategy is to find the critical color in a color change process. •The colorimetric strategy is more accurate and sensitive than common colorimetry. •Discovered a new morphological transformation phenomenon of silver nanoplates. -- Abstract: In this contribution, we demonstrated a novel colorimetric method for highly sensitive and accurate detection of iodide using citrate-stabilized silver triangular nanoplates (silver TNPs). Very low concentrations of iodide can induce an appreciable color change of the silver TNP solution from blue to yellow through the fusion of silver TNPs into nanoparticles, as confirmed by UV–vis absorption spectroscopy and transmission electron microscopy (TEM). The principle of this assay is not ordinary colorimetry, but a new strategy of finding the critical color in a color change process. With this strategy, 0.1 μM iodide can be recognized within 30 min by naked-eye observation, and concentrations down to 8.8 nM can be detected using a spectrophotometer. Furthermore, this highly sensitive colorimetric assay has good accuracy, stability and reproducibility compared with ordinary colorimetry. We believe this new colorimetric method opens up a fresh route to simple, rapid and reliable detection of iodide and can find future application in biochemical analysis or clinical diagnosis.

  10. Using a Gradient Vector to Find Multiple Periodic Oscillations in Suspension Bridge Models

    Science.gov (United States)

    Humphreys, L. D.; McKenna, P. J.

    2005-01-01

    This paper describes how the method of steepest descent can be used to find periodic solutions of differential equations. Applications to two suspension bridge models are discussed, and the method is used to find non-obvious large-amplitude solutions.

  11. Effect of added purple-fleshed sweet potato and cassava flour on the ...

    African Journals Online (AJOL)

    Using a simplex centroid mixture design method, biscuits were formulated from composite flour developed according to a 10-point design matrix. The effect of flour variation on physical properties (weight, spread and colour) and sensory attributes (colour, aroma, texture and taste) of the formulations were evaluated.

  12. Computationally Efficient Automatic Coast Mode Target Tracking Based on Occlusion Awareness in Infrared Images.

    Science.gov (United States)

    Kim, Sohyun; Jang, Gwang-Il; Kim, Sungho; Kim, Junmo

    2018-03-27

    This paper proposes automatic coast-mode tracking for centroid trackers in infrared images to overcome target occlusion. The centroid tracking method, which uses only the brightness information of an image, is still widely used in infrared imaging tracking systems because it is difficult to extract meaningful features from infrared images. However, centroid trackers are likely to lose the track because they are highly vulnerable to being screened by clutter or background. Coast mode, one of the tracking modes, maintains the servo slew rate at the tracking rate recorded right before the loss of track. The proposed automatic coast-mode tracking method decides when to enter coast mode by predicting target occlusion, and tries to re-lock the target and resume tracking after the blind time. The algorithm comprises three steps. The first step predicts occlusion by checking both objects with target-like brightness and objects that may screen the target despite having a different brightness. The second step generates inertial tracking commands for the servo. The last step re-locks the target based on target modeling with a histogram ratio. The effectiveness of the proposed algorithm is demonstrated with experimental results based on computer simulation with various test imagery sequences, compared to published tracking algorithms. The proposed algorithm is also tested in a real environment with a naval electro-optical tracking system (EOTS) and an airborne EO/IR system.
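    A simplified sketch in the spirit of the algorithm described above (assumed details, not the published implementation): a brightness-weighted centroid is tracked frame by frame, and when too few target-like pixels survive the threshold the tracker coasts on the last measured track rate.

      import numpy as np

      def track(frames, threshold, min_pixels=20):
          pos, vel, out = None, np.zeros(2), []
          for frame in frames:
              mask = frame > threshold
              if mask.sum() >= min_pixels:              # target visible: update the centroid
                  ys, xs = np.nonzero(mask)
                  w = frame[ys, xs]
                  new = np.array([np.sum(xs * w), np.sum(ys * w)]) / np.sum(w)
                  if pos is not None:
                      vel = new - pos                   # track rate (pixels per frame)
                  pos = new
              elif pos is not None:                     # occlusion: coast on the last rate
                  pos = pos + vel
              out.append(None if pos is None else pos.copy())
          return out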

  13. Computationally Efficient Automatic Coast Mode Target Tracking Based on Occlusion Awareness in Infrared Images

    Directory of Open Access Journals (Sweden)

    Sohyun Kim

    2018-03-01

    Full Text Available This paper proposes automatic coast-mode tracking for centroid trackers in infrared images to overcome target occlusion. The centroid tracking method, which uses only the brightness information of an image, is still widely used in infrared imaging tracking systems because it is difficult to extract meaningful features from infrared images. However, centroid trackers are likely to lose the track because they are highly vulnerable to being screened by clutter or background. Coast mode, one of the tracking modes, maintains the servo slew rate at the tracking rate recorded right before the loss of track. The proposed automatic coast-mode tracking method decides when to enter coast mode by predicting target occlusion, and tries to re-lock the target and resume tracking after the blind time. The algorithm comprises three steps. The first step predicts occlusion by checking both objects with target-like brightness and objects that may screen the target despite having a different brightness. The second step generates inertial tracking commands for the servo. The last step re-locks the target based on target modeling with a histogram ratio. The effectiveness of the proposed algorithm is demonstrated with experimental results based on computer simulation with various test imagery sequences, compared to published tracking algorithms. The proposed algorithm is also tested in a real environment with a naval electro-optical tracking system (EOTS) and an airborne EO/IR system.

  14. The time-walk of analog constant fraction discriminators using very fast scintillator detectors with linear and non-linear energy response

    Energy Technology Data Exchange (ETDEWEB)

    Regis, J.-M., E-mail: regis@ikp.uni-koeln.de [Institut fuer Kernphysik der Universitaet zu Koeln, Zuelpicher Str. 77, 50937 Koeln (Germany); Rudigier, M.; Jolie, J.; Blazhev, A.; Fransen, C.; Pascovici, G.; Warr, N. [Institut fuer Kernphysik der Universitaet zu Koeln, Zuelpicher Str. 77, 50937 Koeln (Germany)

    2012-08-21

    The electronic γ-γ fast timing technique allows for direct nuclear lifetime determination down to the few picoseconds region by measuring the time difference between two coincident γ-ray transitions. Using high resolution ultra-fast LaBr₃(Ce) scintillator detectors in combination with the recently developed mirror symmetric centroid difference method, nuclear lifetimes are measured with a time resolving power of around 5 ps. The essence of the method is to calibrate the energy dependent position (centroid) of the prompt response function of the setup which is obtained for simultaneously occurring events. This time-walk of the prompt response function induced by the analog constant fraction discriminator has been determined by systematic measurements using different photomultiplier tubes and timing adjustments of the constant fraction discriminator. We propose a universal calibration function which describes the time-walk or the combined γ-γ time-walk characteristics, respectively, for either a linear or a non-linear amplitude versus energy dependency of the scintillator detector output pulses.
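    An illustrative centroid calculation for fast-timing spectra (a sketch of the basic quantity, not the authors' calibration procedure): the centroid of a time-difference spectrum is its first moment, and the difference of the centroids obtained for the two gating combinations is the centroid difference used by the method above.

      import numpy as np

      def centroid(t, counts):
          return np.sum(t * counts) / np.sum(counts)

      t = np.linspace(-2000.0, 2000.0, 4001)                      # time difference (ps)
      delayed = np.exp(-0.5 * ((t - 150.0) / 120.0) ** 2)         # hypothetical gated spectra
      antidelayed = np.exp(-0.5 * ((t + 150.0) / 120.0) ** 2)

      delta_C = centroid(t, delayed) - centroid(t, antidelayed)   # centroid difference (ps)
      print(delta_C)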

  15. FINGER KNUCKLE PRINT RECOGNITION WITH SIFT AND K-MEANS ALGORITHM

    Directory of Open Access Journals (Sweden)

    A. Muthukumar

    2013-02-01

    Full Text Available In general, identification and verification are done by passwords, PIN numbers, etc., which are easily cracked by others. Biometrics is a powerful and unique tool based on the anatomical and behavioral characteristics of human beings, used to prove their identity. This paper proposes a novel biometric recognition methodology based on the finger knuckle print (FKP). Features are extracted from the finger knuckle print using the Scale Invariant Feature Transform (SIFT), and the key points derived from the FKP are clustered using the k-means algorithm. The k-means centroids are stored in the database and compared with the centroids of the query FKP to perform recognition and authentication. The comparison is based on the XOR operation. Hence this paper provides a novel recognition method for authentication. Results on the PolyU FKP database are used to evaluate the proposed FKP recognition method.
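    A rough Python sketch along the lines described above (pipeline details assumed, file name hypothetical): SIFT descriptors are extracted from a finger knuckle print image and clustered with k-means, and the resulting centroids form the template stored for later comparison.

      import cv2
      from sklearn.cluster import KMeans

      def fkp_template(image_path, k=16):
          img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)   # hypothetical FKP image file
          sift = cv2.SIFT_create()
          _, desc = sift.detectAndCompute(img, None)           # 128-d SIFT descriptors
          km = KMeans(n_clusters=k, n_init=10).fit(desc)
          return km.cluster_centers_                           # k x 128 template

      # Matching would then compare an enrolled template against a query template;
      # the paper describes an XOR-based comparison of the stored centroids.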

  16. Primary cerebral lymphoma: radiological findings

    International Nuclear Information System (INIS)

    Ruiz, J.C.; Grandse, D.; Equidazu, J.; Elizagaray, E.; Grande, J.; Carrandi, J.

    1990-01-01

    We present four cases of primary cerebral lymphoma in non-immunodepressed adult patients. All cases were demonstrated by pathological study. CAT studies showed solitary or multiple isodense lesions with avid and homogeneous contrast enhancement. Arteriography, performed in three patients, and magnetic resonance, performed in one, did not help the diagnosis. We also review the radiological findings obtained with different imaging methods and suggest criteria which could be useful for early diagnosis (Author)

  17. Finding Sums for an Infinite Class of Alternating Series

    Science.gov (United States)

    Chen, Zhibo; Wei, Sheng; Xiao, Xuerong

    2012-01-01

    Calculus II students know that many alternating series are convergent by the Alternating Series Test. However, they know few alternating series (except geometric series and some trivial ones) for which they can find the sum. In this article, we present a method that enables the students to find sums for infinitely many alternating series in the…

  18. Our findings, my method: Framing science in televised interviews.

    Science.gov (United States)

    Armon, Rony; Baram-Tsabari, Ayelet

    2017-11-01

    The public communication of science and technology largely depends on their framing in the news media, but scientists' role in this process has only been explored indirectly. This study focuses on storied accounts told by scientists when asked to present their research or provide expert advice in the course of a news interview. A total of 150 items from a current affairs talk show broadcast in the Israeli media were explored through a methodology combining narrative and conversation analysis. Using the concept of framing as originally proposed by Erving Goffman, we show that researchers use personal accounts as a way of reframing news stories introduced by the program hosts. Elements of method and rationale, which are usually considered technical and are shunned in journalistic reports, emerged as a crucial element in the accounts that experts themselves provided. The implications for framing research and science communication training are discussed.

  19. Blackbody radiation functions and polylogarithms

    International Nuclear Information System (INIS)

    Stewart, Seán M.

    2012-01-01

    A new method based on the polylogarithm function is used to derive an exact expression for the fractional emissive power of a blackbody in any arbitrary spectral band. Compared to all previously used methods the polylogarithm based method is unsurpassed in its simplicity. Displacement laws for the centroid of blackbody radiation in the linear wavelength and frequency spectral representations that make use of the polylogarithm based approach are also given.
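    A short sketch of the polylogarithm route to the fractional emissive power (the standard closed form, assumed here to correspond to the paper's expression): with x = c2/(λT), the fraction of blackbody power emitted between 0 and λ is (15/π⁴)[x³Li₁(e⁻ˣ) + 3x²Li₂(e⁻ˣ) + 6xLi₃(e⁻ˣ) + 6Li₄(e⁻ˣ)].

      from mpmath import exp, pi, polylog

      def fractional_power(lam_um, T):
          # Fraction of total blackbody emissive power radiated between 0 and lam_um.
          c2 = 14387.77                     # second radiation constant (um*K)
          x = c2 / (lam_um * T)
          z = exp(-x)
          s = x**3 * polylog(1, z) + 3 * x**2 * polylog(2, z) \
              + 6 * x * polylog(3, z) + 6 * polylog(4, z)
          return 15 / pi**4 * s

      print(fractional_power(1e4, 300.0))    # nearly the whole band -> close to 1
      print(fractional_power(0.5, 5800.0))   # solar-like source below 0.5 um -> ~0.25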

  20. Reliability of change in lumbar MRI findings over time in patients with and without disc prosthesis - comparing two different image evaluation methods

    International Nuclear Information System (INIS)

    Berg, Linda; Espeland, Ansgar; Gjertsen, Oeivind; Hellum, Christian; Neckelmann, Gesche; Johnsen, Lars G.; Eide, Geir E.

    2012-01-01

    To assess the reliability of change in lumbar magnetic resonance imaging (MRI) findings evaluated retrospectively by direct comparison of images and by non-comparison. Pre-treatment and 2-year follow-up MRI was performed in 126 patients randomized to disc prosthesis surgery or non-surgical treatment. Two experienced radiologists independently evaluated progress and regress for Modic changes, disc findings, and facet arthropathy (FA) at L3/L4, L4/L5, and L5/S1, both by non-comparison and by comparison of initial and follow-up images. FA was evaluated at all levels, and other findings at non-operated levels. We calculated prevalence- and bias-adjusted kappa (PABAK) values for interobserver agreement. The impact of an adjacent prosthesis (which causes artefacts) and image evaluation method on PABAK was assessed using generalized estimating equations. Image comparison indicated good interobserver agreement on progress and regress (PABAK 0.63-1.00) for Modic changes, posterior high-intensity zone, disc height, and disc contour at L3-S1 and for nucleus pulposus signal and FA at L3/L4; and moderate interobserver agreement (PABAK 0.46-0.59) on decreasing nucleus signal and increasing FA at L4-S1. Image comparison indicated lower (but fair) interobserver agreement (PABAK 0.29) only for increasing FA at L5/S1 in patients with prosthesis in L4/L5 and/or L5/S1. An adjacent prosthesis had no overall impact on PABAK values (p ≥ 0.22). Comparison yielded higher PABAK values than non-comparison (p < 0.001). Regarding changes in lumbar MRI findings over time, comparison of images can provide moderate or good interobserver agreement, and better agreement than non-comparison. An adjacent prosthesis may not reduce agreement on change for most findings. (orig.)

  1. Correlations of CT and EEG findings in brain affections

    International Nuclear Information System (INIS)

    Roth, B.; Nevsimalova, S.; Kvicala, V.

    1984-01-01

    The results of electroencephalography (EEG) and computerized tomography (CT) examinations of 250 patients with different brain affections were compared. In intracranial expansive processes the pre-operative CT findings were positive in 100% of cases, the EEG findings in 89.7% of cases. In severe traumatic affections the EEG and CT findings were positive in all cases; in mild injuries and post-traumatic conditions the EEG findings were more frequently positive than the CT. In focal and diffuse vascular affections the EEG and CT findings were consistent; in transitory ischemic conditions the EEG findings were more frequently positive. In inflammatory cerebral affections and in paroxysmal diseases the EEG findings were positive more frequently than the CT. The same applies to demyelinating and degenerative affections. The findings of other authors were confirmed, namely that CT very reliably reveals morphological changes in cerebral tissue, while EEG records the functional state of the central nervous system and its changes. The two methods are complementary. (author)

  2. Cf-252 based neutron radiography using real-time image processing system

    International Nuclear Information System (INIS)

    Mochiki, Koh-ichi; Koiso, Manabu; Yamaji, Akihiro; Iwata, Hideki; Kihara, Yoshitaka; Sano, Shigeru; Murata, Yutaka

    2001-01-01

    For compact Cf-252 based neutron radiography, a real-time image processing system based on the particle counting technique has been developed. The electronic imaging system consists of a supersensitive imaging camera, a real-time corrector, a real-time binary converter, a real-time centroid calculator, a display monitor and a computer. Three types of accumulated NR image (ordinary, binary and centroid) can be observed during a measurement. Accumulated NR images were taken in the centroid, binary and ordinary modes using a Cf-252 neutron source, and the images were compared. The centroid mode presented the sharpest image and its statistical characteristics followed the Poisson distribution, while the ordinary mode showed the smoothest image, as the averaging effect of particle bright spots with distributed brightness was most dominant. (author)
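    A software sketch of the centroid-mode accumulation described above (illustrative, not the real-time hardware): each frame is thresholded, every bright spot produced by a detected particle is reduced to its centroid, and only these centroids are accumulated into the image.

      import numpy as np
      from scipy import ndimage

      def accumulate_centroids(frames, threshold, shape):
          acc = np.zeros(shape)
          for frame in frames:
              labels, n = ndimage.label(frame > threshold)            # separate bright spots
              centroids = ndimage.center_of_mass(frame, labels, range(1, n + 1))
              for cy, cx in centroids:
                  acc[int(round(cy)), int(round(cx))] += 1            # one count per centroid
          return acc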

  3. On the effects of rotation on interstellar molecular line profiles

    International Nuclear Information System (INIS)

    Adelson, L.M.; Chunming Leung

    1988-01-01

    Theoretical models are constructed to study the effects of systematic gas rotation on the emergent profiles of interstellar molecular lines, in particular the effects of optical depth and different velocity laws. Both rotational and radial motions (expansion or contraction) may produce similar asymmetric profiles, but the behaviour of the velocity centroid of the emergent profile over the whole cloud (iso-centroid maps) can be used to distinguish between these motions. Iso-centroid maps can also be used to determine the location and orientation of the rotation axis and of the equatorial axis. For clouds undergoing both radial and rotational motion, the component of the centroid due to the rotational motion can be separated from that due to the radial motion. Information on the form of the rotational velocity law can also be derived. (author)
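    The quantity mapped in such an iso-centroid analysis is the first moment of the line profile at each position, v_c(x, y) = Σ_v v T(x, y, v) / Σ_v T(x, y, v). A minimal sketch with a mock rotating cloud:

      import numpy as np

      def centroid_map(cube, velocities):
          # cube has shape (n_velocity, ny, nx); returns the velocity centroid per pixel.
          num = np.tensordot(velocities, cube, axes=(0, 0))
          return num / cube.sum(axis=0)

      v = np.linspace(-5.0, 5.0, 64)                      # channel velocities (km/s)
      x = np.linspace(-1.0, 1.0, 32)
      vrot = 2.0 * x                                      # mock solid-body rotation pattern
      cube = np.exp(-0.5 * ((v[:, None, None] - vrot[None, None, :]) / 1.5) ** 2) \
             * np.ones((1, 32, 1))
      print(centroid_map(cube, v)[16])                    # centroid varies linearly across the cloud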

  4. CT findings in primary pulmonary lymphomas

    International Nuclear Information System (INIS)

    Cardinale, Luciano; Allasia, Marco; Cataldi, Aldo; Ferraris, Fabrizio; Fava, Cesare; Parvis, Guido

    2005-01-01

    Purpose. To describe the CT findings of pathologically confirmed primary pulmonary lymphomas. Materials and methods. The CT examinations of 11 patients with pathologically proven primary pulmonary lymphoma (9 BALT lymphomas and 2 non-BALT lymphomas) were retrospectively reviewed by three radiologists. Evaluated findings included morphology (consolidation, mass, nodule) and the number and distribution of lesions. Other CT findings such as air bronchogram, lymphadenopathy, atelectasis and pleural effusion were also assessed. Results. Pulmonary lesions were depicted as air-space consolidation (pneumonia-like) in 5 patients (45%), tumour-like rounded opacity in 4 (36%), and nodules in 4 (36%). Multiple and bilateral lung lesions were seen in 3 patients (27%). Air bronchogram was present in 7 patients (63%), lymphadenopathy in 3 (27%), atelectasis in 4 (36%) and pleural effusion in only 1 (9%). Conclusions. Our results agree with previous studies regarding lesion patterns and their relative frequency. A smaller number of nodules and of multiple lesions were found compared with some previous studies. The most frequent pattern was airspace consolidation [it

  5. An Adaptive Moving Target Imaging Method for Bistatic Forward-Looking SAR Using Keystone Transform and Optimization NLCS.

    Science.gov (United States)

    Li, Zhongyu; Wu, Junjie; Huang, Yulin; Yang, Haiguang; Yang, Jianyu

    2017-01-23

    Bistatic forward-looking SAR (BFSAR) is a kind of bistatic synthetic aperture radar (SAR) system that can image forward-looking terrain in the flight direction of an aircraft. Until now, BFSAR imaging theories and methods for a stationary scene have been researched thoroughly. However, for moving-target imaging with BFSAR, the non-cooperative movement of the moving target induces some new issues: (I) large and unknown range cell migration (RCM) (including range walk and high-order RCM); (II) the spatial-variances of the Doppler parameters (including the Doppler centroid and high-order Doppler) are not only unknown, but also nonlinear for different point-scatterers. In this paper, we put forward an adaptive moving-target imaging method for BFSAR. First, the large and unknown range walk is corrected by applying keystone transform over the whole received echo, and then, the relationships among the unknown high-order RCM, the nonlinear spatial-variances of the Doppler parameters, and the speed of the mover, are established. After that, using an optimization nonlinear chirp scaling (NLCS) technique, not only can the unknown high-order RCM be accurately corrected, but also the nonlinear spatial-variances of the Doppler parameters can be balanced. At last, a high-order polynomial filter is applied to compress the whole azimuth data of the moving target. Numerical simulations verify the effectiveness of the proposed method.

  6. An Adaptive Moving Target Imaging Method for Bistatic Forward-Looking SAR Using Keystone Transform and Optimization NLCS

    Directory of Open Access Journals (Sweden)

    Zhongyu Li

    2017-01-01

    Full Text Available Bistatic forward-looking SAR (BFSAR) is a kind of bistatic synthetic aperture radar (SAR) system that can image forward-looking terrain in the flight direction of an aircraft. Until now, BFSAR imaging theories and methods for a stationary scene have been researched thoroughly. However, for moving-target imaging with BFSAR, the non-cooperative movement of the moving target induces some new issues: (I) large and unknown range cell migration (RCM) (including range walk and high-order RCM); (II) the spatial-variances of the Doppler parameters (including the Doppler centroid and high-order Doppler) are not only unknown, but also nonlinear for different point-scatterers. In this paper, we put forward an adaptive moving-target imaging method for BFSAR. First, the large and unknown range walk is corrected by applying keystone transform over the whole received echo, and then, the relationships among the unknown high-order RCM, the nonlinear spatial-variances of the Doppler parameters, and the speed of the mover, are established. After that, using an optimization nonlinear chirp scaling (NLCS) technique, not only can the unknown high-order RCM be accurately corrected, but also the nonlinear spatial-variances of the Doppler parameters can be balanced. At last, a high-order polynomial filter is applied to compress the whole azimuth data of the moving target. Numerical simulations verify the effectiveness of the proposed method.
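
    As a rough illustration of the range-walk correction step described in the two records above (and not the authors' code), the keystone transform rescales the slow-time axis of each range-frequency bin by f_c/(f_c + f_r), which removes the linear range walk for any unknown target velocity. A minimal sketch, assuming the echo has already been Fourier-transformed along fast time into `S[f_r, t_m]`; linear interpolation stands in for the sinc or chirp-z interpolation normally used.

```python
import numpy as np

def keystone_transform(S, range_freqs, slow_time, f_c):
    """Resample each range-frequency row onto a rescaled slow-time axis.

    S           : 2-D complex array, shape (n_freq, n_slow): range frequency vs slow time
    range_freqs : (n_freq,) range-frequency bins f_r in Hz
    slow_time   : (n_slow,) slow-time samples t_m in s
    f_c         : carrier frequency in Hz
    """
    out = np.zeros_like(S, dtype=complex)
    for i, f_r in enumerate(range_freqs):
        # the virtual slow time t_m' satisfies (f_c + f_r) * t_m = f_c * t_m'
        scaled_time = slow_time * f_c / (f_c + f_r)
        out[i] = (np.interp(scaled_time, slow_time, S[i].real)
                  + 1j * np.interp(scaled_time, slow_time, S[i].imag))
    return out
```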

  7. Olkiluoto hydrogeochemistry. A 3-D modelling approach for a sparse data set

    International Nuclear Information System (INIS)

    Luukkonen, A.; Partamies, S.; Pitkaenen, P.

    2003-07-01

    Olkiluoto at Eurajoki has been selected as a candidate site for the final disposal repository for the spent nuclear fuel produced in Finland. In the long-term safety assessment, hydrogeochemistry is one of the principal tools for evaluating safe disposal. For assessment purposes Posiva Oy is excavating an underground research laboratory (ONKALO) in the Olkiluoto bedrock. The complexity of the groundwater chemistry is characteristic of the Olkiluoto site and creates a need to examine and visualise these hydrogeochemical features in 3-D together with the structural model. This need applies not only to the stable, undisturbed (pre-excavation) conditions but also to the system disturbed by the construction activities and open-tunnel conditions of the ONKALO. The present 3-D approach is based on integrating the independently developed structural model with the results of the geochemical mixing calculations for the groundwater samples. For spatial geochemical regression purposes the study area is divided into four primary sectors on the basis of the occurrence of the samples. The geochemical information within each primary sector is summed up in a sector centroid that captures the depth distributions of the different water types within that sector area. The geographic locations of the centroids are then used to divide the study area into secondary sectors. With the aid of the secondary sectors, spatial regressions between the centroids can be calculated and interpolation of water-type fractions within the centroid volume becomes possible; extrapolations outside the centroid volume are possible as well. The mixing proportions of the five detected water types at an arbitrary point in the modelling volume can be estimated by applying the four centroids and using lateral linear regression. This study utilises two separate data sets: the older data set and the newer data set. The

  8. Motion-blurred star acquisition method of the star tracker under high dynamic conditions.

    Science.gov (United States)

    Sun, Ting; Xing, Fei; You, Zheng; Wei, Minsong

    2013-08-26

    The star tracker is one of the most promising attitude measurement devices used in spacecraft due to its extremely high accuracy. However, high dynamic performance is still one of its constraints: smearing appears, making it more difficult to distinguish the energy-dispersive star point from the noise. An effective star acquisition approach for motion-blurred star images is proposed in this work. A correlation filter and a mathematical morphology algorithm are combined to enhance the signal energy and estimate the slowly varying background noise. The star point can be separated from most types of noise in this manner, making extraction and recognition easier. Partial image differentiation is then utilized to obtain the motion parameters from only one image of the star tracker based on the above process. Considering the motion model, a reference window is adopted to perform centroid determination. Star acquisition results on real on-orbit star images and laboratory validation experiments demonstrate that the method described in this work is effective: the dynamic performance of the star tracker can be improved, with more stars identified and the position accuracy of the star points maintained.
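
    The centroid determination mentioned above is, at its core, an intensity-weighted centre of mass computed inside the reference window; the motion-model-dependent choice of that window is not reproduced here. A minimal sketch, assuming a background-subtracted image patch `window` whose top-left pixel sits at `origin` in the full frame:

```python
import numpy as np

def star_centroid(window, origin=(0, 0)):
    """Center-of-mass centroid of a (possibly smeared) star spot.

    window : 2-D array of background-subtracted pixel intensities
    origin : (row, col) of the window's top-left pixel in the full image
    Returns (row, col) of the centroid in full-image coordinates.
    """
    w = np.clip(window, 0.0, None)      # clip residual negative noise to zero
    total = w.sum()
    if total == 0:
        raise ValueError("empty window: no star energy found")
    rows, cols = np.indices(w.shape)
    r = (w * rows).sum() / total + origin[0]
    c = (w * cols).sum() / total + origin[1]
    return r, c
```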

  9. A numerical method for finding sign-changing solutions of superlinear Dirichlet problems

    Energy Technology Data Exchange (ETDEWEB)

    Neuberger, J.M.

    1996-12-31

    In a recent result it was shown via a variational argument that a class of superlinear elliptic boundary value problems has at least three nontrivial solutions: a pair of one sign and one which changes sign exactly once. These three and all other nontrivial solutions are saddle points of an action functional, and are characterized as local minima of that functional restricted to a codimension-one submanifold of the Hilbert space H_0^{1,2}, or an appropriate higher-codimension subset of that manifold. In this paper, we present a numerical Sobolev steepest descent algorithm for finding these three solutions.

  10. Determination System Of Food Vouchers For the Poor Based On Fuzzy C-Means Method

    Science.gov (United States)

    Anamisa, D. R.; Yusuf, M.; Syakur, M. A.

    2018-01-01

    Food vouchers are a government program to tackle poverty in rural communities. The program aims to help the poor obtain enough food and nutrients from carbohydrates. Several factors influence whether a household receives the food voucher, such as: job, monthly income, taxes, electricity bill, size of house, number of family members, education certificate and amount of rice consumed every week. The execution of the voucher distribution often runs into problems: the distribution of food vouchers has been misdirected and the selection of recipients is still subjective, and decision-making solutions have not yet been applied. This research calculates the change of each partition matrix and each cluster using the Fuzzy C-Means method, with the aim of contributing results of higher quality with Fuzzy C-Means than with other methods for this case study. In this research, decision making is carried out using the Fuzzy C-Means method, a clustering method that has an organized and scattered cluster structure with regular patterns on two-dimensional datasets, and it is used to calculate the change of each partition matrix. Each cluster is then sorted by the proximity of the data elements to the centroid of the cluster to obtain a ranking. Various trials were conducted for grouping and ranking the proposed recipients of food vouchers based on the quota of each village. In this testing, the Fuzzy C-Means method was able to determine the recipients of the food voucher with satisfactory results: fulfilment of the recipients of the food voucher is 80% to 90%, using data from 115 Family Cards from 6 villages, with 20 iterations and 3 clusters.
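
    For reference, the core of Fuzzy C-Means alternates between recomputing the cluster centroids and updating the membership (partition) matrix, after which each record can be ranked by its distance to its cluster centroid, as the abstract describes. The sketch below is a generic FCM iteration, not the authors' implementation; the defaults (3 clusters, 20 iterations) echo the settings reported above, while the fuzzifier m = 2 is an assumption.

```python
import numpy as np

def fuzzy_c_means(X, n_clusters=3, m=2.0, n_iter=20, seed=0):
    """Generic Fuzzy C-Means: returns cluster centroids and the membership matrix U."""
    X = np.asarray(X, dtype=float)
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), n_clusters))
    U /= U.sum(axis=1, keepdims=True)              # memberships of each row sum to 1
    for _ in range(n_iter):
        um = U ** m
        centroids = (um.T @ X) / um.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2) + 1e-12
        inv = dist ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)   # standard FCM membership update
    return centroids, U

# Ranking within a cluster (as in the abstract): assign each record to its
# highest-membership cluster, then sort the members by distance to that centroid.
```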

  11. Meat intake, cooking methods, dietary carcinogens, and colorectal cancer risk: findings from the Colorectal Cancer Family Registry.

    Science.gov (United States)

    Joshi, Amit D; Kim, Andre; Lewinger, Juan Pablo; Ulrich, Cornelia M; Potter, John D; Cotterchio, Michelle; Le Marchand, Loic; Stern, Mariana C

    2015-06-01

    Diets high in red meat and processed meats are established colorectal cancer (CRC) risk factors. However, it is still not well understood what explains this association. We conducted comprehensive analyses of CRC risk and red meat and poultry intakes, taking into account cooking methods, level of doneness, estimated intakes of heterocyclic amines (HCAs) that accumulate during meat cooking, tumor location, and tumor mismatch repair proficiency (MMR) status. We analyzed food frequency and portion size data including a meat cooking module for 3364 CRC cases, 1806 unaffected siblings, 136 unaffected spouses, and 1620 unaffected population-based controls, recruited into the CRC Family Registry. Odds ratios (OR) and 95% confidence intervals (CI) for nutrient density variables were estimated using generalized estimating equations. We found no evidence of an association between total nonprocessed red meat or total processed meat and CRC risk. Our main finding was a positive association with CRC for pan-fried beefsteak (P(trend) carcinogens relevant for CRC risk. © 2015 The Authors. Cancer Medicine published by John Wiley & Sons Ltd.

  12. Skin Findings in Renal Transplantation Patients

    Directory of Open Access Journals (Sweden)

    Demet Kartal

    2013-03-01

    Full Text Available Objective: To identify the skin findings seen in patients who have undergone renal transplantation. Methods: Patients followed at the Erciyes University Nephrology Hospital renal transplantation outpatient clinic were included in the study. They were evaluated for dermatologic findings during routine controls. Age, gender, transplantation date, identity of the organ donor, history of medications, dermatological history and dermatological findings during examination were recorded. Biopsy was performed when needed. Results: In total 94 patients, 25 female (26.6%) and 69 male (73.4%), were recruited to the study. Mean age was 36±10 years. The most frequent skin finding was drug-related acne (n=20). The most common infectious disease was verruca (n=17). There were viral diseases other than verruca such as herpes zoster (n=3), superficial mycoses such as onychomycosis (n=5), tinea versicolor and tinea pedis, bacterial skin disease (n=2) and paronychia (n=1), and pre-malignant lesions such as actinic cheilitis and bowenoid papulosis. Besides these, striae (n=3), xerosis (n=2), cornu cutaneum, café-au-lait spots, sebaceous hyperplasia, seborrheic dermatitis, skin tags, hypertrichosis, unguis incarnatus and calcinosis were other skin findings seen. No malignant skin lesion was observed in any of the patients. Conclusion: Miscellaneous skin lesions may develop in patients who have undergone renal transplantation due to long-term use of various immunosuppressive drugs.

  13. Nike's "Find Your Greatness Campaign" a Discourse Analysis

    OpenAIRE

    Maržić, Dea

    2016-01-01

    The purpose of this B.A. thesis is the discourse analysis of Nike's Find Your Greatness advertising campaign, released at the time of the 2012 Olympics in London. The analysis is preceded by a brief overview of important theories, findings and terminology in the fields of discourse analysis, visual analysis, and advertising. Of a total of twenty individual adverts, the first and last released advertisements were chosen as representative of the main approaches and methods used throughout the ca...

  14. Delimiting areas of endemism through kernel interpolation.

    Science.gov (United States)

    Oliveira, Ubirajara; Brescovit, Antonio D; Santos, Adalberto J

    2015-01-01

    We propose a new approach for identification of areas of endemism, the Geographical Interpolation of Endemism (GIE), based on kernel spatial interpolation. This method differs from others in being independent of grid cells. This new approach is based on estimating the overlap between the distribution of species through a kernel interpolation of centroids of species distribution and areas of influence defined from the distance between the centroid and the farthest point of occurrence of each species. We used this method to delimit areas of endemism of spiders from Brazil. To assess the effectiveness of GIE, we analyzed the same data using Parsimony Analysis of Endemism and NDM and compared the areas identified through each method. The analyses using GIE identified 101 areas of endemism of spiders in Brazil. GIE proved to be effective in identifying areas of endemism at multiple scales, with fuzzy edges and supported by more synendemic species than in the other methods. The areas of endemism identified with GIE were generally congruent with those identified for other taxonomic groups, suggesting that common processes can be responsible for the origin and maintenance of these biogeographic units.

  15. Delimiting areas of endemism through kernel interpolation.

    Directory of Open Access Journals (Sweden)

    Ubirajara Oliveira

    Full Text Available We propose a new approach for identification of areas of endemism, the Geographical Interpolation of Endemism (GIE), based on kernel spatial interpolation. This method differs from others in being independent of grid cells. This new approach is based on estimating the overlap between the distribution of species through a kernel interpolation of centroids of species distribution and areas of influence defined from the distance between the centroid and the farthest point of occurrence of each species. We used this method to delimit areas of endemism of spiders from Brazil. To assess the effectiveness of GIE, we analyzed the same data using Parsimony Analysis of Endemism and NDM and compared the areas identified through each method. The analyses using GIE identified 101 areas of endemism of spiders in Brazil. GIE proved to be effective in identifying areas of endemism at multiple scales, with fuzzy edges and supported by more synendemic species than in the other methods. The areas of endemism identified with GIE were generally congruent with those identified for other taxonomic groups, suggesting that common processes can be responsible for the origin and maintenance of these biogeographic units.
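
    The per-species building block of GIE described in the two records above is easy to reproduce: the centroid of a species' occurrence points and a circular area of influence whose radius is the distance from that centroid to the farthest occurrence. The kernel-interpolation and overlap steps are not shown. A minimal sketch under the simplifying assumption of planar (projected) coordinates:

```python
import numpy as np

def species_centroid_and_radius(points):
    """Centroid and area-of-influence radius for one species.

    points : (n, 2) array of projected occurrence coordinates (x, y)
    Returns (centroid, radius), where radius is the distance from the
    centroid to the farthest occurrence point.
    """
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    radius = np.linalg.norm(pts - centroid, axis=1).max()
    return centroid, radius

# The GIE overlap step would then kernel-interpolate these centroids,
# weighting each by its area of influence, to map candidate areas of endemism.
```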

  16. Clinical and Proctosigmoidoscopic findings in Patients with ...

    African Journals Online (AJOL)

    Background: Anorectal sepsis is a distressing condition which is sometimes inadequately treated. Objectives: To determine the clinical and proctosigmoidoscopic findings in patients with anorectal sepsis seen by the authors over a 5-year period, as well as identifying the commonly performed procedures. Method: A review of ...

  17. Towards predicting the (dis)comfort performance by modelling: methods and findings

    NARCIS (Netherlands)

    Naddeo, A.

    2017-01-01

    The research work underlying this thesis starts from a societal issue: A comfortable artefact helps people to improve their well-being and can be sold more easily.
    In order to fulfil these two requirements (wellbeing and companies’ profit) a comfort-driven human-centred design method is

  18. Contrast-enhanced dynamic magnetic resonance imaging findings of hepatocellular carcinoma and their correlation with histopathologic findings

    Energy Technology Data Exchange (ETDEWEB)

    Karahan, Okkes I. [Department of Radiology, Erciyes University Medical Faculty, PK: 18 Talas 38280, Kayseri (Turkey)]. E-mail: oikarahan@yahoo.com; Yikilmaz, Ali [Department of Radiology, Erciyes University Medical Faculty, PK: 18 Talas 38280, Kayseri (Turkey); Artis, Tarik [Department of General Surgery, Erciyes University Medical Faculty, PK: 18 Talas 38280, Kayseri (Turkey); Canoz, Ozlem [Department of Pathology, Erciyes University Medical Faculty, PK: 18 Talas 38280, Kayseri (Turkey); Coskun, Abdulhakim [Department of Radiology, Erciyes University Medical Faculty, PK: 18 Talas 38280, Kayseri (Turkey); Torun, Edip [Department of Internal Medicine, Division of Gastrenterology, Erciyes University Medical Faculty, PK: 18 Talas 38280, Kayseri (Turkey)

    2006-03-15

    Purpose: To investigate the correlations of contrast-enhanced magnetic resonance (MR) imaging findings of large (>5 cm) hepatocellular carcinomas with tumor size and histopathologic findings. Materials and methods: MR imaging was performed in 30 patients with a histopathologic diagnosis of hepatocellular carcinoma. The imaging protocol included non-contrast, hepatic arterial, portal venous and late phases. The signal intensities relative to the liver, enhancement patterns and the morphologic features of the lesions were evaluated in relation to size and degree of differentiation. Results: On histopathologic examination, 12 of 30 (40%) tumors were well-differentiated (grade 1), 6 of 30 (20%) were moderately differentiated (grades 2 and 3) and 12 of 30 (40%) were poorly differentiated (grade 4). Tumor size, tumor boundary, serum alpha-fetoprotein level and portal vein invasion were found to have statistically significant correlations with the degree of differentiation (p < 0.05). Portal vein invasion, capsule formation and tumor surface characteristics showed statistically significant correlations with tumor size (p < 0.05). Conclusion: MR imaging findings of hepatocellular carcinomas larger than 5 cm are partially dependent on tumor size and degree of differentiation.

  19. Modern dose-finding designs for cancer phase I trials drug combinations and molecularly targeted agents

    CERN Document Server

    Hirakawa, Akihiro; Daimon, Takashi; Matsui, Shigeyuki

    2018-01-01

    This book deals with advanced methods for adaptive phase I dose-finding clinical trials for combination of two agents and molecularly targeted agents (MTAs) in oncology. It provides not only methodological aspects of the dose-finding methods, but also software implementations and practical considerations in applying these complex methods to real cancer clinical trials. Thus, the book aims to furnish researchers in biostatistics and statistical science with a good summary of recent developments of adaptive dose-finding methods as well as providing practitioners in biostatistics and clinical investigators with advanced materials for designing, conducting, monitoring, and analyzing adaptive dose-finding trials. The topics in the book are mainly related to cancer clinical trials, but many of those topics are potentially applicable or can be extended to trials for other diseases. The focus is mainly on model-based dose-finding methods for two kinds of phase I trials. One is clinical trials with combinations of tw...

  20. Diagnosis of class using swarm intelligence applied to problem of identification of nuclear transient

    Energy Technology Data Exchange (ETDEWEB)

    Villas Boas Junior, Manoel; Strauss, Edilberto, E-mail: junior@lmp.ufrj.b [Instituto Federal de Educacao, Ciencia e Tecnologia do Ceara/ Universidade do Estado do Ceara, Itaperi, CE (Brazil). Mestrado Integrado em Computacao Aplicada; Nicolau, Andressa dos Santos; Schirru, Roberto, E-mail: andressa@lmp.ufrj.b [Coordenacao dos Programas de Pos-Graduacao de Engenharia (PEN/COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Programa de Engenharia Nuclear; Mello, Flavio Luis de [Universidade Federal do Rio de Janeiro (POLI/UFRJ), RJ (Brazil). Escola Politecnica. Dept. de Engenharia Eletronica e Computacao

    2011-07-01

    This article presents a computational model of a transient diagnosis system. The model makes use of segmentation techniques applied to support decision making, based on the identification of classes and optimized by the Particle Swarm Optimization (PSO) algorithm. The proposed method aims to classify an anomalous event among the signatures of three classes of design-basis transients postulated for the Angra 2 nuclear plant, where the PSO algorithm is used as a class-separation method, being responsible for finding the best centroid prototype vector of each accident/transient, i.e. the equivalent of the Voronoi vector that maximizes the number of correct classifications. To calculate the similarity between the set of variables of an anomalous event at a given time t and the prototype vector of variables of the accidents/transients, the Manhattan, Euclidean and Minkowski metrics were used. The results obtained by the proposed method were compatible with other methods reported in the literature, providing a solution that approximates the ideal solution, i.e. the Voronoi vectors. (author)

  1. Diagnosis of class using swarm intelligence applied to problem of identification of nuclear transient

    International Nuclear Information System (INIS)

    Villas Boas Junior, Manoel; Strauss, Edilberto; Nicolau, Andressa dos Santos; Schirru, Roberto; Mello, Flavio Luis de

    2011-01-01

    This article presents a computational model of a transient diagnosis system. The model makes use of segmentation techniques applied to support decision making, based on the identification of classes and optimized by the Particle Swarm Optimization (PSO) algorithm. The proposed method aims to classify an anomalous event among the signatures of three classes of design-basis transients postulated for the Angra 2 nuclear plant, where the PSO algorithm is used as a class-separation method, being responsible for finding the best centroid prototype vector of each accident/transient, i.e. the equivalent of the Voronoi vector that maximizes the number of correct classifications. To calculate the similarity between the set of variables of an anomalous event at a given time t and the prototype vector of variables of the accidents/transients, the Manhattan, Euclidean and Minkowski metrics were used. The results obtained by the proposed method were compatible with other methods reported in the literature, providing a solution that approximates the ideal solution, i.e. the Voronoi vectors. (author)
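
    The classification step described in both records above reduces to a nearest-prototype rule: the plant-variable vector of an anomalous event is compared with each class's centroid (Voronoi) vector under a chosen metric. The sketch below is a generic illustration, not the authors' PSO-optimized code; the prototype vectors are assumed to come from the optimization.

```python
import numpy as np

def classify_event(event, prototypes, p=2):
    """Assign an event to the nearest class prototype under a Minkowski metric.

    event      : (d,) vector of plant variables at time t
    prototypes : (k, d) array of class centroid (Voronoi) vectors
    p          : 1 -> Manhattan, 2 -> Euclidean, other p -> general Minkowski
    Returns the index of the closest prototype (the diagnosed class).
    """
    diffs = np.abs(np.asarray(prototypes, dtype=float) - np.asarray(event, dtype=float))
    distances = (diffs ** p).sum(axis=1) ** (1.0 / p)
    return int(np.argmin(distances))
```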

  2. Cluster analysis of received constellations for optical performance monitoring

    NARCIS (Netherlands)

    van Weerdenburg, J.J.A.; van Uden, R.; Sillekens, E.; de Waardt, H.; Koonen, A.M.J.; Okonkwo, C.

    2016-01-01

    Performance monitoring based on centroid clustering is used to investigate constellation generation offsets. The tool allows flexibility in constellation generation tolerances by forwarding centroids to the demapper. The relation of fibre nonlinearities and singular value decomposition of intra-cluster

  3. Micro-XANES Determination Fe Speciation in Natural Basalts at Mantle-Relevant fO2

    Science.gov (United States)

    Fischer, R.; Cottrell, E.; Lanzirotti, A.; Kelley, K. A.

    2007-12-01

    We demonstrate that the oxidation state of iron (Fe3+/ΣFe) can be determined with a precision of ±0.02 (10% relative) on natural basalt glasses at mantle-relevant fO2 using Fe K-edge X-ray absorption near edge structure (XANES) spectroscopy. This is equivalent to ±0.25 log unit resolution relative to the QFM buffer. Precise determination of the oxidation state over this narrow range (Fe3+/ΣFe=0.06-0.30) and at low fO2 (down to QFM-2) relies on appropriate standards, high spectral resolution, and highly reproducible methods for extracting the pre-edge centroid position. We equilibrated natural tholeiite powder in a CO/CO2 gas mixing furnace at 1350°C from QFM-3 to QFM+2 to create six glasses of known Fe3+/ΣFe, independently determined by Mössbauer spectroscopy. XANES spectra were collected at station X26A at NSLS, Brookhaven Natl. Lab, in fluorescence mode (9 element Ge array detector) using both Si(111) and Si(311) monochromators. Generally, the energy position of the 1s→3d (pre-edge) transition centroid is the most sensitive monitor of Fe oxidation state using XANES. For the mixture of Fe oxidation states in these glasses and the resulting coordination geometries, the pre-edge spectra are best defined by two multiple 3d crystal field transitions. The Si(311) monochromator, with higher energy resolution, substantially improved spectral resolution for the 1s→3d transition. Dwell times of 5s at 0.1eV intervals across the pre-edge region yielded spectra with the 1s→3d transition peaks clearly resolved. The pre-edge centroid position is highly sensitive to the background subtraction and peak fitting procedures. Differences in fitting models result in small but significant differences in the calculated peak area of each pre-edge multiplet, and the relative contribution of each peak to the calculated centroid. We assessed several schemes and obtained robust centroid positions by simultaneously fitting the background with a damped harmonic oscillator (DHO
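
    The centroid extraction described above amounts to fitting the background-subtracted pre-edge with two peaks and taking the area-weighted mean of their positions. The sketch below uses plain Gaussians via SciPy as a stand-in; the damped-harmonic-oscillator background and the exact peak shapes of the study are not reproduced, so treat it only as an illustration of the centroid arithmetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(E, a1, e1, w1, a2, e2, w2):
    """Sum of two Gaussian peaks evaluated on the energy grid E."""
    g = lambda a, e, w: a * np.exp(-0.5 * ((E - e) / w) ** 2)
    return g(a1, e1, w1) + g(a2, e2, w2)

def preedge_centroid(energy, mu, p0):
    """Area-weighted centroid of a background-subtracted pre-edge doublet.

    energy : array of energies (eV) across the pre-edge region
    mu     : background-subtracted absorption in that region
    p0     : initial guess (a1, e1, w1, a2, e2, w2)
    """
    popt, _ = curve_fit(two_gaussians, energy, mu, p0=p0)
    a1, e1, w1, a2, e2, w2 = popt
    area1, area2 = a1 * abs(w1), a2 * abs(w2)   # Gaussian area is proportional to amplitude*width
    return (area1 * e1 + area2 * e2) / (area1 + area2)
```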

  4. Radiographic findings of bisphosphonate-related osteonecrosis of ...

    African Journals Online (AJOL)

    Objective: The aim of this study is to assess radiographic findings of bisphosphonate-related osteonecrosis of the jaws (BRONJ) and to evaluate the efficiency of cone-beam computed tomography (CBCT) and panoramic radiography (PR) by comparing with each other. Materials and Methods: The data of 46 patients treated ...

  5. Statistical properties of antisymmetrized molecular dynamics for non-nucleon-emission and nucleon-emission processes

    International Nuclear Information System (INIS)

    Ono, A.; Horiuchi, H.

    1996-01-01

    Statistical properties of antisymmetrized molecular dynamics (AMD) are classical in the case of nucleon-emission processes, while they are quantum mechanical for the processes without nucleon emission. In order to understand this situation, we first clarify that two mutually opposite statistics coexist in the AMD framework: One is the classical statistics of the motion of wave packet centroids and the other is the quantum statistics of the motion of wave packets, which is described by the AMD wave function. We prove the classical statistics of wave packet centroids by using the framework of the microcanonical ensemble of the nuclear system with a realistic effective two-nucleon interaction. We show that the relation between the classical statistics of wave packet centroids and the quantum statistics of wave packets can be obtained by taking into account the effects of the wave packet spread. This relation clarifies how the quantum statistics of wave packets emerges from the classical statistics of wave packet centroids. It is emphasized that the temperature of the classical statistics of wave packet centroids is different from the temperature of the quantum statistics of wave packets. We then explain that the statistical properties of AMD for nucleon-emission processes are classical because nucleon-emission processes in AMD are described by the motion of wave packet centroids. We further show that when we improve the description of the nucleon-emission process so as to take into account the momentum fluctuation due to the wave packet spread, the AMD statistical properties for nucleon-emission processes change drastically into quantum statistics. Our study of nucleon-emission processes can be conversely regarded as giving another kind of proof of the fact that the statistics of wave packets is quantum mechanical while that of wave packet centroids is classical. copyright 1996 The American Physical Society

  6. Expertise finding in bibliographic network: topic dominance learning approach.

    Science.gov (United States)

    Neshati, Mahmood; Hashemi, Seyyed Hadi; Beigy, Hamid

    2014-12-01

    Expert finding problem in bibliographic networks has received increased interest in recent years. This problem concerns finding relevant researchers for a given topic. Motivated by the observation that rarely do all coauthors contribute to a paper equally, in this paper, we propose two discriminative methods for realizing leading authors contributing in a scientific publication. Specifically, we cast the problem of expert finding in a bibliographic network to find leading experts in a research group, which is easier to solve. We recognize three feature groups that can discriminate relevant experts from other authors of a document. Experimental results on a real dataset, and a synthetic one that is gathered from a Microsoft academic search engine, show that the proposed model significantly improves the performance of expert finding in terms of all common information retrieval evaluation metrics.

  7. Correlation of Imaging Findings with Pathologic Findings of Sclerosing Adenosis

    International Nuclear Information System (INIS)

    Choi, Bo Bae; Shu, Kwang Sun

    2012-01-01

    The purpose of this study was to evaluate the mammographic and sonographic findings of pure sclerosing adenosis. We retrospectively reviewed the mammographic and sonographic findings in 40 cases of pure sclerosing adenosis confirmed by core needle biopsy (n = 23), vacuum-assisted biopsy (n = 7), excision biopsy (n = 9), and lumpectomy (n = 1) from January 2002 to March 2010. All imaging findings were analyzed according to the American College of Radiology (ACR) breast imaging reporting and data system (BI-RADS). Radiologic features were correlated with pathologic findings. Although most mammograms showed negative findings (57%), calcification was the most common abnormal finding of sclerosing adenosis. On sonography, the most common finding was a circumscribed oval hypoechoic mass without posterior features (78%). Most masses showed BI-RADS category 3 (75%, 27/36). Five cases showed categories 4 or 5 (14%, 5/36). Most mammographic and sonographic findings of sclerosing adenosis are non-specific and non-pathognomonic, even though sclerosing adenosis can sometimes be radiologically or histopathologically confused with malignancy.

  8. Incidental extra-mammary findings in breast MRI

    International Nuclear Information System (INIS)

    Alduk, A.M.; Prutki, M.; Stern-Padovan, R.

    2015-01-01

    Aim: To investigate the frequency, distribution, and nature of incidental extra-mammary findings detected with breast MRI. Materials and methods: Incidental findings were defined as unexpected lesions outside the breast, not previously known or suspected at the time of referral. Five hundred consecutive breast MRI studies performed from June 2010 to September 2012 were reviewed in this retrospective study for which the institutional review board granted approval and waived the requirement for informed consent. MRI findings were compared with subsequent diagnostic procedures in order to differentiate benign from malignant lesions. Results: One hundred and thirty-eight incidental findings were found in 107 of the 500 (21.4%) examined patients. The most common site was the liver (61/138; 44.2%), followed by the lung (24/138; 17.4%), mediastinum (22/138; 15.9%), pleural cavity (15/138; 10.9%), bone tissue (9/138; 6.5%), spleen (3/138; 2.2%), major pectoral muscle (3/138; 2.2%), and kidney (1/138; 0.7%). Twenty-five of the 138 (18.1%) incidental findings were confirmed to be malignant, whereas the remaining 113 (81.9%) were benign. Malignant findings were exclusively detected in patients with known breast carcinoma, whereas incidental findings in patients without a history of carcinoma were all benign. Twenty-five of 100 (24.8%) incidental findings among patients with history of breast cancer were malignant. Conclusion: Although many of incidental findings were benign, some were malignant, altering the diagnostic work-up, staging, and treatment. Therefore, it is important to assess the entire field of view carefully for abnormalities when reviewing breast MRI studies. - Highlights: • 500 consecutive breast MRI studies were retrospectively reviewed. • Incidental findings were found in 107/500 (21.4%) of examined patients. • Incidental extra-mammary findings on breast MRI are common. • Malignant findings were exclusively detected in patients with known breast

  9. Diagnostic methods in finding out the causes of infertility, results of HSG examination and laparoscopy in infertile women examined at the Gynecological Ward of the City Hospital

    International Nuclear Information System (INIS)

    Kwasniewski, S.; Kukulski, P.; Szymanski, J.; Kwasniewska, A.

    1993-01-01

    The paper presents diagnostic methods and the results of hysterosalpingography (HSG) and laparoscopy examinations, with special attention being drawn to the usefulness of these methods in finding out the causes of infertility. Fifty-one patients with primary and secondary infertility were examined. HSG and laparoscopy make it possible to diagnose correctly the cause of infertility. They also enable adequate and early classification of patients for further diagnosis and treatment at highly specialized centers dealing with the problems of infertility. (author)

  10. Electromagnetic Tracking of Intrafraction Prostate Displacement in Patients Externally Immobilized in the Prone Position

    International Nuclear Information System (INIS)

    Bittner, Nathan; Butler, Wayne M.; Reed, Joshua L.; Murray, Brian C.; Kurko, Brian S.; Wallner, Kent E.; Merrick, Gregory S.

    2010-01-01

    Purpose: To evaluate intrafraction prostate displacement among patients immobilized in the prone position using real-time monitoring of implanted radiofrequency transponders. Methods and Materials: The Calypso localization system was used to track prostate motion in patients receiving external beam radiation therapy (XRT) for prostate cancer. All patients were treated in the prone position and immobilized with a thermoplastic immobilization device. Real-time measurement of prostate displacement was recorded for each treatment fraction. These measurements were used to determine the duration and magnitude of displacement along the three directional axes. Results: The calculated centroid of the implanted transponders was offset from the treatment isocenter by ≥2 mm, ≥3 mm, and ≥4 mm for 38.0%, 13.9%, and 4.5% of the time. In the lateral dimension, the centroid was offset from the treatment isocenter by ≥2 mm, ≥3 mm, and ≥4 mm for 2.7%, 0.4%, and 0.06% of the time. In the superior-inferior dimension, the centroid was offset from the treatment isocenter by ≥2 mm, ≥3 mm, and ≥4 mm for 16.1%, 4.7%, and 1.5% of the time, respectively. In the anterior-posterior dimension, the centroid was offset from the treatment isocenter by ≥2 mm, ≥3 mm, and ≥4 mm for 13.4%, 3.0%, and 0.5% of the time. Conclusions: Intrafraction prostate displacement in the prone position is comparable to that in the supine position. For patients with large girth, in whom the supine position may preclude accurate detection of implanted radiofrequency transponders, treatment in the prone position is a suitable alternative.
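
    For context, the percentages quoted above are exceedance fractions of the tracked centroid offset. A trivial sketch, assuming a hypothetical array of per-sample offsets (in millimetres) along one axis, sampled uniformly in time:

```python
import numpy as np

def exceedance_fractions(offsets_mm, thresholds=(2.0, 3.0, 4.0)):
    """Fraction of tracking samples whose |offset| meets or exceeds each threshold.

    offsets_mm : 1-D array of centroid offsets along one axis (or radial offsets),
                 sampled uniformly in time during treatment.
    """
    offsets = np.abs(np.asarray(offsets_mm, dtype=float))
    return {t: float((offsets >= t).mean()) for t in thresholds}

# e.g. exceedance_fractions(lateral_offsets) -> {2.0: ..., 3.0: ..., 4.0: ...}
```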

  11. Observation of Interfractional Variations in Lung Tumor Position Using Respiratory Gated and Ungated Megavoltage Cone-Beam Computed Tomography

    International Nuclear Information System (INIS)

    Chang, Jenghwa; Mageras, Gig S.; Yorke, Ellen; De Arruda, Fernando; Sillanpaa, Jussi; Rosenzweig, Kenneth E.; Hertanto, Agung; Pham, Hai; Seppi, Edward; Pevsner, Alex; Ling, C. Clifton; Amols, Howard

    2007-01-01

    Purpose: To evaluate the use of megavoltage cone-beam computed tomography (MV CBCT) to measure interfractional variation in lung tumor position. Methods and Materials: Eight non-small-cell lung cancer patients participated in the study, 4 with respiratory gating and 4 without. All patients underwent MV CBCT scanning at weekly intervals. Contoured planning CT and MV CBCT images were spatially registered based on vertebral anatomy, and displacements of the tumor centroid determined. Setup error was assessed by comparing weekly portal orthogonal radiographs with digitally reconstructed radiographs generated from planning CT images. Hypothesis testing was performed to test the statistical significance of the volume difference, centroid displacement, and setup uncertainty. Results: The vertebral bodies and soft tissue portions of tumor within lung were visible on the MV CBCT scans. Statistically significant systematic volume decrease over the course of treatment was observed for 1 patient. The average centroid displacement between simulation CT and MV CBCT scans were 2.5 mm, -2.0 mm, and -1.5 mm with standard deviations of 2.7 mm, 2.7 mm, and 2.6 mm in the right-left, anterior-posterior and superior-inferior directions. The mean setup errors were smaller than the centroid shifts, while the standard deviations were comparable. In most cases, the gross tumor volume (GTV) defined on the MV CBCT was located on average at least 5 mm inside a 10 mm expansion of the GTV defined on the planning CT scan. Conclusions: The MV CBCT technique can be used to image lung tumors and may prove valuable for image-guided radiotherapy. Our conclusions must be verified in view of the small patient number

  12. Interfractional Positional Variability of Fiducial Markers and Primary Tumors in Locally Advanced Non-Small-Cell Lung Cancer During Audiovisual Biofeedback Radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Roman, Nicholas O., E-mail: nroman@mcvh-vcu.edu [Department of Radiation Oncology, Virginia Commonwealth University, Richmond, VA (United States); Shepherd, Wes [Department of Pulmonology, Virginia Commonwealth University, Richmond, VA (United States); Mukhopadhyay, Nitai [Department of Biostatistics, Virginia Commonwealth University, Richmond, VA (United States); Hugo, Geoffrey D.; Weiss, Elisabeth [Department of Radiation Oncology, Virginia Commonwealth University, Richmond, VA (United States)

    2012-08-01

    Purpose: To evaluate implanted markers as a surrogate for tumor-based setup during image-guided lung cancer radiotherapy with audiovisual biofeedback. Methods and Materials: Seven patients with locally advanced non-small-cell lung cancer were implanted bronchoscopically with gold coils. Markers, tumor, and a reference bony structure (vertebra) were contoured for all 10 phases of the four-dimensional respiration-correlated fan-beam computed tomography and weekly four-dimensional cone-beam computed tomography. Results: The systematic/random interfractional marker-to-tumor centroid displacements were 2/3, 2/2, and 3/3 mm in the x (lateral), y (anterior-posterior), and z (superior-inferior) directions, respectively. The systematic/random interfractional marker-to-bone displacements were 2/3, 2/3, and 2/3 mm in the x, y, and z directions, respectively. The systematic/random tumor-to-bone displacements were 2/3, 2/4, and 4/4 mm in the x, y, and z directions, respectively. All displacements changed significantly over time (p < 0.0001). Conclusions: Although marker-based image guidance may decrease the risk for geometric miss compared with bony anatomy-based positioning, the observed displacements between markers and tumor centroids indicate the need for repeated soft tissue imaging, particularly in situations with large tumor volume change and large initial marker-to-tumor centroid distance.

  13. Contaminant Gradients in Trees: Directional Tree Coring Reveals Boundaries of Soil and Soil-Gas Contamination with Potential Applications in Vapor Intrusion Assessment.

    Science.gov (United States)

    Wilson, Jordan L; Samaranayake, V A; Limmer, Matthew A; Schumacher, John G; Burken, Joel G

    2017-12-19

    Contaminated sites pose ecological and human-health risks through exposure to contaminated soil and groundwater. Whereas we can readily locate, monitor, and track contaminants in groundwater, it is harder to perform these tasks in the vadose zone. In this study, tree-core samples were collected at a Superfund site to determine if the sample-collection location around a particular tree could reveal the subsurface location, or direction, of soil and soil-gas contaminant plumes. Contaminant-centroid vectors were calculated from tree-core data to reveal contaminant distributions in directional tree samples at a higher resolution, and vectors were correlated with soil-gas characterization collected using conventional methods. Results clearly demonstrated that directional tree coring around tree trunks can indicate gradients in soil and soil-gas contaminant plumes, and the strength of the correlations was directly proportional to the magnitude of the tree-core concentration gradients (Spearman's coefficient of -0.61 and -0.55 for soil and tree-core gradients, respectively). Linear regression indicates that agreement between the concentration-centroid vectors is significantly affected by in planta and soil concentration gradients and when concentration centroids in soil are closer to trees. Given the existing link between soil-gas and vapor intrusion, this study also indicates that directional tree coring might be applicable in vapor intrusion assessment.
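
    The contaminant-centroid vector idea can be illustrated simply: with core concentrations sampled at known azimuths around a trunk, the concentration-weighted mean of the sampling directions points toward the contaminant mass. This is only an interpretive sketch, not the study's actual calculation; `azimuths_deg` and `concentrations` are hypothetical inputs.

```python
import numpy as np

def concentration_centroid_vector(azimuths_deg, concentrations):
    """Concentration-weighted direction vector from directional core samples.

    azimuths_deg   : sampling directions around the trunk (degrees from north)
    concentrations : contaminant concentration measured in each directional core
    Returns (bearing_deg, magnitude); a magnitude near 0 means no clear gradient.
    """
    theta = np.radians(np.asarray(azimuths_deg, dtype=float))
    w = np.asarray(concentrations, dtype=float)
    x = np.sum(w * np.sin(theta)) / w.sum()   # east component of the weighted mean direction
    y = np.sum(w * np.cos(theta)) / w.sum()   # north component
    return float(np.degrees(np.arctan2(x, y)) % 360), float(np.hypot(x, y))
```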

  14. Interfractional Positional Variability of Fiducial Markers and Primary Tumors in Locally Advanced Non-Small-Cell Lung Cancer During Audiovisual Biofeedback Radiotherapy

    International Nuclear Information System (INIS)

    Roman, Nicholas O.; Shepherd, Wes; Mukhopadhyay, Nitai; Hugo, Geoffrey D.; Weiss, Elisabeth

    2012-01-01

    Purpose: To evaluate implanted markers as a surrogate for tumor-based setup during image-guided lung cancer radiotherapy with audiovisual biofeedback. Methods and Materials: Seven patients with locally advanced non-small-cell lung cancer were implanted bronchoscopically with gold coils. Markers, tumor, and a reference bony structure (vertebra) were contoured for all 10 phases of the four-dimensional respiration-correlated fan-beam computed tomography and weekly four-dimensional cone-beam computed tomography. Results: The systematic/random interfractional marker-to-tumor centroid displacements were 2/3, 2/2, and 3/3 mm in the x (lateral), y (anterior–posterior), and z (superior–inferior) directions, respectively. The systematic/random interfractional marker-to-bone displacements were 2/3, 2/3, and 2/3 mm in the x, y, and z directions, respectively. The systematic/random tumor-to-bone displacements were 2/3, 2/4, and 4/4 mm in the x, y, and z directions, respectively. All displacements changed significantly over time (p < 0.0001). Conclusions: Although marker-based image guidance may decrease the risk for geometric miss compared with bony anatomy–based positioning, the observed displacements between markers and tumor centroids indicate the need for repeated soft tissue imaging, particularly in situations with large tumor volume change and large initial marker-to-tumor centroid distance.

  15. Genetic diversity of Cytospora chrysospermaisolates obtained from ...

    African Journals Online (AJOL)

    Cluster analysis of the data using the centroid method and Jaccard's similarity coefficient divided the isolates into six groups, showing a high genetic diversity among populations of C. chrysosperma. Although there was no correlation between geographical origins and the resulting groups of the RAPD analysis, the amount of ...

  16. Distributed formation tracking using local coordinate systems

    DEFF Research Database (Denmark)

    Yang, Qingkai; Cao, Ming; Garcia de Marina, Hector

    2018-01-01

    This paper studies the formation tracking problem for multi-agent systems, for which a distributed estimator–controller scheme is designed relying only on the agents’ local coordinate systems such that the centroid of the controlled formation tracks a given trajectory. By introducing a gradient descent term into the estimator, the explicit knowledge of the bound of the agents’ speed is not necessary in contrast to existing works, and each agent is able to compute the centroid of the whole formation in finite time. Then, based on the centroid estimation, a distributed control algorithm...

  17. The RCE-Kmeans Method for Data Clustering (Metode RCE-Kmeans untuk Clustering Data)

    Directory of Open Access Journals (Sweden)

    Izmy Alwiah Musdar

    2015-07-01

    Abstract  There have been many methods developed to solve the clustering problem. One of them comes from the swarm intelligence field: Particle Swarm Optimization (PSO). Rapid Centroid Estimation (RCE) is a clustering method based on Particle Swarm Optimization. RCE, like other variants of PSO clustering, does not depend on the initial cluster centers. Moreover, RCE has a faster computational time than previous methods such as PSC and mPSC. However, RCE has a higher standard deviation than PSC and mPSC, which affects the variance of the clustering results. This happens because of an improper equilibrium state, the condition in which the particle positions no longer change, when the stopping criterion is reached. This study proposes RCE-Kmeans, a method that applies K-means after the equilibrium state of RCE is reached in order to update the particle positions generated by the RCE method. The results show that RCE-Kmeans yields a better-quality clustering scheme on 7 of 10 datasets compared to K-means and on 8 of 10 datasets compared to RCE. Applying K-means after the RCE method also reduces the standard deviation relative to the RCE method alone.   Keywords—Data Clustering, Particle Swarm, K-means, Rapid Centroid Estimation.
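
    The refinement stage proposed in the paper is, in essence, a standard K-means (Lloyd's) pass seeded with the centroids RCE has settled on at equilibrium. The RCE stage itself is not reproduced; the sketch below shows only the refinement step, assuming `rce_centroids` and the data matrix `X` were obtained elsewhere.

```python
import numpy as np

def kmeans_refine(X, init_centroids, n_iter=50):
    """Refine cluster centroids with Lloyd's algorithm, seeded by RCE output."""
    X = np.asarray(X, dtype=float)
    centroids = np.array(init_centroids, dtype=float)
    for _ in range(n_iter):
        # assign each point to its nearest centroid
        labels = np.argmin(
            np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2), axis=1)
        # recompute each centroid as the mean of its assigned points
        for k in range(len(centroids)):
            members = X[labels == k]
            if len(members):
                centroids[k] = members.mean(axis=0)
    return centroids, labels

# Usage (hypothetical): centroids, labels = kmeans_refine(X, rce_centroids)
```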

  18. Finding the Optimum Scenario in Risk-benefit Assessment: An Example on Vitamin D

    DEFF Research Database (Denmark)

    Berjia, Firew Lemma; Hoekstra, J.; Verhagen, H.

    2014-01-01

    The study presents a method for finding the optimum scenario that provides maximum net health gains. Methods: A multiple scenario simulation. The method is presented using vitamin D intake in Denmark as an example. In addition to the reference scenario, several alternative scenarios are simulated to detect the scenario that provides maximum net health gains. As a common health metric, Disability Adjusted Life Years (DALY) has been used to project the net health effect by using the QALIBRA (Quality of Life for Benefit Risk Assessment) software. Results: The method used in the vitamin D example shows that it is feasible to find ... when changing from the reference to the optimum scenario. Conclusion: The method allowed us to find the optimum serum level in the vitamin D example. Additional case studies are needed to further validate the applicability of the approach to other nutrients or foods, especially with regards ...

  19. Infant feeding experiences among teen mothers in North Carolina: Findings from a mixed-methods study

    Directory of Open Access Journals (Sweden)

    Samandari Ghazaleh

    2011-09-01

    Full Text Available Abstract Background Adolescent mothers in the U.S. are much less likely to initiate breastfeeding than older mothers, and teens who do initiate breastfeeding tend to breastfeed for shorter durations. The purpose of this mixed-methods study is to investigate breastfeeding practices, barriers and facilitators among adolescent mothers ages 17 and younger. Methods Quantitative descriptive analyses are conducted using data from the North Carolina Pregnancy Risk Assessment Monitoring System (PRAMS). The population-based sample comprises 389 teens ages 13-17 giving birth to a live born infant in North Carolina in 2000 - 2005 and in 2007. Qualitative analyses are based on in-depth interviews with 22 Black, White and Hispanic teen mothers residing in rural and urban areas of North Carolina conducted between November 2007 and February 2009. Results In quantitative analyses, 52% (196 of 389) of North Carolina teen mothers initiated breastfeeding, but half of those who initiated breastfeeding (92/196) stopped within the first month postpartum. Hispanic teens (44/52 or 89%) were much more likely than Black (61/159 or 41%) or White teens (87/164 or 52%) to initiate breastfeeding and to continue for a longer duration. Nearly sixty-two percent (29/52) of Hispanic respondents breastfed for greater than four weeks as compared to 16% (29/159) of Black respondents and 26% (39/164) of White respondents. Common barriers to breastfeeding initiation and continuation included not liking breastfeeding, returning to school, nipple pain, and insufficient milk. Qualitative data provided context for the quantitative findings, elucidating the barriers and facilitators to breastfeeding from the teens' perspective and insight into the ways in which breastfeeding support to teens could be enhanced. Conclusions The large number of adolescents ceasing breastfeeding within the first month points to the need for more individualized follow-up after hospital discharge in the first few days

  20. Automatic registration method for multisensor datasets adopted for dimensional measurements on cutting tools

    International Nuclear Information System (INIS)

    Shaw, L; Mehari, F; Weckenmann, A; Ettl, S; Häusler, G

    2013-01-01

    Multisensor systems with optical 3D sensors are frequently employed to capture complete surface information by measuring workpieces from different views. During coarse and fine registration the resulting datasets are afterward transformed into one common coordinate system. Automatic fine registration methods are well established in dimensional metrology, whereas there is a deficit in automatic coarse registration methods. The advantage of a fully automatic registration procedure is twofold: it enables a fast and contact-free alignment and further a flexible application to datasets of any kind of optical 3D sensor. In this paper, an algorithm adapted for a robust automatic coarse registration is presented. The method was originally developed for the field of object reconstruction or localization. It is based on a segmentation of planes in the datasets to calculate the transformation parameters. The rotation is defined by the normals of three corresponding segmented planes of two overlapping datasets, while the translation is calculated via the intersection point of the segmented planes. First results have shown that the translation is strongly shape dependent: 3D data of objects with non-orthogonal planar flanks cannot be registered with the current method. In the novel supplement for the algorithm, the translation is additionally calculated via the distance between centroids of corresponding segmented planes, which results in more than one option for the transformation. A newly introduced measure considering the distance between the datasets after coarse registration evaluates the best possible transformation. Results of the robust automatic registration method are presented on the example of datasets taken from a cutting tool with a fringe-projection system and a focus-variation system. The successful application in dimensional metrology is proven with evaluations of shape parameters based on the registered datasets of a calibrated workpiece. (paper)
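
    The supplement described above estimates the coarse-registration translation from the centroids of corresponding segmented planes once the rotation obtained from the plane normals has been applied. A minimal sketch of that step (plane segmentation, correspondence and rotation estimation are assumed to have been done already; all names are illustrative):

```python
import numpy as np

def translation_from_plane_centroids(centroids_a, centroids_b, rotation):
    """Estimate the coarse-registration translation from plane centroids.

    centroids_a : (n, 3) centroids of segmented planes in dataset A
    centroids_b : (n, 3) centroids of the corresponding planes in dataset B
    rotation    : (3, 3) rotation matrix already estimated from the plane normals
    Returns the translation that maps rotated dataset A onto B, averaged over planes.
    """
    a = np.asarray(centroids_a, dtype=float) @ np.asarray(rotation).T   # rotate A centroids
    b = np.asarray(centroids_b, dtype=float)
    return (b - a).mean(axis=0)
```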