WorldWideScience

Sample records for highly accurate reconstruction

  1. Reconstruction of high resolution MLC leaf positions using a low resolution detector for accurate 3D dose reconstruction in IMRT

    NARCIS (Netherlands)

    Visser, R; Godart, J; Wauben, D J L; Langendijk, J A; Van't Veld, A A; Korevaar, E W

    2016-01-01

    In pre-treatment dose verification, low resolution detector systems are unable to identify shifts of individual leaves of high resolution multileaf collimator (MLC) systems from detected changes in the dose deposition. The goal of this study was to introduce an alternative approach (the shutter

  2. Fast and accurate phylogenetic reconstruction from high-resolution whole-genome data and a novel robustness estimator.

    Science.gov (United States)

    Lin, Y; Rajan, V; Moret, B M E

    2011-09-01

    The rapid accumulation of whole-genome data has renewed interest in the study of genomic rearrangements. Comparative genomics, evolutionary biology, and cancer research all require models and algorithms to elucidate the mechanisms, history, and consequences of these rearrangements. However, even simple models lead to NP-hard problems, particularly in the area of phylogenetic analysis. Current approaches are limited to small collections of genomes and low-resolution data (typically a few hundred syntenic blocks). Moreover, whereas phylogenetic analyses from sequence data are deemed incomplete unless bootstrapping scores (a measure of confidence) are given for each tree edge, no equivalent to bootstrapping exists for rearrangement-based phylogenetic analysis. We describe a fast and accurate algorithm for rearrangement analysis that scales up, in both time and accuracy, to modern high-resolution genomic data. We also describe a novel approach to estimate the robustness of results, an equivalent to the bootstrapping analysis used in sequence-based phylogenetic reconstruction. We present the results of extensive testing on both simulated and real data showing that our algorithm returns very accurate results, while scaling linearly with the size of the genomes and cubically with their number. We also present extensive experimental results showing that our approach to robustness testing provides excellent estimates of confidence, which, moreover, can be tuned to trade off false positives against false negatives. Together, these two novel approaches enable us to attack heretofore intractable problems, such as phylogenetic inference for high-resolution vertebrate genomes, as we demonstrate on a set of six vertebrate genomes with 8,380 syntenic blocks. A copy of the software is available on demand.

  3. An accurate and efficient system model of iterative image reconstruction in high-resolution pinhole SPECT for small animal research

    Energy Technology Data Exchange (ETDEWEB)

    Huang, P-C; Hsu, C-H [Department of Biomedical Engineering and Environmental Sciences, National Tsing Hua University, Hsinchu, Taiwan (China); Hsiao, I-T [Department of Medical Imaging and Radiological Sciences, Chang Gung University, Tao-Yuan, Taiwan (China); Lin, K M [Medical Engineering Research Division, National Health Research Institutes, Zhunan Town, Miaoli County, Taiwan (China)], E-mail: cghsu@mx.nthu.edu.tw

    2009-06-15

    Accurate modeling of the photon acquisition process in pinhole SPECT is essential for optimizing resolution. In this work, the authors develop an accurate system model in which the pinhole's finite aperture and depth-dependent geometric sensitivity are explicitly included. To achieve high-resolution pinhole SPECT, the voxel size is usually set in the sub-millimeter range, so the total number of image voxels increases accordingly. A system matrix that models a variety of relevant physical factors therefore inevitably becomes extremely large. An efficient implementation for such an accurate system model is proposed in this research. We first use geometric symmetries to reduce redundant entries in the matrix. Due to the sparseness of the matrix, only non-zero terms are stored. A novel center-to-radius recording rule is also developed to effectively describe the relation between a voxel and its related detectors at every projection angle. The proposed system matrix is also suitable for multi-threaded computing. Finally, the accuracy and effectiveness of the proposed system model are evaluated on a workstation equipped with two quad-core Intel Xeon processors.
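
The only-store-non-zero-terms idea is the standard compressed sparse row (CSR) layout; a minimal sketch (not the paper's center-to-radius recording rule, and with an invented toy matrix):

```python
import numpy as np

def to_csr(dense):
    """Store only the non-zero entries of a system matrix row by row
    (values, column indices, row pointers): the standard CSR layout."""
    values, cols, indptr = [], [], [0]
    for row in dense:
        nz = np.flatnonzero(row)
        values.extend(row[nz])
        cols.extend(nz)
        indptr.append(len(values))
    return np.array(values), np.array(cols), np.array(indptr)

def csr_matvec(values, cols, indptr, x):
    # forward projection touching only the stored non-zero terms
    y = np.zeros(len(indptr) - 1)
    for i in range(len(y)):
        s, e = indptr[i], indptr[i + 1]
        y[i] = values[s:e] @ x[cols[s:e]]
    return y

A = np.array([[0.0, 2.0, 0.0], [1.0, 0.0, 3.0]])
v, c, p = to_csr(A)
x = np.array([1.0, 2.0, 3.0])
```

The matrix-vector product over the sparse form matches the dense product while skipping every zero entry.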

  4. Accurate phylogenetic tree reconstruction from quartets: a heuristic approach.

    Science.gov (United States)

    Reaz, Rezwana; Bayzid, Md Shamsuzzoha; Rahman, M Sohel

    2014-01-01

    Supertree methods construct trees on a set of taxa (species) by combining many smaller trees on overlapping subsets of the entire taxon set. A 'quartet' is an unrooted tree over 4 taxa, so quartet-based supertree methods combine many 4-taxon unrooted trees into a single coherent tree over the complete set of taxa. Quartet-based phylogeny reconstruction methods have received considerable attention in recent years. An accurate and efficient quartet-based method might be competitive with the current best phylogenetic tree reconstruction methods (such as maximum likelihood or Bayesian MCMC analyses) without being as computationally intensive. In this paper, we present a novel and highly accurate quartet-based phylogenetic tree reconstruction method. We performed an extensive experimental study to evaluate the accuracy and scalability of our approach on both simulated and biological datasets.
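
The elementary operation behind quartet methods, inferring one quartet's topology from pairwise distances via the four-point condition, can be sketched as follows (a minimal illustration with invented distances, not the paper's heuristic):

```python
def quartet_topology(d, a, b, c, q):
    """Four-point condition: among the three ways to pair up four taxa,
    the pairing with the smallest sum of within-pair distances sits on
    the same side of the quartet's internal edge."""
    def dist(x, y):
        return d[frozenset((x, y))]
    sums = {
        (a, b, c, q): dist(a, b) + dist(c, q),   # ab | cq
        (a, c, b, q): dist(a, c) + dist(b, q),   # ac | bq
        (a, q, b, c): dist(a, q) + dist(b, c),   # aq | bc
    }
    return min(sums, key=sums.get)

# Additive distances on the tree ((A,B),(C,D)) with unit branch lengths
d = {
    frozenset("AB"): 2, frozenset("CD"): 2,
    frozenset("AC"): 3, frozenset("AD"): 3,
    frozenset("BC"): 3, frozenset("BD"): 3,
}
print(quartet_topology(d, "A", "B", "C", "D"))  # the split AB | CD wins
```

A quartet method amalgamates many such inferred splits into one tree; this sketch shows only the single-quartet step.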

  5. Accurate Sample Time Reconstruction of Inertial FIFO Data

    Directory of Open Access Journals (Sweden)

    Sebastian Stieber

    2017-12-01

    In the context of modern cyber-physical systems, the accuracy of the underlying sensor data plays an increasingly important role in sensor data fusion and feature extraction. The raw events of multiple sensors have to be aligned in time to enable high-quality sensor fusion results. However, the growing number of simultaneously connected sensor devices makes energy-saving data acquisition and processing more and more difficult. Hence, most modern sensors offer a first-in-first-out (FIFO) interface to store multiple data samples and to relax timing constraints when handling multiple sensor devices. However, using the FIFO interface increases the negative influence of individual clock drifts, introduced by fabrication inaccuracies, temperature changes and wear-out effects, on the reconstruction of the sampling times. Furthermore, additional timing offset errors due to communication and software latencies increase with a growing number of sensor devices. In this article, we present an approach for accurate sample time reconstruction, independent of the actual clock drift, with the help of an internal sensor timer. Such timers are already available in modern sensors manufactured in micro-electromechanical systems (MEMS) technology. The presented approach focuses on calculating accurate time stamps using the sensor FIFO interface in a forward-only processing manner, as a robust and energy-saving solution. The proposed algorithm is able to lower the overall standard deviation of the reconstructed sampling periods below 40 μs, while run-time savings of up to 42% are achieved compared to single-sample acquisition.
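
The core idea, timestamping FIFO samples from the sensor's internal timer so that host latency and nominal-rate drift cancel, might be sketched like this (a hypothetical minimal interface; the paper's algorithm includes filtering details not shown):

```python
def reconstruct_sample_times(batch_ticks, batch_sizes, tick_period_s):
    """Forward-only per-sample timestamp reconstruction from FIFO batch
    reads. batch_ticks[i]: sensor-internal timer value (in ticks) latched
    when batch i was read; batch_sizes[i]: number of samples in batch i.
    The effective sampling period is derived from the sensor's own timer,
    so host-side latency and clock drift drop out of the result."""
    times, prev_tick = [], None
    for tick, n in zip(batch_ticks, batch_sizes):
        # the true period is unknown until a second timer reading exists
        period = 0.0 if prev_tick is None else (tick - prev_tick) * tick_period_s / n
        batch_end = tick * tick_period_s
        times.extend(batch_end - (n - 1 - k) * period for k in range(n))
        prev_tick = tick
    return times

# Sensor nominally at 100 Hz but actually ~102 Hz; 1 MHz internal timer
ts = reconstruct_sample_times([1_000_000, 1_039_216], [4, 4], 1e-6)
```

The reconstructed period of the second batch is 39216 ticks / 4 samples ≈ 9.804 ms, i.e. the true ~102 Hz rate rather than the nominal 100 Hz.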

  6. Iterative feature refinement for accurate undersampled MR image reconstruction

    Science.gov (United States)

    Wang, Shanshan; Liu, Jianbo; Liu, Qiegen; Ying, Leslie; Liu, Xin; Zheng, Hairong; Liang, Dong

    2016-05-01

    Accelerating MR scanning is of great significance for clinical, research and advanced applications, and one main effort to achieve this is the utilization of compressed sensing (CS) theory. Nevertheless, existing CS-MRI approaches still have limitations such as fine-structure loss or high computational complexity. This paper proposes a novel iterative feature refinement (IFR) module for accurate MR image reconstruction from undersampled k-space data. Integrating IFR with CS-MRI equipped with fixed transforms, we develop an IFR-CS method to restore meaningful structures and details that would otherwise be discarded, without introducing much additional complexity. Specifically, the proposed IFR-CS is realized with three iterative steps, namely sparsity-promoting denoising, feature refinement and Tikhonov regularization. Experimental results on both simulated and in vivo MR datasets have shown that the proposed module has a strong capability to capture image details, and that IFR-CS is comparable and even superior to other state-of-the-art reconstruction approaches.
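
The three iterative steps can be sketched in a toy numpy loop (heavily simplified assumptions: the sparsifying transform is the identity and feature refinement is reduced to a scalar-weighted residual feedback, unlike the paper's feature-descriptor construction; the demo data are invented):

```python
import numpy as np

def soft(x, t):
    # complex soft-thresholding: shrink the magnitude, keep the phase
    mag = np.abs(x)
    return x * np.maximum(1.0 - t / np.maximum(mag, 1e-30), 0.0)

def ifr_cs(y, mask, n_iter=50, lam=0.01, w=0.5, beta=1.0):
    """Toy IFR-CS loop: (1) sparsity-promoting denoising, (2) feature
    refinement (scalar-weighted residual feedback here), and
    (3) Tikhonov-regularized data consistency in k-space."""
    x = np.fft.ifft2(y)                       # zero-filled starting image
    for _ in range(n_iter):
        x_den = soft(x, lam)                  # step 1: denoising
        x_ref = x_den + w * (x - x_den)       # step 2: feature refinement
        k = np.fft.fft2(x_ref)                # step 3: data consistency
        k = (beta * k + mask * y) / (beta + mask)
        x = np.fft.ifft2(k)
    return x

# Demo: recover a 2-spike image from ~50% random k-space samples
rng = np.random.default_rng(0)
truth = np.zeros((32, 32))
truth[8, 8] = truth[20, 12] = 1.0
mask = (rng.random((32, 32)) < 0.5).astype(float)
y = mask * np.fft.fft2(truth)
rec = ifr_cs(y, mask)
```

For the sampled frequencies the Tikhonov step blends the current estimate with the measurements; the unsampled ones are filled in by the denoising/refinement steps.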

  7. Iterative feature refinement for accurate undersampled MR image reconstruction

    International Nuclear Information System (INIS)

    Wang, Shanshan; Liu, Jianbo; Liu, Xin; Zheng, Hairong; Liang, Dong; Liu, Qiegen; Ying, Leslie

    2016-01-01

    Accelerating MR scanning is of great significance for clinical, research and advanced applications, and one main effort to achieve this is the utilization of compressed sensing (CS) theory. Nevertheless, existing CS-MRI approaches still have limitations such as fine-structure loss or high computational complexity. This paper proposes a novel iterative feature refinement (IFR) module for accurate MR image reconstruction from undersampled k-space data. Integrating IFR with CS-MRI equipped with fixed transforms, we develop an IFR-CS method to restore meaningful structures and details that would otherwise be discarded, without introducing much additional complexity. Specifically, the proposed IFR-CS is realized with three iterative steps, namely sparsity-promoting denoising, feature refinement and Tikhonov regularization. Experimental results on both simulated and in vivo MR datasets have shown that the proposed module has a strong capability to capture image details, and that IFR-CS is comparable and even superior to other state-of-the-art reconstruction approaches. (paper)

  8. Accurate 3D reconstruction by a new PDS-OSEM algorithm for HRRT

    International Nuclear Information System (INIS)

    Chen, Tai-Been; Horng-Shing Lu, Henry; Kim, Hang-Keun; Son, Young-Don; Cho, Zang-Hee

    2014-01-01

    State-of-the-art high resolution research tomography (HRRT) provides high-resolution PET images with full 3D human brain scanning. However, a short time frame in a dynamic study causes many problems related to the low counts in the acquired data. The PDS-OSEM algorithm was proposed to reconstruct the HRRT image with a high signal-to-noise ratio that provides accurate information for dynamic data. The new algorithm was evaluated on simulated images, empirical phantoms, and real human brain data. Meanwhile, the time activity curve was adopted to validate the reconstruction performance on dynamic data between the PDS-OSEM and OP-OSEM algorithms. According to simulated and empirical studies, the PDS-OSEM algorithm reconstructs images with higher quality, higher accuracy, less noise, and a lower average sum of squared errors than OP-OSEM. The presented algorithm is useful for providing quality images under low count rates in dynamic studies with a short scan time. - Highlights: • PDS-OSEM reconstructs PET images by iteratively compensating for random and scatter corrections from the prompt sinogram. • PDS-OSEM can reconstruct PET images from low-count and contaminated data. • PDS-OSEM provides less noise and higher quality of reconstructed images than the OP-OSEM algorithm in a statistical sense.
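
For orientation, the generic OSEM update that PDS-OSEM builds on can be sketched with numpy (this is plain OSEM on an invented toy system, not the paper's PDS-OSEM with its random/scatter compensation):

```python
import numpy as np

def osem(A, y, n_subsets=2, n_iter=300):
    """Generic OSEM: multiplicative EM updates cycled over ordered
    subsets of the sinogram rows. A: system matrix (bins x voxels),
    y: measured sinogram (noiseless here); returns nonnegative voxels."""
    x = np.ones(A.shape[1])
    subsets = [np.arange(s, A.shape[0], n_subsets) for s in range(n_subsets)]
    for _ in range(n_iter):
        for rows in subsets:
            As = A[rows]
            proj = As @ x                        # forward projection
            ratio = np.where(proj > 0, y[rows] / proj, 0.0)
            sens = As.sum(axis=0)                # subset sensitivity A_s^T 1
            x = x * (As.T @ ratio) / np.maximum(sens, 1e-12)
    return x

# Toy problem: 8 detector bins viewing 4 voxels, noiseless data
rng = np.random.default_rng(1)
A = rng.random((8, 4))
x_true = np.array([1.0, 2.0, 0.5, 3.0])
y = A @ x_true
x_rec = osem(A, y)
```

The multiplicative form keeps the estimate nonnegative automatically, which is one reason EM-family algorithms suit emission tomography.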

  9. Fast, automatic, and accurate catheter reconstruction in HDR brachytherapy using an electromagnetic 3D tracking system

    Energy Technology Data Exchange (ETDEWEB)

    Poulin, Eric; Racine, Emmanuel; Beaulieu, Luc, E-mail: Luc.Beaulieu@phy.ulaval.ca [Département de physique, de génie physique et d’optique et Centre de recherche sur le cancer de l’Université Laval, Université Laval, Québec, Québec G1V 0A6, Canada and Département de radio-oncologie et Axe Oncologie du Centre de recherche du CHU de Québec, CHU de Québec, 11 Côte du Palais, Québec, Québec G1R 2J6 (Canada); Binnekamp, Dirk [Integrated Clinical Solutions and Marketing, Philips Healthcare, Veenpluis 4-6, Best 5680 DA (Netherlands)

    2015-03-15

    Purpose: In high dose rate brachytherapy (HDR-B), current catheter reconstruction protocols are relatively slow and error prone. The purpose of this technical note is to evaluate the accuracy and robustness of an electromagnetic (EM) tracking system for automated and real-time catheter reconstruction. Methods: For this preclinical study, a total of ten catheters were inserted in gelatin phantoms with different trajectories. Catheters were reconstructed using an 18G biopsy needle, used as an EM stylet and equipped with a miniaturized sensor, and the second-generation Aurora® Planar Field Generator from Northern Digital Inc. The Aurora EM system provides position and orientation values with precisions of 0.7 mm and 0.2°, respectively. Phantoms were also scanned using a μCT (GE Healthcare) and a Philips Big Bore clinical computed tomography (CT) system, with spatial resolutions of 89 μm and 2 mm, respectively. Reconstructions using the EM stylet were compared to μCT and CT. To assess the robustness of the EM reconstruction, five catheters were reconstructed twice and compared. Results: Reconstruction time for one catheter was 10 s, leading to a total reconstruction time under 3 min for a typical 17-catheter implant. When compared to the μCT, the mean EM tip identification error was 0.69 ± 0.29 mm while the CT error was 1.08 ± 0.67 mm. The mean 3D distance error was found to be 0.66 ± 0.33 mm and 1.08 ± 0.72 mm for the EM and CT, respectively. EM 3D catheter trajectories were found to be more accurate. A maximum difference of less than 0.6 mm was found between successive EM reconstructions. Conclusions: The EM reconstruction was found to be more accurate and precise than the conventional methods used for catheter reconstruction in HDR-B. This approach can be applied to any type of catheter and applicator.

  10. Fast, automatic, and accurate catheter reconstruction in HDR brachytherapy using an electromagnetic 3D tracking system

    International Nuclear Information System (INIS)

    Poulin, Eric; Racine, Emmanuel; Beaulieu, Luc; Binnekamp, Dirk

    2015-01-01

    Purpose: In high dose rate brachytherapy (HDR-B), current catheter reconstruction protocols are relatively slow and error prone. The purpose of this technical note is to evaluate the accuracy and robustness of an electromagnetic (EM) tracking system for automated and real-time catheter reconstruction. Methods: For this preclinical study, a total of ten catheters were inserted in gelatin phantoms with different trajectories. Catheters were reconstructed using an 18G biopsy needle, used as an EM stylet and equipped with a miniaturized sensor, and the second-generation Aurora® Planar Field Generator from Northern Digital Inc. The Aurora EM system provides position and orientation values with precisions of 0.7 mm and 0.2°, respectively. Phantoms were also scanned using a μCT (GE Healthcare) and a Philips Big Bore clinical computed tomography (CT) system, with spatial resolutions of 89 μm and 2 mm, respectively. Reconstructions using the EM stylet were compared to μCT and CT. To assess the robustness of the EM reconstruction, five catheters were reconstructed twice and compared. Results: Reconstruction time for one catheter was 10 s, leading to a total reconstruction time under 3 min for a typical 17-catheter implant. When compared to the μCT, the mean EM tip identification error was 0.69 ± 0.29 mm while the CT error was 1.08 ± 0.67 mm. The mean 3D distance error was found to be 0.66 ± 0.33 mm and 1.08 ± 0.72 mm for the EM and CT, respectively. EM 3D catheter trajectories were found to be more accurate. A maximum difference of less than 0.6 mm was found between successive EM reconstructions. Conclusions: The EM reconstruction was found to be more accurate and precise than the conventional methods used for catheter reconstruction in HDR-B. This approach can be applied to any type of catheter and applicator.

  11. Robust and accurate multi-view reconstruction by prioritized matching

    DEFF Research Database (Denmark)

    Ylimaki, Markus; Kannala, Juho; Holappa, Jukka

    2012-01-01

    a prioritized matching method which expands the most promising seeds first. The output of the method is a three-dimensional point cloud. Unlike previous correspondence growing approaches our method allows to use the best-first matching principle in the generic multi-view stereo setting with arbitrary number...... of input images. Our experiments show that matching the most promising seeds first provides very robust point cloud reconstructions efficiently with just a single expansion step. A comparison to the current state-of-the-art shows that our method produces reconstructions of similar quality but significantly...

  12. Historian: accurate reconstruction of ancestral sequences and evolutionary rates.

    Science.gov (United States)

    Holmes, Ian H

    2017-04-15

    Reconstruction of ancestral sequence histories, and estimation of parameters like indel rates, are improved by using explicit evolutionary models and summing over uncertain alignments. The previous best tool for this purpose (according to simulation benchmarks) was ProtPal, but this tool was too slow for practical use. Historian combines an efficient reimplementation of the ProtPal algorithm with performance-improving heuristics from other alignment tools. Simulation results on fidelity of rate estimation via ancestral reconstruction, along with evaluations on the structurally informed alignment dataset BAliBase 3.0, recommend Historian over other alignment tools for evolutionary applications. Historian is available at https://github.com/evoldoers/historian under the Creative Commons Attribution 3.0 US license. ihholmes+historian@gmail.com. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com

  13. Accurate reconstruction of insertion-deletion histories by statistical phylogenetics.

    Directory of Open Access Journals (Sweden)

    Oscar Westesson

    The Multiple Sequence Alignment (MSA) is a computational abstraction that represents a partial summary either of indel history or of structural similarity. Taking the former view (indel history), it is possible to use formal automata theory to generalize the phylogenetic likelihood framework for finite substitution models (Dayhoff's probability matrices and Felsenstein's pruning algorithm) to arbitrary-length sequences. In this paper, we report results of a simulation-based benchmark of several methods for reconstruction of indel history. The methods tested include a relatively new algorithm for statistical marginalization of MSAs that sums over a stochastically sampled ensemble of the most probable evolutionary histories. For mammalian evolutionary parameters on several different trees, the single most likely history sampled by our algorithm appears less biased than histories reconstructed by other MSA methods. The algorithm can also be used for alignment-free inference, where the MSA is explicitly summed out of the analysis. As an illustration of our method, we discuss reconstruction of the evolutionary histories of human protein-coding genes.

  14. Accurate 3D reconstruction by a new PDS-OSEM algorithm for HRRT

    Science.gov (United States)

    Chen, Tai-Been; Horng-Shing Lu, Henry; Kim, Hang-Keun; Son, Young-Don; Cho, Zang-Hee

    2014-03-01

    State-of-the-art high resolution research tomography (HRRT) provides high-resolution PET images with full 3D human brain scanning. However, a short time frame in a dynamic study causes many problems related to the low counts in the acquired data. The PDS-OSEM algorithm was proposed to reconstruct the HRRT image with a high signal-to-noise ratio that provides accurate information for dynamic data. The new algorithm was evaluated on simulated images, empirical phantoms, and real human brain data. Meanwhile, the time activity curve was adopted to validate the reconstruction performance on dynamic data between the PDS-OSEM and OP-OSEM algorithms. According to simulated and empirical studies, the PDS-OSEM algorithm reconstructs images with higher quality, higher accuracy, less noise, and a lower average sum of squared errors than OP-OSEM. The presented algorithm is useful for providing quality images under low count rates in dynamic studies with a short scan time.

  15. Accurate reconstruction of hyperspectral images from compressive sensing measurements

    Science.gov (United States)

    Greer, John B.; Flake, J. C.

    2013-05-01

    The emerging field of Compressive Sensing (CS) provides a new way to capture data by shifting the heaviest burden of data collection from the sensor to the computer on the user-end. This new means of sensing requires fewer measurements for a given amount of information than traditional sensors. We investigate the efficacy of CS for capturing HyperSpectral Imagery (HSI) remotely. We also introduce a new family of algorithms for constructing HSI from CS measurements with Split Bregman Iteration [Goldstein and Osher, 2009]. These algorithms combine spatial Total Variation (TV) with smoothing in the spectral dimension. We examine models for three different CS sensors: the Coded Aperture Snapshot Spectral Imager-Single Disperser (CASSI-SD) [Wagadarikar et al., 2008] and Dual Disperser (CASSI-DD) [Gehm et al., 2007] cameras, and a hypothetical random sensing model closer to CS theory, but not necessarily implementable with existing technology. We simulate the capture of remotely sensed images by applying the sensor forward models to well-known HSI scenes: an AVIRIS image of Cuprite, Nevada, and the HYMAP Urban image. To measure accuracy of the CS models, we compare the scenes constructed with our new algorithm to the original AVIRIS and HYMAP cubes. The results demonstrate the possibility of accurately sensing HSI remotely with significantly fewer measurements than standard hyperspectral cameras.

  16. Analysis of algebraic reconstruction technique for accurate imaging of gas temperature and concentration based on tunable diode laser absorption spectroscopy

    International Nuclear Information System (INIS)

    Xia Hui-Hui; Kan Rui-Feng; Liu Jian-Guo; Xu Zhen-Yu; He Ya-Bai

    2016-01-01

    An improved algebraic reconstruction technique (ART) combined with tunable diode laser absorption spectroscopy (TDLAS) is presented in this paper for determining the two-dimensional (2D) distribution of H2O concentration and temperature in a simulated combustion flame. This work aims to simulate the reconstruction of spectroscopic measurements with a multi-view parallel-beam scanning geometry and to analyze the effect of the number of projection rays on reconstruction accuracy. Reconstruction quality increases sharply as the number of projection rays grows, up to about 180 rays for a 20 × 20 grid; beyond that point, additional rays have little influence on reconstruction accuracy. The temperature reconstruction results are more accurate than the water vapor concentration obtained by the traditional concentration calculation method. The present study also proposes an innovative way to reduce the error of the concentration reconstruction and greatly improve reconstruction quality, and the capability of this new method is evaluated using appropriate assessment parameters. With this new approach, not only is the concentration reconstruction accuracy greatly improved, but a suitable parallel-beam arrangement is also put forward for high reconstruction accuracy and simplicity of experimental validation. Finally, a bimodal structure of the combustion region is assumed to demonstrate the robustness and universality of the proposed method. Numerical investigation indicates that the proposed TDLAS tomographic algorithm is capable of detecting accurate temperature and concentration profiles. This feasible formulation for reconstruction research is expected to resolve several key issues in practical combustion devices. (paper)
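
The classic ART update underlying the paper's improved variant is a cyclic Kaczmarz projection, sketchable on an invented toy system:

```python
import numpy as np

def art(A, b, n_sweeps=100, relax=1.0):
    """Classic ART (Kaczmarz): cyclically project the estimate onto the
    hyperplane of each ray equation A[i] @ x = b[i], then clip to the
    physical constraint x >= 0 (absorbance cannot be negative)."""
    x = np.zeros(A.shape[1])
    row_norms = np.einsum("ij,ij->i", A, A)
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            if row_norms[i] > 0:
                x += relax * (b[i] - A[i] @ x) / row_norms[i] * A[i]
                np.maximum(x, 0.0, out=x)
    return x

# Toy setup: 12 rays crossing a 3x3 grid (rows of A are path lengths)
rng = np.random.default_rng(2)
A = rng.random((12, 9))
x_true = rng.random(9)
b = A @ x_true                   # noiseless line integrals
x_rec = art(A, b, n_sweeps=2000)
```

For a consistent noiseless system, the sweeps converge toward the unique field that reproduces all ray sums; with real noisy data a relaxation factor below 1 is the usual choice.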

  17. Apparatus for accurately measuring high temperatures

    Science.gov (United States)

    Smith, D.D.

    The present invention is a thermometer used for measuring furnace temperatures in the range of about 1800 to 2700 °C. The thermometer comprises a broadband multicolor thermal radiation sensor positioned to be in optical alignment with the end of a blackbody sight tube extending into the furnace. A valve-shutter arrangement is positioned between the radiation sensor and the sight tube, and a chamber for containing a charge of high-pressure gas is positioned between the valve-shutter arrangement and the radiation sensor. A momentary opening of the valve-shutter arrangement allows a pulse of the high-pressure gas to purge the sight tube of air-borne thermal radiation contaminants, which permits the radiation sensor to accurately measure the thermal radiation emanating from the end of the sight tube.

  18. Digital holography super-resolution for accurate three-dimensional reconstruction of particle holograms.

    Science.gov (United States)

    Verrier, Nicolas; Fournier, Corinne

    2015-01-15

    In-line digital holography (DH) is used in many fields to locate and size micro- or nano-objects spread in a volume. To reconstruct simple-shaped objects, the optimal approach is to fit an imaging model to accurately estimate their position and their characteristic parameters. Increasing the accuracy of the reconstruction is a major issue in DH, particularly when the pixel is large or the signal-to-noise ratio is low. We suggest exploiting the information redundancy of videos to improve the reconstruction of the holograms by jointly estimating the position of the objects and the characteristic parameters. Using synthetic and experimental data, we checked experimentally that this approach can improve the accuracy of the reconstruction by a factor of more than the square root of the number of images.
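
The square-root benchmark quoted here is the gain from naive frame averaging with independent noise; a quick Monte Carlo sanity check of that baseline (all numbers invented), which joint model fitting is claimed to beat:

```python
import numpy as np

# Estimate a particle coordinate from noisy per-frame measurements.
# Averaging T independent frames shrinks the standard deviation of the
# estimate by sqrt(T); joint estimation over the video can do better.
rng = np.random.default_rng(3)
true_z, sigma, T = 1.234, 0.1, 16
single = rng.normal(true_z, sigma, size=(2000, 1)).mean(axis=1)
joint = rng.normal(true_z, sigma, size=(2000, T)).mean(axis=1)
gain = single.std() / joint.std()
print(round(gain, 1))  # close to sqrt(16) = 4
```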

  19. Fast and accurate phylogeny reconstruction using filtered spaced-word matches

    Science.gov (United States)

    Sohrabi-Jahromi, Salma; Morgenstern, Burkhard

    2017-01-01

    Motivation: Word-based or ‘alignment-free’ algorithms are increasingly used for phylogeny reconstruction and genome comparison, since they are much faster than traditional approaches that are based on full sequence alignments. Existing alignment-free programs, however, are less accurate than alignment-based methods. Results: We propose Filtered Spaced Word Matches (FSWM), a fast alignment-free approach to estimate phylogenetic distances between large genomic sequences. For a pre-defined binary pattern of match and don’t-care positions, FSWM rapidly identifies spaced word matches between input sequences, i.e. gap-free local alignments with matching nucleotides at the match positions and with mismatches allowed at the don’t-care positions. We then estimate the number of nucleotide substitutions per site by considering the nucleotides aligned at the don’t-care positions of the identified spaced-word matches. To reduce the noise from spurious random matches, we use a filtering procedure in which we discard all spaced-word matches for which the overall similarity between the aligned segments is below a threshold. We show that our approach can accurately estimate substitution frequencies even for distantly related sequences that cannot be analyzed with existing alignment-free methods; phylogenetic trees constructed with FSWM distances are of high quality. A run of the program on a pair of eukaryotic genomes of a few hundred Mb each takes a few minutes. Availability and Implementation: The program source code for FSWM, including documentation, as well as the software that we used to generate artificial genome sequences, are freely available at http://fswm.gobics.de/ Contact: chris.leimeister@stud.uni-goettingen.de Supplementary information: Supplementary data are available at Bioinformatics online. PMID:28073754
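
The spaced-word matching and filtering steps can be sketched as follows (a simplification: a +1/-1 identity score over the don't-care columns stands in for the nucleotide substitution matrix FSWM actually uses, and the sequences are invented):

```python
from collections import defaultdict

def spaced_word_matches(s1, s2, pattern, min_score=0):
    """Collect spaced-word matches for a binary pattern ('1' = match
    position, '0' = don't-care), then filter out matches whose
    don't-care columns score below min_score under a +1/-1 identity
    score (a stand-in for FSWM's substitution matrix)."""
    k = len(pattern)
    def index(s):
        idx = defaultdict(list)
        for i in range(len(s) - k + 1):
            w = "".join(s[i + j] for j, p in enumerate(pattern) if p == "1")
            idx[w].append(i)
        return idx
    idx2 = index(s2)
    matches = []
    for w, positions in index(s1).items():
        for i in positions:
            for j in idx2.get(w, []):
                score = sum(1 if s1[i + p] == s2[j + p] else -1
                            for p, c in enumerate(pattern) if c == "0")
                if score >= min_score:
                    matches.append((i, j, score))
    return matches

pattern = "1100101"   # 4 match positions, 3 don't-care positions
m = spaced_word_matches("ACGTACGTAC", "ACGTACGTAC", pattern, min_score=3)
```

Distances are then estimated from the mismatch fraction observed at the don't-care columns of the surviving matches.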

  20. Comprehensive Use of Curvature for Robust and Accurate Online Surface Reconstruction.

    Science.gov (United States)

    Lefloch, Damien; Kluge, Markus; Sarbolandi, Hamed; Weyrich, Tim; Kolb, Andreas

    2017-12-01

    Interactive real-time scene acquisition from hand-held depth cameras has recently gained much momentum, enabling applications in ad-hoc object acquisition, augmented reality and other fields. A key challenge to online reconstruction remains error accumulation in the reconstructed camera trajectory, due to drift-inducing instabilities in the range scan alignments of the underlying iterative-closest-point (ICP) algorithm. Various strategies have been proposed to mitigate that drift, including SIFT-based pre-alignment, color-based weighting of ICP pairs, stronger weighting of edge features, and so on. In our work, we focus on surface curvature as a feature that is detectable on range scans alone and hence does not depend on accurate multi-sensor alignment. In contrast to previous work that took curvature into consideration, however, we treat curvature as an independent quantity that we consistently incorporate into every stage of the real-time reconstruction pipeline, including densely curvature-weighted ICP, range image fusion, local surface reconstruction, and rendering. Using multiple benchmark sequences, and in direct comparison to other state-of-the-art online acquisition systems, we show that our approach significantly reduces drift, both when analyzing individual pipeline stages in isolation and across the online reconstruction pipeline as a whole.

  21. Strawberry: Fast and accurate genome-guided transcript reconstruction and quantification from RNA-Seq.

    Science.gov (United States)

    Liu, Ruolin; Dickerson, Julie

    2017-11-01

    We propose a novel method and software tool, Strawberry, for transcript reconstruction and quantification from RNA-Seq data under the guidance of genome alignment and independent of gene annotation. Strawberry consists of two modules: assembly and quantification. The novelty of Strawberry is that the two modules use different optimization frameworks but share the same data graph structure, which enables a highly efficient, expandable and accurate algorithm for dealing with large data. The assembly module parses aligned reads into splicing graphs, and uses network flow algorithms to select the most likely transcripts. The quantification module uses a latent class model to assign read counts from the nodes of the splicing graphs to transcripts. Strawberry simultaneously estimates the transcript abundances and corrects for sequencing bias through an EM algorithm. Based on simulations, Strawberry outperforms Cufflinks and StringTie in terms of both assembly and quantification accuracy. In an evaluation on a real data set, the transcript expression estimated by Strawberry has the highest correlation with NanoString probe counts, an independent experimental measure of transcript expression. Strawberry is written in C++14, and is available as open source software at https://github.com/ruolin/strawberry under the MIT license.
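
The EM step of the quantification module, assigning ambiguous reads to transcripts in proportion to the current abundance estimates, can be sketched as follows (a bare-bones latent-class EM; effective lengths, bias correction and the splicing-graph bookkeeping are all omitted, and the data are invented):

```python
def em_abundance(compat, n_transcripts, n_iter=200):
    """Toy EM for transcript abundances. compat[r] is the set of
    transcripts read r is compatible with. E-step: split each read among
    its compatible transcripts in proportion to the current abundances;
    M-step: renormalize the expected counts into new abundances."""
    theta = [1.0 / n_transcripts] * n_transcripts
    for _ in range(n_iter):
        counts = [0.0] * n_transcripts
        for ts in compat:
            tot = sum(theta[t] for t in ts)
            for t in ts:
                counts[t] += theta[t] / tot     # fractional assignment
        total = sum(counts)
        theta = [c / total for c in counts]
    return theta

# 2 transcripts; 3 reads unique to T0, 1 unique to T1, 4 ambiguous
compat = [{0}] * 3 + [{1}] + [{0, 1}] * 4
theta = em_abundance(compat, 2)
```

With these invented reads the maximum-likelihood split of the ambiguous mass follows the 3:1 ratio of the unique reads, so the abundances converge to 0.75 and 0.25.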

  22. Highly accurate surface maps from profilometer measurements

    Science.gov (United States)

    Medicus, Kate M.; Nelson, Jessica D.; Mandina, Mike P.

    2013-04-01

    Many aspheres and free-form optical surfaces are measured using a single-line-trace profilometer, which is limiting because accurate 3D corrections are not possible with a single trace. We show a method to produce an accurate, fully 2.5D surface height map when measuring a surface with a profilometer using only 6 traces and without expensive hardware. The 6 traces are taken at varying angular positions of the lens, rotating the part between each trace. The output height map contains low-order form error only, the first 36 Zernike terms. The accuracy of the height map is ±10% of the actual Zernike values and within ±3% of the actual peak-to-valley value. The calculated Zernike values are affected by errors in the angular positioning, by the centering of the lens, and, to a small extent, by choices made in the processing algorithm. We have found that the angular positioning of the part should be better than 1°, which is achievable with typical hardware. The centering of the lens is essential to achieving accurate measurements: the part must be centered to within 0.5% of the diameter to achieve accurate results. This value is achievable with care, using an indicator, but the part must be edged to a clean diameter.
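
A least-squares Zernike fit of the kind described, several rotated full-diameter traces fitted with low-order terms only, might be sketched like this (illustrative only: 6 Zernike terms instead of the paper's 36, and an invented sampling layout):

```python
import numpy as np

def zernike_basis(r, theta):
    """First six Zernike terms (piston, tilts, defocus, astigmatisms)
    evaluated at polar sample points with r normalized to [0, 1]."""
    return np.column_stack([
        np.ones_like(r),
        r * np.cos(theta),
        r * np.sin(theta),
        2 * r**2 - 1,
        r**2 * np.cos(2 * theta),
        r**2 * np.sin(2 * theta),
    ])

def fit_zernike(r, theta, z):
    # plain least-squares fit of the basis coefficients to the heights
    coeffs, *_ = np.linalg.lstsq(zernike_basis(r, theta), z, rcond=None)
    return coeffs

# Simulated 6-trace measurement: full-diameter traces at 6 part rotations
angles = np.deg2rad(np.arange(0, 180, 30))
r = np.tile(np.linspace(-1.0, 1.0, 41), len(angles))
theta = np.repeat(angles, 41)
theta = np.where(r < 0, theta + np.pi, theta)  # negative r = opposite side
r = np.abs(r)
true_coeffs = np.array([0.1, 0.0, 0.0, 0.5, 0.2, 0.0])
z = zernike_basis(r, theta) @ true_coeffs
fitted = fit_zernike(r, theta, z)
```

On noiseless synthetic heights the fit recovers the generating coefficients exactly; in practice the angular-positioning and centering errors discussed above limit the accuracy.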

  3. High-Performance Phylogeny Reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Tiffani L. Williams

    2004-11-10

    Under the Alfred P. Sloan Fellowship in Computational Biology, I have been afforded the opportunity to study phylogenetics--one of the most important and exciting disciplines in computational biology. A phylogeny depicts an evolutionary relationship among a set of organisms (or taxa). Typically, a phylogeny is represented by a binary tree, where modern organisms are placed at the leaves and ancestral organisms occupy internal nodes, with the edges of the tree denoting evolutionary relationships. The task of phylogenetics is to infer this tree from observations upon present-day organisms. Reconstructing phylogenies is a major component of modern research programs in many areas of biology and medicine, but it is enormously expensive. The most commonly used techniques attempt to solve NP-hard problems such as maximum likelihood and maximum parsimony, typically by bounded searches through an exponentially-sized tree-space. For example, there are over 13 billion possible trees for 13 organisms. Phylogenetic heuristics that analyze large amounts of data quickly and accurately will revolutionize the biological field. This final report highlights my activities in phylogenetics during the two-year postdoctoral period at the University of New Mexico under Prof. Bernard Moret. Specifically, it summarizes my scientific, community, and professional activities as an Alfred P. Sloan Postdoctoral Fellow in Computational Biology.
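    The tree-space figure quoted above can be checked directly: the number of distinct unrooted binary tree topologies on n taxa is the double factorial (2n-5)!!, which for 13 organisms is 13,749,310,575, i.e. over 13 billion.

```python
def num_unrooted_trees(n):
    # number of unrooted binary tree topologies on n taxa: (2n-5)!!
    count = 1
    for k in range(3, 2 * n - 4, 2):   # product of odd numbers 3, 5, ..., 2n-5
        count *= k
    return count
```

The product grows super-exponentially, which is why exhaustive search is hopeless beyond a handful of taxa and bounded heuristic search dominates in practice.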

  4. Highly Accurate Prediction of Jobs Runtime Classes

    OpenAIRE

    Reiner-Benaim, Anat; Grabarnick, Anna; Shmueli, Edi

    2016-01-01

    Separating the short jobs from the long is a known technique for improving scheduling performance. In this paper we describe a method we developed for accurately predicting the runtime classes of jobs to enable this separation. Our method uses the fact that the runtimes can be represented as a mixture of overlapping Gaussian distributions, in order to train a CART classifier to provide the prediction. The threshold that separates the short jobs from the long jobs is determined during the ev...
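    The mixture-based thresholding step can be sketched as a two-component 1-D Gaussian EM fit whose crossing point separates the two classes. This is an illustrative reconstruction of the idea only; the paper's full pipeline additionally trains a CART classifier on top of the labels.

```python
import numpy as np

def two_gaussian_em(x, n_iter=200):
    # 1-D EM for a two-component Gaussian mixture over (log) runtimes
    mu = np.percentile(x, [25, 75]).astype(float)
    sig = np.array([x.std(), x.std()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior probability each sample came from each component
        d = np.exp(-0.5 * ((x[:, None] - mu) / sig) ** 2) / sig
        r = pi * d
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update weights, means, and standard deviations
        nk = r.sum(axis=0)
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sig = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    # threshold: the point between the means where the two posteriors cross
    grid = np.linspace(mu.min(), mu.max(), 1000)
    d = np.exp(-0.5 * ((grid[:, None] - mu) / sig) ** 2) / sig
    post = pi * d
    t = grid[np.argmin(np.abs(post[:, 0] - post[:, 1]))]
    return mu, sig, t
```

Jobs falling below the crossing point are labeled short; the rest long.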

  5. Fourier transform profilometry (FTP) using an innovative band-pass filter for accurate 3-D surface reconstruction

    Science.gov (United States)

    Chen, Liang-Chia; Ho, Hsuan-Wei; Nguyen, Xuan-Loc

    2010-02-01

    This article presents a novel band-pass filter for Fourier transform profilometry (FTP) for accurate 3-D surface reconstruction. FTP can be employed to obtain 3-D surface profiles from one-shot images to achieve high-speed measurement. However, its measurement accuracy is significantly influenced by the spectrum filtering process required to extract the phase information representing various surface heights. Using the commonly applied 2-D Hanning filter, the measurement errors can be up to 5-10% of the overall measuring height, which is unacceptable for various industrial applications. To resolve this issue, the article proposes an elliptical band-pass filter for extracting the spectral region possessing the essential phase information for reconstructing accurate 3-D surface profiles. The elliptical band-pass filter was developed and optimized to reconstruct 3-D surface models with improved measurement accuracy. Experimental results verify that the accuracy can be effectively enhanced by using the elliptical filter. Accuracy improvements of 44.1% and 30.4% are achieved in 3-D and sphericity measurement, respectively, when the elliptical filter replaces the traditional filter as the band-pass filtering method. Employing the developed method, the maximum measured error can be kept within 3.3% of the overall measuring range.
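    The FTP pipeline with an elliptical band-pass mask can be sketched as follows: transform the fringe image, keep only the +1 carrier lobe inside an ellipse, and take the angle of the inverse transform as the wrapped phase map. The semi-axis parameters a and b and the carrier position fc are hypothetical inputs here; the paper's optimized filter shape is not reproduced.

```python
import numpy as np

def ftp_phase(fringe, fc, a, b):
    # Fourier transform profilometry with an elliptical band-pass filter:
    # keep the +1 carrier lobe centred at (fx=fc, fy=0) inside an ellipse of
    # semi-axes a (along fx) and b (along fy), in centred frequency coordinates
    F = np.fft.fftshift(np.fft.fft2(fringe))
    ny, nx = fringe.shape
    fy, fx = np.meshgrid(np.arange(ny) - ny // 2,
                         np.arange(nx) - nx // 2, indexing="ij")
    mask = ((fx - fc) / a) ** 2 + (fy / b) ** 2 <= 1.0
    # inverse transform of the filtered spectrum; its angle is the wrapped phase
    analytic = np.fft.ifft2(np.fft.ifftshift(F * mask))
    return np.angle(analytic)
```

An elliptical mask lets the pass-band hug the carrier lobe more tightly than a square or circular window, which is the source of the accuracy gain reported above.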

  6. HistoStitcher© : An Interactive Program for Accurate and Rapid Reconstruction of Digitized Whole Histological Sections from Tissue Fragments

    Science.gov (United States)

    Chappelow, Jonathan; Tomaszewski, John E.; Feldman, Michael; Shih, Natalie; Madabhushi, Anant

    2011-01-01

    We present an interactive program called HistoStitcher© for accurate and rapid reassembly of histology fragments into a pseudo-whole digitized histological section. HistoStitcher© provides both an intuitive graphical interface to assist the operator in performing the stitch of adjacent histology fragments by selecting pairs of anatomical landmarks, and a set of computational routines for determining and applying an optimal linear transformation to generate the stitched image. Reconstruction of whole histological sections from images of slides containing smaller fragments is required in applications where preparation of whole sections of large tissue specimens is not feasible or efficient, and such whole mounts are required to facilitate (a) disease annotation and (b) image registration with radiological images. Unlike manual reassembly of image fragments in a general purpose image editing program (such as Photoshop), HistoStitcher© provides memory-efficient operation on high resolution digitized histology images and a highly flexible stitching process capable of producing more accurate results in less time. Further, by parameterizing the series of transformations determined by the stitching process, the stitching parameters can be saved, loaded at a later time, refined, reapplied to multi-resolution scans, or quickly transmitted to another site. In this paper, we describe in detail the design of HistoStitcher© and the mathematical routines used for calculating the optimal image transformation, and demonstrate its operation for stitching high resolution histology quadrants of a prostate specimen to form a digitally reassembled whole histology section, for 8 different patient studies. To evaluate stitching quality, a 6-point scoring scheme, which assesses the alignment and continuity of anatomical structures important for disease annotation, is employed by three independent expert pathologists. For 6 studies compared with this scheme, reconstructed sections
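    Determining an optimal linear transformation from paired landmarks is classically done with a least-squares rigid (Kabsch/Procrustes) alignment. The sketch below illustrates that general approach; it is not HistoStitcher's exact routine, which may include scaling or other linear terms.

```python
import numpy as np

def rigid_transform(src, dst):
    # least-squares rigid (rotation + translation) alignment of paired 2-D
    # landmarks (Kabsch); returns R, t such that dst ~ src @ R.T + t
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)           # cross-covariance of centred points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

Because the transform is parameterized by (R, t), it can be stored compactly and reapplied to scans at any resolution, as the abstract describes.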

  7. Easy and accurate reconstruction of whole HIV genomes from short-read sequence data with shiver

    Science.gov (United States)

    Blanquart, François; Golubchik, Tanya; Gall, Astrid; Bakker, Margreet; Bezemer, Daniela; Croucher, Nicholas J; Hall, Matthew; Hillebregt, Mariska; Ratmann, Oliver; Albert, Jan; Bannert, Norbert; Fellay, Jacques; Fransen, Katrien; Gourlay, Annabelle; Grabowski, M Kate; Gunsenheimer-Bartmeyer, Barbara; Günthard, Huldrych F; Kivelä, Pia; Kouyos, Roger; Laeyendecker, Oliver; Liitsola, Kirsi; Meyer, Laurence; Porter, Kholoud; Ristola, Matti; van Sighem, Ard; Cornelissen, Marion; Kellam, Paul; Reiss, Peter

    2018-01-01

    Abstract Studying the evolution of viruses and their molecular epidemiology relies on accurate viral sequence data, so that small differences between similar viruses can be meaningfully interpreted. Despite its higher throughput and more detailed minority variant data, next-generation sequencing has yet to be widely adopted for HIV. The difficulty of accurately reconstructing the consensus sequence of a quasispecies from reads (short fragments of DNA) in the presence of large between- and within-host diversity, including frequent indels, may have presented a barrier. In particular, mapping (aligning) reads to a reference sequence leads to biased loss of information; this bias can distort epidemiological and evolutionary conclusions. De novo assembly avoids this bias by aligning the reads to themselves, producing a set of sequences called contigs. However, contigs provide only a partial summary of the reads, misassembly may result in their having an incorrect structure, and no information is available at parts of the genome where contigs could not be assembled. To address these problems we developed the tool shiver to pre-process reads for quality and contamination, then map them to a reference tailored to the sample using corrected contigs supplemented with the user’s choice of existing reference sequences. Run with two commands per sample, it can easily be used for large heterogeneous data sets. We used shiver to reconstruct the consensus sequence and minority variant information from paired-end short-read whole-genome data produced with the Illumina platform, for sixty-five existing publicly available samples and fifty new samples. We show the systematic superiority of mapping to shiver’s constructed reference compared with mapping the same reads to the closest of 3,249 real references: median values of 13 bases called differently and more accurately, 0 bases called differently and less accurately, and 205 bases of missing sequence recovered. We also

  8. Accurate 3D reconstruction of bony surfaces using ultrasonic synthetic aperture techniques for robotic knee arthroplasty.

    Science.gov (United States)

    Kerr, William; Rowe, Philip; Pierce, Stephen Gareth

    2017-06-01

    Robotically guided knee arthroplasty systems generally require an individualized, preoperative 3D model of the knee joint. This is typically measured using Computed Tomography (CT), which provides the required accuracy for preoperative surgical intervention planning. Ultrasound imaging presents an attractive alternative to CT, allowing for reductions in cost and the elimination of doses of ionizing radiation, whilst maintaining the accuracy of the 3D model reconstruction of the joint. Traditional phased array ultrasound imaging methods, however, are susceptible to poor resolution and signal-to-noise ratios (SNR). Alleviating these weaknesses by offering superior focusing power, synthetic aperture methods have been investigated extensively within ultrasonic non-destructive testing. Despite this, they have yet to be fully exploited in medical imaging. In this paper, the ability of a robotically deployed ultrasound imaging system based on synthetic aperture methods to accurately reconstruct bony surfaces is investigated. Employing the Total Focussing Method (TFM) and the Synthetic Aperture Focussing Technique (SAFT), two samples were imaged which were representative of the bones of the knee joint: a human-shaped, composite distal femur and a bovine distal femur. Data were captured using a 5 MHz, 128-element 1D phased array, which was manipulated around the samples using a robotic positioning system. Three-dimensional surface reconstructions were then produced and compared with reference models measured using a precision laser scanner. Mean errors of 0.82 mm and 0.88 mm were obtained for the composite and bovine samples, respectively, thus demonstrating the feasibility of the approach to deliver the sub-millimetre accuracy required for the application. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.
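    The Total Focussing Method named above is, at its core, a delay-and-sum over full-matrix-capture (FMC) data: for every image point, the echoes of every transmit-receive element pair are summed at the round-trip time of flight. The nested-loop sketch below shows the principle in its simplest (unvectorized, nearest-sample) form; element positions, sampling rate, and grid are hypothetical inputs.

```python
import numpy as np

def tfm_image(fmc, el_x, grid_x, grid_z, c, fs):
    # Total Focussing Method: delay-and-sum of FMC data fmc[tx, rx, t]
    # el_x: element x-positions on the array; c: sound speed; fs: sampling rate
    n_el = len(el_x)
    img = np.zeros((len(grid_z), len(grid_x)))
    for iz, z in enumerate(grid_z):
        for ix, x in enumerate(grid_x):
            # one-way time of flight from each element to the focal point (x, z)
            tof = np.sqrt((el_x - x) ** 2 + z ** 2) / c
            for tx in range(n_el):
                for rx in range(n_el):
                    t = int(round((tof[tx] + tof[rx]) * fs))
                    if t < fmc.shape[2]:
                        img[iz, ix] += fmc[tx, rx, t]
    return np.abs(img)
```

Production implementations vectorize the sums and interpolate between samples, but the focusing law is the same.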

  9. A hybrid reconstruction algorithm for fast and accurate 4D cone-beam CT imaging.

    Science.gov (United States)

    Yan, Hao; Zhen, Xin; Folkerts, Michael; Li, Yongbao; Pan, Tinsu; Cervino, Laura; Jiang, Steve B; Jia, Xun

    2014-07-01

    4D cone beam CT (4D-CBCT) has been utilized in radiation therapy to provide 4D image guidance in the lung and upper abdomen area. However, clinical application of 4D-CBCT is currently limited due to the long scan time and low image quality. The purpose of this paper is to develop a new 4D-CBCT reconstruction method that restores volumetric images based on the 1-min scan data acquired with a standard 3D-CBCT protocol. The model optimizes a deformation vector field that deforms a patient-specific planning CT (p-CT), so that the calculated 4D-CBCT projections match measurements. A forward-backward splitting (FBS) method is developed to solve the optimization problem. It splits the original problem into two well-studied subproblems, i.e., image reconstruction and deformable image registration. By iteratively solving the two subproblems, FBS gradually yields the correct deformation information while maintaining high image quality. The whole workflow is implemented on a graphics processing unit (GPU) to improve efficiency. Comprehensive evaluations have been conducted on a moving phantom and three real patient cases regarding the accuracy and quality of the reconstructed images, as well as the algorithm robustness and efficiency. The proposed algorithm reconstructs 4D-CBCT images from highly under-sampled projection data acquired with 1-min scans. Regarding anatomical structure location accuracy, an average difference of 0.204 mm and a maximum difference of 0.484 mm are found for the phantom case, and maximum differences of 0.3-0.5 mm are observed for patients 1-3. As for image quality, intensity errors below 5 and 20 HU compared to the planning CT are achieved for the phantom and the patient cases, respectively. Signal-to-noise ratio values are improved by 12.74 and 5.12 times compared to results from the FDK algorithm using the 1-min data and 4-min data, respectively. The computation time of the algorithm on an NVIDIA GTX590 card is 1-1.5 min per phase. High-quality 4D-CBCT imaging based
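    The forward-backward splitting principle invoked above alternates a gradient ("forward") step on a smooth term with a proximal ("backward") step on a non-smooth term. The sketch below illustrates the scheme on a toy LASSO problem, not on the paper's CT reconstruction and deformable-registration subproblems; the problem instance is purely illustrative.

```python
import numpy as np

def forward_backward(grad_f, prox_g, x0, step, n_iter=200):
    # forward-backward splitting: a forward (gradient) step on the smooth
    # term f, followed by a backward (proximal) step on the non-smooth term g
    x = x0
    for _ in range(n_iter):
        x = prox_g(x - step * grad_f(x), step)
    return x

def soft_threshold(v, tau):
    # proximal operator of tau * ||.||_1 (the "backward" step for an L1 term)
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)
```

Each iteration only ever touches one of the two well-studied subproblems, which is exactly what makes the splitting attractive when both already have mature solvers.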

  10. A hybrid reconstruction algorithm for fast and accurate 4D cone-beam CT imaging

    Energy Technology Data Exchange (ETDEWEB)

    Yan, Hao; Folkerts, Michael; Jiang, Steve B., E-mail: xun.jia@utsouthwestern.edu, E-mail: steve.jiang@UTSouthwestern.edu; Jia, Xun, E-mail: xun.jia@utsouthwestern.edu, E-mail: steve.jiang@UTSouthwestern.edu [Department of Radiation Oncology, The University of Texas, Southwestern Medical Center, Dallas, Texas 75390 (United States); Zhen, Xin [Department of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong 510515 (China); Li, Yongbao [Department of Radiation Oncology, The University of Texas, Southwestern Medical Center, Dallas, Texas 75390 and Department of Engineering Physics, Tsinghua University, Beijing 100084 (China); Pan, Tinsu [Department of Imaging Physics, The University of Texas, MD Anderson Cancer Center, Houston, Texas 77030 (United States); Cervino, Laura [Department of Radiation Medicine and Applied Sciences, University of California San Diego, La Jolla, California 92093 (United States)

    2014-07-15

    Purpose: 4D cone beam CT (4D-CBCT) has been utilized in radiation therapy to provide 4D image guidance in the lung and upper abdomen area. However, clinical application of 4D-CBCT is currently limited due to the long scan time and low image quality. The purpose of this paper is to develop a new 4D-CBCT reconstruction method that restores volumetric images based on the 1-min scan data acquired with a standard 3D-CBCT protocol. Methods: The model optimizes a deformation vector field that deforms a patient-specific planning CT (p-CT), so that the calculated 4D-CBCT projections match measurements. A forward-backward splitting (FBS) method is developed to solve the optimization problem. It splits the original problem into two well-studied subproblems, i.e., image reconstruction and deformable image registration. By iteratively solving the two subproblems, FBS gradually yields the correct deformation information while maintaining high image quality. The whole workflow is implemented on a graphics processing unit (GPU) to improve efficiency. Comprehensive evaluations have been conducted on a moving phantom and three real patient cases regarding the accuracy and quality of the reconstructed images, as well as the algorithm robustness and efficiency. Results: The proposed algorithm reconstructs 4D-CBCT images from highly under-sampled projection data acquired with 1-min scans. Regarding anatomical structure location accuracy, an average difference of 0.204 mm and a maximum difference of 0.484 mm are found for the phantom case, and maximum differences of 0.3–0.5 mm are observed for patients 1–3. As for image quality, intensity errors below 5 and 20 HU compared to the planning CT are achieved for the phantom and the patient cases, respectively. Signal-to-noise ratio values are improved by 12.74 and 5.12 times compared to results from the FDK algorithm using the 1-min data and 4-min data, respectively. The computation time of the algorithm on an NVIDIA GTX590 card is 1–1.5 min per phase

  11. High-speed reconstruction of compressed images

    Science.gov (United States)

    Cox, Jerome R., Jr.; Moore, Stephen M.

    1990-07-01

    A compression scheme is described that allows high-definition radiological images with greater than 8-bit intensity resolution to be represented by 8-bit pixels. Reconstruction of the images with their original intensity resolution can be carried out by means of a pipeline architecture suitable for compact, high-speed implementation. A reconstruction system is described that can be fabricated according to this approach and placed between an 8-bit display buffer and the display's video system, thereby allowing contrast control of images at video rates. Results for 50 CR chest images are described showing that error-free reconstruction of the original 10-bit CR images can be achieved.

  12. Track reconstruction in CMS high luminosity environment

    CERN Document Server

    AUTHOR|(CDS)2067159

    2016-01-01

    The CMS tracker is the largest silicon detector ever built, covering 200 square meters and providing an average of 14 high-precision measurements per track. Tracking is essential for the reconstruction of objects like jets, muons, electrons and tau leptons starting from the raw data from the silicon pixel and strip detectors. Track reconstruction is also widely used at trigger level as it improves object tagging and resolution. The CMS tracking code is organized in several levels, known as iterative steps, each optimized to reconstruct a class of particle trajectories, such as those of particles originating from the primary vertex or displaced tracks from particles resulting from secondary vertices. Each iterative step consists of seeding, pattern recognition and fitting by a Kalman filter, and a final filtering and cleaning. Each subsequent step works on hits not yet associated to a reconstructed particle trajectory. The CMS tracking code is continuously evolving to make the reconstruction computing load compat...

  13. Track reconstruction in CMS high luminosity environment

    CERN Document Server

    Goetzmann, Christophe

    2014-01-01

    The CMS tracker is the largest silicon detector ever built, covering 200 square meters and providing an average of 14 high-precision measurements per track. Tracking is essential for the reconstruction of objects like jets, muons, electrons and tau leptons starting from the raw data from the silicon pixel and strip detectors. Track reconstruction is also widely used at trigger level as it improves object tagging and resolution. The CMS tracking code is organized in several levels, known as iterative steps, each optimized to reconstruct a class of particle trajectories, such as those of particles originating from the primary vertex or displaced tracks from particles resulting from secondary vertices. Each iterative step consists of seeding, pattern recognition and fitting by a Kalman filter, and a final filtering and cleaning. Each subsequent step works on hits not yet associated to a reconstructed particle trajectory. The CMS tracking code is continuously evolving to make the reconstruction computing load compat...

  14. Accurate signal reconstruction for higher order Lagrangian–Eulerian back-coupling in multiphase turbulence

    Energy Technology Data Exchange (ETDEWEB)

    Zwick, D; Balachandar, S [Department of Mechanical and Aerospace Engineering, University of Florida, FL, United States of America (United States); Sakhaee, E; Entezari, A, E-mail: dpzwick@ufl.edu [Department of Computer and Information Science and Engineering, University of Florida, FL, United States of America (United States)

    2017-10-15

    Multiphase flow simulation serves a vital purpose in applications as diverse as engineering design, natural disaster prediction, and even the study of astrophysical phenomena. In these scenarios, it can be very difficult, expensive, or even impossible to fully represent the physical system under consideration. Even so, many such real-world applications can be modeled as a two-phase flow containing both continuous and dispersed phases. Consequently, the continuous phase is thought of as a fluid and the dispersed phase as particles. The continuous phase is typically treated in the Eulerian frame of reference and represented on a fixed grid, while the dispersed phase is treated in the Lagrangian frame and represented by a sample distribution of Lagrangian particles that approximate a cloud. Coupling between the phases requires interpolation of the continuous phase properties at the locations of the Lagrangian particles. This interpolation step is straightforward and can be performed at higher order accuracy. The reverse process of projecting the Lagrangian particle properties from the sample points to the Eulerian grid is complicated by the time-dependent non-uniform distribution of the Lagrangian particles. In this paper we numerically examine three reconstruction, or projection, methods: (i) direct summation (DS), (ii) least-squares, and (iii) sparse approximation. We choose a continuous representation of the dispersed phase property that is systematically varied from a simple single-mode periodic signal to a more complex artificially constructed turbulent signal to see how each method performs in reconstruction. In these experiments, we show that there is a link between the number of dispersed Lagrangian sample points and the number of structured grid points needed to accurately represent the underlying functional representation to machine accuracy. The least-squares method outperforms the other methods in most cases, while the sparse approximation method is able to
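    The least-squares projection examined above can be sketched in one dimension: scattered particle values are fitted to a nodal basis on the Eulerian grid by solving a normal-equations problem. This is a minimal illustration with linear hat functions, not the paper's exact basis or its direct-summation and sparse-approximation variants.

```python
import numpy as np

def project_least_squares(xp, fp, grid):
    # project scattered particle values fp at positions xp onto a uniform 1-D
    # grid using a linear hat-function basis and a least-squares solve
    h = grid[1] - grid[0]
    # basis matrix: B[i, j] = hat_j(xp[i]), nonzero only for the two
    # grid nodes bracketing each particle
    B = np.maximum(0.0, 1.0 - np.abs(xp[:, None] - grid[None, :]) / h)
    coeffs, *_ = np.linalg.lstsq(B, fp, rcond=None)
    return coeffs
```

Because the hat functions reproduce linear fields exactly, a linear particle field is recovered to machine accuracy, which mirrors the paper's observation that reconstruction quality hinges on the ratio of sample points to grid points.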

  15. High accurate time system of the Low Latitude Meridian Circle.

    Science.gov (United States)

    Yang, Jing; Wang, Feng; Li, Zhiming

    In order to obtain a highly accurate time signal for the Low Latitude Meridian Circle (LLMC), a new GPS accurate time system was developed, which includes GPS, a 1 MC frequency source and a self-made clock system. The GPS second signal is used to synchronize the clock system, and information can be collected by a computer automatically. The difficulty of the cancellation of the time keeper can be overcome by using this system.

  16. A Two-Phase Space Resection Model for Accurate Topographic Reconstruction from Lunar Imagery with Pushbroom Scanners.

    Science.gov (United States)

    Xu, Xuemiao; Zhang, Huaidong; Han, Guoqiang; Kwan, Kin Chung; Pang, Wai-Man; Fang, Jiaming; Zhao, Gansen

    2016-04-11

    Exterior orientation parameter (EOP) estimation using space resection plays an important role in topographic reconstruction for pushbroom scanners. However, existing models of space resection are highly sensitive to errors in data. Unfortunately, for lunar imagery, the altitude data at the ground control points (GCPs) for space resection are error-prone. Thus, existing models fail to produce reliable EOPs. Motivated by the finding that, for pushbroom scanners, the angular rotations of the EOPs can be estimated independently of the altitude data, involving only the geographic data at the GCPs, which are already provided, we divide the modeling of space resection into two phases. First, we estimate the angular rotations based on the reliable geographic data using our proposed mathematical model. Then, with the accurate angular rotations, the collinearity equations for space resection are simplified into a linear problem, and the globally optimal solution for the spatial position of the EOPs can always be achieved. Moreover, a certainty term is integrated to penalize the unreliable altitude data, increasing the error tolerance. Experimental results show that our model can obtain more accurate EOPs and topographic maps not only for simulated data, but also for real data from Chang'E-1, compared to the existing space resection model.
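    The key simplification in the second phase, that the collinearity equations become linear in the camera position once the rotation is fixed, can be sketched as follows. The sketch uses a simple pinhole convention (projection p = R(X - C), x = f p0/p2); photogrammetric sign conventions vary, and this is an illustration of the linearization, not the paper's full model with its certainty term.

```python
import numpy as np

def camera_position(R, f, img_pts, gcps):
    # with the rotation R known, each collinearity equation is linear in the
    # camera centre C: (x*r3 - f*r1) . (X - C) = 0, and similarly for y,
    # where r1, r2, r3 are the rows of R. Stack two equations per GCP and
    # solve the linear least-squares system for C.
    A, b = [], []
    for (x, y), X in zip(img_pts, gcps):
        r1, r2, r3 = R[0], R[1], R[2]
        for a in (x * r3 - f * r1, y * r3 - f * r2):
            A.append(a)
            b.append(a @ X)
    C, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return C
```

Because the system is linear, the position estimate is a global optimum by construction, with no iterative initialization required.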

  17. A Two-Phase Space Resection Model for Accurate Topographic Reconstruction from Lunar Imagery with Pushbroom Scanners

    Directory of Open Access Journals (Sweden)

    Xuemiao Xu

    2016-04-01

    Full Text Available Exterior orientation parameter (EOP) estimation using space resection plays an important role in topographic reconstruction for pushbroom scanners. However, existing models of space resection are highly sensitive to errors in data. Unfortunately, for lunar imagery, the altitude data at the ground control points (GCPs) for space resection are error-prone. Thus, existing models fail to produce reliable EOPs. Motivated by the finding that, for pushbroom scanners, the angular rotations of the EOPs can be estimated independently of the altitude data, involving only the geographic data at the GCPs, which are already provided, we divide the modeling of space resection into two phases. First, we estimate the angular rotations based on the reliable geographic data using our proposed mathematical model. Then, with the accurate angular rotations, the collinearity equations for space resection are simplified into a linear problem, and the globally optimal solution for the spatial position of the EOPs can always be achieved. Moreover, a certainty term is integrated to penalize the unreliable altitude data, increasing the error tolerance. Experimental results show that our model can obtain more accurate EOPs and topographic maps not only for simulated data, but also for real data from Chang’E-1, compared to the existing space resection model.

  18. WE-A-17A-10: Fast, Automatic and Accurate Catheter Reconstruction in HDR Brachytherapy Using An Electromagnetic 3D Tracking System

    Energy Technology Data Exchange (ETDEWEB)

    Poulin, E; Racine, E; Beaulieu, L [CHU de Quebec - Universite Laval, Quebec, Quebec (Canada); Binnekamp, D [Integrated Clinical Solutions and Marketing, Philips Healthcare, Best, DA (Netherlands)

    2014-06-15

    Purpose: In high dose rate brachytherapy (HDR-B), current catheter reconstruction protocols are slow and error-prone. The purpose of this study was to evaluate the accuracy and robustness of an electromagnetic (EM) tracking system for improved catheter reconstruction in HDR-B protocols. Methods: For this proof of principle, a total of 10 catheters were inserted in gelatin phantoms with different trajectories. Catheters were reconstructed using a Philips-design 18G biopsy needle (used as an EM stylet) and the second-generation Aurora Planar Field Generator from Northern Digital Inc. The Aurora EM system exploits alternating-current technology and generates 3D points at 40 Hz. Phantoms were also scanned using a μCT (GE Healthcare) and a Philips Big Bore clinical CT system with resolutions of 0.089 mm and 2 mm, respectively. Reconstructions using the EM stylet were compared to μCT and CT. To assess the robustness of the EM reconstruction, 5 catheters were reconstructed twice and compared. Results: Reconstruction time for one catheter was 10 seconds or less. This implies that for a typical clinical implant of 17 catheters, the total reconstruction time would be less than 3 minutes. When compared to the μCT, the mean EM tip identification error was 0.69 ± 0.29 mm while the CT error was 1.08 ± 0.67 mm. The mean 3D distance error was found to be 0.92 ± 0.37 mm and 1.74 ± 1.39 mm for the EM and CT, respectively. EM 3D catheter trajectories were found to be significantly more accurate (unpaired t-test, p < 0.05). A mean difference of less than 0.5 mm was found between successive EM reconstructions. Conclusion: The EM reconstruction was found to be faster, more accurate and more robust than the conventional methods used for catheter reconstruction in HDR-B. This approach can be applied to any type of catheter and applicator. We would like to disclose that the equipment used in this study comes from a collaboration with Philips Medical.

  19. WE-A-17A-10: Fast, Automatic and Accurate Catheter Reconstruction in HDR Brachytherapy Using An Electromagnetic 3D Tracking System

    International Nuclear Information System (INIS)

    Poulin, E; Racine, E; Beaulieu, L; Binnekamp, D

    2014-01-01

    Purpose: In high dose rate brachytherapy (HDR-B), current catheter reconstruction protocols are slow and error-prone. The purpose of this study was to evaluate the accuracy and robustness of an electromagnetic (EM) tracking system for improved catheter reconstruction in HDR-B protocols. Methods: For this proof of principle, a total of 10 catheters were inserted in gelatin phantoms with different trajectories. Catheters were reconstructed using a Philips-design 18G biopsy needle (used as an EM stylet) and the second-generation Aurora Planar Field Generator from Northern Digital Inc. The Aurora EM system exploits alternating-current technology and generates 3D points at 40 Hz. Phantoms were also scanned using a μCT (GE Healthcare) and a Philips Big Bore clinical CT system with resolutions of 0.089 mm and 2 mm, respectively. Reconstructions using the EM stylet were compared to μCT and CT. To assess the robustness of the EM reconstruction, 5 catheters were reconstructed twice and compared. Results: Reconstruction time for one catheter was 10 seconds or less. This implies that for a typical clinical implant of 17 catheters, the total reconstruction time would be less than 3 minutes. When compared to the μCT, the mean EM tip identification error was 0.69 ± 0.29 mm while the CT error was 1.08 ± 0.67 mm. The mean 3D distance error was found to be 0.92 ± 0.37 mm and 1.74 ± 1.39 mm for the EM and CT, respectively. EM 3D catheter trajectories were found to be significantly more accurate (unpaired t-test, p < 0.05). A mean difference of less than 0.5 mm was found between successive EM reconstructions. Conclusion: The EM reconstruction was found to be faster, more accurate and more robust than the conventional methods used for catheter reconstruction in HDR-B. This approach can be applied to any type of catheter and applicator. We would like to disclose that the equipment used in this study comes from a collaboration with Philips Medical

  20. A Highly Accurate Approach for Aeroelastic System with Hysteresis Nonlinearity

    Directory of Open Access Journals (Sweden)

    C. C. Cui

    2017-01-01

    Full Text Available We propose an accurate approach, based on the precise integration method, to solve the aeroelastic system of an airfoil with pitch hysteresis. A major procedure for achieving high precision is to design a predictor-corrector algorithm. This algorithm enables accurate determination of the switching points resulting from the hysteresis. Numerical examples show that the results obtained by the presented method are in excellent agreement with exact solutions. In addition, the high accuracy can be maintained as the time step increases within a reasonable range. It is also found that the Runge-Kutta method may sometimes provide quite different and even fallacious results, even though its step length is much smaller than that adopted in the presented method. With such high computational accuracy, the presented method could be applicable to dynamical systems with hysteresis nonlinearities.
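    The predictor-corrector idea for locating switching points can be sketched generically: advance the integrator, watch a switching indicator g for a sign change across the step, and then shrink the step (here by bisection) until the switching point is resolved. This toy sketch uses explicit Euler for clarity; it is not the paper's precise integration scheme.

```python
def integrate_to_switch(f, g, t0, y0, dt, t_end):
    # advance y' = f(t, y) with explicit Euler; when the switching indicator
    # g(y) changes sign across a step, bisect the step fraction to land on
    # the switching point (the predictor-corrector idea)
    t, y = t0, y0
    while t < t_end:
        h = min(dt, t_end - t)
        y_new = y + h * f(t, y)                  # predictor step
        if g(y) * g(y_new) < 0:                  # sign change: a switch occurred
            lo, hi = 0.0, h
            for _ in range(60):                  # corrector: bisect the fraction
                mid = 0.5 * (lo + hi)
                y_mid = y + mid * f(t, y)
                if g(y) * g(y_mid) < 0:
                    hi = mid
                else:
                    lo = mid
            return t + hi, y + hi * f(t, y)
        t, y = t + h, y_new
    return t, y
```

Without such a correction, a fixed-step integrator can step straight over the switching point, which is one mechanism behind the fallacious Runge-Kutta results mentioned above.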

  1. Accurate reconstruction in digital holographic microscopy using antialiasing shift-invariant contourlet transform

    Science.gov (United States)

    Zhang, Xiaolei; Zhang, Xiangchao; Xu, Min; Zhang, Hao; Jiang, Xiangqian

    2018-03-01

    The measurement of microstructured components is a challenging task in optical engineering. Digital holographic microscopy has attracted intensive attention due to its remarkable capability of measuring complex surfaces. However, speckles arise in the recorded interferometric holograms, and they will degrade the reconstructed wavefronts. Existing speckle removal methods suffer from the problems of frequency aliasing and phase distortions. A reconstruction method based on the antialiasing shift-invariant contourlet transform (ASCT) is developed. Salient edges and corners have sparse representations in the transform domain of ASCT, and speckles can be recognized and removed effectively. As subsampling in the scale and directional filtering schemes is avoided, the problems of frequency aliasing and phase distortions occurring in the conventional multiscale transforms can be effectively overcome, thereby improving the accuracy of wavefront reconstruction. As a result, the proposed method is promising for the digital holographic measurement of complex structures.

  2. Accurate reconstruction in digital holographic microscopy using Fresnel dual-tree complex wavelet transform

    Science.gov (United States)

    Zhang, Xiaolei; Zhang, Xiangchao; Yuan, He; Zhang, Hao; Xu, Min

    2018-02-01

    Digital holography is a promising measurement method in the fields of biomedicine and microelectronics, but the captured holograms are severely polluted by speckle noise arising from optical scattering and diffraction. By analyzing the properties of Fresnel diffraction and the topographies of micro-structures, a novel reconstruction method based on the dual-tree complex wavelet transform (DT-CWT) is proposed. This algorithm is shift-invariant and obtains sparse representations of the diffracted signals of salient features; it is therefore well suited to multiresolution processing of the interferometric holograms of directional morphologies. An explicit representation of orthogonal Fresnel DT-CWT bases and a specific filtering method are developed. This method effectively removes the speckle noise without destroying salient features. Finally, the proposed reconstruction method is compared with the conventional Fresnel diffraction integration and the Fresnel wavelet transform with compressive sensing to validate its superiority in topography reconstruction and speckle removal.

  3. Acquisition and reconstruction conditions in silico for accurate and precise magnetic resonance elastography

    Science.gov (United States)

    Yue, Jin Long; Tardieu, Marion; Julea, Felicia; Boucneau, Tanguy; Sinkus, Ralph; Pellot-Barakat, Claire; Maître, Xavier

    2017-11-01

    Magnetic resonance elastography (MRE) is a non-invasive imaging modality which holds the promise of absolute quantification of the mechanical properties of human tissues in vivo. MRE reconstruction by algebraic inversion of the Helmholtz equation applied to the curl of the shear displacement field may theoretically be flawless. In practice, however, its performance is limited by multiple experimental parameters, especially the frequency and amplitude of the mechanical wave, the voxel size, and the signal-to-noise ratio of the MRE acquisition. A point-source excitation was simulated and realistic displacement fields were analytically computed to generate MRE data sets in an isotropic, homogeneous, linearly elastic, half-space infinite medium. Acquisition and reconstruction methods were challenged and the joint influence of the aforementioned parameters was studied. For a given signal-to-noise ratio, conditions on the number of voxels per wavelength were determined that optimize voxel-wise accuracy and precision in MRE. It was shown that, once data are acquired, the reconstruction quality can even be improved by effective interpolation or decimation, so that the data eventually fulfill favorable conditions for mechanical characterization of the tissue. Finally, the overall outcome, which is usually computed from the three acquired motion-encoded directions, may be further improved by appropriate averaging strategies weighted by the quality of the curl of the shear displacement field.
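
    The voxel-wise algebraic inversion described above can be sketched in one dimension: for a time-harmonic shear displacement u, the Helmholtz equation gives mu * laplacian(u) = -rho * omega^2 * u, so the shear modulus follows voxel by voxel from the ratio of u to its discrete Laplacian. All numbers below (density, wave speed, 10 voxels per wavelength) are illustrative choices; the small bias in the recovered wave speed is exactly the kind of discretization effect the study quantifies.

```python
import numpy as np

rho = 1000.0                 # tissue-like density, kg/m^3
c_true = 2.0                 # true shear wave speed, m/s
f = 60.0                     # mechanical vibration frequency, Hz
omega = 2 * np.pi * f
k = omega / c_true
lam = c_true / f             # shear wavelength

dx = lam / 10                # 10 voxels per wavelength
x = np.arange(0.0, 5 * lam, dx)
u = np.sin(k * x)            # simulated time-harmonic displacement field

# Discrete Laplacian, then voxel-wise inversion mu = -rho*omega^2*u / lap(u),
# restricted to voxels away from displacement nodes where u ~ 0.
lap = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
ui = u[1:-1]
mask = np.abs(ui) > 0.3
mu = -rho * omega**2 * ui[mask] / lap[mask]
mu_est = np.median(mu)
c_est = np.sqrt(mu_est / rho)   # slightly biased by the finite voxel size
```

    With fewer voxels per wavelength the second-order Laplacian underestimates k^2 more strongly, and the recovered stiffness drifts further from the truth, which is why the voxel-per-wavelength condition matters.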

  4. Accurate Reconstruction of the Roman Circus in Milan by Georeferencing Heterogeneous Data Sources with GIS

    Directory of Open Access Journals (Sweden)

    Gabriele Guidi

    2017-09-01

    Full Text Available This paper presents the methodological approach and the actual workflow for creating the 3D digital reconstruction in time of the ancient Roman Circus of Milan, which is presently covered completely by the urban fabric of the modern city. The diachronic reconstruction is based on a proper mix of quantitative data originated by current 3D surveys and historical sources, such as ancient maps, drawings, archaeological reports, restrictions decrees, and old photographs. When possible, such heterogeneous sources have been georeferenced and stored in a GIS system. In this way the sources have been analyzed in depth, allowing the deduction of geometrical information not explicitly revealed by the material available. A reliable reconstruction of the area in different historical periods has been therefore hypothesized. This research has been carried on in the framework of the project Cultural Heritage Through Time—CHT2, funded by the Joint Programming Initiative on Cultural Heritage (JPI-CH, supported by the Italian Ministry for Cultural Heritage (MiBACT, the Italian Ministry for University and Research (MIUR, and the European Commission.

  5. Accurate and reproducible reconstruction of coronary arteries and endothelial shear stress calculation using 3D OCT: comparative study to 3D IVUS and 3D QCA.

    Science.gov (United States)

    Toutouzas, Konstantinos; Chatzizisis, Yiannis S; Riga, Maria; Giannopoulos, Andreas; Antoniadis, Antonios P; Tu, Shengxian; Fujino, Yusuke; Mitsouras, Dimitrios; Doulaverakis, Charalampos; Tsampoulatidis, Ioannis; Koutkias, Vassilis G; Bouki, Konstantina; Li, Yingguang; Chouvarda, Ioanna; Cheimariotis, Grigorios; Maglaveras, Nicos; Kompatsiaris, Ioannis; Nakamura, Sunao; Reiber, Johan H C; Rybicki, Frank; Karvounis, Haralambos; Stefanadis, Christodoulos; Tousoulis, Dimitris; Giannoglou, George D

    2015-06-01

    Geometrically-correct 3D OCT is a new imaging modality with the potential to investigate the association of local hemodynamic microenvironment with OCT-derived high-risk features. We aimed to describe the methodology of 3D OCT and investigate the accuracy, inter- and intra-observer agreement of 3D OCT in reconstructing coronary arteries and calculating ESS, using 3D IVUS and 3D QCA as references. 35 coronary artery segments derived from 30 patients were reconstructed in 3D space using 3D OCT. 3D OCT was validated against 3D IVUS and 3D QCA. The agreement in artery reconstruction among 3D OCT, 3D IVUS and 3D QCA was assessed in 3-mm-long subsegments using lumen morphometry and ESS parameters. The inter- and intra-observer agreement of 3D OCT, 3D IVUS and 3D QCA were assessed in a representative sample of 61 subsegments (n = 5 arteries). The data processing times for each reconstruction methodology were also calculated. There was a very high agreement between 3D OCT vs. 3D IVUS and 3D OCT vs. 3D QCA in terms of total reconstructed artery length and volume, as well as in terms of segmental morphometric and ESS metrics with mean differences close to zero and narrow limits of agreement (Bland-Altman analysis). 3D OCT exhibited excellent inter- and intra-observer agreement. The analysis time with 3D OCT was significantly lower compared to 3D IVUS. Geometrically-correct 3D OCT is a feasible, accurate and reproducible 3D reconstruction technique that can perform reliable ESS calculations in coronary arteries. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
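
    The Bland-Altman agreement analysis used above to compare 3D OCT with 3D IVUS and 3D QCA can be sketched as follows; the paired lumen areas are invented numbers purely for illustration.

```python
import numpy as np

def bland_altman(a, b):
    """Mean difference (bias) and 95% limits of agreement between two
    paired measurement series, e.g. subsegment lumen areas from two
    reconstruction modalities."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired lumen areas (mm^2) for 8 subsegments.
oct_area  = [5.1, 6.3, 4.8, 7.2, 5.9, 6.8, 4.5, 5.6]
ivus_area = [5.0, 6.5, 4.9, 7.0, 6.1, 6.7, 4.6, 5.8]
bias, (lo, hi) = bland_altman(oct_area, ivus_area)
```

    "Mean differences close to zero and narrow limits of agreement", as reported in the abstract, correspond to a small `bias` and a tight `(lo, hi)` interval.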

  6. Pink-Beam, Highly-Accurate Compact Water Cooled Slits

    International Nuclear Information System (INIS)

    Lyndaker, Aaron; Deyhim, Alex; Jayne, Richard; Waterman, Dave; Caletka, Dave; Steadman, Paul; Dhesi, Sarnjeet

    2007-01-01

    Advanced Design Consulting, Inc. (ADC) has designed accurate compact slits for applications where high precision is required. The system consists of vertical and horizontal slit mechanisms, a vacuum vessel which houses them, water-cooling lines with vacuum guards connected to the individual blades, stepper motors with linear encoders, limit (home-position) switches, and electrical connections including internal wiring for a drain-current measurement system. The total slit size is adjustable from 0 to 15 mm both vertically and horizontally. Each of the four blades is individually controlled and motorized. In this paper, a summary of the design and a Finite Element Analysis of the system are presented.

  7. A fast and accurate image reconstruction using GPU for OpenPET prototype

    International Nuclear Information System (INIS)

    Kinouchi, Shoko; Suga, Mikio; Yamaya, Taiga; Yoshida, Eiji

    2010-01-01

    The OpenPET (positron emission tomography) scanner, which has a physically open space between two detector rings, is our new geometry intended to enable PET imaging during radiation therapy, provided that a real-time imaging system is realized. In this paper, therefore, we developed a list-mode image reconstruction method using general-purpose graphics processing units (GPUs). We used the list-mode dynamic row-action maximum likelihood algorithm (DRAMA). For GPU implementation, the efficiency of the acceleration depends on avoiding conditional statements. We therefore developed a system model in which each element of the system matrix is calculated as the value of a detector response function (DRF) of the distance between the center of a voxel and the line of response (LOR). This system model is well suited to GPU implementation because each element of the system matrix can be calculated with a reduced number of conditional statements. We applied the developed method to a small OpenPET prototype, which was developed as a proof of concept, and measured a micro-Derenzo phantom placed at the gap. The results showed that reconstructed images of the same quality were achieved on the GPU as on the central processing unit (CPU), and that the calculation on the GPU was 35.5 times faster than on the CPU. (author)
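
    The branch-free system model described above, a detector response function evaluated on the voxel-to-LOR distance, can be sketched as below. The Gaussian DRF and its width are illustrative assumptions, not the prototype's actual response function.

```python
import numpy as np

def lor_distance(voxel, p1, p2):
    """Perpendicular distance from a voxel centre to the line of
    response (LOR) through detector positions p1 and p2."""
    d = p2 - p1
    d = d / np.linalg.norm(d)
    r = voxel - p1
    return np.linalg.norm(r - np.dot(r, d) * d)

def system_element(voxel, p1, p2, sigma=2.0):
    """System-matrix element as a Gaussian DRF of the voxel-to-LOR
    distance: a branch-free formula that maps well to GPU threads,
    since every voxel/LOR pair runs the same arithmetic."""
    dist = lor_distance(voxel, p1, p2)
    return np.exp(-0.5 * (dist / sigma) ** 2)

# LOR along the x-axis between two detectors 200 mm apart.
p1 = np.array([-100.0, 0.0, 0.0])
p2 = np.array([ 100.0, 0.0, 0.0])
on_axis  = system_element(np.array([10.0, 0.0, 0.0]), p1, p2)
off_axis = system_element(np.array([10.0, 6.0, 0.0]), p1, p2)
```

    Because the same closed-form expression is evaluated for every voxel/LOR pair, GPU threads never diverge, which is the point the abstract makes about avoiding conditional statements.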

  8. Atomic spectroscopy and highly accurate measurement: determination of fundamental constants

    International Nuclear Information System (INIS)

    Schwob, C.

    2006-12-01

    This document reviews the theoretical and experimental achievements of the author concerning highly accurate atomic spectroscopy applied to the determination of fundamental constants. A purely optical frequency measurement of the 2S-12D two-photon transitions in atomic hydrogen and deuterium has been performed. The experimental setup is described, as well as the data analysis. Optimized values for the Rydberg constant and Lamb shifts have been deduced (R = 109737.31568516(84) cm^-1). An experiment devoted to the determination of the fine structure constant, with a target relative uncertainty of 10^-9, began in 1999. This experiment is based on the fact that Bloch oscillations in a frequency-chirped optical lattice are a powerful tool to coherently transfer many photon momenta to the atoms. We have used this method to accurately measure the ratio h/m(Rb). The measured value of the fine structure constant is α^-1 = 137.03599884(91), with a relative uncertainty of 6.7×10^-9. The future and perspectives of this experiment are presented. This document, presented before an academic board, will allow its author to supervise research work and particularly to tutor thesis students. (A.C.)

  9. Integration of multi-modality imaging for accurate 3D reconstruction of human coronary arteries in vivo

    International Nuclear Information System (INIS)

    Giannoglou, George D.; Chatzizisis, Yiannis S.; Sianos, George; Tsikaderis, Dimitrios; Matakos, Antonis; Koutkias, Vassilios; Diamantopoulos, Panagiotis; Maglaveras, Nicos; Parcharidis, George E.; Louridas, George E.

    2006-01-01

    In conventional intravascular ultrasound (IVUS)-based three-dimensional (3D) reconstruction of human coronary arteries, IVUS images are arranged linearly, generating a straight vessel volume. With this approach, however, the real vessel curvature is neglected. To overcome this limitation, an imaging method was developed based on the integration of IVUS and biplane coronary angiography (BCA). IVUS and BCA were performed in 17 coronary arteries from nine patients. From each angiographic projection, a single end-diastolic frame was selected, and in each frame the IVUS catheter was interactively detected for the extraction of the 3D catheter path. Ultrasound data were obtained with a sheath-based catheter and recorded on S-VHS videotape. The S-VHS data were digitized, and lumen and media-adventitia contours were semi-automatically detected in end-diastolic IVUS images. Each pair of contours was aligned perpendicularly to the catheter path and rotated in space by an algorithm based on the Frenet-Serret formulas. Lumen and media-adventitia contours were interpolated through the generation of intermediate contours, creating a true 3D lumen and vessel volume, respectively. The absolute orientation of the reconstructed lumen was determined by back-projecting it onto both angiographic planes and comparing the projected lumen with the actual angiographic lumen. In conclusion, our method is capable of performing rapid and accurate 3D reconstruction of human coronary arteries in vivo. This technique can be utilized for reliable plaque morphometric, geometrical and hemodynamic analyses.
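
    The Frenet-Serret alignment step mentioned above can be sketched as follows: discrete tangent, normal, and binormal frames are computed along the catheter path, and each contour would then be placed in the plane spanned by the normal and binormal. The helix below is a stand-in for a reconstructed catheter path.

```python
import numpy as np

def frenet_frames(path):
    """Discrete tangent/normal/binormal frames along a sampled 3D curve,
    used to orient each cross-sectional contour perpendicular to the
    catheter path."""
    t = np.gradient(path, axis=0)
    t /= np.linalg.norm(t, axis=1, keepdims=True)
    dt = np.gradient(t, axis=0)
    # Remove the tangential component, then normalize to get the normal.
    n = dt - np.sum(dt * t, axis=1, keepdims=True) * t
    n /= np.linalg.norm(n, axis=1, keepdims=True)
    b = np.cross(t, n)
    return t, n, b

# Helix standing in for a curved catheter path.
s = np.linspace(0, 4 * np.pi, 400)
path = np.column_stack([np.cos(s), np.sin(s), 0.1 * s])
T, N, B = frenet_frames(path)
```

    Each frame (T, N, B) is orthonormal, so a 2D contour expressed in the (N, B) plane at a given sample sits perpendicular to the path, which is exactly what the contour-placement step requires.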

  10. Highly Accurate Calculations of the Phase Diagram of Cold Lithium

    Science.gov (United States)

    Shulenburger, Luke; Baczewski, Andrew

    The phase diagram of lithium is particularly complicated, exhibiting many different solid phases under the modest application of pressure. Experimental efforts to identify these phases using diamond anvil cells have been complemented by ab initio theory, primarily using density functional theory (DFT). Due to the multiplicity of crystal structures whose enthalpy is nearly degenerate and the uncertainty introduced by density functional approximations, we apply the highly accurate many-body diffusion Monte Carlo (DMC) method to the study of the solid phases at low temperature. These calculations span many different phases, including several with low symmetry, demonstrating the viability of DMC as a method for calculating phase diagrams for complex solids. Our results can be used as a benchmark to test the accuracy of various density functionals. This can strengthen confidence in DFT based predictions of more complex phenomena such as the anomalous melting behavior predicted for lithium at high pressures. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. DOE's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  11. The place of highly accurate methods by RNAA in metrology

    International Nuclear Information System (INIS)

    Dybczynski, R.; Danko, B.; Polkowska-Motrenko, H.; Samczynski, Z.

    2006-01-01

    With the introduction of physical metrological concepts to chemical analysis, which require that a result be accompanied by an uncertainty statement expressed in terms of SI units, several researchers came to consider ID-MS (isotope dilution mass spectrometry) as the only method fulfilling this requirement. However, recent publications revealed that in certain cases even expert laboratories using ID-MS and analyzing the same material produced results whose uncertainty statements did not overlap, which theoretically should not happen. This shows that no monopoly is good in science, and it would be desirable to widen the set of methods acknowledged as primary in inorganic trace analysis. Moreover, ID-MS cannot be used for monoisotopic elements. The need to search for other methods of similar metrological quality to ID-MS seems obvious. In this paper, our long-time experience in devising highly accurate ('definitive') methods by RNAA for the determination of selected trace elements in biological materials is reviewed. The general idea of definitive methods, based on the combination of neutron activation with highly selective and quantitative isolation of the indicator radionuclide by column chromatography, followed by gamma-spectrometric measurement, is recalled and illustrated by examples of the performance of such methods when determining Cd, Co, Mo, etc. It is demonstrated that such methods are able to provide very reliable results with very low levels of uncertainty traceable to SI units.

  12. Accurate calculation of high harmonics generated by relativistic Thomson scattering

    International Nuclear Information System (INIS)

    Popa, Alexandru

    2008-01-01

    The recent emergence of the field of ultraintense laser pulses, corresponding to beam intensities higher than 10^18 W cm^-2, brings about the problem of high harmonic generation (HHG) by relativistic Thomson scattering of electromagnetic radiation by free electrons. Starting from the equations of relativistic motion of the electron in the electromagnetic field, we give an exact solution of this problem. Taking into account the Lienard-Wiechert equations, we obtain a periodic scattered electromagnetic field. Without loss of generality, the solution is strongly simplified by observing that the electromagnetic field is always normal to the electron-detector direction. The Fourier series expansion of this field leads to accurate expressions for the high harmonics generated by the Thomson scattering. Our calculations yield a discrete HHG spectrum whose shape and angular distribution are in agreement with experimental data from the literature. Since no approximations were made, our approach is also valid in the ultrarelativistic regime, corresponding to intensities higher than 10^23 W cm^-2, where it predicts a strong increase in the HHG intensities and in the order of the harmonics. In this domain, nonlinear Thomson scattering could be an efficient source of hard x-rays.
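
    The idea of reading harmonic content off a Fourier expansion of a periodic scattered field can be illustrated with a toy phase-modulated waveform (not the actual Lienard-Wiechert field): the nonlinearity pumps energy from the fundamental into higher harmonics, which an FFT over one exact period resolves into a discrete spectrum.

```python
import numpy as np

# A periodic signal with nonlinear distortion; the distortion strength `a`
# is a toy stand-in for laser intensity: larger a -> more high harmonics.
N = 1024
t = np.linspace(0, 2 * np.pi, N, endpoint=False)
a = 0.8
field = np.sin(t + a * np.sin(t))       # phase-modulated carrier

# Discrete harmonic spectrum from the FFT over one exact period.
c = np.fft.rfft(field) / N
amps = 2 * np.abs(c)                    # amps[m] = amplitude of harmonic m
```

    The amplitudes decay with harmonic order for moderate distortion, but increasing `a` shifts weight toward higher orders, mirroring the abstract's prediction that stronger fields raise both the intensity and the order of the generated harmonics.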

  13. Elongation Factor-1α Accurately Reconstructs Relationships Amongst Psyllid Families (Hemiptera: Psylloidea), with Possible Diagnostic Implications.

    Science.gov (United States)

    Martoni, Francesco; Bulman, Simon R; Pitman, Andrew; Armstrong, Karen F

    2017-12-05

    The superfamily Psylloidea (Hemiptera: Sternorrhyncha) lacks a robust multigene phylogeny. This impedes our understanding of the evolution of this group of insects and, consequently, an accurate identification of individuals, of their plant host associations, and their roles as vectors of economically important plant pathogens. The conserved nuclear gene elongation factor-1 alpha (EF-1α) has been valuable as a higher-level phylogenetic marker in insects and it has also been widely used to investigate the evolution of intron/exon structure. To explore evolutionary relationships among Psylloidea, polymerase chain reaction amplification and nucleotide sequencing of a 250-bp EF-1α gene fragment was applied to psyllids belonging to five different families. Introns were detected in three individuals belonging to two families. The nine genera belonging to the family Aphalaridae all lacked introns, highlighting the possibility of using intron presence/absence as a diagnostic tool at a family level. When paired with cytochrome oxidase I gene sequences, the 250 bp EF-1α sequence appeared to be a very promising higher-level phylogenetic marker for psyllids. © The Author(s) 2017. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  14. Highly accurate symplectic element based on two variational principles

    Science.gov (United States)

    Qing, Guanghui; Tian, Jia

    2018-02-01

    To meet the stability requirements of numerical results, the mathematical theory of classical mixed methods is relatively complex. Generalized mixed methods, however, are automatically stable, and their construction is simple and straightforward. In this paper, based on the seminal idea of generalized mixed methods, a simple, stable, and highly accurate 8-node noncompatible symplectic element (NCSE8) was developed by combining the modified Hellinger-Reissner mixed variational principle with the minimum energy principle. To ensure the accuracy of in-plane stress results, a simultaneous equation approach was also suggested. Numerical experiments show that the stress results of NCSE8 are nearly as accurate as those of displacement methods, and they are in good agreement with exact solutions when the mesh is relatively fine. NCSE8 has the advantages of a clear concept, easy implementation in a finite element computer program, higher accuracy, and wide applicability to various linear-elasticity problems with compressible and nearly incompressible materials. NCSE8 may prove even more advantageous for fracture problems owing to its better stress accuracy.

  15. Experimental reconstruction of a highly reflecting fiber Bragg grating by using spectral regularization and inverse scattering.

    Science.gov (United States)

    Rosenthal, Amir; Horowitz, Moshe; Kieckbusch, Sven; Brinkmeyer, Ernst

    2007-10-01

    We demonstrate experimentally, for the first time to our knowledge, a reconstruction of a highly reflecting fiber Bragg grating from its complex reflection spectrum by using a regularization algorithm. The regularization method is based on correcting the measured reflection spectrum at the Bragg zone frequencies and enables the reconstruction of the grating profile using the integral-layer-peeling algorithm. A grating with an approximately uniform profile and with a maximum reflectivity of 99.98% was accurately reconstructed by measuring only its complex reflection spectrum.

  16. 3D reconstruction of coronary arteries from 2D angiographic projections using non-uniform rational basis splines (NURBS) for accurate modelling of coronary stenoses.

    Directory of Open Access Journals (Sweden)

    Francesca Galassi

    Assessment of coronary stenosis severity is crucial in clinical practice. This study proposes a novel method to generate 3D models of stenotic coronary arteries directly from 2D coronary images, suitable for immediate assessment of stenosis severity. From multiple 2D X-ray coronary arteriogram projections, 2D vessels were extracted. A 3D centreline was reconstructed as the intersection of surfaces from corresponding branches. Next, 3D luminal contours were generated in a two-step process: first, a non-uniform rational B-spline (NURBS) circular contour was designed and, second, its control points were adjusted to interpolate computed 3D boundary points. Finally, a 3D surface was generated as an interpolation across the control points of the contours and used in the analysis of the severity of a lesion. To evaluate the method, we compared 3D reconstructed lesions with Optical Coherence Tomography (OCT), an invasive imaging modality that enables high-resolution endoluminal visualization of lesion anatomy. Validation was performed on routine clinical data. Analysis of paired cross-sectional area discrepancies indicated that the proposed method represented OCT contours more closely than conventional approaches to luminal surface reconstruction, with overall root-mean-square errors ranging from 0.213 mm^2 to 1.013 mm^2 and a maximum error of 1.837 mm^2. Comparison of the volume reduction due to a lesion with the corresponding FFR measurement suggests that the method may help in estimating the physiological significance of a lesion. The algorithm accurately reconstructed 3D models of lesioned arteries and enabled quantitative assessment of stenoses. The proposed method has the potential to allow immediate analysis of stenoses in clinical practice, thereby providing incremental diagnostic and prognostic information to guide treatments in real time and without the need for invasive techniques.
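
    The reason a NURBS circle is a natural starting contour can be sketched with a quadratic rational Bézier segment, the building block of a NURBS circle: with weights (1, sqrt(2)/2, 1), every evaluated point lies exactly on the unit circle, before any control points are adjusted toward the lumen boundary.

```python
import numpy as np

def rational_quad_bezier(ctrl, w, t):
    """Point on a quadratic rational Bezier segment at parameter t,
    the exact-conic building block used by NURBS curves."""
    b = np.array([(1 - t) ** 2, 2 * t * (1 - t), t ** 2])  # Bernstein basis
    wb = w * b
    return (wb[:, None] * ctrl).sum(axis=0) / wb.sum()

# Quarter of the unit circle: these weights make the segment an exact arc.
ctrl = np.array([[1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
w = np.array([1.0, np.sqrt(2) / 2, 1.0])
pts = np.array([rational_quad_bezier(ctrl, w, t) for t in np.linspace(0, 1, 50)])
radii = np.linalg.norm(pts, axis=1)
```

    A non-rational (polynomial) Bézier cannot represent a circle exactly, which is why the rational form is used as the initial contour before its control points are fitted to the 3D boundary points.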

  17. Using machine learning and surface reconstruction to accurately differentiate different trajectories of mood and energy dysregulation in youth.

    Science.gov (United States)

    Versace, Amelia; Sharma, Vinod; Bertocci, Michele A; Bebko, Genna; Iyengar, Satish; Dwojak, Amanda; Bonar, Lisa; Perlman, Susan B; Schirda, Claudiu; Travis, Michael; Gill, Mary Kay; Diwadkar, Vaibhav A; Sunshine, Jeffrey L; Holland, Scott K; Kowatch, Robert A; Birmaher, Boris; Axelson, David; Frazier, Thomas W; Arnold, L Eugene; Fristad, Mary A; Youngstrom, Eric A; Horwitz, Sarah M; Findling, Robert L; Phillips, Mary L

    2017-01-01

    Difficulty regulating positive mood and energy is a feature that cuts across different pediatric psychiatric disorders. Yet, little is known regarding the neural mechanisms underlying different developmental trajectories of positive mood and energy regulation in youth. Recent studies indicate that machine learning techniques can help elucidate the role of neuroimaging measures in classifying individual subjects by specific symptom trajectory. Cortical thickness measures were extracted in sixty-eight anatomical regions covering the entire brain in 115 participants from the Longitudinal Assessment of Manic Symptoms (LAMS) study and 31 healthy comparison youth (12.5 y/o; Male/Female = 15/16; IQ = 104; Right/Left handedness = 24/5). Using a combination of trajectory analyses, surface reconstruction, and machine learning techniques, the present study aims to identify the extent to which measures of cortical thickness can accurately distinguish youth with higher (n = 18) from those with lower (n = 34) trajectories of manic-like behaviors in a large sample of LAMS youth (n = 115; 13.6 y/o; M/F = 68/47; IQ = 100.1; R/L = 108/7). Machine learning analyses revealed that widespread cortical thickening in portions of the left dorsolateral prefrontal cortex, right inferior and middle temporal gyrus, bilateral precuneus, and bilateral paracentral gyri, together with cortical thinning in portions of the right dorsolateral prefrontal cortex, left ventrolateral prefrontal cortex, and right parahippocampal gyrus, accurately differentiated (Area Under Curve = 0.89; p = 0.03) youth with different (higher vs. lower) trajectories of positive mood and energy dysregulation over a period of up to 5 years, as measured by the Parent General Behavior Inventory-10 Item Mania Scale. Our findings suggest that specific patterns of cortical thickness may reflect transdiagnostic neural mechanisms associated with different temporal trajectories of positive mood and energy dysregulation in youth.

  18. Using machine learning and surface reconstruction to accurately differentiate different trajectories of mood and energy dysregulation in youth.

    Directory of Open Access Journals (Sweden)

    Amelia Versace

    Difficulty regulating positive mood and energy is a feature that cuts across different pediatric psychiatric disorders. Yet, little is known regarding the neural mechanisms underlying different developmental trajectories of positive mood and energy regulation in youth. Recent studies indicate that machine learning techniques can help elucidate the role of neuroimaging measures in classifying individual subjects by specific symptom trajectory. Cortical thickness measures were extracted in sixty-eight anatomical regions covering the entire brain in 115 participants from the Longitudinal Assessment of Manic Symptoms (LAMS) study and 31 healthy comparison youth (12.5 y/o; Male/Female = 15/16; IQ = 104; Right/Left handedness = 24/5). Using a combination of trajectory analyses, surface reconstruction, and machine learning techniques, the present study aims to identify the extent to which measures of cortical thickness can accurately distinguish youth with higher (n = 18) from those with lower (n = 34) trajectories of manic-like behaviors in a large sample of LAMS youth (n = 115; 13.6 y/o; M/F = 68/47; IQ = 100.1; R/L = 108/7). Machine learning analyses revealed that widespread cortical thickening in portions of the left dorsolateral prefrontal cortex, right inferior and middle temporal gyrus, bilateral precuneus, and bilateral paracentral gyri, together with cortical thinning in portions of the right dorsolateral prefrontal cortex, left ventrolateral prefrontal cortex, and right parahippocampal gyrus, accurately differentiated (Area Under Curve = 0.89; p = 0.03) youth with different (higher vs. lower) trajectories of positive mood and energy dysregulation over a period of up to 5 years, as measured by the Parent General Behavior Inventory-10 Item Mania Scale. Our findings suggest that specific patterns of cortical thickness may reflect transdiagnostic neural mechanisms associated with different temporal trajectories of positive mood and energy dysregulation in youth.

  19. Accurate masking technology for high-resolution powder blasting

    Science.gov (United States)

    Pawlowski, Anne-Gabrielle; Sayah, Abdeljalil; Gijs, Martin A. M.

    2005-07-01

    We have combined eroding 10 µm diameter Al2O3 particles with a new masking technology to realize the smallest and most accurate structures possible by powder blasting. Our masking technology is based on the sequential combination of two polymers: (i) the brittle epoxy resin SU8, chosen for its photosensitivity, and (ii) elastic, thermocurable poly-dimethylsiloxane, chosen for its large erosion resistance. We have micropatterned various types of structures with a minimum width of 20 µm for test structures with an aspect ratio of 1, and 50 µm for test structures with an aspect ratio of 2.

  20. Stable and high order accurate difference methods for the elastic wave equation in discontinuous media

    KAUST Repository

    Duru, Kenneth; Virta, Kristoffer

    2014-01-01

    The elastic wave equation is considered in media whose material properties are allowed to be discontinuous. The key feature is the highly accurate and provably stable treatment of interfaces where media discontinuities arise. We discretize in space using high-order accurate finite difference schemes that satisfy the summation-by-parts rule.

  1. Accurate reconstruction of the jV-characteristic of organic solar cells from measurements of the external quantum efficiency

    Science.gov (United States)

    Meyer, Toni; Körner, Christian; Vandewal, Koen; Leo, Karl

    2018-04-01

    In two-terminal tandem solar cells, the current density-voltage (jV) characteristic of the individual subcells is typically not directly measurable, but is often required for a rigorous device characterization. In this work, we reconstruct the jV-characteristic of organic solar cells from measurements of the external quantum efficiency under applied bias voltages and illumination. We show that it is necessary to vary the bias irradiance at each voltage and subsequently apply a mathematical correction from the differential to the absolute external quantum efficiency in order to obtain an accurate jV-characteristic. Furthermore, we show that measuring the external quantum efficiency as a function of voltage for a single bias irradiance of 0.36 AM1.5g equivalent suns provides a good approximation of the photocurrent density versus voltage curve. The method is tested on a selection of efficient, common single-junction devices. The obtained conclusions can easily be transferred to multi-junction devices with serially connected subcells.
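
    Once an absolute external quantum efficiency is available at a given voltage, the corresponding photocurrent density follows by weighting it with the spectral photon flux of the illumination. The flat EQE and flat spectrum below are illustrative stand-ins for measured data, not values from the work above.

```python
import numpy as np

Q = 1.602176634e-19   # elementary charge, C
H = 6.62607015e-34    # Planck constant, J s
C = 2.99792458e8      # speed of light, m/s

def photocurrent_density(wl_nm, eqe, spec_irr):
    """j = q * integral of EQE(l) * phi(l) dl, where phi is the spectral
    photon flux of the illumination (spec_irr in W m^-2 nm^-1).
    Returns A/m^2."""
    wl_m = wl_nm * 1e-9
    phi = spec_irr * wl_m / (H * C)            # photons / (s m^2 nm)
    y = eqe * phi
    # Trapezoidal integration over wavelength in nm.
    return Q * np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(wl_nm))

# Toy example: flat 70% EQE over 350-750 nm under a flat 1 W m^-2 nm^-1 spectrum.
wl = np.linspace(350.0, 750.0, 401)
eqe = np.full_like(wl, 0.70)
spec = np.ones_like(wl)
jsc = photocurrent_density(wl, eqe, spec) / 10.0   # A/m^2 -> mA/cm^2
```

    In the reconstruction described above, the EQE at each applied bias voltage would be integrated against the actual AM1.5g spectrum, giving one point of the photocurrent density versus voltage curve per bias.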

  2. Real-time Accurate Surface Reconstruction Pipeline for Vision Guided Planetary Exploration Using Unmanned Ground and Aerial Vehicles

    Science.gov (United States)

    Almeida, Eduardo DeBrito

    2012-01-01

    This report discusses work completed over the summer at the Jet Propulsion Laboratory (JPL), California Institute of Technology. A system is presented to guide ground or aerial unmanned robots using computer vision. The system performs accurate camera calibration, camera pose refinement and surface extraction from images collected by a camera mounted on the vehicle. The application motivating the research is planetary exploration and the vehicles are typically rovers or unmanned aerial vehicles. The information extracted from imagery is used primarily for navigation, as robot location is the same as the camera location and the surfaces represent the terrain that rovers traverse. The processed information must be very accurate and acquired very fast in order to be useful in practice. The main challenge being addressed by this project is to achieve high estimation accuracy and high computation speed simultaneously, a difficult task due to many technical reasons.

  3. High Resolution Reconstruction of the Ionosphere for SAR Applications

    Science.gov (United States)

    Minkwitz, David; Gerzen, Tatjana; Hoque, Mainul

    2014-05-01

    Owing to the ionosphere's strong impact on radio signal propagation, high-resolution and highly accurate reconstructions of the ionosphere's electron density distribution are required for a large number of applications, e.g. to help mitigate ionospheric effects on Synthetic Aperture Radar (SAR) measurements. As a new generation of remote sensing satellites, the TanDEM-L radar mission is planned to improve the understanding and modelling of global environmental processes and ecosystem change. TanDEM-L will operate in L-band with a wavelength of approximately 24 cm, enabling a stronger penetration capability compared to X-band (3 cm) or C-band (5 cm). However, the lower frequency of the TanDEM-L signals also increases the influence of the ionosphere. In particular, small-scale irregularities of the ionosphere might lead to electron density variations within the synthetic aperture length of the TanDEM-L satellite, which in turn might result in blurring and azimuth pixel shifts. Hence the quality of the radar image worsens if the ionospheric effects are not mitigated. The Helmholtz Alliance project "Remote Sensing and Earth System Dynamics" (EDA) aims at preparing the HGF centres and the science community for the utilisation and integration of the TanDEM-L products into the study of the Earth's system. One significant point is to cope with the mentioned ionospheric effects. Different strategies towards achieving this objective are pursued: mitigation of the ionospheric effects based on the radar data itself, mitigation based on external information such as global Total Electron Content (TEC) maps or reconstructions of the ionosphere, and the combination of external information and radar data. In this presentation we describe the geostatistical approach chosen to analyse the behaviour of the ionosphere and to provide a high-resolution 3D electron density reconstruction.
As first step the horizontal structure of

  4. HIPPI: highly accurate protein family classification with ensembles of HMMs

    Directory of Open Access Journals (Sweden)

    Nam-phuong Nguyen

    2016-11-01

    Background Given a new biological sequence, detecting membership in a known family is a basic step in many bioinformatics analyses, with applications to protein structure and function prediction and metagenomic taxon identification and abundance profiling, among others. Yet family identification of sequences that are distantly related to sequences in public databases, or that are fragmentary, remains one of the more difficult analytical problems in bioinformatics. Results We present a new technique for family identification called HIPPI (Hierarchical Profile Hidden Markov Models for Protein family Identification). HIPPI uses a novel technique to represent a multiple sequence alignment for a given protein family or superfamily by an ensemble of profile hidden Markov models computed using HMMER. An evaluation of HIPPI on the Pfam database shows that HIPPI has better overall precision and recall than blastp, HMMER, and pipelines based on HHsearch, and maintains good accuracy even for fragmentary query sequences and for protein families with low average pairwise sequence identity, both conditions where other methods degrade in accuracy. Conclusion HIPPI provides accurate protein family identification and is robust to difficult model conditions. Our results, combined with observations from previous studies, show that ensembles of profile hidden Markov models can better represent multiple sequence alignments than a single profile hidden Markov model, and thus can improve downstream analyses for various bioinformatic tasks. Further research is needed to determine the best practices for building the ensemble of profile hidden Markov models. HIPPI is available on GitHub at https://github.com/smirarab/sepp .

  5. The quadrant method measuring four points is as reliable and accurate as the quadrant method in the evaluation after anatomical double-bundle ACL reconstruction.

    Science.gov (United States)

    Mochizuki, Yuta; Kaneko, Takao; Kawahara, Keisuke; Toyoda, Shinya; Kono, Norihiko; Hada, Masaru; Ikegami, Hiroyasu; Musha, Yoshiro

    2017-11-20

    The quadrant method was described by Bernard et al. and has been widely used for postoperative evaluation of anterior cruciate ligament (ACL) reconstruction. The purpose of this research is to further develop the quadrant method by measuring four points, which we named the four-point quadrant method, and to compare it with the quadrant method. Three-dimensional computed tomography (3D-CT) analyses were performed in 25 patients who underwent double-bundle ACL reconstruction using the outside-in technique. The four points in this study's quadrant method were defined as point 1 (highest), point 2 (deepest), point 3 (lowest), and point 4 (shallowest) of the femoral tunnel position. The depth and height values at each point were measured. The antero-medial (AM) tunnel is (depth1, height2) and the postero-lateral (PL) tunnel is (depth3, height4) in this four-point quadrant method. The 3D-CT images were evaluated independently by 2 orthopaedic surgeons. A second measurement was performed by both observers after a 4-week interval. Intra- and inter-observer reliability was calculated by means of the intra-class correlation coefficient (ICC). The accuracy of the method was also evaluated against the quadrant method. Intra-observer reliability was almost perfect for both the AM and PL tunnel (ICC > 0.81). Inter-observer reliability of the AM tunnel was substantial (ICC > 0.61) and that of the PL tunnel was almost perfect (ICC > 0.81). The AM tunnel position was 0.13% deeper and 0.58% higher, and the PL tunnel position was 0.01% shallower and 0.13% lower, compared to the quadrant method. The four-point quadrant method was found to have high intra- and inter-observer reliability and accuracy. This method can evaluate the tunnel position regardless of the shape and morphology of the bone tunnel aperture, and provides measurements that can be compared across various reconstruction methods. The four-point quadrant method of this study is considered to have clinical relevance in that it is a detailed and accurate tool for

  6. A highly accurate benchmark for reactor point kinetics with feedback

    International Nuclear Information System (INIS)

    Ganapol, B. D.; Picca, P.

    2010-10-01

    This work applies the concept of convergence acceleration, also known as extrapolation, to find the solution of the reactor kinetics equations describing nuclear reactor transients. The method features simplicity in that an approximate finite difference formulation is constructed and converged to high accuracy from knowledge of how the error term behaves. Through Romberg extrapolation, we demonstrate its high accuracy for a variety of imposed reactivity insertions found in the literature as well as nonlinear temperature and fission product feedback. A unique feature of the proposed method, called the RKE/R(omberg) algorithm, is interval bisection to ensure high accuracy. (Author)
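
    The convergence-acceleration idea, converging a low-order finite-difference solution using the known behavior of its error term, can be illustrated on a generic test problem. This Euler/Richardson demo is an assumption-laden stand-in, not the authors' RKE/R algorithm:

```python
import math

# Illustrative sketch of Romberg (Richardson) extrapolation: accelerate a
# first-order finite-difference solution of dy/dt = -lam*y toward the exact
# value.  Generic demo only; the point-kinetics equations are more involved.

def euler_endpoint(lam, y0, t_end, n_steps):
    """Explicit Euler solution of y' = -lam*y, returning y(t_end)."""
    h = t_end / n_steps
    y = y0
    for _ in range(n_steps):
        y += h * (-lam * y)
    return y

def romberg_accelerate(lam, y0, t_end, levels=4):
    """Romberg tableau built from Euler runs with halved step sizes.

    Euler's global error behaves like c1*h + c2*h^2 + ..., so each
    extrapolation column removes one more power of h:
        T[i][j] = (2**j * T[i][j-1] - T[i-1][j-1]) / (2**j - 1)
    """
    T = [[euler_endpoint(lam, y0, t_end, 8 * 2**i)] for i in range(levels)]
    for i in range(1, levels):
        for j in range(1, i + 1):
            T[i].append((2**j * T[i][j - 1] - T[i - 1][j - 1]) / (2**j - 1))
    return T[-1][-1]

exact = math.exp(-2.0)                          # y(1) for lam=2, y0=1
plain = euler_endpoint(2.0, 1.0, 1.0, 64)       # finest plain Euler run
accel = romberg_accelerate(2.0, 1.0, 1.0, 4)    # uses n = 8, 16, 32, 64
# The extrapolated value is orders of magnitude closer to exp(-2) than the
# plain Euler result computed from the same finest grid.
```
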

  7. Laser interference lithography with highly accurate interferometric alignment

    NARCIS (Netherlands)

    van Soest, Frank J.; van Wolferen, Hendricus A.G.M.; Hoekstra, Hugo; de Ridder, R.M.; Worhoff, Kerstin; Lambeck, Paul

    It is shown experimentally that in laser interference lithography, by using a reference grating, respective grating layers can be positioned with high relative accuracy. An angular resolution of 0.001 degree and a lateral resolution of a few nanometers have been demonstrated.

  8. High spatial resolution CT image reconstruction using parallel computing

    International Nuclear Information System (INIS)

    Yin Yin; Liu Li; Sun Gongxing

    2003-01-01

    Using a PC cluster system with 16 dual-CPU nodes, we accelerate the FBP and OR-OSEM reconstruction of high spatial resolution images (2048 x 2048). Based on the number of projections, we rewrite the reconstruction algorithms in parallel form and dispatch the tasks to each CPU. With parallel computing, the speedup factor is roughly equal to the number of CPUs, reaching about 25 times when 25 CPUs are used. This technique is very suitable for real-time high spatial resolution CT image reconstruction. (authors)
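
    The reported near-linear speedup follows from the linearity of back-projection: each CPU processes a subset of projection angles and the partial images are simply summed. A toy sketch using an unfiltered back-projection and a thread pool in place of cluster nodes (all values illustrative):

```python
import numpy as np
from multiprocessing.pool import ThreadPool

# Hypothetical sketch of angle-partitioned back-projection.  Because
# back-projection is linear in the projections, each worker can process a
# subset of angles and the partial images are summed.  An unfiltered,
# nearest-neighbor back-projection is used for brevity.

def backproject(sinogram_chunk, angles_chunk, size):
    """Back-project a subset of projections onto a size x size grid."""
    img = np.zeros((size, size))
    c = (size - 1) / 2.0
    y, x = np.mgrid[0:size, 0:size]
    for proj, theta in zip(sinogram_chunk, angles_chunk):
        # detector coordinate of each pixel for this view
        t = (x - c) * np.cos(theta) + (y - c) * np.sin(theta) + c
        idx = np.clip(np.round(t).astype(int), 0, size - 1)
        img += proj[idx]
    return img

size = 64
angles = np.linspace(0, np.pi, 90, endpoint=False)
rng = np.random.default_rng(0)
sinogram = rng.random((len(angles), size))      # stand-in projection data

# serial reference reconstruction
serial = backproject(sinogram, angles, size)

# dispatch 4 angle chunks to workers and sum the partial images
chunks = [(sinogram[i::4], angles[i::4], size) for i in range(4)]
with ThreadPool(4) as pool:
    parts = pool.starmap(backproject, chunks)
parallel = sum(parts)
# 'parallel' matches 'serial' up to floating-point summation order
```
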

  9. High resolution x-ray CMT: Reconstruction methods

    Energy Technology Data Exchange (ETDEWEB)

    Brown, J.K.

    1997-02-01

    This paper qualitatively discusses the primary characteristics of methods for reconstructing tomographic images from a set of projections. These reconstruction methods can be categorized as either "analytic" or "iterative" techniques. Analytic algorithms are derived from the formal inversion of equations describing the imaging process, while iterative algorithms incorporate a model of the imaging process and provide a mechanism to iteratively improve image estimates. Analytic reconstruction algorithms are typically computationally more efficient than iterative methods; however, analytic algorithms are available for a relatively limited set of imaging geometries and situations. Thus, the framework of iterative reconstruction methods is better suited for high accuracy, tomographic reconstruction codes.

  10. Highly Accurate Timestamping for Ethernet-Based Clock Synchronization

    OpenAIRE

    Loschmidt, Patrick; Exel, Reinhard; Gaderer, Georg

    2012-01-01

    Synchronizing the clocks of networked devices in order to coordinate data acquisition in time is of great importance, and not only for test and measurement. In this context, the quest for high accuracy in Ethernet-based clock synchronization has been significantly supported by enhancements to the Network Time Protocol (NTP) and the introduction of the Precision Time Protocol (PTP). The latter was even applied to instrumentation and measurement applications through the introduction of LXI....

  11. Highly accurate photogrammetric measurements of the Planck reflectors

    Science.gov (United States)

    Amiri Parian, Jafar; Gruen, Armin; Cozzani, Alessandro

    2017-11-01

    The Planck mission of the European Space Agency (ESA) is designed to image the anisotropies of the Cosmic Background Radiation Field over the whole sky. To achieve this aim, sophisticated reflectors are used as part of the Planck telescope receiving system. The system consists of secondary and primary reflectors which are sections of two different ellipsoids of revolution with mean diameters of 1 and 1.6 meters. Deformations of the reflectors, which influence the optical parameters and the gain of the received signals, are investigated in vacuum and at very low temperatures. For this investigation, among the various high accuracy measurement techniques, photogrammetry was selected. With respect to the photogrammetric measurements, special considerations had to be taken into account in the design steps, measurement arrangement and data processing to achieve very high accuracies. The determinability of additional parameters of the camera under the given network configuration, datum definition, reliability and precision issues, as well as workspace limits and propagating errors from different sources, are considered. We designed an optimal photogrammetric network by heuristic simulation for the flight model of the primary and the secondary reflectors with relative precisions better than 1:1,000,000 and 1:400,000 to achieve the requested accuracies. A least squares best-fit ellipsoid method was developed to determine the optical parameters of the reflectors. In this paper we report on the procedures, the network design and the results of real measurements.

  12. Highly accurate volume holographic correlator with 4000 parallel correlation channels

    Science.gov (United States)

    Ni, Kai; Qu, Zongyao; Cao, Liangcai; Su, Ping; He, Qingsheng; Jin, Guofan

    2008-03-01

    A volume holographic correlator (VHC) allows the two-dimensional inner products between the input image and each stored image to be calculated simultaneously. We have recently experimentally implemented 4000 parallel correlation channels in a VHC with better than 98% output accuracy in a single location in a crystal. Speckle modulation is used to suppress the sidelobes of the correlation patterns, allowing more correlation spots to be contained in the output plane. A modified exposure schedule is designed to ensure that the hologram in each channel has unity diffraction efficiency. In this schedule, a restricted coefficient was introduced into the original exposure schedule to address the fact that the sensitivity and time constant of the crystal change over time in high-capacity storage. An interleaving method is proposed to improve the output accuracy. By unifying the distribution of the input and stored image patterns without changing the inner products between them, this method eliminates the impact of correlation pattern variety on the calculated inner product values. Moreover, this method reduces the maximum correlation spot size, which decreases the required minimum safe clearance between neighboring spots in the output plane, allowing more spots to be detected in parallel without crosstalk. The experimental results are given and analyzed.

  13. Fast and accurate denoising method applied to very high resolution optical remote sensing images

    Science.gov (United States)

    Masse, Antoine; Lefèvre, Sébastien; Binet, Renaud; Artigues, Stéphanie; Lassalle, Pierre; Blanchet, Gwendoline; Baillarin, Simon

    2017-10-01

    Restoration of Very High Resolution (VHR) optical Remote Sensing Images (RSI) is critical and leads to the problem of removing instrumental noise while keeping the integrity of relevant information. Improving denoising in an image processing chain implies increasing image quality and improving the performance of all subsequent tasks operated by experts (photo-interpretation, cartography, etc.) or by algorithms (land cover mapping, change detection, 3D reconstruction, etc.). In a context of large industrial VHR image production, the selected denoising method should optimize accuracy and robustness, preserve relevant information and saliency, and be fast given the huge amount of data acquired and/or archived. Very recent research in image processing has led to a fast and accurate algorithm called Non-Local Bayes (NLB) that we propose to adapt and optimize for VHR RSIs. This method is well suited for mass production thanks to its best trade-off between accuracy and computational complexity compared to other state-of-the-art methods. NLB is based on a simple principle: similar structures in an image have similar noise distributions and thus can be denoised with the same noise estimation. In this paper, we describe algorithm operations and performance in detail, and analyze parameter sensitivities on various typical real areas observed in VHR RSIs.
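
    The stated NLB principle, that similar patches share a noise model and can be denoised jointly, can be sketched in a heavily simplified 1-D form. The patch size, group size, and Wiener-type shrinkage below are illustrative assumptions, not the authors' VHR implementation:

```python
import numpy as np

# Heavily simplified 1-D sketch of the Non-Local Bayes idea: patches with
# similar content are assumed to share a Gaussian model, so a group of
# similar noisy patches can be denoised jointly with a Wiener-type estimate.

def nlb_denoise_1d(y, sigma, patch=8, n_similar=16):
    n = len(y)
    P = np.stack([y[i:i + patch] for i in range(n - patch + 1)])
    out = np.zeros(n)
    weight = np.zeros(n)
    for i in range(len(P)):
        # group the patches most similar to the reference patch
        d = np.sum((P - P[i]) ** 2, axis=1)
        group = P[np.argsort(d)[:n_similar]]
        mean = group.mean(axis=0)
        # empirical covariance of the noisy group; shrink its spectrum by
        # the noise variance (first-step NLB-style Wiener filter)
        vals, vecs = np.linalg.eigh(np.cov(group, rowvar=False))
        shrink = np.maximum(vals - sigma**2, 0.0) / np.maximum(vals, 1e-12)
        est = mean + vecs @ (shrink * (vecs.T @ (P[i] - mean)))
        out[i:i + patch] += est          # aggregate overlapping estimates
        weight[i:i + patch] += 1
    return out / weight

# toy signal: a sine wave with additive Gaussian noise
rng = np.random.default_rng(0)
t = np.linspace(0, 4 * np.pi, 256)
clean = np.sin(t)
noisy = clean + 0.3 * rng.normal(size=t.size)
denoised = nlb_denoise_1d(noisy, sigma=0.3)
```
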

  14. High-speed photography of dynamic photoelastic experiment with a highly accurate blasting machine

    Science.gov (United States)

    Katsuyama, Kunihisa; Ogata, Yuji; Wada, Yuji; Hashizume, K.

    1995-05-01

    A highly accurate blasting machine with timing control to 1 microsecond was developed. First, the explosion of a bridge wire in an electric detonator was observed, and then the detonations of caps were observed with a high-speed camera. It is well known that a compressive stress wave reflects at the free face, propagates backward as a tensile stress wave, and causes cracks to grow when the tensile stress reaches the dynamic tensile strength. The behavior of these cracks is discussed based on dynamic photoelastic high-speed photography and three-dimensional dynamic stress analysis.

  15. High quality digital holographic reconstruction on analog film

    Science.gov (United States)

    Nelsen, B.; Hartmann, P.

    2017-05-01

    High-quality real-time digital holographic reconstruction, i.e. at 30 Hz frame rates, has been at the forefront of research and has been hailed as the holy grail of display systems. While these efforts have produced a fascinating array of computer algorithms and technology, many applications of reconstructing high-quality digital holograms do not require such high frame rates. In fact, applications such as 3D holographic lithography even require a stationary mask. Typical devices used for digital hologram reconstruction are based on spatial-light-modulator technology, and this technology is great for reconstructing arbitrary holograms on the fly; however, it lacks the high spatial resolution achievable by its analog counterpart, holographic film. Analog holographic film is therefore the method of choice for reconstructing high-quality static holograms. The challenge lies in taking a static, high-quality digitally calculated hologram and effectively writing it to holographic film. We have developed a theoretical system based on a tunable phase plate, an intensity-adjustable high-coherence laser and a slip-stick based piezo rotation stage to effectively produce a digitally calculated hologram on analog film. The configuration reproduces the individual components, both the amplitude and phase, of the hologram in the Fourier domain. These Fourier components are then individually written on the holographic film after interfering with a reference beam. The system is analogous to writing angularly multiplexed plane waves with individual component phase control.

  16. Ultra fast, accurate PET image reconstruction for the Siemens hybrid MR/BrainPET scanner using raw LOR data

    International Nuclear Information System (INIS)

    Scheins, Juergen; Lerche, Christoph; Shah, Jon

    2015-01-01

    Fast PET image reconstruction algorithms usually use a Line-of-Response (LOR) preprocessing step where the detected raw LOR data are interpolated either to evenly spaced sinogram projection bins or, alternatively, to a generic projection space as proposed, for example, by the PET Reconstruction Software Toolkit (PRESTO) [1]. In this way, speed-optimised, versatile geometrical projectors can be implemented for iterative image reconstruction independent of the underlying scanner geometry. However, all strategies of projection data interpolation unavoidably lead to a loss of original information and result in some degradation of image quality. Direct LOR reconstructions overcome this evident drawback at the cost of a massively increased computational burden. Therefore, computational optimisation techniques are essential to make such demanding approaches attractive and economical for widespread usage in the clinical environment. In this paper, we demonstrate for the Siemens hybrid MR/BrainPET with 240 million physical LORs that a very fast quantitative direct LOR reconstruction can be realized using a modified version of PRESTO. PRESTO is now also capable of directly using sets of symmetric physical LORs instead of interpolating LORs to a generic projection space. Exploiting basic scanner symmetries together with the techniques of Single Instruction Multiple Data (SIMD) and Simultaneous Multi-Threading (SMT) results in an overall calculation time of 2-3 minutes per frame on a single multi-core machine, i.e. neither requiring a cluster of multiple machines nor Graphics Processing Units (GPUs).

  18. Accurate D-bar Reconstructions of Conductivity Images Based on a Method of Moment with Sinc Basis.

    Science.gov (United States)

    Abbasi, Mahdi

    2014-01-01

    The planar D-bar integral equation is one of the inverse scattering solution methods for complex problems including inverse conductivity, considered in applications such as Electrical Impedance Tomography (EIT). Recently, two different methodologies have been considered for the numerical solution of the D-bar integral equation, namely product integrals and multigrid. The first involves a high computational burden and the second suffers from a low convergence rate (CR). In this paper, a novel high-speed moment method using the sinc basis is introduced to solve the two-dimensional D-bar integral equation. In this method, all functions within the D-bar integral equation are first expanded using the sinc basis functions. Then, the orthogonal properties of their products eliminate the integral operator of the D-bar equation and yield a discrete convolution equation. That is, the new moment method leads to the equation's solution without direct computation of the D-bar integral. The resulting discrete convolution equation may be adapted to a suitable structure that can be solved using the fast Fourier transform. This allows us to reduce the order of computational complexity to as low as O(N^2 log N). Simulation results on solving D-bar equations arising in the EIT problem show that the proposed method is accurate with an ultra-linear CR.
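
    The discrete convolution structure is what enables the FFT speed-up: a periodic convolution equation is diagonalized by the FFT and solved frequency-by-frequency, which is the mechanism behind the quoted O(N^2 log N) cost for an N x N grid. A minimal sketch of a generic convolution equation x - k*x = b, not the D-bar equations themselves:

```python
import numpy as np

# Generic illustration (not the D-bar equations): a discrete convolution
# equation  x - (k * x) = b  with periodic convolution '*' is diagonalized
# by the 2-D FFT, so it is solved pointwise in the frequency domain instead
# of forming and inverting a dense system.

def solve_conv_equation(k, b):
    """Solve x - (k circularly convolved with x) = b via the FFT."""
    K = np.fft.fft2(k)
    return np.real(np.fft.ifft2(np.fft.fft2(b) / (1.0 - K)))

rng = np.random.default_rng(1)
n = 32
k = 0.1 * rng.random((n, n)) / n**2    # small kernel, so 1 - K stays != 0
b = rng.random((n, n))
x = solve_conv_equation(k, b)

# check: reconstruct b from x using circular convolution
conv = np.real(np.fft.ifft2(np.fft.fft2(k) * np.fft.fft2(x)))
residual = np.max(np.abs(x - conv - b))   # ~machine precision
```
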

  19. Statistical dynamic image reconstruction in state-of-the-art high-resolution PET

    International Nuclear Information System (INIS)

    Rahmim, Arman; Cheng, J-C; Blinder, Stephan; Camborde, Maurie-Laure; Sossi, Vesna

    2005-01-01

    Modern high-resolution PET is now more than ever in need of scrutiny into the nature and limitations of the imaging modality itself as well as image reconstruction techniques. In this work, we have reviewed, analysed and addressed the following three considerations within the particular context of state-of-the-art dynamic PET imaging: (i) the typical average number of events per line-of-response (LOR) is now (much) less than unity; (ii) due to the physical and biological decay of the activity distribution, one requires robust and efficient reconstruction algorithms applicable to a wide range of statistics; and (iii) the computational demands in dynamic imaging are much greater (i.e., more frames to be stored and reconstructed). Within the framework of statistical image reconstruction, we have argued theoretically and shown experimentally that the sinogram non-negativity constraint (when using the delayed-coincidence and/or scatter-subtraction techniques) is especially expected to result in an overestimation bias. Subsequently, two schemes are considered: (a) subtraction techniques in which an image non-negativity constraint has been imposed and (b) implementation of random and scatter estimates inside the reconstruction algorithms, thus enabling direct processing of Poisson-distributed prompts. Both techniques are able to remove the aforementioned bias, while the latter, being better conditioned theoretically, is able to exhibit superior noise characteristics. We have also elaborated upon and verified the applicability of the accelerated list-mode image reconstruction method as a powerful solution for accurate, robust and efficient dynamic reconstructions of high-resolution data (as well as a number of additional benefits in the context of state-of-the-art PET)
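
    Scheme (b), moving the random and scatter estimates into the forward model so that Poisson-distributed prompts are processed directly, can be sketched with a toy MLEM loop. The tiny random system matrix and background level are illustrative assumptions, not a real scanner model:

```python
import numpy as np

# Toy sketch: MLEM with the randoms/scatter estimate included as an
# additive background in the forward model, so prompts are fit directly
# and the image stays non-negative without any sinogram subtraction.

rng = np.random.default_rng(3)
n_lors, n_vox = 60, 12
A = rng.random((n_lors, n_vox))            # made-up system matrix
x_true = 5.0 * rng.random(n_vox)
background = 0.5 * np.ones(n_lors)         # randoms + scatter estimate
prompts = rng.poisson(A @ x_true + background).astype(float)

def neg_log_likelihood(x):
    """Poisson negative log-likelihood (up to a constant)."""
    m = A @ x + background
    return float(np.sum(m - prompts * np.log(m)))

x = np.ones(n_vox)
nll_start = neg_log_likelihood(x)
sens = A.T @ np.ones(n_lors)               # sensitivity image
for _ in range(200):
    expected = A @ x + background          # background inside the model
    x *= (A.T @ (prompts / expected)) / sens
nll_end = neg_log_likelihood(x)
# x remains non-negative by construction, and each EM update is
# guaranteed not to decrease the Poisson likelihood.
```
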

  20. Coastal barrier stratigraphy for Holocene high-resolution sea-level reconstruction.

    Science.gov (United States)

    Costas, Susana; Ferreira, Óscar; Plomaritis, Theocharis A; Leorri, Eduardo

    2016-12-08

    The uncertainties surrounding present and future sea-level rise have revived the debate around sea-level changes through the deglaciation and mid- to late Holocene, from which arises a need for high-quality reconstructions of regional sea level. Here, we explore the stratigraphy of a sandy barrier to identify the best sea-level indicators and provide a new sea-level reconstruction for the central Portuguese coast over the past 6.5 ka. The selected indicators represent morphological features extracted from coastal barrier stratigraphy, beach berm and dune-beach contact. These features were mapped from high-resolution ground penetrating radar images of the subsurface and transformed into sea-level indicators through comparison with modern analogs and a chronology based on optically stimulated luminescence ages. Our reconstructions document a continuous but slow sea-level rise after 6.5 ka with an accumulated change in elevation of about 2 m. In the context of SW Europe, our results show good agreement with previous studies, including the Tagus isostatic model, with minor discrepancies that demand further improvement of regional models. This work reinforces the potential of barrier indicators to accurately reconstruct high-resolution mid- to late Holocene sea-level changes through simple approaches.

  1. DEEP WIDEBAND SINGLE POINTINGS AND MOSAICS IN RADIO INTERFEROMETRY: HOW ACCURATELY DO WE RECONSTRUCT INTENSITIES AND SPECTRAL INDICES OF FAINT SOURCES?

    Energy Technology Data Exchange (ETDEWEB)

    Rau, U.; Bhatnagar, S.; Owen, F. N., E-mail: rurvashi@nrao.edu [National Radio Astronomy Observatory, Socorro, NM-87801 (United States)

    2016-11-01

    Many deep wideband wide-field radio interferometric surveys are being designed to accurately measure intensities, spectral indices, and polarization properties of faint source populations. In this paper, we compare various wideband imaging methods to evaluate the accuracy with which intensities and spectral indices of sources close to the confusion limit can be reconstructed. We simulated a wideband single-pointing (C-array, L-Band (1–2 GHz)) and 46-pointing mosaic (D-array, C-Band (4–8 GHz)) JVLA observation using a realistic brightness distribution ranging from 1 μJy to 100 mJy and time-, frequency-, polarization-, and direction-dependent instrumental effects. The main results from these comparisons are (a) errors in the reconstructed intensities and spectral indices are larger for weaker sources even in the absence of simulated noise, (b) errors are systematically lower for joint reconstruction methods (such as Multi-Term Multi-Frequency-Synthesis (MT-MFS)) along with A-Projection for accurate primary beam correction, and (c) use of MT-MFS for image reconstruction eliminates clean bias (which is present otherwise). Auxiliary tests include solutions for deficiencies of data partitioning methods (e.g., the use of masks to remove clean bias and hybrid methods to remove sidelobes from sources left un-deconvolved), the effect of sources not at pixel centers, and the consequences of various other numerical approximations within software implementations. This paper also demonstrates the level of detail at which such simulations must be done in order to reflect reality, enable one to systematically identify specific reasons for every trend that is observed, and estimate scientifically defensible imaging performance metrics and the associated computational complexity of the algorithms/analysis procedures.

  2. Understanding reconstructed Dante spectra using high resolution spectroscopy

    Energy Technology Data Exchange (ETDEWEB)

    May, M. J., E-mail: may13@llnl.gov; Widmann, K.; Kemp, G. E.; Thorn, D.; Colvin, J. D.; Schneider, M. B.; Moore, A.; Blue, B. E. [L-170 Lawrence Livermore National Laboratory, 7000 East Ave., Livermore, California 94551 (United States); Weaver, J. [Naval Research Laboratory, 4555 Overlook Ave. SW, Washington, DC 20375 (United States)

    2016-11-15

    The Dante is an 18 channel filtered diode array used at the National Ignition Facility (NIF) to measure the spectrally and temporally resolved radiation flux between 50 eV and 20 keV from various targets. The absolute flux is determined from the radiometric calibration of the x-ray diodes, filters, and mirrors and a reconstruction algorithm applied to the recorded voltages from each channel. The reconstructed spectra are very low resolution with features consistent with the instrument response and are not necessarily consistent with the spectral emission features from the plasma. Errors may exist between the reconstructed spectra and the actual emission features due to assumptions in the algorithm. Recently, a high resolution convex crystal spectrometer, VIRGIL, has been installed at NIF with the same line of sight as the Dante. Spectra from L-shell Ag and Xe have been recorded by both VIRGIL and Dante. Comparisons of these two spectroscopic measurements yield insights into the accuracy of the Dante reconstructions.

  3. High resolution depth reconstruction from monocular images and sparse point clouds using deep convolutional neural network

    Science.gov (United States)

    Dimitrievski, Martin; Goossens, Bart; Veelaert, Peter; Philips, Wilfried

    2017-09-01

    Understanding the 3D structure of the environment is advantageous for many tasks in the field of robotics and autonomous vehicles. From the robot's point of view, 3D perception is often formulated as a depth image reconstruction problem. In the literature, dense depth images are often recovered deterministically from stereo image disparities. Other systems use an expensive LiDAR sensor to produce accurate, but semi-sparse depth images. With the advent of deep learning there have also been attempts to estimate depth by only using monocular images. In this paper we combine the best of the two worlds, focusing on a combination of monocular images and low cost LiDAR point clouds. We explore the idea that very sparse depth information accurately captures the global scene structure while variations in image patches can be used to reconstruct local depth to a high resolution. The main contribution of this paper is a supervised learning depth reconstruction system based on a deep convolutional neural network. The network is trained on RGB image patches reinforced with sparse depth information and the output is a depth estimate for each pixel. Using image and point cloud data from the KITTI vision dataset we are able to learn a correspondence between local RGB information and local depth, while at the same time preserving the global scene structure. Our results are evaluated on sequences from the KITTI dataset and our own recordings using a low cost camera and LiDAR setup.

  4. Combining Public Domain and Professional Panoramic Imagery for the Accurate and Dense 3d Reconstruction of the Destroyed Bel Temple in Palmyra

    Science.gov (United States)

    Wahbeh, W.; Nebiker, S.; Fangi, G.

    2016-06-01

    This paper exploits the potential of dense multi-image 3D reconstruction of destroyed cultural heritage monuments by either using public domain touristic imagery only or by combining the public domain imagery with professional panoramic imagery. The focus of our work is placed on the reconstruction of the temple of Bel, one of the Syrian heritage monuments, which was destroyed in September 2015 by the so-called "Islamic State". The great temple of Bel is considered one of the most important religious buildings of the 1st century AD in the East, with a unique design. The investigations and the reconstruction were carried out using two types of imagery. The first are freely available generic touristic photos collected from the web. The second are panoramic images captured in 2010 for documenting those monuments. In the paper we present a 3D reconstruction workflow for both types of imagery using state-of-the-art dense image matching software, addressing the non-trivial challenges of combining uncalibrated public domain imagery with panoramic images with very wide baselines. We subsequently investigate the aspects of accuracy and completeness obtainable from the public domain touristic images alone and from the combination with spherical panoramas. We furthermore discuss the challenges of co-registering the weakly connected 3D point cloud fragments resulting from the limited coverage of the touristic photos. We then describe an approach using spherical photogrammetry as a virtual topographic survey allowing the co-registration of a detailed and accurate single 3D model of the temple interior and exterior.

  5. Efficient methodologies for system matrix modelling in iterative image reconstruction for rotating high-resolution PET

    Energy Technology Data Exchange (ETDEWEB)

    Ortuno, J E; Kontaxakis, G; Rubio, J L; Santos, A [Departamento de Ingenieria Electronica (DIE), Universidad Politecnica de Madrid, Ciudad Universitaria s/n, 28040 Madrid (Spain); Guerra, P [Networking Research Center on Bioengineering, Biomaterials and Nanomedicine (CIBER-BBN), Madrid (Spain)], E-mail: juanen@die.upm.es

    2010-04-07

    A fully 3D iterative image reconstruction algorithm has been developed for high-resolution PET cameras composed of pixelated scintillator crystal arrays and rotating planar detectors, based on the ordered subsets approach. The associated system matrix is precalculated with Monte Carlo methods that incorporate physical effects not included in analytical models, such as positron range effects and interaction of the incident gammas with the scintillator material. Custom Monte Carlo methodologies have been developed and optimized for modelling of system matrices for fast iterative image reconstruction adapted to specific scanner geometries, without redundant calculations. According to the methodology proposed here, only one-eighth of the voxels within two central transaxial slices need to be modelled in detail. The rest of the system matrix elements can be obtained with the aid of axial symmetries and redundancies, as well as in-plane symmetries within transaxial slices. Sparse matrix techniques for the non-zero system matrix elements are employed, allowing for fast execution of the image reconstruction process. This 3D image reconstruction scheme has been compared in terms of image quality to a 2D fast implementation of the OSEM algorithm combined with Fourier rebinning approaches. This work confirms the superiority of fully 3D OSEM in terms of spatial resolution, contrast recovery and noise reduction as compared to conventional 2D approaches based on rebinning schemes. At the same time it demonstrates that fully 3D methodologies can be efficiently applied to the image reconstruction problem for high-resolution rotational PET cameras by applying accurate pre-calculated system models and taking advantage of the system's symmetries.
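
The symmetry bookkeeping (computing only one-eighth of the in-plane voxels in detail and filling the rest by reflections) can be illustrated with a toy 2D example; this sketch assumes a kernel invariant under the eight square symmetries, the analogue of the in-plane symmetries exploited for the system matrix:

```python
import numpy as np

def full_from_octant(n, f):
    """Fill a centred n-by-n grid (n odd) with f(x, y), evaluating f only on
    the octant 0 <= y <= x and copying the value to the 8 symmetric cells."""
    c = n // 2
    g = np.empty((n, n))
    for i in range(c, n):                     # x = i - c >= 0
        for j in range(c, i + 1):             # 0 <= y = j - c <= x
            x, y = i - c, j - c
            v = f(x, y)
            for sx, sy in ((x, y), (y, x)):   # mirror across the diagonal
                for px in (sx, -sx):          # mirror across the y axis
                    for py in (sy, -sy):      # mirror across the x axis
                        g[c + px, c + py] = v
    return g

# Any kernel invariant under the square symmetries works, e.g. a Gaussian.
gauss = lambda x, y: np.exp(-(x * x + y * y) / 8.0)
g = full_from_octant(9, gauss)
direct = np.array([[gauss(i - 4, j - 4) for j in range(9)] for i in range(9)])
```

Only 15 of the 81 cells are computed directly here; the rest are copies, which is the same kind of saving the precalculated system matrix obtains from its axial and in-plane symmetries.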

  6. Accurate marker-free alignment with simultaneous geometry determination and reconstruction of tilt series in electron tomography

    International Nuclear Information System (INIS)

    Winkler, Hanspeter; Taylor, Kenneth A.

    2006-01-01

    An image alignment method for electron tomography is presented which is based on cross-correlation techniques and which includes a simultaneous refinement of the tilt geometry. A coarsely aligned tilt series is iteratively refined with a procedure consisting of two steps for each cycle: area matching and subsequent geometry correction. The first step, area matching, brings into register equivalent specimen regions in all images of the tilt series. It determines four parameters of a linear two-dimensional transformation, not just translation and rotation as is done during the preceding coarse alignment with conventional methods. The refinement procedure also differs from earlier methods in that the alignment references are now computed from already aligned images by reprojection of a backprojected volume. The second step, geometry correction, refines the initially inaccurate estimates of the geometrical parameters, including the direction of the tilt axis, a tilt angle offset, and the inclination of the specimen with respect to the support film or specimen holder. The correction values serve as an indicator for the progress of the refinement. For each new iteration, the correction values are used to compute an updated set of geometry parameters by a least squares fit. Model calculations show that it is essential to refine the geometrical parameters as well as the accurate alignment of the images to obtain a faithful map of the original structure
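
The cross-correlation step at the heart of such alignment schemes can be illustrated with a standard phase-correlation routine; the NumPy sketch below (an illustration of the generic technique, not the authors' implementation) recovers the integer translation between two images:

```python
import numpy as np

def phase_correlation_shift(ref, img):
    """Estimate the circular shift (dy, dx) such that img == np.roll(ref, (dy, dx))."""
    F = np.fft.fft2(img) * np.conj(np.fft.fft2(ref))
    F /= np.abs(F) + 1e-12                      # keep only the phase difference
    corr = np.fft.ifft2(F).real                 # delta-like peak at the shift
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peaks beyond the half-size to negative shifts.
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return dy, dx
```

Subpixel refinement, the four-parameter linear transformation of area matching, and the least-squares geometry correction described above are of course omitted; in the paper the references are reprojections of a backprojected volume rather than raw images.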

  7. Automatic and accurate reconstruction of distal humerus contours through B-Spline fitting based on control polygon deformation.

    Science.gov (United States)

    Mostafavi, Kamal; Tutunea-Fatan, O Remus; Bordatchev, Evgueni V; Johnson, James A

    2014-12-01

    The rapid adoption of computer-assisted technologies in modern orthopedic surgery calls for computationally efficient techniques built on the broad base of readily available computer-aided engineering tools. However, a common challenge at the current developmental stage remains the lack of reliable frameworks allowing a fast and precise conversion of the anatomical information acquired through computed tomography into a format acceptable to computer-aided engineering software. To address this, the present study proposes an integrated and automatic framework capable of extracting and then postprocessing the original imaging data into a common planar, closed B-Spline representation. The core of the developed platform relies on the approximation of the discrete computed tomography data by means of an original two-step B-Spline fitting technique based on successive deformations of the control polygon. In addition to its rapidity and robustness, the developed fitting technique was validated to produce accurate representations that do not deviate by more than 0.2 mm with respect to alternative representations of the bone geometry obtained through different contact-based data acquisition or data processing methods. © IMechE 2014.

  8. Direct Calculation of Permeability by High-Accurate Finite Difference and Numerical Integration Methods

    KAUST Repository

    Wang, Yi

    2016-07-21

    Velocity of fluid flow in underground porous media is 6–12 orders of magnitude lower than that in pipelines. If numerical errors are not carefully controlled in such simulations, severe distortion of the final results may occur [1-4]. To meet the high accuracy demands of fluid flow simulations in porous media, traditional finite difference methods and numerical integration methods are discussed and corresponding high-accuracy methods are developed. When applied to the direct calculation of the full-tensor permeability for underground flow, the high-accuracy finite difference method is confirmed to have a numerical error as low as 10⁻⁵%, while the high-accuracy numerical integration method has a numerical error of around 0%. Thus, the approach combining the high-accuracy finite difference and numerical integration methods is a reliable way to efficiently determine the characteristics of general full-tensor permeability, such as the maximum and minimum permeability components, principal direction and anisotropy ratio. Copyright © Global-Science Press 2016.
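
The gain from higher-order stencils can be illustrated with a one-point comparison of second- and fourth-order central differences (a generic sketch, not the paper's scheme):

```python
import numpy as np

def d1_central2(f, x, h):
    """Second-order central difference for f'(x); truncation error O(h^2)."""
    return (f(x + h) - f(x - h)) / (2 * h)

def d1_central4(f, x, h):
    """Fourth-order central difference for f'(x); truncation error O(h^4)."""
    return (-f(x + 2 * h) + 8 * f(x + h) - 8 * f(x - h) + f(x - 2 * h)) / (12 * h)

# Differentiate sin at x = 0.7, where the exact derivative is cos(0.7).
x, h = 0.7, 1e-2
err2 = abs(d1_central2(np.sin, x, h) - np.cos(x))
err4 = abs(d1_central4(np.sin, x, h) - np.cos(x))
```

With h = 0.01 the fourth-order stencil is several orders of magnitude more accurate at essentially the same cost per point, which is exactly the kind of margin needed when the quantity of interest (here, permeability components) must be resolved to tiny relative errors.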

  9. From global to local statistical shape priors novel methods to obtain accurate reconstruction results with a limited amount of training shapes

    CERN Document Server

    Last, Carsten

    2017-01-01

    This book proposes a new approach to handle the problem of limited training data. Common approaches to cope with this problem are to model the shape variability independently across predefined segments or to allow artificial shape variations that cannot be explained through the training data, both of which have their drawbacks. The approach presented uses a local shape prior in each element of the underlying data domain and couples all local shape priors via smoothness constraints. The book provides a sound mathematical foundation in order to embed this new shape prior formulation into the well-known variational image segmentation framework. The new segmentation approach so obtained allows accurate reconstruction of even complex object classes with only a few training shapes at hand.

  10. High-speed Fourier transform profilometry for reconstructing objects having arbitrary surface colours

    International Nuclear Information System (INIS)

    Chen, Liang-Chia; Nguyen, Xuan Loc; Zhang, Fu-Hao; Lin, Tzeng-Yow

    2010-01-01

    In this paper, Fourier transform profilometry (FTP) using a colour fringe selection technique for accurate phase map reconstruction is proposed to overcome the limitation of FTP in measuring objects having arbitrary surface colours. The sinusoidal colour fringe pattern is encoded to form a unique colour pattern for projection onto the object's surface, and its reflected deformed fringe image is captured by a triple-colour CCD camera and rapidly processed by the developed FTP method employing a novel band-pass filter. The new 3D vision system is capable of measuring objects at a high speed of up to 60 frames s⁻¹. To reconstruct the 3D profile of an object having arbitrary surface colours, an innovative strategy is developed to identify the colour channel of the detected fringe pattern with the best modulation transfer function (MTF) for retrieving accurate phase maps. The experimental results demonstrate that the system can acquire 3D maps at high speed while the measurement accuracy of the developed method is substantially better than that of the traditional FTP method. A repeatability test measuring standard step heights confirms that the maximum measurement error can be kept below 2.8% of the overall measuring depth range.
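
The FTP phase-retrieval core (band-pass the deformed fringe spectrum around the carrier, then take the angle of the inverse transform) can be sketched in 1D with NumPy; the carrier frequency, band limits and phase profile below are illustrative choices:

```python
import numpy as np

N = 512
x = np.arange(N)
f0 = 32 / N                                        # carrier frequency (cycles/sample)
phi = 0.5 * np.sin(2 * np.pi * x / N)              # "height-induced" phase to recover
fringe = 1.0 + np.cos(2 * np.pi * f0 * x + phi)    # deformed fringe intensity

F = np.fft.fft(fringe)
band = np.zeros(N, dtype=complex)
band[16:49] = F[16:49]                 # band-pass filter around the +f0 lobe only
analytic = np.fft.ifft(band)           # ~ 0.5 * exp(i * (2*pi*f0*x + phi))
phi_rec = np.unwrap(np.angle(analytic)) - 2 * np.pi * f0 * x
phi_rec -= phi_rec.mean() - phi.mean() # align constant offset for comparison
```

In the real system the band-pass is applied per colour channel and the channel with the best MTF is selected; the sketch only shows why isolating the carrier lobe turns a real fringe signal into a complex one whose angle carries the phase map.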

  11. A highly accurate method for determination of dissolved oxygen: Gravimetric Winkler method

    International Nuclear Information System (INIS)

    Helm, Irja; Jalukse, Lauri; Leito, Ivo

    2012-01-01

    A high-accuracy Winkler titration method has been developed for determination of dissolved oxygen concentration. Careful analysis of uncertainty sources relevant to the Winkler method was carried out and the method was optimized to minimize all uncertainty sources as far as practical. The most important improvements were: gravimetric measurement of all solutions, pre-titration to minimize the effect of iodine volatilization, accurate amperometric end-point detection and careful accounting for dissolved oxygen in the reagents. As a result, the developed method is possibly the most accurate method of determination of dissolved oxygen available. Depending on measurement conditions and on the dissolved oxygen concentration, the combined standard uncertainties of the method are in the range of 0.012–0.018 mg dm⁻³, corresponding to a k = 2 expanded uncertainty in the range of 0.023–0.035 mg dm⁻³ (0.27–0.38%, relative). This development enables more accurate calibration of electrochemical and optical dissolved oxygen sensors for routine analysis than has been possible before.
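
The reported combined and expanded uncertainties follow the usual GUM quadrature rule; the component values in this sketch are made up for illustration and do not come from the paper:

```python
import math

# Hypothetical standard-uncertainty components for one titration, in mg/dm^3;
# the actual uncertainty budget in the paper differs.
components = {
    "gravimetry": 0.004,
    "end-point detection": 0.008,
    "iodine volatilization": 0.006,
    "O2 in reagents": 0.005,
}

# Combined standard uncertainty: root sum of squares of independent components.
u_c = math.sqrt(sum(u ** 2 for u in components.values()))
# Expanded uncertainty with coverage factor k = 2 (~95% coverage).
U = 2 * u_c
```

Here u_c works out to about 0.012 mg dm⁻³ and U to about 0.024 mg dm⁻³, the same order as the figures quoted in the abstract, although that agreement is coincidental to the invented inputs.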

  12. Parallel computing for event reconstruction in high-energy physics

    International Nuclear Information System (INIS)

    Wolbers, S.

    1993-01-01

    Parallel computing has been recognized as a solution to large computing problems. In High Energy Physics, offline event reconstruction of detector data is a very large computing problem that has been solved with parallel computing techniques. A review is given of the parallel programming package CPS (Cooperative Processes Software), developed and used at Fermilab for offline reconstruction of terabytes of data requiring the delivery of hundreds of VAX-years per experiment. The Fermilab UNIX farms, consisting of 180 Silicon Graphics workstations and 144 IBM RS6000 workstations, provide the computing power for the experiments. Fermilab has a long history of providing production parallel computing, starting with the ACP (Advanced Computer Project) farms in 1986. The Fermilab UNIX farms have been in production for over 2 years with 24 hour/day service to experimental user groups. Additional tools for managing, controlling and monitoring these large systems are also described. Possible future directions for parallel computing in High Energy Physics are given.

  13. Stable and high order accurate difference methods for the elastic wave equation in discontinuous media

    KAUST Repository

    Duru, Kenneth

    2014-12-01

    © 2014 Elsevier Inc. In this paper, we develop a stable and systematic procedure for numerical treatment of elastic waves in discontinuous and layered media. We consider both planar and curved interfaces where media parameters are allowed to be discontinuous. The key feature is the highly accurate and provably stable treatment of interfaces where media discontinuities arise. We discretize in space using high order accurate finite difference schemes that satisfy the summation by parts rule. Conditions at layer interfaces are imposed weakly using penalties. By deriving lower bounds of the penalty strength and constructing discrete energy estimates we prove time stability. We present numerical experiments in two space dimensions to illustrate the usefulness of the proposed method for simulations involving typical interface phenomena in elastic materials. The numerical experiments verify high order accuracy and time stability.

  14. A simple highly accurate field-line mapping technique for three-dimensional Monte Carlo modeling of plasma edge transport

    International Nuclear Information System (INIS)

    Feng, Y.; Sardei, F.; Kisslinger, J.

    2005-01-01

    The paper presents a new simple and accurate numerical field-line mapping technique providing a high-quality representation of field lines as required by a Monte Carlo modeling of plasma edge transport in the complex magnetic boundaries of three-dimensional (3D) toroidal fusion devices. Using a toroidal sequence of precomputed 3D finite flux-tube meshes, the method advances field lines through a simple bilinear, forward/backward symmetric interpolation at the interfaces between two adjacent flux tubes. It is a reversible field-line mapping (RFLM) algorithm ensuring a continuous and unique reconstruction of field lines at any point of the 3D boundary. The reversibility property has a strong impact on the efficiency of modeling the highly anisotropic plasma edge transport in general closed or open configurations of arbitrary ergodicity as it avoids artificial cross-field diffusion of the fast parallel transport. For stellarator-symmetric magnetic configurations, which are the standard case for stellarators, the reversibility additionally provides an average cancellation of the radial interpolation errors of field lines circulating around closed magnetic flux surfaces. The RFLM technique has been implemented in the 3D edge transport code EMC3-EIRENE and is used routinely for plasma transport modeling in the boundaries of several low-shear and high-shear stellarators as well as in the boundary of a tokamak with 3D magnetic edge perturbations
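
The interface step of the mapping reduces to a bilinear interpolation on the flux-tube cross-section mesh; a minimal sketch of that building block (the reversibility of the RFLM scheme comes from applying the same weights symmetrically forward and backward, which this toy function does not attempt to show):

```python
def bilinear(corner_vals, u, v):
    """Bilinear interpolation on a unit cell.

    corner_vals = (f00, f10, f01, f11) are the values at the cell corners
    (u, v) = (0, 0), (1, 0), (0, 1), (1, 1); 0 <= u, v <= 1.
    """
    f00, f10, f01, f11 = corner_vals
    return (f00 * (1 - u) * (1 - v) + f10 * u * (1 - v)
            + f01 * (1 - u) * v + f11 * u * v)
```

In the RFLM context the corner values would be the precomputed field-line footpoints on the next flux-tube interface, and (u, v) the local coordinates of the traced field line on the current interface.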

  15. High-definition computed tomography for coronary artery stents imaging: Initial evaluation of the optimal reconstruction algorithm.

    Science.gov (United States)

    Cui, Xiaoming; Li, Tao; Li, Xin; Zhou, Weihua

    2015-05-01

    The aim of this study was to evaluate the in vivo performance of four image reconstruction algorithms in a high-definition CT (HDCT) scanner with improved spatial resolution for the evaluation of coronary artery stents and intrastent lumina. Thirty-nine consecutive patients with a total of 71 implanted coronary stents underwent coronary CT angiography (CCTA) on an HDCT (Discovery CT 750 HD; GE Healthcare) with the high-resolution scanning mode. Four different reconstruction algorithms (HD-stand, HD-detail, HD-stand-plus, HD-detail-plus) were applied to reconstruct the stented coronary arteries. Image quality for stent characterization was assessed. Image noise and intrastent luminal diameter were measured. The relationships between the measured inner stent diameter (ISD) and the true stent diameter (TSD), and between the ISD measurement and stent type, were analysed. The stent-dedicated kernel (HD-detail) offered the highest percentage (53.5%) of good image quality for stent characterization and the highest ratio (68.0±8.4%) of visible stent lumen/true stent lumen for luminal diameter measurement, at the expense of increased overall image noise. The Pearson correlation coefficient between the ISD and TSD measurements and the Spearman correlation coefficient between the ISD measurement and stent type were 0.83 and 0.48, respectively. Compared with standard reconstruction algorithms, the high-definition CT imaging technique with a dedicated high-resolution reconstruction algorithm provides more accurate stent characterization and intrastent luminal diameter measurement. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
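
The two reported coefficients can be reproduced on any paired ISD/TSD data with NumPy alone; the diameter values below are hypothetical, not the study's measurements:

```python
import numpy as np

def pearson(x, y):
    """Pearson linear correlation coefficient."""
    return float(np.corrcoef(x, y)[0, 1])

def spearman(x, y):
    """Spearman rank correlation: Pearson of the ranks (no ties assumed here)."""
    rx = np.argsort(np.argsort(x))
    ry = np.argsort(np.argsort(y))
    return pearson(rx, ry)

# Hypothetical TSD/ISD pairs in mm -- illustrative only.
tsd = np.array([2.25, 2.5, 2.75, 3.0, 3.5, 4.0])
isd = np.array([1.5, 1.7, 1.9, 2.0, 2.4, 2.9])
r = pearson(tsd, isd)
rho = spearman(tsd, isd)
```

For tied ranks or production use, `scipy.stats.spearmanr` handles the tie correction that this toy rank transform omits.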

  16. Accurate RNA consensus sequencing for high-fidelity detection of transcriptional mutagenesis-induced epimutations.

    Science.gov (United States)

    Reid-Bayliss, Kate S; Loeb, Lawrence A

    2017-08-29

    Transcriptional mutagenesis (TM) due to misincorporation during RNA transcription can result in mutant RNAs, or epimutations, that generate proteins with altered properties. TM has long been hypothesized to play a role in aging, cancer, and viral and bacterial evolution. However, inadequate methodologies have limited progress in elucidating a causal association. We present a high-throughput, highly accurate RNA sequencing method to measure epimutations with single-molecule sensitivity. Accurate RNA consensus sequencing (ARC-seq) uniquely combines RNA barcoding and generation of multiple cDNA copies per RNA molecule to eliminate errors introduced during cDNA synthesis, PCR, and sequencing. The stringency of ARC-seq can be scaled to accommodate the quality of input RNAs. We apply ARC-seq to directly assess transcriptome-wide epimutations resulting from RNA polymerase mutants and oxidative stress.

  17. An efficient discontinuous Galerkin finite element method for highly accurate solution of maxwell equations

    KAUST Repository

    Liu, Meilin; Sirenko, Kostyantyn; Bagci, Hakan

    2012-08-01

    A discontinuous Galerkin finite element method (DG-FEM) with a highly accurate time integration scheme for solving Maxwell equations is presented. The new time integration scheme is in the form of traditional predictor-corrector algorithms, PE(CE)ᵐ, but it uses coefficients that are obtained using a numerical scheme with fully controllable accuracy. Numerical results demonstrate that the proposed DG-FEM uses larger time steps than DG-FEM with classical PE(CE)ᵐ schemes when high accuracy, which could be obtained using high-order spatial discretization, is required. © 1963-2012 IEEE.

  19. An investigation of highly accurate and precise robotic hole measurements using non-contact devices

    Directory of Open Access Journals (Sweden)

    Usman Zahid

    2016-01-01

    Industrial robot arms are widely used in the manufacturing industry because of their support for automation. However, in metrology, robots have had limited application due to their insufficient accuracy. Even using error compensation and calibration methods, robots are not effective for micrometre (μm) level metrology. Non-contact measurement devices can potentially enable the use of robots for highly accurate metrology. However, the use of such devices on robots has not been investigated. The research work reported in this paper explores the use of different non-contact measurement devices on an industrial robot. The aim is to experimentally investigate the effects of robot movements on the accuracy and precision of measurements. The focus has been on assessing the ability to accurately measure various geometric and surface parameters of holes despite the inherent inaccuracies of industrial robots. This involves the measurement of diameter, roundness and surface roughness. The study also includes scanning of holes to measure internal features such as the start and end points of a taper. Two different non-contact measurement devices based on different technologies are investigated. Furthermore, the effects of eccentricity, vibrations and thermal variations are also assessed. The research contributes towards the use of robots for highly accurate and precise robotic metrology.

  20. A flexible and accurate digital volume correlation method applicable to high-resolution volumetric images

    Science.gov (United States)

    Pan, Bing; Wang, Bo

    2017-10-01

    Digital volume correlation (DVC) is a powerful technique for quantifying interior deformation within solid opaque materials and biological tissues. In the last two decades, great efforts have been made to improve the accuracy and efficiency of the DVC algorithm. However, there is still a lack of a flexible, robust and accurate version that can be efficiently implemented in personal computers with limited RAM. This paper proposes an advanced DVC method that can realize accurate full-field internal deformation measurement applicable to high-resolution volume images with up to billions of voxels. Specifically, a novel layer-wise reliability-guided displacement tracking strategy combined with dynamic data management is presented to guide the DVC computation from slice to slice. The displacements at specified calculation points in each layer are computed using the advanced 3D inverse-compositional Gauss-Newton algorithm with the complete initial guess of the deformation vector accurately predicted from the computed calculation points. Since only limited slices of interest in the reference and deformed volume images rather than the whole volume images are required, the DVC calculation can thus be efficiently implemented on personal computers. The flexibility, accuracy and efficiency of the presented DVC approach are demonstrated by analyzing computer-simulated and experimentally obtained high-resolution volume images.
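
The matching step at each calculation point amounts to maximizing a zero-normalised cross-correlation (ZNCC) between a reference subvolume and displaced candidates; a brute-force integer-voxel sketch (the paper instead uses the much faster inverse-compositional Gauss-Newton algorithm with subvoxel accuracy and a reliability-guided initial guess):

```python
import numpy as np

def zncc(a, b):
    """Zero-normalised cross-correlation of two equally-sized subvolumes."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

def find_displacement(ref_vol, def_vol, corner, size, search=3):
    """Integer-voxel search for the displacement of the subvolume of shape
    (size, size, size) at `corner` in ref_vol, within +/-search voxels."""
    z0, y0, x0 = corner
    ref_sub = ref_vol[z0:z0 + size, y0:y0 + size, x0:x0 + size]
    best, best_d = -2.0, None
    for dz in range(-search, search + 1):
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                cand = def_vol[z0 + dz:z0 + dz + size,
                               y0 + dy:y0 + dy + size,
                               x0 + dx:x0 + dx + size]
                c = zncc(ref_sub, cand)
                if c > best:
                    best, best_d = c, (dz, dy, dx)
    return best_d, best
```

Restricting the reference and deformed slabs held in memory to the slices touched by the current layer of calculation points is what makes the paper's layer-wise strategy fit in limited RAM.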

  1. An accurate, fast, and scalable solver for high-frequency wave propagation

    Science.gov (United States)

    Zepeda-Núñez, L.; Taus, M.; Hewett, R.; Demanet, L.

    2017-12-01

    In many science and engineering applications, solving time-harmonic high-frequency wave propagation problems quickly and accurately is of paramount importance. For example, in geophysics, particularly in oil exploration, such problems can be the forward problem in an iterative process for solving the inverse problem of subsurface inversion. It is important to solve these wave propagation problems accurately in order to efficiently obtain meaningful solutions of the inverse problems: low order forward modeling can hinder convergence. Additionally, due to the volume of data and the iterative nature of most optimization algorithms, the forward problem must be solved many times. Therefore, a fast solver is necessary to make solving the inverse problem feasible. For time-harmonic high-frequency wave propagation, obtaining both speed and accuracy is historically challenging. Recently, there have been many advances in the development of fast solvers for such problems, including methods which have linear complexity with respect to the number of degrees of freedom. While most methods scale optimally only in the context of low-order discretizations and smooth wave speed distributions, the method of polarized traces has been shown to retain optimal scaling for high-order discretizations, such as hybridizable discontinuous Galerkin methods and for highly heterogeneous (and even discontinuous) wave speeds. The resulting fast and accurate solver is consequently highly attractive for geophysical applications. To date, this method relies on a layered domain decomposition together with a preconditioner applied in a sweeping fashion, which has limited straightforward parallelization. In this work, we introduce a new version of the method of polarized traces which reveals more parallel structure than previous versions while preserving all of its other advantages. We achieve this by further decomposing each layer and applying the preconditioner to these new components separately and

  2. A protocol for generating a high-quality genome-scale metabolic reconstruction.

    Science.gov (United States)

    Thiele, Ines; Palsson, Bernhard Ø

    2010-01-01

    Network reconstructions are a common denominator in systems biology. Bottom-up metabolic network reconstructions have been developed over the last 10 years. These reconstructions represent structured knowledge bases that abstract pertinent information on the biochemical transformations taking place within specific target organisms. The conversion of a reconstruction into a mathematical format facilitates a myriad of computational biological studies, including evaluation of network content, hypothesis testing and generation, analysis of phenotypic characteristics and metabolic engineering. To date, genome-scale metabolic reconstructions for more than 30 organisms have been published and this number is expected to increase rapidly. However, these reconstructions differ in quality and coverage that may minimize their predictive potential and use as knowledge bases. Here we present a comprehensive protocol describing each step necessary to build a high-quality genome-scale metabolic reconstruction, as well as the common trials and tribulations. Therefore, this protocol provides a helpful manual for all stages of the reconstruction process.

  3. A Method of Accurate Bone Tunnel Placement for Anterior Cruciate Ligament Reconstruction Based on 3-Dimensional Printing Technology: A Cadaveric Study.

    Science.gov (United States)

    Ni, Jianlong; Li, Dichen; Mao, Mao; Dang, Xiaoqian; Wang, Kunzheng; He, Jiankang; Shi, Zhibin

    2018-02-01

    To explore a method of bone tunnel placement for anterior cruciate ligament (ACL) reconstruction based on 3-dimensional (3D) printing technology and to assess its accuracy. Twenty human cadaveric knees were scanned by thin-layer computed tomography (CT). To obtain bone data for establishing a knee joint model in computer software, customized bone anchors were installed before CT. The reference point was determined at the femoral and tibial footprint areas of the ACL. The site and direction of the femoral and tibial bone tunnels were designed and calibrated on the knee joint model according to the reference point. Resin templates were designed and produced by 3D printing. Placement of the bone tunnels was accomplished with the templates, and the cadaveric knees were scanned again to compare the concordance of the internal openings of the bone tunnels with the reference points. All twenty 3D-printed templates were designed and printed successfully. CT data analysis between the planned and actual drilled tunnel positions showed mean deviations of 0.57 mm (range, 0-1.5 mm; standard deviation, 0.42 mm) at the femur and 0.58 mm (range, 0-1.5 mm; standard deviation, 0.47 mm) at the tibia. The accuracy of bone tunnel placement for ACL reconstruction in cadaveric adult knees based on 3D printing technology is high. This method can improve the accuracy of bone tunnel placement for ACL reconstruction in clinical sports medicine. Copyright © 2017 Arthroscopy Association of North America. Published by Elsevier Inc. All rights reserved.
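
The reported deviation statistics are plain Euclidean distances between planned and actual tunnel openings; a sketch with hypothetical coordinates (not the study's data):

```python
import numpy as np

def deviation_stats(planned, drilled):
    """Per-tunnel Euclidean deviation (mm) between planned and actual entry
    points, plus the mean, standard deviation and range reported in such studies."""
    d = np.linalg.norm(np.asarray(planned) - np.asarray(drilled), axis=1)
    return d, d.mean(), d.std(), (d.min(), d.max())

# Hypothetical 3D coordinates (mm) of three femoral tunnel openings.
planned = [(0.0, 0.0, 0.0), (10.0, 5.0, 2.0), (-3.0, 4.0, 1.0)]
drilled = [(0.3, 0.0, 0.4), (10.0, 5.5, 2.0), (-3.0, 4.0, 1.0)]
devs, mean_dev, sd_dev, (lo, hi) = deviation_stats(planned, drilled)
```

With CT-derived coordinates for all 20 knees, the same function yields the mean/SD/range figures (e.g. 0.57 ± 0.42 mm at the femur) quoted in the abstract.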

  4. Defining Allowable Physical Property Variations for High Accurate Measurements on Polymer Parts

    DEFF Research Database (Denmark)

    Mohammadi, Ali; Sonne, Mads Rostgaard; Madruga, Daniel González

    2015-01-01

    Measurement conditions and material properties have a significant impact on the dimensions of a part, especially for polymer parts. Temperature variation causes part deformations that increase the uncertainty of the measurement process. Current industrial tolerances of a few micrometres demand highly accurate measurements in non-controlled ambients. Most polymer parts are manufactured by injection moulding, and their inspection is carried out after stabilization, around 200 hours. The overall goal of this work is to reach ±5 μm measurement uncertainty on polymer products which...

  6. Highly accurate analytical energy of a two-dimensional exciton in a constant magnetic field

    Energy Technology Data Exchange (ETDEWEB)

    Hoang, Ngoc-Tram D. [Department of Physics, Ho Chi Minh City University of Pedagogy 280, An Duong Vuong Street, District 5, Ho Chi Minh City (Viet Nam); Nguyen, Duy-Anh P. [Department of Natural Science, Thu Dau Mot University, 6, Tran Van On Street, Thu Dau Mot City, Binh Duong Province (Viet Nam); Hoang, Van-Hung [Department of Physics, Ho Chi Minh City University of Pedagogy 280, An Duong Vuong Street, District 5, Ho Chi Minh City (Viet Nam); Le, Van-Hoang, E-mail: levanhoang@tdt.edu.vn [Atomic Molecular and Optical Physics Research Group, Ton Duc Thang University, 19 Nguyen Huu Tho Street, Tan Phong Ward, District 7, Ho Chi Minh City (Viet Nam); Faculty of Applied Sciences, Ton Duc Thang University, 19 Nguyen Huu Tho Street, Tan Phong Ward, District 7, Ho Chi Minh City (Viet Nam)

    2016-08-15

    Explicit expressions are given for analytically describing the dependence of the energy of a two-dimensional exciton on the magnetic field intensity. These expressions are highly accurate, with a precision of up to three decimal places, over the whole range of magnetic field intensity. The results are shown for the ground state and some excited states; moreover, we have all the formulae needed to obtain similar expressions for any excited state. Analysis of the numerical results shows that the precision of three decimal places is maintained for excited states with principal quantum numbers of up to n=100.

  7. Small arteries can be accurately studied in vivo, using high frequency ultrasound

    DEFF Research Database (Denmark)

    Nielsen, T H; Iversen, Helle Klingenberg; Tfelt-Hansen, P

    1993-01-01

    We have validated measurements of diameters of the superficial temporal artery and other small arteries in man with a newly developed 20 MHz ultrasound scanner with A, B and M-mode imaging. The diameter of a reference object was 1.202 mm vs. 1.205 mm as measured by stereomicroscopy (nonsignifican......-gauge plethysmography (nonsignificant). Pulsations were 4.6% in the radial artery. We conclude that high frequency ultrasound provides an accurate and reproducible measure of the diameter of small and medium sized human arteries in vivo....

  8. 1024 matrix image reconstruction: usefulness in high resolution chest CT

    International Nuclear Information System (INIS)

    Jeong, Sun Young; Chung, Myung Jin; Chong, Se Min; Sung, Yon Mi; Lee, Kyung Soo

    2006-01-01

    We evaluated whether high-resolution chest CT with a 1024 matrix has a significant advantage in image quality over a 512 matrix. Sets of 512- and 1024-matrix high-resolution chest CT scans with both 0.625 mm and 1.25 mm slice thickness were obtained from 26 patients. Seventy locations were selected, containing twenty-four low-density lesions without sharp boundaries, such as emphysema, and forty-six sharp linear densities, such as linear fibrosis; these were randomly displayed on a five-megapixel LCD monitor. All the images were masked for information concerning the matrix size and slice thickness. Two chest radiologists scored the image quality of each arrowed lesion as follows: (1) undistinguishable, (2) poorly distinguishable, (3) fairly distinguishable, (4) well visible and (5) excellently visible. The scores were compared with respect to matrix size, slice thickness and observer by using ANOVA tests. The mean and standard deviation of image quality were 3.09 (±0.92) for the 0.625 mm x 512 matrix, 3.16 (±0.84) for the 0.625 mm x 1024 matrix, 2.49 (±1.02) for the 1.25 mm x 512 matrix, and 2.35 (±1.02) for the 1.25 mm x 1024 matrix, respectively. The image quality on both matrices of the high-resolution chest CT scans with a 0.625 mm slice thickness was significantly better than that with the 1.25 mm slice thickness (p < 0.001). However, the image quality on the 1024-matrix high-resolution chest CT scans was not significantly different from that on the 512-matrix scans (p = 0.678). The interobserver variation between the two observers was not significant (p = 0.691). We think that 1024-matrix image reconstruction for high-resolution chest CT may not be clinically useful.

  9. High-definition computed tomography for coronary artery stents imaging: Initial evaluation of the optimal reconstruction algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Cui, Xiaoming, E-mail: mmayzy2008@126.com; Li, Tao, E-mail: litaofeivip@163.com; Li, Xin, E-mail: lx0803@sina.com.cn; Zhou, Weihua, E-mail: wangxue0606@gmail.com

    2015-05-15

    Highlights: • The high-resolution scan mode is appropriate for imaging coronary stents. • The HD-detail reconstruction algorithm is a stent-dedicated kernel. • Intrastent lumen visibility also depends on stent diameter and material. - Abstract: Objective: The aim of this study was to evaluate the in vivo performance of four image reconstruction algorithms in a high-definition CT (HDCT) scanner with improved spatial resolution for the evaluation of coronary artery stents and intrastent lumina. Materials and methods: Thirty-nine consecutive patients with a total of 71 implanted coronary stents underwent coronary CT angiography (CCTA) on a HDCT (Discovery CT 750 HD; GE Healthcare) with the high-resolution scanning mode. Four different reconstruction algorithms (HD-stand, HD-detail, HD-stand-plus, HD-detail-plus) were applied to reconstruct the stented coronary arteries. Image quality for stent characterization was assessed. Image noise and intrastent luminal diameter were measured. The relationships between the measured inner stent diameter (ISD) and the true stent diameter (TSD) and stent type were analysed. Results: The stent-dedicated kernel (HD-detail) offered the highest percentage (53.5%) of good image quality for stent characterization and the highest ratio (68.0 ± 8.4%) of visible stent lumen/true stent lumen for luminal diameter measurement, at the expense of increased overall image noise. The Pearson correlation coefficient between the ISD and TSD measurements and the Spearman correlation coefficient between the ISD measurement and stent type were 0.83 and 0.48, respectively. Conclusions: Compared with standard reconstruction algorithms, the high-definition CT imaging technique with a dedicated high-resolution reconstruction algorithm provides more accurate stent characterization and intrastent luminal diameter measurement.

  10. A highly accurate wireless digital sun sensor based on profile detecting and detector multiplexing technologies

    Science.gov (United States)

    Wei, Minsong; Xing, Fei; You, Zheng

    2017-01-01

    The advancing growth of micro- and nano-satellites requires miniaturized sun sensors which could be conveniently applied in the attitude determination subsystem. In this work, a profile detecting technology based high accurate wireless digital sun sensor was proposed, which could transform a two-dimensional image into two-linear profile output so that it can realize a high update rate under a very low power consumption. A multiple spots recovery approach with an asymmetric mask pattern design principle was introduced to fit the multiplexing image detector method for accuracy improvement of the sun sensor within a large Field of View (FOV). A FOV determination principle based on the concept of FOV region was also proposed to facilitate both sub-FOV analysis and the whole FOV determination. A RF MCU, together with solar cells, was utilized to achieve the wireless and self-powered functionality. The prototype of the sun sensor is approximately 10 times lower in size and weight compared with the conventional digital sun sensor (DSS). Test results indicated that the accuracy of the prototype was 0.01° within a cone FOV of 100°. Such an autonomous DSS could be equipped flexibly on a micro- or nano-satellite, especially for highly accurate remote sensing applications.
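    As a rough illustration of the profile-detecting idea (not the authors' actual optics), a 2D detector image can be collapsed into two linear profiles and the sun spot located by sub-pixel centroiding of each profile; the spot position, spot size, and mask-to-detector distance below are all made-up values:

```python
import numpy as np

# Synthetic 64x64 detector image with a Gaussian sun spot at an assumed position.
H = W = 64
y, x = np.mgrid[0:H, 0:W]
spot_x, spot_y = 40.3, 21.7
image = np.exp(-((x - spot_x)**2 + (y - spot_y)**2) / (2 * 2.0**2))

profile_x = image.sum(axis=0)   # column sums -> 1D profile along x
profile_y = image.sum(axis=1)   # row sums    -> 1D profile along y

# Sub-pixel spot location from the centroid of each profile.
cx = (profile_x * np.arange(W)).sum() / profile_x.sum()
cy = (profile_y * np.arange(H)).sum() / profile_y.sum()

# Incidence angle from the spot offset and an assumed mask-to-detector distance (pixels).
focal_px = 500.0
angle_x = np.degrees(np.arctan((cx - W / 2) / focal_px))
```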

  11. Towards the accurate electronic structure descriptions of typical high-constant dielectrics

    Science.gov (United States)

    Jiang, Ting-Ting; Sun, Qing-Qing; Li, Ye; Guo, Jiao-Jiao; Zhou, Peng; Ding, Shi-Jin; Zhang, David Wei

    2011-05-01

    High-constant dielectrics have gained considerable attention due to their wide applications in advanced devices, such as gate oxides in metal-oxide-semiconductor devices and insulators in high-density metal-insulator-metal capacitors. However, theoretical investigations of these materials have not kept pace with experimental development, especially regarding the accurate description of band structures. We performed first-principles calculations based on hybrid density functional theory to investigate several typical high-k dielectrics, namely Al2O3, HfO2, ZrSiO4, HfSiO4, La2O3 and ZrO2. The band structures of these materials are well described within the framework of hybrid density functional theory. The band gaps of Al2O3, HfO2, ZrSiO4, HfSiO4, La2O3 and ZrO2 are calculated to be 8.0 eV, 5.6 eV, 6.2 eV, 7.1 eV, 5.3 eV and 5.0 eV, respectively, which are very close to the experimental values and far more accurate than those obtained by the traditional generalized gradient approximation method.

  12. Accurate Classification of Protein Subcellular Localization from High-Throughput Microscopy Images Using Deep Learning

    Directory of Open Access Journals (Sweden)

    Tanel Pärnamaa

    2017-05-01

    Full Text Available High-throughput microscopy of many single cells generates high-dimensional data that are far from straightforward to analyze. One important problem is automatically detecting the cellular compartment where a fluorescently-tagged protein resides, a task relatively simple for an experienced human, but difficult to automate on a computer. Here, we train an 11-layer neural network on data from mapping thousands of yeast proteins, achieving per cell localization classification accuracy of 91%, and per protein accuracy of 99% on held-out images. We confirm that low-level network features correspond to basic image characteristics, while deeper layers separate localization classes. Using this network as a feature calculator, we train standard classifiers that assign proteins to previously unseen compartments after observing only a small number of training examples. Our results are the most accurate subcellular localization classifications to date, and demonstrate the usefulness of deep learning for high-throughput microscopy.

  13. Highly accurate and fast optical penetration-based silkworm gender separation system

    Science.gov (United States)

    Kamtongdee, Chakkrit; Sumriddetchkajorn, Sarun; Chanhorm, Sataporn

    2015-07-01

    Based on our research work in the last five years, this paper highlights our innovative optical sensing system that can identify and separate silkworms by gender, which is highly suitable for the sericulture industry. The key idea relies on our proposed optical penetration concepts, which, once combined with simple image processing operations, lead to high accuracy in identifying silkworm gender. Inside the system, electronic and mechanical parts assist in controlling the overall system operation, processing the optical signal, and separating the female from the male silkworm pupae. With the current system performance, we achieve an accuracy of more than 95% in identifying the gender of silkworm pupae at an average operational speed of 30 silkworm pupae/minute. Three of our systems are already in operation at Thailand's Queen Sirikit Sericulture Centers.

  14. Accurate Classification of Protein Subcellular Localization from High-Throughput Microscopy Images Using Deep Learning.

    Science.gov (United States)

    Pärnamaa, Tanel; Parts, Leopold

    2017-05-05

    High-throughput microscopy of many single cells generates high-dimensional data that are far from straightforward to analyze. One important problem is automatically detecting the cellular compartment where a fluorescently-tagged protein resides, a task relatively simple for an experienced human, but difficult to automate on a computer. Here, we train an 11-layer neural network on data from mapping thousands of yeast proteins, achieving per cell localization classification accuracy of 91%, and per protein accuracy of 99% on held-out images. We confirm that low-level network features correspond to basic image characteristics, while deeper layers separate localization classes. Using this network as a feature calculator, we train standard classifiers that assign proteins to previously unseen compartments after observing only a small number of training examples. Our results are the most accurate subcellular localization classifications to date, and demonstrate the usefulness of deep learning for high-throughput microscopy. Copyright © 2017 Parnamaa and Parts.

  15. Reconstructing Interlaced High-Dynamic-Range Video Using Joint Learning.

    Science.gov (United States)

    Inchang Choi; Seung-Hwan Baek; Kim, Min H

    2017-11-01

    For extending the dynamic range of video, it is a common practice to capture multiple frames sequentially with different exposures and combine them to extend the dynamic range of each video frame. However, this approach results in typical ghosting artifacts due to fast and complex motion in nature. As an alternative, video imaging with interlaced exposures has been introduced to extend the dynamic range. However, the interlaced approach has been hindered by jaggy artifacts and sensor noise, leading to concerns over image quality. In this paper, we propose a data-driven approach for jointly solving two specific problems of deinterlacing and denoising that arise in interlaced video imaging with different exposures. First, we solve the deinterlacing problem using joint dictionary learning via sparse coding. Since partial information of detail in differently exposed rows is often available via interlacing, we make use of the information to reconstruct details of the extended dynamic range from the interlaced video input. Second, we jointly solve the denoising problem by tailoring sparse coding to better handle additive noise in low-/high-exposure rows, and also adopt multiscale homography flow to temporal sequences for denoising. We anticipate that the proposed method will allow for concurrent capture of higher dynamic range video frames without suffering from ghosting artifacts. We demonstrate the advantages of our interlaced video imaging compared with the state-of-the-art high-dynamic-range video methods.

  16. Heuristic optimization in penumbral image for high resolution reconstructed image

    International Nuclear Information System (INIS)

    Azuma, R.; Nozaki, S.; Fujioka, S.; Chen, Y. W.; Namihira, Y.

    2010-01-01

    Penumbral imaging is a technique which uses the fact that spatial information can be recovered from the shadow, or penumbra, that an unknown source casts through a simple large circular aperture. The size of the penumbral image on the detector can be determined mathematically from the aperture size, object size, and magnification. Conventional reconstruction methods are very sensitive to noise, whereas the heuristic reconstruction method is very tolerant of it. However, the aperture size influences the accuracy and resolution of the reconstructed image. In this article, we propose optimizing the aperture size for neutron penumbral imaging.

  17. A Comprehensive Strategy for Accurate Mutation Detection of the Highly Homologous PMS2.

    Science.gov (United States)

    Li, Jianli; Dai, Hongzheng; Feng, Yanming; Tang, Jia; Chen, Stella; Tian, Xia; Gorman, Elizabeth; Schmitt, Eric S; Hansen, Terah A A; Wang, Jing; Plon, Sharon E; Zhang, Victor Wei; Wong, Lee-Jun C

    2015-09-01

    Germline mutations in the DNA mismatch repair gene PMS2 underlie the cancer susceptibility syndrome, Lynch syndrome. However, accurate molecular testing of PMS2 is complicated by a large number of highly homologous sequences. To establish a comprehensive approach for mutation detection of PMS2, we have designed a strategy combining targeted capture next-generation sequencing (NGS), multiplex ligation-dependent probe amplification, and long-range PCR followed by NGS to simultaneously detect point mutations and copy number changes of PMS2. Exonic deletions (E2 to E9, E5 to E9, E8, E10, E14, and E1 to E15), duplications (E11 to E12), and a nonsense mutation, p.S22*, were identified. Traditional multiplex ligation-dependent probe amplification and Sanger sequencing approaches cannot differentiate the origin of the exonic deletions in the 3' region when PMS2 and PMS2CL share identical sequences as a result of gene conversion. Our approach allows unambiguous identification of mutations in the active gene with a straightforward long-range-PCR/NGS method. Breakpoint analysis of multiple samples revealed that recurrent exon 14 deletions are mediated by homologous Alu sequences. Our comprehensive approach provides a reliable tool for accurate molecular analysis of genes containing multiple copies of highly homologous sequences and should improve PMS2 molecular analysis for patients with Lynch syndrome. Copyright © 2015 American Society for Investigative Pathology and the Association for Molecular Pathology. Published by Elsevier Inc. All rights reserved.

  18. A highly accurate finite-difference method with minimum dispersion error for solving the Helmholtz equation

    KAUST Repository

    Wu, Zedong

    2018-04-05

    Numerical simulation of the acoustic wave equation in either isotropic or anisotropic media is crucial to seismic modeling, imaging and inversion; indeed, it represents the core computational cost of these highly advanced seismic processing methods. However, the conventional finite-difference method suffers from severe numerical dispersion errors and S-wave artifacts when solving the acoustic wave equation for anisotropic media. We propose a method to obtain the finite-difference coefficients by comparing the numerical dispersion with the exact form. We find the optimal finite-difference coefficients that share the dispersion characteristics of the exact equation with minimal dispersion error. The method is extended to solve the acoustic wave equation in transversely isotropic (TI) media without S-wave artifacts. Numerical examples show that the method is highly accurate and efficient.
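    The idea of fitting coefficients to the dispersion relation, rather than matching Taylor terms at zero wavenumber only, can be sketched for a 1D second-derivative stencil; the band limits and stencil half-width below are arbitrary choices for illustration, not the paper's scheme:

```python
import numpy as np

h, M = 1.0, 4                      # grid spacing and stencil half-width (assumed)
k = np.linspace(0.01, 2.0, 400)    # wavenumber band over which to match dispersion

# Symbol of a symmetric second-derivative stencil with off-centre weights c_m:
#   sum_m c_m * 2(cos(m k h) - 1) / h^2, which should approximate -k^2.
basis = np.column_stack([2 * (np.cos(m * k * h) - 1) / h**2 for m in range(1, M + 1)])
coeffs, *_ = np.linalg.lstsq(basis, -k**2, rcond=None)

# Worst-case symbol error of the optimized stencil vs. the classical 3-point stencil.
err_opt = np.abs(basis @ coeffs + k**2).max()
err_3pt = np.abs(2 * (np.cos(k * h) - 1) / h**2 + k**2).max()
```

The centre weight follows from consistency as c0 = -2*sum(c_m), so the symbol vanishes at k = 0; over the fitted band the least-squares coefficients keep the dispersion error far below that of the Taylor-based 3-point stencil.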

  19. Rapid and Accurate Machine Learning Recognition of High Performing Metal Organic Frameworks for CO2 Capture.

    Science.gov (United States)

    Fernandez, Michael; Boyd, Peter G; Daff, Thomas D; Aghaji, Mohammad Zein; Woo, Tom K

    2014-09-04

    In this work, we have developed quantitative structure-property relationship (QSPR) models using advanced machine learning algorithms that can rapidly and accurately recognize high-performing metal organic framework (MOF) materials for CO2 capture. More specifically, QSPR classifiers have been developed that can, in a fraction of a second, identify candidate MOFs with enhanced CO2 adsorption capacity (>1 mmol/g at 0.15 bar and >4 mmol/g at 1 bar). The models were tested on a large set of 292 050 MOFs that were not part of the training set. The QSPR classifier could recover 945 of the top 1000 MOFs in the test set while flagging only 10% of the whole library for compute-intensive screening. Thus, using the machine learning classifiers as part of a high-throughput screening protocol would result in an order of magnitude reduction in compute time and allow intractably large structure libraries and search spaces to be screened.
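    The screening protocol described above, flag a small fraction of the library with a fast model and measure how many of the truly best materials are recovered, can be simulated with a noisy surrogate score standing in for the QSPR classifier (all numbers below are synthetic, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50_000                                   # toy library (the paper used 292,050 MOFs)
true_capacity = rng.gamma(2.0, 1.0, N)       # synthetic "CO2 uptake" values
surrogate = true_capacity + rng.normal(0, 0.3, N)  # imperfect fast-model score

top_true = set(np.argsort(true_capacity)[-1000:])  # truly best 1000 materials
flagged = set(np.argsort(surrogate)[-N // 10:])    # flag 10% for detailed screening

recall = len(top_true & flagged) / 1000            # fraction of top-1000 recovered
```

A high recall at a 10% flag rate is exactly the property that lets the expensive simulations be run on a tenth of the library.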

  20. Energy stable and high-order-accurate finite difference methods on staggered grids

    Science.gov (United States)

    O'Reilly, Ossian; Lundquist, Tomas; Dunham, Eric M.; Nordström, Jan

    2017-10-01

    For wave propagation over distances of many wavelengths, high-order finite difference methods on staggered grids are widely used due to their excellent dispersion properties. However, the enforcement of boundary conditions in a stable manner and treatment of interface problems with discontinuous coefficients usually pose many challenges. In this work, we construct a provably stable and high-order-accurate finite difference method on staggered grids that can be applied to a broad class of boundary and interface problems. The staggered grid difference operators are in summation-by-parts form and when combined with a weak enforcement of the boundary conditions, lead to an energy stable method on multiblock grids. The general applicability of the method is demonstrated by simulating an explosive acoustic source, generating waves reflecting against a free surface and material discontinuity.
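    A minimal 1D staggered-grid leapfrog scheme illustrates why these grids are attractive for wave propagation; this sketch uses simple reflecting boundaries and does not reproduce the summation-by-parts boundary treatment the paper constructs:

```python
import numpy as np

# 1D acoustic system: p_t = -K v_x, v_t = -(1/rho) p_x,
# with pressure on integer nodes and velocity on half nodes.
nx, dx = 200, 1.0
K = rho = 1.0
c = np.sqrt(K / rho)
dt = 0.5 * dx / c                    # CFL-stable time step

p = np.exp(-0.01 * (np.arange(nx) * dx - 100.0)**2)  # Gaussian pressure pulse
v = np.zeros(nx + 1)                                 # staggered velocity points

for _ in range(100):
    v[1:-1] -= dt / (rho * dx) * (p[1:] - p[:-1])    # interior velocity update
    p -= dt * K / dx * (v[1:] - v[:-1])              # pressure update
```

The initial pulse splits into left- and right-going waves of half amplitude; on the staggered grid the centered differences stay compact and the scheme remains stable for c*dt/dx <= 1.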

  1. Separation and reconstruction of high pressure water-jet reflective sound signal based on ICA

    Science.gov (United States)

    Yang, Hongtao; Sun, Yuling; Li, Meng; Zhang, Dongsu; Wu, Tianfeng

    2011-12-01

    The impact of a high-pressure water-jet on targets of different materials produces different reflected mixed sounds. In order to reconstruct the reflected sound distribution on the linear detecting line accurately and to separate the environment noise effectively, the mixed sound signals acquired by a linear microphone array were processed by independent component analysis (ICA). The basic principle of ICA and the FastICA algorithm are described in detail. A simulation experiment was designed in which the environment noise was simulated using band-limited white noise and the reflected sound signal using a pulse signal. The attenuation of the reflected sound signal over different transmission distances was simulated by weighting the sound signal with different coefficients. The mixed sound signals acquired by the linear microphone array were synthesized from the above simulated signals and were whitened and separated by ICA. The final results verified that the environment noise can be separated and the sound distribution along the detecting line reconstructed effectively.
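    The whiten-then-separate pipeline the record describes can be sketched with a small numpy FastICA implementation; the two sources below are synthetic stand-ins for the pulse-like reflection and the background noise, not the authors' signal model:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 8, 2000)
s1 = np.sign(np.sin(3 * t))             # pulse-like source (stand-in for the reflection)
s2 = rng.normal(size=t.size)            # noise source (stand-in for environment noise)
S = np.vstack([s1, s2])
A = np.array([[1.0, 0.6], [0.5, 1.0]])  # unknown mixing at two microphones (assumed)
X = A @ S

# Step 1: whiten the mixtures (zero mean, identity covariance).
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(X @ X.T / X.shape[1])
Z = E @ np.diag(d ** -0.5) @ E.T @ X

# Step 2: symmetric FastICA with the tanh nonlinearity.
W = rng.normal(size=(2, 2))
for _ in range(200):
    G = np.tanh(W @ Z)
    W = G @ Z.T / Z.shape[1] - np.diag((1 - G ** 2).mean(axis=1)) @ W
    U, _, Vt = np.linalg.svd(W)
    W = U @ Vt                          # symmetric decorrelation: W <- (W W^T)^(-1/2) W
recovered = W @ Z
```

One of the recovered components matches the pulse-like source up to sign and scale, which is the usual ICA ambiguity.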

  2. Bayesian reconstruction of photon interaction sequences for high-resolution PET detectors

    Energy Technology Data Exchange (ETDEWEB)

    Pratx, Guillem; Levin, Craig S [Molecular Imaging Program at Stanford, Department of Radiology, Stanford, CA (United States)], E-mail: cslevin@stanford.edu

    2009-09-07

    Realizing the full potential of high-resolution positron emission tomography (PET) systems involves accurately positioning events in which the annihilation photon deposits all its energy across multiple detector elements. Reconstructing the complete sequence of interactions of each photon provides a reliable way to select the earliest interaction because it ensures that all the interactions are consistent with one another. Bayesian estimation forms a natural framework for maximizing the consistency of the sequence with the measurements while taking into account the physics of γ-ray transport. An inherently statistical method, it accounts for the uncertainty in the measured energy and position of each interaction. An algorithm based on maximum a posteriori (MAP) estimation was evaluated in computer simulations. For a high-resolution PET system based on cadmium zinc telluride detectors, 93.8% of the recorded coincidences involved at least one photon multiple-interactions event (PMIE). The MAP estimate of the first interaction was accurate for 85.2% of the single photons. This represents a two-fold reduction in the number of mispositioned events compared to minimum pair distance, a simpler yet efficient positioning method. The point-spread function of the system showed lower tails and a higher peak value when MAP was used. This translated into improved image quality, which we quantified by studying contrast and spatial resolution gains.

  3. Emittance reconstruction technique for the Linac4 high energy commissioning

    CERN Document Server

    Lallement, JB; Posocco, PA

    2012-01-01

    Linac4 is a new 160 MeV linear accelerator for negative Hydrogen ions (H-) presently under construction which will replace the 50 MeV proton Linac2 as injector for the CERN proton accelerator complex. Linac4 is 80 meters long and comprises a Low Energy Beam Transport line, a 3 MeV RFQ, a MEBT, a 50 MeV DTL, a 100 MeV CCDTL and a PIMS up to 160 MeV. The commissioning of the Linac is scheduled to start in 2013. It will be divided into several steps corresponding to the commissioning of the different accelerating structures. A temporary measurement bench will be dedicated to the high energy commissioning from 30 to 100 MeV (DTL tanks 2 and 3, and CCDTL). The commissioning of the PIMS will be done using the permanent equipment installed in between the end of the Linac and the main dump. This note describes the technique we will use for reconstructing the transverse emittances and the expected results.

  4. Efficient and accurate nearest neighbor and closest pair search in high-dimensional space

    KAUST Repository

    Tao, Yufei

    2010-07-01

    Nearest Neighbor (NN) search in high-dimensional space is an important problem in many applications. From the database perspective, a good solution needs to have two properties: (i) it can be easily incorporated in a relational database, and (ii) its query cost should increase sublinearly with the dataset size, regardless of the data and query distributions. Locality-Sensitive Hashing (LSH) is a well-known methodology fulfilling both requirements, but its current implementations either incur expensive space and query cost, or abandon its theoretical guarantee on the quality of query results. Motivated by this, we improve LSH by proposing an access method called the Locality-Sensitive B-tree (LSB-tree) to enable fast, accurate, high-dimensional NN search in relational databases. The combination of several LSB-trees forms a LSB-forest that has strong quality guarantees, but improves dramatically the efficiency of the previous LSH implementation having the same guarantees. In practice, the LSB-tree itself is also an effective index which consumes linear space, supports efficient updates, and provides accurate query results. In our experiments, the LSB-tree was faster than: (i) iDistance (a famous technique for exact NN search) by two orders of magnitude, and (ii) MedRank (a recent approximate method with nontrivial quality guarantees) by one order of magnitude, and meanwhile returned much better results. As a second step, we extend our LSB technique to solve another classic problem, called Closest Pair (CP) search, in high-dimensional space. The long-term challenge for this problem has been to achieve subquadratic running time at very high dimensionalities, which most of the existing solutions fail to do. We show that, using a LSB-forest, CP search can be accomplished in (worst-case) time significantly lower than the quadratic complexity, yet still ensuring very good quality. In practice, accurate answers can be found using just two LSB-trees, thus giving a substantial
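    The flavor of LSH that the LSB-tree builds on can be sketched with a toy p-stable hash table; this is plain E2LSH-style bucketing, not the paper's LSB-tree, and all parameters below are arbitrary:

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(2)
dim, n, L, w = 20, 5000, 8, 4.0
data = rng.normal(size=(n, dim))

# A composite hash: L random projections, each quantized with bucket width w.
a = rng.normal(size=(L, dim))
b = rng.uniform(0, w, size=L)

def bucket(x):
    return tuple(np.floor((a @ x + b) / w).astype(int))

table = defaultdict(list)
for i, x in enumerate(data):
    table[bucket(x)].append(i)

# Query with a near-duplicate of point 0: only its bucket is scanned,
# not the whole dataset.
query = data[0] + 0.001 * rng.normal(size=dim)
candidates = table.get(bucket(query), [])
```

Nearby points collide with high probability, so the candidate list is a tiny fraction of the library; the LSB-tree replaces this flat table with a B-tree over an ordering of such hash values, which is what yields the quality guarantees and sublinear query cost claimed above.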

  5. Use of Monocrystalline Silicon as Tool Material for Highly Accurate Blanking of Thin Metal Foils

    International Nuclear Information System (INIS)

    Hildering, Sven; Engel, Ulf; Merklein, Marion

    2011-01-01

    The trend towards miniaturisation of metallic mass-production components combined with increased component functionality is still unbroken. Manufacturing these components by forming and blanking offers economical and ecological advantages combined with the needed accuracy. The complexity of producing tools with geometries below 50 μm by conventional manufacturing methods becomes disproportionately higher. Expensive serial finishing operations are required to achieve an adequate surface roughness combined with accurate geometry details. A novel approach for producing such tools is the use of advanced etching technologies for monocrystalline silicon that are well established in microsystems technology. High-precision vertical geometries with a width down to 5 μm are possible. The present study shows a novel concept using this potential for the blanking of thin copper foils with monocrystalline silicon as the tool material. A self-contained machine tool with compact outer dimensions was designed to avoid tensile stresses in the brittle silicon punch by an accurate, careful alignment of the punch, die and metal foil. A microscopic analysis of the monocrystalline silicon punch shows appropriate properties regarding flank angle, edge geometry and surface quality for the blanking process. Using a monocrystalline silicon punch with a width of 70 μm, blanking experiments on as-rolled copper foils with a thickness of 20 μm demonstrate the general applicability of this material for micro production processes.

  6. Highly accurate thickness measurement of multi-layered automotive paints using terahertz technology

    Science.gov (United States)

    Krimi, Soufiene; Klier, Jens; Jonuscheit, Joachim; von Freymann, Georg; Urbansky, Ralph; Beigang, René

    2016-07-01

    In this contribution, we present a highly accurate approach for thickness measurements of multi-layered automotive paints using terahertz time domain spectroscopy in reflection geometry. The proposed method combines the benefits of a model-based material parameters extraction method to calibrate the paint coatings, a generalized Rouard's method to simulate the terahertz radiation behavior within arbitrary thin films, and the robustness of a powerful evolutionary optimization algorithm to increase the sensitivity of the minimum thickness measurement limit. Within the framework of this work, a self-calibration model is introduced, which takes into consideration the real industrial challenges such as the effect of wet-on-wet spray in the painting process.

  7. Highly accurate thickness measurement of multi-layered automotive paints using terahertz technology

    International Nuclear Information System (INIS)

    Krimi, Soufiene; Beigang, René; Klier, Jens; Jonuscheit, Joachim; Freymann, Georg von; Urbansky, Ralph

    2016-01-01

    In this contribution, we present a highly accurate approach for thickness measurements of multi-layered automotive paints using terahertz time domain spectroscopy in reflection geometry. The proposed method combines the benefits of a model-based material parameters extraction method to calibrate the paint coatings, a generalized Rouard's method to simulate the terahertz radiation behavior within arbitrary thin films, and the robustness of a powerful evolutionary optimization algorithm to increase the sensitivity of the minimum thickness measurement limit. Within the framework of this work, a self-calibration model is introduced, which takes into consideration the real industrial challenges such as the effect of wet-on-wet spray in the painting process.

  8. Accurate and emergent applications for high precision light small aerial remote sensing system

    International Nuclear Information System (INIS)

    Pei, Liu; Yingcheng, Li; Yanli, Xue; Xiaofeng, Sun; Qingwu, Hu

    2014-01-01

    In this paper, we focus on successful applications of accurate and emergent surveying and mapping with a high-precision, light, small aerial remote sensing system. First, the structure of the remote sensing system and its three integrated operation modes are introduced; the system can be combined into three operation modes depending on the application requirements. Second, we describe the preliminary results of a precision validation method for POS direct orientation in 1:500 mapping. Third, we present two fast-response mapping products, a regional continuous three-dimensional model and a digital surface model, with the efficiency and accuracy evaluation of the two products as an important point. The precision of both products meets the 1:2 000 topographic map accuracy specifications in the Pingdingshan area. In the end, conclusions and future work are summarized.

  9. Accurate and emergent applications for high precision light small aerial remote sensing system

    Science.gov (United States)

    Pei, Liu; Yingcheng, Li; Yanli, Xue; Qingwu, Hu; Xiaofeng, Sun

    2014-03-01

In this paper, we focus on the successful applications of accurate and emergent surveying and mapping with a high-precision, light, small aerial remote sensing system. First, the structure of the remote sensing system and its three integrated operation modes are introduced; the system can be configured into any of the three modes depending on the application requirements. Second, we describe the preliminary results of a precision validation method for POS direct orientation in 1:500 mapping. Third, two fast-response mapping products are presented, a regional continuous three-dimensional model and a digital surface model, with the efficiency and accuracy evaluation of the two products as the central point. The precision of both products meets the 1:2 000 topographic map accuracy specifications in the Pingdingshan area. Finally, conclusions and future work are summarized.

  10. An Integrated GNSS/INS/LiDAR-SLAM Positioning Method for Highly Accurate Forest Stem Mapping

    Directory of Open Access Journals (Sweden)

    Chuang Qian

    2016-12-01

Full Text Available Forest mapping, one of the main components of performing a forest inventory, is an important driving force in the development of laser scanning. Mobile laser scanning (MLS), in which laser scanners are installed on moving platforms, has been studied as a convenient measurement method for forest mapping in the past several years. Positioning and attitude accuracies are important for forest mapping using MLS systems. Inertial Navigation Systems (INSs) and Global Navigation Satellite Systems (GNSSs) are typical and popular positioning and attitude sensors used in MLS systems. In forest environments, because of the loss of signal due to occlusion and severe multipath effects, the positioning accuracy of GNSS is severely degraded, and even that of GNSS/INS decreases considerably. Light Detection and Ranging (LiDAR)-based Simultaneous Localization and Mapping (SLAM) can achieve higher positioning accuracy in environments containing many features and is commonly implemented in GNSS-denied indoor environments. Forests differ from an indoor environment in that the GNSS signal is available to some extent in a forest. Although the positioning accuracy of GNSS/INS is reduced, estimates of heading angle and velocity can remain highly accurate even with fewer satellites. GNSS/INS and the LiDAR-based SLAM technique can be effectively integrated to form a sustainable, highly accurate positioning and mapping solution for use in forests without additional hardware costs. In this study, information such as heading angles and velocities extracted from a GNSS/INS is utilized to improve the positioning accuracy of the SLAM solution, and two information-aided SLAM methods are proposed. First, a heading angle-aided SLAM (H-aided SLAM) method is proposed that supplies the heading angle from GNSS/INS to SLAM.
Field test results show that the horizontal positioning accuracy of an entire trajectory of 800 m is 0.13 m and is significantly improved (by 70% compared to that

  11. Fast and accurate: high-speed metrological large-range AFM for surface and nanometrology

    Science.gov (United States)

    Dai, Gaoliang; Koenders, Ludger; Fluegge, Jens; Hemmleb, Matthias

    2018-05-01

Low measurement speed remains a major shortcoming of the scanning probe microscopic technique. It not only leads to low measurement throughput, but also to significant measurement drift over the long measurement time needed (up to hours or even days). To overcome this challenge, PTB, the national metrology institute of Germany, has developed a high-speed metrological large-range atomic force microscope (HS Met. LR-AFM) capable of measurement speeds up to 1 mm s‑1. This paper introduces the design concept in detail. After modelling scanning probe microscopic measurements, our results suggest that the signal spectrum of the surface to be measured is the spatial spectrum of the surface scaled by the scanning speed. The higher the scanning speed, the broader the spectrum to be measured. To realise an accurate HS Met. LR-AFM, our solution is to combine different stages/sensors synchronously in measurements, which provides a much larger spectrum area for high-speed measurement capability. Two application examples are demonstrated. The first is a new concept called reference areal surface metrology. Using the developed HS Met. LR-AFM, surfaces are measured accurately and traceably at a speed of 500 µm s‑1 and the results are applied as a reference 3D data map of the surfaces. By correlating the reference 3D data sets and the 3D data sets of tools under calibration, which are measured on the same surface, it has the potential to comprehensively characterise the tools, for instance, the spectrum properties of the tools. The investigation results of two commercial confocal microscopes are demonstrated, indicating very promising results. The second example is the calibration of a kind of 3D nano standard, which has spatially distributed landmarks, i.e. special unique features defined by 3D-coordinates. Experimental investigations confirmed that the calibration accuracy is maintained at a measurement speed of 100 µm s‑1, which improves the calibration efficiency by a
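The paper's central modelling observation, that the temporal signal spectrum is the surface's spatial spectrum scaled by the scanning speed (f = v·ν), is easy to verify numerically; the surface period and scan speed below are invented:

```python
import numpy as np

def dominant_frequency(signal, sample_rate):
    """Return the dominant temporal frequency (Hz) of a real signal."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return freqs[np.argmax(spectrum)]

def scanned_signal(spatial_freq, speed, sample_rate, duration):
    """Signal seen by a probe scanning a sinusoidal surface at constant
    speed: height(x) = sin(2*pi*nu*x), sampled along x = v*t."""
    n = int(round(duration * sample_rate))
    t = np.arange(n) / sample_rate
    return np.sin(2 * np.pi * spatial_freq * speed * t)

# a surface with 5 cycles/um scanned at 2 um/s appears as a 10 Hz signal
sig = scanned_signal(spatial_freq=5.0, speed=2.0, sample_rate=1000.0, duration=1.0)
```

Doubling the scan speed doubles the temporal frequency, which is exactly why faster scanning demands stages and sensors with wider bandwidth.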

  12. High Fidelity Non-Gravitational Force Models for Precise and Accurate Orbit Determination of TerraSAR-X

    Science.gov (United States)

Hackel, Stefan; Montenbruck, Oliver; Steigenberger, Peter; Eineder, Michael; Gisinger, Christoph

    Remote sensing satellites support a broad range of scientific and commercial applications. The two radar imaging satellites TerraSAR-X and TanDEM-X provide spaceborne Synthetic Aperture Radar (SAR) and interferometric SAR data with a very high accuracy. The increasing demand for precise radar products relies on sophisticated validation methods, which require precise and accurate orbit products. Basically, the precise reconstruction of the satellite’s trajectory is based on the Global Positioning System (GPS) measurements from a geodetic-grade dual-frequency receiver onboard the spacecraft. The Reduced Dynamic Orbit Determination (RDOD) approach utilizes models for the gravitational and non-gravitational forces. Following a proper analysis of the orbit quality, systematics in the orbit products have been identified, which reflect deficits in the non-gravitational force models. A detailed satellite macro model is introduced to describe the geometry and the optical surface properties of the satellite. Two major non-gravitational forces are the direct and the indirect Solar Radiation Pressure (SRP). Due to the dusk-dawn orbit configuration of TerraSAR-X, the satellite is almost constantly illuminated by the Sun. Therefore, the direct SRP has an effect on the lateral stability of the determined orbit. The indirect effect of the solar radiation principally contributes to the Earth Radiation Pressure (ERP). The resulting force depends on the sunlight, which is reflected by the illuminated Earth surface in the visible, and the emission of the Earth body in the infrared spectra. Both components of ERP require Earth models to describe the optical properties of the Earth surface. Therefore, the influence of different Earth models on the orbit quality is assessed within the presentation. The presentation highlights the influence of non-gravitational force and satellite macro models on the orbit quality of TerraSAR-X.
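For a single plate of such a satellite macro model, a commonly used direct-SRP expression keeps only absorption and specular reflection, a = −(Φ A cosθ / m c)[(1 − ρ) ŝ + 2ρ cosθ n̂]. The sketch below is this generic textbook formula, not the TerraSAR-X model itself (which also treats diffuse reflection and ERP); the plate geometry and reflectivity are invented:

```python
SOLAR_FLUX = 1367.0      # nominal solar flux at 1 AU, W/m^2
C_LIGHT = 299792458.0    # speed of light, m/s

def srp_acceleration(area, mass, rho_spec, sun_hat, normal_hat):
    """Acceleration (m/s^2) on a flat plate from direct solar radiation
    pressure, keeping absorption and specular reflection only.
    sun_hat: unit vector from the plate towards the Sun;
    normal_hat: outward unit normal of the plate."""
    cos_theta = sum(s * n for s, n in zip(sun_hat, normal_hat))
    if cos_theta <= 0.0:  # plate faces away from the Sun: no illumination
        return (0.0, 0.0, 0.0)
    k = SOLAR_FLUX * area * cos_theta / (mass * C_LIGHT)
    return tuple(-k * ((1.0 - rho_spec) * s + 2.0 * rho_spec * cos_theta * n)
                 for s, n in zip(sun_hat, normal_hat))
```

For a fully absorbing plate facing the Sun the magnitude reduces to ΦA/(mc), and a perfect mirror doubles it, the two limiting cases usually used to validate such a model term.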

  13. Pairagon: a highly accurate, HMM-based cDNA-to-genome aligner.

    Science.gov (United States)

    Lu, David V; Brown, Randall H; Arumugam, Manimozhiyan; Brent, Michael R

    2009-07-01

    The most accurate way to determine the intron-exon structures in a genome is to align spliced cDNA sequences to the genome. Thus, cDNA-to-genome alignment programs are a key component of most annotation pipelines. The scoring system used to choose the best alignment is a primary determinant of alignment accuracy, while heuristics that prevent consideration of certain alignments are a primary determinant of runtime and memory usage. Both accuracy and speed are important considerations in choosing an alignment algorithm, but scoring systems have received much less attention than heuristics. We present Pairagon, a pair hidden Markov model based cDNA-to-genome alignment program, as the most accurate aligner for sequences with high- and low-identity levels. We conducted a series of experiments testing alignment accuracy with varying sequence identity. We first created 'perfect' simulated cDNA sequences by splicing the sequences of exons in the reference genome sequences of fly and human. The complete reference genome sequences were then mutated to various degrees using a realistic mutation simulator and the perfect cDNAs were aligned to them using Pairagon and 12 other aligners. To validate these results with natural sequences, we performed cross-species alignment using orthologous transcripts from human, mouse and rat. We found that aligner accuracy is heavily dependent on sequence identity. For sequences with 100% identity, Pairagon achieved accuracy levels of >99.6%, with one quarter of the errors of any other aligner. Furthermore, for human/mouse alignments, which are only 85% identical, Pairagon achieved 87% accuracy, higher than any other aligner. Pairagon source and executables are freely available at http://mblab.wustl.edu/software/pairagon/
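Pairagon scores alignments probabilistically with a pair HMM, but the influence of a scoring system is visible even in a bare-bones global aligner. The following is a generic Needleman-Wunsch sketch with arbitrary match/mismatch/gap scores (linear gap cost), not Pairagon's model:

```python
def nw_score(a, b, match=1, mismatch=-1, gap=-2):
    """Optimal global alignment score (Needleman-Wunsch, linear gap cost),
    computed with a rolling dynamic-programming row."""
    rows, cols = len(a) + 1, len(b) + 1
    prev = [j * gap for j in range(cols)]  # DP row for the empty prefix of a
    for i in range(1, rows):
        cur = [i * gap]
        for j in range(1, cols):
            diag = prev[j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            cur.append(max(diag, prev[j] + gap, cur[j - 1] + gap))
        prev = cur
    return prev[-1]
```

Changing the mismatch/gap penalties changes which alignment wins, which is the abstract's point that the scoring system, not just the heuristics, determines accuracy.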

  14. Raman spectroscopy for highly accurate estimation of the age of refrigerated porcine muscle

    Science.gov (United States)

    Timinis, Constantinos; Pitris, Costas

    2016-03-01

The high water content of meat, combined with all the nutrients it contains, makes it vulnerable to spoilage at all stages of production and storage, even when refrigerated at 5 °C. A non-destructive and in situ tool for meat sample testing, which could provide an accurate indication of the storage time of meat, would be very useful for the control of meat quality as well as for consumer safety. The proposed solution is based on Raman spectroscopy, which is non-invasive and can be applied in situ. For the purposes of this project, 42 meat samples from 14 animals were obtained and three Raman spectra per sample were collected every two days for two weeks. The spectra were subsequently processed and the sample age was calculated using a set of linear differential equations. In addition, the samples were classified into categories corresponding to age in 2-day steps (i.e., 0, 2, 4, 6, 8, 10, 12 or 14 days old), using linear discriminant analysis and cross-validation. Contrary to other studies, where the samples were simply grouped into two categories (higher or lower quality, suitable or unsuitable for human consumption, etc.), in this study the age was predicted with a mean error of ~1 day (20%) or classified, in 2-day steps, with 100% accuracy. Although Raman spectroscopy has been used in the past for the analysis of meat samples, the proposed methodology predicts sample age far more accurately than any report in the literature.
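The classification step pairs a discriminant with cross-validation. As a stand-in for the study's linear discriminant analysis, a nearest-centroid classifier with leave-one-out cross-validation shows the evaluation pattern; the one-dimensional "spectral features" are invented:

```python
def _sqdist(u, v):
    """Squared Euclidean distance between two feature vectors."""
    return sum((a - b) ** 2 for a, b in zip(u, v))

def nearest_centroid_loocv(features, labels):
    """Leave-one-out cross-validated accuracy of a nearest-centroid
    classifier; features is a list of feature vectors."""
    correct = 0
    for i in range(len(features)):
        # class centroids computed without the held-out sample
        groups = {}
        for j, lab in enumerate(labels):
            if j != i:
                groups.setdefault(lab, []).append(features[j])
        means = {lab: [sum(col) / len(col) for col in zip(*vecs)]
                 for lab, vecs in groups.items()}
        pred = min(means, key=lambda lab: _sqdist(features[i], means[lab]))
        correct += (pred == labels[i])
    return correct / len(features)
```

On cleanly separated synthetic classes the cross-validated accuracy is 100%, mirroring how the paper reports its 2-day-step classification result.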

  15. Accurate determination of high-risk coronary lesion type by multidetector cardiac computed tomography.

    Science.gov (United States)

    Alasnag, Mirvat; Umakanthan, Branavan; Foster, Gary P

    2008-07-01

Coronary arteriography (CA) is the standard method to image coronary lesions. Multidetector cardiac computed tomography (MDCT) provides high-resolution images of coronary arteries, allowing a noninvasive alternative to determine lesion type. To date, no studies have assessed the ability of MDCT to categorize coronary lesion types. The objective of this study was to determine the accuracy of lesion type categorization by MDCT using CA as a reference standard. Patients who underwent both MDCT and CA within 2 months of each other were enrolled. MDCT and CA images were reviewed in a blinded fashion. Lesions were categorized according to the SCAI classification system (Types I-IV). The origin and the proximal and middle segments of the major arteries were analyzed. Each segment comprised a data point for comparison. Analysis was performed using the Spearman correlation test. Four hundred eleven segments were studied, of which 110 had lesions. The lesion distribution was as follows: 35 left anterior descending (LAD), 29 circumflex (Cx), 31 right coronary artery (RCA), 2 ramus intermedius, 8 diagonal, 4 obtuse marginal and 2 left internal mammary arteries. Correlations between MDCT and CA were significant in all major vessels (LAD, Cx, RCA) (p < 0.001). The overall correlation coefficient was 0.67. Concordance was strong for lesion Types II-IV (97%) and poor for Type I (30%). High-risk coronary lesion types can be accurately categorized by MDCT. This ability may allow MDCT to play an important noninvasive role in the planning of coronary interventions.
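The Spearman statistic used to compare the two modalities is a Pearson correlation computed on mid-ranks (ties get the average of their rank positions); a small self-contained version:

```python
def _ranks(values):
    """Mid-ranks: tied values receive the average of their 1-based ranks."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the mid-ranks."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5
```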

  16. Accurate, high-throughput typing of copy number variation using paralogue ratios from dispersed repeats.

    Science.gov (United States)

    Armour, John A L; Palla, Raquel; Zeeuwen, Patrick L J M; den Heijer, Martin; Schalkwijk, Joost; Hollox, Edward J

    2007-01-01

    Recent work has demonstrated an unexpected prevalence of copy number variation in the human genome, and has highlighted the part this variation may play in predisposition to common phenotypes. Some important genes vary in number over a high range (e.g. DEFB4, which commonly varies between two and seven copies), and have posed formidable technical challenges for accurate copy number typing, so that there are no simple, cheap, high-throughput approaches suitable for large-scale screening. We have developed a simple comparative PCR method based on dispersed repeat sequences, using a single pair of precisely designed primers to amplify products simultaneously from both test and reference loci, which are subsequently distinguished and quantified via internal sequence differences. We have validated the method for the measurement of copy number at DEFB4 by comparison of results from >800 DNA samples with copy number measurements by MAPH/REDVR, MLPA and array-CGH. The new Paralogue Ratio Test (PRT) method can require as little as 10 ng genomic DNA, appears to be comparable in accuracy to the other methods, and for the first time provides a rapid, simple and inexpensive method for copy number analysis, suitable for application to typing thousands of samples in large case-control association studies.
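The PRT readout reduces to the ratio of co-amplified test and reference peak areas. A toy estimator, assuming a two-copy reference locus (the areas and ratios below are invented, and real assays calibrate against known-copy-number controls):

```python
def prt_copy_number(test_peak_area, ref_peak_area, ref_copies=2):
    """Estimate the integer copy number at the test locus from the
    test/reference paralogue ratio of a single PRT reaction."""
    ratio = test_peak_area / ref_peak_area
    return round(ratio * ref_copies)

def consensus_copy_number(ratios, ref_copies=2):
    """Combine replicate measurements by averaging the raw ratios first,
    then rounding once, which is more robust than rounding per replicate."""
    mean_ratio = sum(ratios) / len(ratios)
    return round(mean_ratio * ref_copies)
```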

  17. HIGHLY-ACCURATE MODEL ORDER REDUCTION TECHNIQUE ON A DISCRETE DOMAIN

    Directory of Open Access Journals (Sweden)

    L. D. Ribeiro

    2015-09-01

Full Text Available In this work, we present a highly-accurate technique of model order reduction applied to staged processes. The proposed method reduces the dimension of the original system based on null values of moment-weighted sums of heat and mass balance residuals on real stages. To compute these sums of weighted residuals, a discrete form of Gauss-Lobatto quadrature was developed, allowing a high degree of accuracy in these calculations. The locations where the residuals are cancelled vary with time and operating conditions, characterizing a desirable adaptive nature of this technique. Balances related to upstream and downstream devices (such as the condenser, reboiler, and feed tray of a distillation column) are considered as boundary conditions of the corresponding difference-differential equation system. The chosen number of moments is the dimension of the reduced model; it is much lower than the dimension of the complete model and does not depend on the size of the original model. Scaling of the discrete independent variable related to the stages was crucial for the computational implementation of the proposed method, avoiding the accumulation of round-off errors present even in low-degree polynomial approximations in the original discrete variable. Dynamical simulations of distillation columns were carried out to check the performance of the proposed model order reduction technique. The obtained results show the superiority of the proposed procedure in comparison with the orthogonal collocation method.

  18. Electro-optical system for the high speed reconstruction of computed tomography images

    International Nuclear Information System (INIS)

    Tresp, V.

    1989-01-01

    An electro-optical system for the high-speed reconstruction of computed tomography (CT) images has been built and studied. The system is capable of reconstructing high-contrast and high-resolution images at video rate (30 images per second), which is more than two orders of magnitude faster than the reconstruction rate achieved by special purpose digital computers used in commercial CT systems. The filtered back-projection algorithm which was implemented in the reconstruction system requires the filtering of all projections with a prescribed filter function. A space-integrating acousto-optical convolver, a surface acoustic wave filter and a digital finite-impulse response filter were used for this purpose and their performances were compared. The second part of the reconstruction, the back projection of the filtered projections, is computationally very expensive. An optical back projector has been built which maps the filtered projections onto the two-dimensional image space using an anamorphic lens system and a prism image rotator. The reconstructed image is viewed by a video camera, routed through a real-time image-enhancement system, and displayed on a TV monitor. The system reconstructs parallel-beam projection data, and in a modified version, is also capable of reconstructing fan-beam projection data. This extension is important since the latter are the kind of projection data actually acquired in high-speed X-ray CT scanners. The reconstruction system was tested by reconstructing precomputed projection data of phantom images. These were stored in a special purpose projection memory and transmitted to the reconstruction system as an electronic signal. In this way, a projection measurement system that acquires projections sequentially was simulated
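The two stages the system implements optically, ramp filtering of each projection and back projection over angle, can be prototyped digitally. The sketch below uses a parallel-beam geometry and a point phantom (a numerical illustration of the filtered back-projection algorithm, not the electro-optical hardware):

```python
import numpy as np

def ramp_filter(projections):
    """Apply the FBP ramp filter |f| to each row (projection) via the FFT."""
    n_det = projections.shape[1]
    freqs = np.fft.fftfreq(n_det)
    return np.real(np.fft.ifft(np.fft.fft(projections, axis=1) * np.abs(freqs), axis=1))

def back_project(filtered, angles, size):
    """Accumulate filtered projections over all view angles (parallel beam,
    nearest-sample interpolation along the detector axis)."""
    image = np.zeros((size, size))
    centre = (size - 1) / 2.0
    ys, xs = np.mgrid[0:size, 0:size]
    xs, ys = xs - centre, ys - centre
    det = np.arange(filtered.shape[1]) - (filtered.shape[1] - 1) / 2.0
    for proj, theta in zip(filtered, angles):
        t = xs * np.cos(theta) + ys * np.sin(theta)  # detector coordinate of each pixel
        image += np.interp(t, det, proj)
    return image

# point object at the centre: every projection is a centred spike
n_angles, n_det = 60, 65
angles = np.linspace(0.0, np.pi, n_angles, endpoint=False)
sinogram = np.zeros((n_angles, n_det))
sinogram[:, n_det // 2] = 1.0
recon = back_project(ramp_filter(sinogram), angles, size=65)
```

The reconstruction of the point phantom peaks at the image centre, the minimal correctness check for a filter/back-projector pair.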

  19. Validation of a method for accurate and highly reproducible quantification of brain dopamine transporter SPECT studies

    DEFF Research Database (Denmark)

    Jensen, Peter S; Ziebell, Morten; Skouboe, Glenna

    2011-01-01

In nuclear medicine brain imaging, it is important to delineate regions of interest (ROIs) so that the outcome is both accurate and reproducible. The purpose of this study was to validate a new time-saving algorithm (DATquan) for accurate and reproducible quantification of the striatal dopamine transporter (DAT) with appropriate radioligands and SPECT and without the need for structural brain scanning.

  20. Application of a Laplace transform pair model for high-energy x-ray spectral reconstruction.

    Science.gov (United States)

    Archer, B R; Almond, P R; Wagner, L K

    1985-01-01

    A Laplace transform pair model, previously shown to accurately reconstruct x-ray spectra at diagnostic energies, has been applied to megavoltage energy beams. The inverse Laplace transforms of 2-, 6-, and 25-MV attenuation curves were evaluated to determine the energy spectra of these beams. The 2-MV data indicate that the model can reliably reconstruct spectra in the low megavoltage range. Experimental limitations in acquiring the 6-MV transmission data demonstrate the sensitivity of the model to systematic experimental error. The 25-MV data result in a physically realistic approximation of the present spectrum.

  1. Continuous Digital Light Processing (cDLP): Highly Accurate Additive Manufacturing of Tissue Engineered Bone Scaffolds.

    Science.gov (United States)

Dean, David; Wallace, Jonathan; Siblani, Ali; Wang, Martha O; Kim, Kyobum; Mikos, Antonios G; Fisher, John P

    2012-03-01

Highly accurate rendering of the external and internal geometry of bone tissue engineering scaffolds affects fit at the defect site, loading of internal pore spaces with cells, bioreactor-delivered nutrient and growth factor circulation, and scaffold resorption. It may be necessary to render resorbable polymer scaffolds with 50 μm or less accuracy to achieve these goals. This level of accuracy is available using Continuous Digital Light Processing (cDLP), which utilizes a DLP(®) (Texas Instruments, Dallas, TX) chip. One such additive manufacturing device is the envisionTEC (Ferndale, MI) Perfactory(®). To use cDLP we integrate a photo-crosslinkable polymer, a photo-initiator, and a biocompatible dye. The dye attenuates light, thereby limiting the depth of polymerization. In this study we fabricated scaffolds using the well-studied resorbable polymer, poly(propylene fumarate) (PPF), with titanium dioxide (TiO(2)) as a dye, Irgacure(®) 819 (BASF [Ciba], Florham Park, NJ) as an initiator, and diethyl fumarate as a solvent to control viscosity.

  2. Development of highly accurate approximate scheme for computing the charge transfer integral

    Energy Technology Data Exchange (ETDEWEB)

    Pershin, Anton; Szalay, Péter G. [Laboratory for Theoretical Chemistry, Institute of Chemistry, Eötvös Loránd University, P.O. Box 32, H-1518 Budapest (Hungary)

    2015-08-21

    The charge transfer integral is a key parameter required by various theoretical models to describe charge transport properties, e.g., in organic semiconductors. The accuracy of this important property depends on several factors, which include the level of electronic structure theory and internal simplifications of the applied formalism. The goal of this paper is to identify the performance of various approximate approaches of the latter category, while using the high level equation-of-motion coupled cluster theory for the electronic structure. The calculations have been performed on the ethylene dimer as one of the simplest model systems. By studying different spatial perturbations, it was shown that while both energy split in dimer and fragment charge difference methods are equivalent with the exact formulation for symmetrical displacements, they are less efficient when describing transfer integral along the asymmetric alteration coordinate. Since the “exact” scheme was found computationally expensive, we examine the possibility to obtain the asymmetric fluctuation of the transfer integral by a Taylor expansion along the coordinate space. By exploring the efficiency of this novel approach, we show that the Taylor expansion scheme represents an attractive alternative to the “exact” calculations due to a substantial reduction of computational costs, when a considerably large region of the potential energy surface is of interest. Moreover, we show that the Taylor expansion scheme, irrespective of the dimer symmetry, is very accurate for the entire range of geometry fluctuations that cover the space the molecule accesses at room temperature.
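The energy-split-in-dimer estimate mentioned in the abstract reads the transfer integral off the adiabatic splitting: for a symmetric dimer, J = (E2 − E1)/2. A two-level illustration with invented site energy and coupling:

```python
import numpy as np

def esd_transfer_integral(site_energy, coupling):
    """Recover the coupling J from the eigenvalue splitting of a symmetric
    two-site Hamiltonian H = [[e, J], [J, e]], whose eigenvalues are e +/- J."""
    h = np.array([[site_energy, coupling], [coupling, site_energy]])
    e1, e2 = np.linalg.eigvalsh(h)  # ascending order
    return (e2 - e1) / 2.0
```

For asymmetric displacements the site energies differ and the bare splitting overestimates J, which is precisely the regime where the paper finds the ESD scheme less reliable.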

  3. A high quality Arabidopsis transcriptome for accurate transcript-level analysis of alternative splicing

    KAUST Repository

    Zhang, Runxuan

    2017-04-05

Alternative splicing generates multiple transcript and protein isoforms from the same gene and thus is important in gene expression regulation. To date, RNA-sequencing (RNA-seq) is the standard method for quantifying changes in alternative splicing on a genome-wide scale. Understanding the current limitations of RNA-seq is crucial for reliable analysis and the lack of high quality, comprehensive transcriptomes for most species, including model organisms such as Arabidopsis, is a major constraint in accurate quantification of transcript isoforms. To address this, we designed a novel pipeline with stringent filters and assembled a comprehensive Reference Transcript Dataset for Arabidopsis (AtRTD2) containing 82,190 non-redundant transcripts from 34,212 genes. Extensive experimental validation showed that AtRTD2 and its modified version, AtRTD2-QUASI, for use in Quantification of Alternatively Spliced Isoforms, outperform other available transcriptomes in RNA-seq analysis. This strategy can be implemented in other species to build a pipeline for transcript-level expression and alternative splicing analyses.

  4. A high quality Arabidopsis transcriptome for accurate transcript-level analysis of alternative splicing

    KAUST Repository

Zhang, Runxuan; Calixto, Cristiane P. G.; Marquez, Yamile; Venhuizen, Peter; Tzioutziou, Nikoleta A.; Guo, Wenbin; Spensley, Mark; Entizne, Juan Carlos; Lewandowska, Dominika; ten Have, Sara; Frei dit Frey, Nicolas; Hirt, Heribert; James, Allan B.; Nimmo, Hugh G.; Barta, Andrea; Kalyna, Maria; Brown, John W. S.

    2017-01-01

Alternative splicing generates multiple transcript and protein isoforms from the same gene and thus is important in gene expression regulation. To date, RNA-sequencing (RNA-seq) is the standard method for quantifying changes in alternative splicing on a genome-wide scale. Understanding the current limitations of RNA-seq is crucial for reliable analysis and the lack of high quality, comprehensive transcriptomes for most species, including model organisms such as Arabidopsis, is a major constraint in accurate quantification of transcript isoforms. To address this, we designed a novel pipeline with stringent filters and assembled a comprehensive Reference Transcript Dataset for Arabidopsis (AtRTD2) containing 82,190 non-redundant transcripts from 34,212 genes. Extensive experimental validation showed that AtRTD2 and its modified version, AtRTD2-QUASI, for use in Quantification of Alternatively Spliced Isoforms, outperform other available transcriptomes in RNA-seq analysis. This strategy can be implemented in other species to build a pipeline for transcript-level expression and alternative splicing analyses.

  5. Achieving accurate simulations of urban impacts on ozone at high resolution

    International Nuclear Information System (INIS)

    Li, J; Georgescu, M; Mahalov, A; Moustaoui, M; Hyde, P

    2014-01-01

The effects of urbanization on ozone levels have been widely investigated over cities primarily located in temperate and/or humid regions. In this study, nested WRF-Chem simulations with a finest grid resolution of 1 km are conducted to investigate ozone concentrations [O3] due to urbanization within cities in arid/semi-arid environments. First, a method based on a shape-preserving Monotonic Cubic Interpolation (MCI) is developed and used to downscale anthropogenic emissions from the 4 km resolution 2005 National Emissions Inventory (NEI05) to the finest model resolution of 1 km. Using the rapidly expanding Phoenix metropolitan region as the area of focus, we demonstrate the proposed MCI method achieves ozone simulation results with appreciably improved correspondence to observations relative to the default interpolation method of the WRF-Chem system. Next, two additional sets of experiments are conducted, with the recommended MCI approach, to examine impacts of urbanization on ozone production: (1) the urban land cover is included (i.e., urbanization experiments) and, (2) the urban land cover is replaced with the region’s native shrubland. Impacts due to the presence of the built environment on [O3] are highly heterogeneous across the metropolitan area. Increased near-surface [O3] due to urbanization of 10–20 ppb is predominantly a nighttime phenomenon, while simulated impacts during daytime are negligible. Urbanization narrows the daily [O3] range (by virtue of increasing nighttime minima), an impact largely due to the region’s urban heat island. Our results demonstrate the importance of the MCI method for accurate representation of the diurnal profile of ozone, and highlight its utility for high-resolution air quality simulations for urban areas. (letter)
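A shape-preserving monotonic cubic interpolant of the kind used for the emission downscaling can be built from Fritsch-Butland slopes plus cubic Hermite evaluation (a generic construction; the authors' exact MCI scheme may differ in detail):

```python
def mci_slopes(x, y):
    """Fritsch-Butland slopes for monotone cubic interpolation: zero at
    local extrema, otherwise a weighted harmonic mean of adjacent secants,
    which keeps each slope below 3x the smaller secant (monotone region)."""
    n = len(x)
    h = [x[i + 1] - x[i] for i in range(n - 1)]
    d = [(y[i + 1] - y[i]) / h[i] for i in range(n - 1)]
    m = [0.0] * n
    m[0], m[-1] = d[0], d[-1]
    for i in range(1, n - 1):
        if d[i - 1] * d[i] > 0:
            w1 = 2 * h[i] + h[i - 1]
            w2 = h[i] + 2 * h[i - 1]
            m[i] = (w1 + w2) / (w1 / d[i - 1] + w2 / d[i])
    return m

def mci_eval(x, y, xq):
    """Evaluate the shape-preserving cubic Hermite interpolant at scalar xq."""
    m = mci_slopes(x, y)
    i = 0
    while i < len(x) - 2 and xq > x[i + 1]:
        i += 1
    h = x[i + 1] - x[i]
    t = (xq - x[i]) / h
    h00 = (1 + 2 * t) * (1 - t) ** 2
    h10 = t * (1 - t) ** 2
    h01 = t * t * (3 - 2 * t)
    h11 = t * t * (t - 1)
    return h00 * y[i] + h10 * h * m[i] + h01 * y[i + 1] + h11 * h * m[i + 1]
```

Unlike an unconstrained cubic spline, this interpolant cannot overshoot between monotone data points, which is exactly the property that prevents spurious negative emission values when downscaling.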

  6. Highly accurate determination of relative gamma-ray detection efficiency for Ge detector and its application

    International Nuclear Information System (INIS)

    Miyahara, H.; Mori, C.; Fleming, R.F.; Dewaraja, Y.K.

    1997-01-01

When quantitative measurements of γ-rays using High-Purity Ge (HPGe) detectors are made for a variety of applications, accurate knowledge of the γ-ray detection efficiency is required. The emission rates of γ-rays from sources can be determined quickly when the absolute peak efficiency is calibrated. On the other hand, relative peak efficiencies can be used for determining intensity ratios for multiple samples and for comparison to a standard source. Thus, both absolute and relative detection efficiencies are important in the use of γ-ray detectors. The objective of this work is to determine the relative gamma-ray peak detection efficiency for an HPGe detector with an uncertainty approaching 0.1%. We used nuclides which emit at least two gamma-rays with energies from 700 to 2400 keV for which the relative emission probabilities are known with uncertainties much smaller than 0.1%. The relative peak detection efficiencies were calculated from measurements of the nuclides 46Sc, 48Sc, 60Co and 94Nb, each emitting two γ-rays with emission probabilities of almost unity. It is important that the various corrections for the emission probabilities, the cascade summing effect, and the self-absorption are small. A third-order polynomial function on logarithmic scales of both energy and efficiency was fitted to the data, and the peak efficiency predicted at a given energy from the covariance matrix showed an uncertainty of less than 0.5% except near 700 keV. As an application, the emission probabilities of the 1037.5 and 1212.9 keV γ-rays of 48Sc were determined using this highly precise relative peak efficiency function. They were 0.9777 ± 0.00079 and 0.02345 ± 0.00017 for the 1037.5 and 1212.9 keV γ-rays, respectively. The sum of these probabilities is close to unity within the uncertainty, indicating that the results are highly reliable and that the accuracy has been improved considerably.
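The third-order log-log polynomial fit of the efficiency curve is ordinary least squares after a log transform; a sketch with synthetic efficiencies (the coefficients and energies below are invented, not the paper's data):

```python
import math
import numpy as np

def fit_log_efficiency(energies_kev, efficiencies, degree=3):
    """Least-squares polynomial fit of ln(efficiency) against ln(energy),
    centred in log-energy for numerical conditioning.  Returns a callable
    predicting efficiency at an arbitrary energy."""
    loge = np.log(energies_kev)
    mid = loge.mean()
    coeffs = np.polyfit(loge - mid, np.log(efficiencies), degree)
    return lambda e: float(np.exp(np.polyval(coeffs, np.log(e) - mid)))

# synthetic relative efficiencies following an exact cubic in log-log space
def true_eff(e):
    le = math.log(e)
    return math.exp(0.5 - 0.9 * le + 0.02 * le ** 2 - 0.001 * le ** 3)

energies = [700, 900, 1100, 1300, 1600, 2000, 2400]
predict = fit_log_efficiency(energies, [true_eff(e) for e in energies])
```

Because the synthetic data are exactly cubic in log-log space, the fit should reproduce them to numerical precision, including at interpolated energies such as 1212.9 keV.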

  7. In-depth glycoproteomic characterization of γ-conglutin by high-resolution accurate mass spectrometry.

    Directory of Open Access Journals (Sweden)

    Silvia Schiarea

Full Text Available The molecular characterization of bioactive food components is necessary for understanding the mechanisms of their beneficial or detrimental effects on human health. This study focused on γ-conglutin, a well-known lupin seed N-glycoprotein with health-promoting properties and controversial allergenic potential. Given the importance of N-glycosylation for the functional and structural characteristics of proteins, we studied the purified protein by a mass spectrometry-based glycoproteomic approach able to identify the structure, micro-heterogeneity and attachment site of the bound N-glycan(s), and to provide extensive coverage of the protein sequence. The peptide/N-glycopeptide mixtures generated by enzymatic digestion (with or without N-deglycosylation) were analyzed by high-resolution accurate mass liquid chromatography-multi-stage mass spectrometry. The four main micro-heterogeneous variants of the single N-glycan bound to γ-conglutin were identified as Man2(Xyl)(Fuc)GlcNAc2, Man3(Xyl)(Fuc)GlcNAc2, GlcNAcMan3(Xyl)(Fuc)GlcNAc2 and GlcNAc2Man3(Xyl)(Fuc)GlcNAc2. These carry both core β1,2-xylose and core α1-3-fucose (well-known Cross-Reactive Carbohydrate Determinants), but corresponding fucose-free variants were also identified as minor components. The N-glycan was proven to reside on Asn131, one of the two potential N-glycosylation sites. The extensive coverage of the γ-conglutin amino acid sequence suggested three alternative N-termini of the small subunit, which were later confirmed by direct-infusion Orbitrap mass spectrometry analysis of the intact subunit.

  8. High-resolution reconstruction of a coastal barrier system

    DEFF Research Database (Denmark)

    Fruergaard, Mikkel; Andersen, Thorbjørn Joest; Nielsen, Lars Henrik

    2015-01-01

    This study presents a detailed reconstruction of the sedimentary effects of Holocene sea-level rise on a modern coastal barrier system (CBS). Increasing concern over the evolution of CBSs due to future accelerated rates of sea-level rise calls for a better understanding of coastal barrier response...... from retreat of the barrier island and probably also due to formation of a tidal inlet close to the study area. Continued transgression and shoreface retreat created a distinct hiatus and wave ravinement surface in the seaward part of the CBS before the barrier shoreline stabilised between 5.0 and 4...

  9. Histamine quantification in human plasma using high resolution accurate mass LC-MS technology.

    Science.gov (United States)

    Laurichesse, Mathieu; Gicquel, Thomas; Moreau, Caroline; Tribut, Olivier; Tarte, Karin; Morel, Isabelle; Bendavid, Claude; Amé-Thomas, Patricia

    2016-01-01

    Histamine (HA) is a small amine playing an important role in anaphylactic reactions. In order to identify and quantify HA in a plasma matrix, different methods have been developed, but they present several disadvantages. Here, we developed an alternative method using liquid chromatography coupled with an ultra-high resolution and accurate mass instrument, Q Exactive™ (Thermo Fisher) (LCHRMS). The method includes protein precipitation of plasma samples spiked with HA-d4 as internal standard (IS). LC separation was performed on a C18 Accucore column (100 × 2.1 mm, 2.6 μm) using a mobile phase containing nonafluoropentanoic acid (3 nM) and acetonitrile with 0.1% (v/v) formic acid in gradient mode. Separation of analytes was obtained within 10 min. Analysis was performed in full scan mode and targeted MS2 mode using a 5 ppm mass window. Ion transitions monitored in targeted MS2 mode were 112.0869 → 95.0607 m/z for HA and 116.1120 → 99.0855 m/z for HA-d4. Calibration curves were obtained by adding standard calibration dilutions at 1 to 180 nM in Tris-BSA. Elution of HA and IS occurred at 4.1 min. The method was validated over a range of concentrations from 1 nM to 100 nM. The intra- and inter-run precisions were <15% for quality controls. Human plasma samples from 30 patients were analyzed by LCHRMS, and the results were highly correlated with those obtained using the gold-standard radioimmunoassay (RIA) method. Overall, we demonstrate here that LCHRMS is a sensitive method for histamine quantification in human plasma, suitable for routine use in medical laboratories. In addition, LCHRMS is less time-consuming than RIA, avoids the use of radioactivity, and can therefore be considered an alternative quantitative method. Copyright © 2015 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
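
The calibration-curve step of such an assay can be sketched as an internal-standard regression; the concentrations and peak-area ratios below are hypothetical, not values from the paper:

```python
import numpy as np

# Hypothetical calibration standards: histamine concentration (nM) and the
# measured peak-area ratio of HA to the HA-d4 internal standard.
conc = np.array([1.0, 5.0, 10.0, 25.0, 50.0, 100.0])
area_ratio = np.array([0.021, 0.103, 0.205, 0.515, 1.030, 2.060])

# Least-squares line through the calibration points.
slope, intercept = np.polyfit(conc, area_ratio, 1)

def quantify(sample_ratio):
    """Back-calculate histamine concentration (nM) from a peak-area ratio."""
    return (sample_ratio - intercept) / slope
```

Using the ratio to a deuterated internal standard cancels much of the run-to-run variation in injection volume and ionization efficiency, which is why the IS is spiked in before protein precipitation.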

  10. Accurate gamma and MeV-electron track reconstruction with an ultra-low diffusion Xenon/TMA TPC at 10 atmospheres

    CERN Document Server

    González-Díaz, Diego; Borges, F.I.G.; Camargo, M.; Cárcel, S.; Cebrián, S.; Cervera, A.; Conde, C.A.N.; Dafni, T.; Díaz, J.; Esteve, R.; Fernandes, L.M.P.; Ferrario, P.; Ferreira, A.L.; Freitas, E.D.C.; Gehman, V.M.; Goldschmidt, A.; Gómez-Cadenas, J.J.; Gutiérrez, R.M.; Hauptman, J.; Hernando Morata, J.A.; Herrera, D.C.; Irastorza, I.G.; Labarga, L.; Laing, A.; Liubarsky, I.; Lopez-March, N.; Lorca, D.; Losada, M.; Luzón, G.; Marí, A.; Martín-Albo, J.; Martínez-Lema, G.; Martínez, A.; Miller, T.; Monrabal, F.; Monserrate, M.; Monteiro, C.M.B.; Mora, F.J.; Moutinho, L.M.; Muñoz Vidal, J.; Nebot-Guinot, M.; Nygren, D.; Oliveira, C.A.B.; Pérez, J.; Pérez Aparicio, J.L.; Querol, M.; Renner, J.; Ripoll, L.; Rodríguez, J.; Santos, F.P.; dos Santos, J.M.F.; Serra, L.; Shuman, D.; Simón, A.; Sofka, C.; Sorel, M.; Toledo, J.F.; Torrent, J.; Tsamalaidze, Z.; Veloso, J.F.C.A.; Villar, J.A.; Webb, R.; White, J.T.; Yahlali, N.; Azevedo, C.; Aznar, F.; Calvet, D.; Castel, J.; Ferrer-Ribas, E.; García, J.A.; Giomataris, I.; Gómez, H.; Iguaz, F.J.; Lagraba, A.; Le Coguie, A.; Mols, J.P.; Şahin, Ö.; Rodríguez, A.; Ruiz-Choliz, E.; Segui, L.; Tomás, A.; Veenhof, R.

    2015-01-01

    We report the performance of a 10 atm Xenon/trimethylamine time projection chamber (TPC) for the detection of X-rays (30 keV) and gamma-rays (0.511-1.275 MeV) in conjunction with the accurate tracking of the associated electrons. When operated at such a high pressure and in 1%-admixtures, trimethylamine (TMA) endows Xenon with an extremely low electron diffusion (σ = 1.3 ± 0.13 mm longitudinal and 0.8 ± 0.15 mm transverse over 1 m of drift) besides forming a convenient Penning-Fluorescent mixture. The TPC, which houses 1.1 kg of gas in its active volume, operated continuously for 100 live-days in charge amplification mode. The readout was performed through the recently introduced microbulk Micromegas technology and the AFTER chip, providing a 3D voxelization of 8 mm × 8 mm × 1.2 mm for approximately 10 cm/MeV-long electron tracks. This work was developed as part of the R&D program of the NEXT collaboration for future detector upgrades in the search for the ββ0ν decay in 136Xe, specifically those based ...

  11. Accurate inference of shoot biomass from high-throughput images of cereal plants

    Directory of Open Access Journals (Sweden)

    Tester Mark

    2011-02-01

    Full Text Available Abstract With the establishment of advanced technology facilities for high-throughput plant phenotyping, the problem of estimating the biomass of individual plants from their two-dimensional images is becoming increasingly important. The approach predominantly cited in the literature is to estimate the biomass of a plant as a linear function of the projected shoot area in the images. However, the estimation error from this model, which is solely a function of projected shoot area, is large, prohibiting accurate estimation of plant biomass, particularly for salt-stressed plants. In this paper, we propose a method based on plant specific weight for improving the accuracy of the linear model and reducing the estimation bias (the difference between actual shoot dry weight and the shoot dry weight estimated with a predictive model). In the proposed method, we modeled the plant shoot dry weight as a function of plant area and plant age. The data used for developing our model and for comparing the results with the linear model were collected from a completely randomized block design experiment. A total of 320 plants from two bread wheat varieties were grown in a supported hydroponics system in a greenhouse. The plants were exposed to two levels of hydroponic salt treatment (NaCl at 0 and 100 mM) for 6 weeks. Five harvests were carried out. Each time, 64 randomly selected plants were imaged and then harvested to measure the shoot fresh weight and shoot dry weight. The results of statistical analysis showed that with our proposed method most of the observed variance can be explained, and only a small difference between actual and estimated shoot dry weight was obtained. The low estimation bias indicates that our proposed method can be used to estimate the biomass of individual plants regardless of the variety of the plant and the salt treatment applied.
We validated this model on an independent
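
The modelling idea in the abstract (augmenting the area-only linear model with plant age) can be sketched as an ordinary least-squares comparison; all numbers below are fabricated for illustration:

```python
import numpy as np

# Fabricated phenotyping data: projected shoot area (kilopixels),
# plant age (days), and measured shoot dry weight (g).
area = np.array([12.0, 30.0, 55.0, 80.0, 110.0, 150.0])
age = np.array([14.0, 21.0, 28.0, 35.0, 42.0, 49.0])
dry_weight = np.array([0.10, 0.28, 0.55, 0.85, 1.25, 1.80])

# Baseline: dry weight as a linear function of projected area only.
X_area = np.column_stack([area, np.ones_like(area)])
beta_area, *_ = np.linalg.lstsq(X_area, dry_weight, rcond=None)

# Extended model in the spirit of the abstract: area and age as predictors.
X_full = np.column_stack([area, age, np.ones_like(area)])
beta_full, *_ = np.linalg.lstsq(X_full, dry_weight, rcond=None)

def rss(X, beta):
    """Residual sum of squares of a fitted linear model."""
    r = dry_weight - X @ beta
    return float(r @ r)
```

Since the area-only model is nested in the extended one, the extended fit can never have a larger residual sum of squares on the training data; the abstract's stronger claim concerns reduced estimation bias on independently harvested plants.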

  12. Avulsion research using flume experiments and highly accurate and temporal-rich SfM datasets

    Science.gov (United States)

    Javernick, L.; Bertoldi, W.; Vitti, A.

    2017-12-01

    SfM's ability to produce high-quality, large-scale digital elevation models (DEMs) of complicated and rapidly evolving systems has made it a valuable technique for low-budget researchers and practitioners. While SfM has provided valuable datasets that capture single-flood-event DEMs, there is an increasing scientific need to capture higher temporal resolution datasets that can quantify the evolutionary processes rather than pre- and post-flood snapshots. However, the dangerous field conditions during flood events and image-matching challenges (e.g. wind, rain) prevent quality SfM image acquisition. Conversely, flume experiments offer opportunities to document flood events, but achieving DEMs consistent and accurate enough to detect subtle changes in dry and inundated areas remains a challenge for SfM (e.g. parabolic error signatures). This research aimed at investigating the impact of naturally occurring and manipulated avulsions on braided river morphology and on the encroachment of floodplain vegetation, using laboratory experiments. This required DEMs with millimeter accuracy and precision, at a temporal resolution sufficient to capture the processes; SfM was chosen as the most practical method. Through a redundant local network design and a meticulous ground control point (GCP) survey with a Leica Total Station in red laser configuration (reported 2 mm accuracy), the SfM residual errors compared to separate ground-truthing data produced mean errors of 1.5 mm (accuracy) and standard deviations of 1.4 mm (precision) without parabolic error signatures. Lighting conditions in the flume were limited to uniform, oblique, and filtered LED strips, which removed glint and thus improved bed elevation mean errors to 4 mm; errors were further reduced by means of open-source software for refraction correction. The obtained datasets have provided the ability to quantify how small flood events with avulsion can have similar morphologic and vegetation impacts as large flood events.

  13. Jet reconstruction at high-energy electron-positron colliders

    Energy Technology Data Exchange (ETDEWEB)

    Boronat, M.; Fuster, J.; Garcia, I.; Vos, M. [IFIC (CSIC/UVEG), Valencia (Spain); Roloff, P.; Simoniello, R. [CERN, Geneva (Switzerland)

    2018-02-15

    In this paper we study the performance in e⁺e⁻ collisions of classical e⁺e⁻ jet reconstruction algorithms, longitudinally invariant algorithms and the recently proposed Valencia algorithm. The study includes a comparison of perturbative and non-perturbative jet energy corrections and the response under realistic background conditions. Several algorithms are benchmarked with a detailed detector simulation at √(s) = 3 TeV. We find that the classical e⁺e⁻ algorithms, with or without beam jets, have the best response, but they are inadequate in environments with non-negligible background. The Valencia algorithm and longitudinally invariant kₜ algorithms have a much more robust performance, with a slight advantage for the former. (orig.)
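
As a point of reference for the algorithms compared above, the classical e+e- (Durham) kt distance can be sketched as follows; this is the textbook definition, not code from the study:

```python
import math

def durham_y(p_i, p_j, s):
    """Durham e+e- kt distance y_ij = 2 min(E_i^2, E_j^2) (1 - cos theta_ij) / s
    for four-momenta p = (E, px, py, pz) and squared centre-of-mass energy s."""
    e_i, e_j = p_i[0], p_j[0]
    dot = sum(a * b for a, b in zip(p_i[1:], p_j[1:]))
    mag_i = math.sqrt(sum(a * a for a in p_i[1:]))
    mag_j = math.sqrt(sum(a * a for a in p_j[1:]))
    cos_theta = dot / (mag_i * mag_j)
    return 2.0 * min(e_i * e_i, e_j * e_j) * (1.0 - cos_theta) / s
```

Roughly speaking, longitudinally invariant variants replace energies by transverse momenta and the angle by a rapidity-azimuth distance, which suppresses forward-going background, while the Valencia algorithm keeps an energy-angle form but adds a beam distance playing a similar forward-suppressing role.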

  14. 3D High Resolution l1-SPIRiT Reconstruction on Gadgetron based Cloud

    DEFF Research Database (Denmark)

    Xue, Hui; Kelmann, Peter; Inati, Souheil

    framework to support distributed computing in a cloud environment. This extension is named GT-Plus. A cloud version of 3D l1-SPIRiT was implemented on the GT-Plus framework. We demonstrate that a 3 min reconstruction can be achieved for 1 mm³ isotropic resolution neuro scans with significantly improved......Applying non-linear reconstruction to high resolution 3D MRI is challenging because of the lengthy computing time needed for these iterative algorithms. To achieve a practical processing duration that enables clinical usage of non-linear reconstruction, we have extended the previously published Gadgetron

  15. Feasibility study for application of the compressed-sensing framework to interior computed tomography (ICT) for low-dose, high-accurate dental x-ray imaging

    Science.gov (United States)

    Je, U. K.; Cho, H. M.; Cho, H. S.; Park, Y. O.; Park, C. K.; Lim, H. W.; Kim, K. S.; Kim, G. A.; Park, S. Y.; Woo, T. H.; Choi, S. I.

    2016-02-01

    In this paper, we propose a new, next-generation type of CT examination, the so-called Interior Computed Tomography (ICT), which may reduce the dose to the patient outside the target region of interest (ROI) in dental x-ray imaging. Here, the x-ray beam from each projection position covers only a relatively small ROI containing the diagnostic target within the examined structure, yielding imaging benefits such as reduced scatter, system cost, and imaging dose. We considered the compressed-sensing (CS) framework, rather than common filtered-backprojection (FBP)-based algorithms, for more accurate ICT reconstruction. We implemented a CS-based ICT algorithm and performed a systematic simulation to investigate the imaging characteristics. Simulation conditions of two ROI ratios of 0.28 and 0.14 between the target and the whole phantom sizes and four projection numbers of 360, 180, 90, and 45 were tested. We successfully reconstructed ICT images of substantially high image quality by using the CS framework even with few-view projection data, while still preserving sharp edges in the images.
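
The compressed-sensing recovery principle invoked above can be illustrated on a toy problem: recovering a sparse vector from far fewer random linear measurements than unknowns via iterative soft-thresholding (ISTA). This is a generic sketch, not the reconstruction algorithm of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy few-view setup: a k-sparse signal of length n observed through
# m < n random linear projections (standing in for sparse-view CT data).
n, m, k = 100, 40, 5
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.uniform(1.0, 2.0, size=k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true

# ISTA iterations for min_x 0.5*||Ax - y||^2 + lam*||x||_1.
lam = 0.01
step = 1.0 / np.linalg.norm(A, 2) ** 2    # 1 / Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(500):
    x = x - step * (A.T @ (A @ x - y))                       # gradient step
    x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0) # soft threshold
```

In CT the sparsifying transform is usually the image gradient (total variation) rather than the identity, and A is the scanner's projection operator, but the alternating gradient/shrinkage structure is the same.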

  16. A highly accurate algorithm for the solution of the point kinetics equations

    International Nuclear Information System (INIS)

    Ganapol, B.D.

    2013-01-01

    Highlights: • Point kinetics equations for nuclear reactor transient analysis are numerically solved to extreme accuracy. • Results for classic benchmarks found in the literature are given to 9-digit accuracy. • Recent results of claimed accuracy are shown to be less accurate than claimed. • Arguably brings a chapter of numerical evaluation of the PKEs to a close. - Abstract: Attempts to solve the point kinetics equations (PKEs) describing nuclear reactor transients have been the subject of numerous articles and texts over the past 50 years. Some very innovative methods, such as the RTS (Reactor Transient Simulation) and CAC (Continuous Analytical Continuation) methods of G.R. Keepin and J. Vigil respectively, have been shown to be exceptionally useful. Recently, however, several authors have developed methods they consider accurate without a clear basis for their assertion. In response, this presentation establishes a definitive set of benchmarks to enable those developing PKE methods to truthfully assess the degree of accuracy of their methods. With these benchmarks, two recently published methods, found in this journal, are shown to be less accurate than claimed, and a legacy method from 1984 is confirmed.
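
For orientation, the point kinetics equations with a single delayed-neutron group can be integrated directly; the parameters below are illustrative, not the benchmark cases of the paper:

```python
import numpy as np

# One delayed-neutron group point kinetics:
#   dn/dt = (rho - beta)/Lambda * n + lam * c
#   dc/dt = beta/Lambda * n - lam * c
beta, lam, Lam = 0.0065, 0.08, 1.0e-4  # delayed fraction, decay const (1/s), generation time (s)
rho = 0.001                             # step reactivity insertion

def pke(y):
    n, c = y
    return np.array([(rho - beta) / Lam * n + lam * c,
                     beta / Lam * n - lam * c])

def rk4(y, h, steps):
    """Classical fourth-order Runge-Kutta integration."""
    for _ in range(steps):
        k1 = pke(y)
        k2 = pke(y + 0.5 * h * k1)
        k3 = pke(y + 0.5 * h * k2)
        k4 = pke(y + h * k3)
        y = y + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return y

# Start critical and in equilibrium: c0 = beta * n0 / (lam * Lambda).
y0 = np.array([1.0, beta / (lam * Lam)])
n_at_1s, _ = rk4(y0, h=1.0e-4, steps=10_000)  # neutron density at t = 1 s
```

The prompt jump for this insertion is beta/(beta - rho) ≈ 1.18, after which the density grows slowly on the delayed-neutron period; a fixed-step RK4 suffices for this mild transient, whereas the stiff benchmark transients in the paper are exactly what demands the far more careful methods it discusses.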

  17. A highly accurate positioning and orientation system based on the usage of four-cluster fibre optic gyros

    International Nuclear Information System (INIS)

    Zhang, Xiaoyue; Lin, Zhili; Zhang, Chunxi

    2013-01-01

    A highly accurate positioning and orientation technique based on four-cluster fibre optic gyros (FOGs) is presented. The four-cluster FOG inertial measurement unit (IMU) comprises three low-precision FOGs, one static high-precision FOG and three accelerometers. To realize high-precision positioning and orientation, the static alignment (north-seeking) before vehicle manoeuvre was divided into a low-precision self-alignment phase and a high-precision north-seeking (online calibration) phase. The high-precision FOG measurement information was introduced to obtain a high-precision azimuth alignment (north-seeking) result and to achieve online calibration of the low-precision three-cluster FOG. Results of a semi-physical simulation are presented to validate the availability and utility of the highly accurate positioning and orientation technique based on the four-cluster FOGs. (paper)

  18. Sparsity-promoting orthogonal dictionary updating for image reconstruction from highly undersampled magnetic resonance data

    International Nuclear Information System (INIS)

    Huang, Jinhong; Guo, Li; Feng, Qianjin; Chen, Wufan; Feng, Yanqiu

    2015-01-01

    Image reconstruction from undersampled k-space data accelerates magnetic resonance imaging (MRI) by exploiting image sparseness in certain transform domains. Employing image patch representation over a learned dictionary has the advantage of being adaptive to local image structures and thus can better sparsify images than using fixed transforms (e.g. wavelets and total variations). Dictionary learning methods have recently been introduced to MRI reconstruction, and these methods demonstrate significantly reduced reconstruction errors compared to sparse MRI reconstruction using fixed transforms. However, the synthesis sparse coding problem in dictionary learning is NP-hard and computationally expensive. In this paper, we present a novel sparsity-promoting orthogonal dictionary updating method for efficient image reconstruction from highly undersampled MRI data. The orthogonality imposed on the learned dictionary enables the minimization problem in the reconstruction to be solved by an efficient optimization algorithm which alternately updates representation coefficients, orthogonal dictionary, and missing k-space data. Moreover, both sparsity level and sparse representation contribution using updated dictionaries gradually increase during iterations to recover more details, assuming the progressively improved quality of the dictionary. Simulation and real data experimental results both demonstrate that the proposed method is approximately 10 to 100 times faster than the K-SVD-based dictionary learning MRI method and simultaneously improves reconstruction accuracy. (paper)
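
The closed-form update enabled by orthogonality can be sketched with the classical orthogonal Procrustes solution: for fixed sparse codes S, the orthogonal D minimizing ||X - DS||_F comes from the SVD of X Sᵀ. This is a generic illustration of the idea, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy patch data X (each column a vectorized patch) and sparse codes S.
n, p = 16, 200
X = rng.standard_normal((n, p))
S = rng.standard_normal((n, p)) * (rng.random((n, p)) < 0.2)

# Orthogonal Procrustes update: D = argmin ||X - D S||_F  s.t.  D^T D = I,
# solved in closed form from the SVD of X S^T.
U, _, Vt = np.linalg.svd(X @ S.T)
D = U @ Vt
```

Because the update is a single SVD of a small n × n matrix rather than an atom-by-atom K-SVD sweep, it is one plausible source of the order-of-magnitude speedups quoted in the abstract.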

  19. Sparsity-promoting orthogonal dictionary updating for image reconstruction from highly undersampled magnetic resonance data.

    Science.gov (United States)

    Huang, Jinhong; Guo, Li; Feng, Qianjin; Chen, Wufan; Feng, Yanqiu

    2015-07-21

    Image reconstruction from undersampled k-space data accelerates magnetic resonance imaging (MRI) by exploiting image sparseness in certain transform domains. Employing image patch representation over a learned dictionary has the advantage of being adaptive to local image structures and thus can better sparsify images than using fixed transforms (e.g. wavelets and total variations). Dictionary learning methods have recently been introduced to MRI reconstruction, and these methods demonstrate significantly reduced reconstruction errors compared to sparse MRI reconstruction using fixed transforms. However, the synthesis sparse coding problem in dictionary learning is NP-hard and computationally expensive. In this paper, we present a novel sparsity-promoting orthogonal dictionary updating method for efficient image reconstruction from highly undersampled MRI data. The orthogonality imposed on the learned dictionary enables the minimization problem in the reconstruction to be solved by an efficient optimization algorithm which alternately updates representation coefficients, orthogonal dictionary, and missing k-space data. Moreover, both sparsity level and sparse representation contribution using updated dictionaries gradually increase during iterations to recover more details, assuming the progressively improved quality of the dictionary. Simulation and real data experimental results both demonstrate that the proposed method is approximately 10 to 100 times faster than the K-SVD-based dictionary learning MRI method and simultaneously improves reconstruction accuracy.

  20. Pairagon: a highly accurate, HMM-based cDNA-to-genome aligner

    DEFF Research Database (Denmark)

    Lu, David V; Brown, Randall H; Arumugam, Manimozhiyan

    2009-01-01

    MOTIVATION: The most accurate way to determine the intron-exon structures in a genome is to align spliced cDNA sequences to the genome. Thus, cDNA-to-genome alignment programs are a key component of most annotation pipelines. The scoring system used to choose the best alignment is a primary...... determinant of alignment accuracy, while heuristics that prevent consideration of certain alignments are a primary determinant of runtime and memory usage. Both accuracy and speed are important considerations in choosing an alignment algorithm, but scoring systems have received much less attention than...

  1. Accurate modeling of high frequency microelectromechanical systems (MEMS) switches in time- and frequency-domain

    Directory of Open Access Journals (Sweden)

    F. Coccetti

    2003-01-01

    Full Text Available In this contribution we present an accurate investigation of three different techniques for the modeling of complex planar circuits. The EM analysis is performed by means of different electromagnetic full-wave solvers in the time domain and in the frequency domain. The first one is the Transmission Line Matrix (TLM) method. In the second one the TLM method is combined with the Integral Equation (IE) method. The latter is based on the Generalized Transverse Resonance Diffraction (GTRD). In order to test the methods we model different structures and compare the calculated S-parameters to measured results, with good agreement.

  2. Highly Accurate Derivatives for LCL-Filtered Grid Converter with Capacitor Voltage Active Damping

    DEFF Research Database (Denmark)

    Xin, Zhen; Loh, Poh Chiang; Wang, Xiongfei

    2016-01-01

    The middle capacitor voltage of an LCL-filter, if fed back for synchronization, can be used for active damping. An extra sensor for measuring the capacitor current is then avoided. Relating the capacitor voltage to existing popular damping techniques designed with capacitor current feedback would...... are then proposed, based on either a second-order or a non-ideal generalized integrator. Performances of these derivatives have been found to match the ideal “s” function closely. Active damping based on capacitor voltage feedback can therefore be realized accurately. Experimental results presented have verified...

  3. Highly accurate potential calculations for cylindrically symmetric geometries using multi-region FDM: A review

    Energy Technology Data Exchange (ETDEWEB)

    Edwards, David, E-mail: dej@kingcon.com [IJL Research Center, Newark, VT 05871 (United States)

    2011-07-21

    This paper is a review of multi-region FDM, a numerical technique for accurately determining electrostatic potentials in cylindrically symmetric geometries. Multi-region FDM can be thought of as the union of several individual elements: a single-region FDM process; a method for algorithmic development; a method for auto-creating a multi-region structure; and the process for the relaxation of multi-region structures. Each element is briefly described along with its integration into the multi-region relaxation process itself.
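
The single-region relaxation element mentioned above can be sketched as Gauss-Seidel sweeps over the finite-difference stencil of Laplace's equation in cylindrical symmetry, V_rr + (1/r)V_r + V_zz = 0; the geometry and boundary values below are invented for illustration:

```python
import numpy as np

# Uniform (r, z) grid starting at r = h to stay off the axis; Dirichlet
# boundaries are held fixed while the interior is relaxed.
nr, nz, h = 20, 20, 0.1
r = h * np.arange(1, nr + 1)
V = np.zeros((nr, nz))
V[:, 0] = 0.0                         # z = 0 electrode at 0 V
V[:, -1] = 1.0                        # z = L electrode at 1 V
V[0, :] = np.linspace(0.0, 1.0, nz)   # side boundaries: linear potential
V[-1, :] = np.linspace(0.0, 1.0, nz)

for _ in range(1000):                 # Gauss-Seidel relaxation sweeps
    for i in range(1, nr - 1):
        a = h / (2.0 * r[i])          # coupling from the (1/r) dV/dr term
        for j in range(1, nz - 1):
            V[i, j] = ((1 + a) * V[i + 1, j] + (1 - a) * V[i - 1, j]
                       + V[i, j + 1] + V[i, j - 1]) / 4.0
```

Multi-region FDM wraps many such single-region relaxations together, nesting finer meshes inside regions of interest and exchanging boundary values between regions on each cycle.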

  4. The determination of the pressure-viscosity coefficient of a lubricant through an accurate film thickness formula and accurate film thickness measurements : part 2 : high L values

    NARCIS (Netherlands)

    Leeuwen, van H.J.

    2011-01-01

    The pressure-viscosity coefficient of a traction fluid is determined by fitting calculation results on accurate film thickness measurements, obtained at different speeds, loads, and temperatures. Through experiments, covering a range of 5.6

  5. arXiv Energy Reconstruction of Hadrons in highly granular combined ECAL and HCAL systems

    CERN Document Server

    Israeli, Yasmine

    2018-05-03

    This paper discusses the hadronic energy reconstruction of two combined electromagnetic and hadronic calorimeter systems using physics prototypes of the CALICE collaboration: the silicon-tungsten electromagnetic calorimeter (Si-W ECAL) and the scintillator-SiPM based analog hadron calorimeter (AHCAL); and the scintillator-tungsten electromagnetic calorimeter (ScECAL) and the AHCAL. These systems were operated in hadron beams at CERN and FNAL, permitting the study of the performance in combined ECAL and HCAL systems. Two techniques for the energy reconstruction are used, a standard reconstruction based on calibrated sub-detector energy sums, and one based on a software compensation algorithm making use of the local energy density information provided by the high granularity of the detectors. The software compensation-based algorithm improves the hadronic energy resolution by up to 30% compared to the standard reconstruction. The combined system data show comparable energy resolutions to the one achieved for da...

  6. Distinguishing highly confident accurate and inaccurate memory: insights about relevant and irrelevant influences on memory confidence.

    Science.gov (United States)

    Chua, Elizabeth F; Hannula, Deborah E; Ranganath, Charan

    2012-01-01

    It is generally believed that accuracy and confidence in one's memory are related, but there are many instances when they diverge. Accordingly, it is important to disentangle the factors that contribute to memory accuracy and confidence, especially those factors that contribute to confidence but not accuracy. We used eye movements to separately measure the effects of fluent cue processing, the target recognition experience, and relative evidence assessment on recognition confidence and accuracy. Eye movements were monitored during a face-scene associative recognition task, in which participants first saw a scene cue, followed by a forced-choice recognition test for the associated face, with confidence ratings. Eye movement indices of the target recognition experience were largely indicative of accuracy, and showed a relationship to confidence for accurate decisions. In contrast, eye movements during the scene cue raised the possibility that more fluent cue processing was related to higher confidence for both accurate and inaccurate recognition decisions. In a second experiment we manipulated cue familiarity, and therefore cue fluency. Participants showed higher confidence for cue-target associations when the cue was more familiar, especially for incorrect responses. These results suggest that over-reliance on cue familiarity and under-reliance on the target recognition experience may lead to erroneous confidence.

  7. Energy-switching potential energy surface for the water molecule revisited: A highly accurate singled-sheeted form.

    Science.gov (United States)

    Galvão, B R L; Rodrigues, S P J; Varandas, A J C

    2008-07-28

    A global ab initio potential energy surface is proposed for the water molecule by energy-switching/merging a highly accurate isotope-dependent local potential function reported by Polyansky et al. [Science 299, 539 (2003)] with a global form of the many-body expansion type suitably adapted to account explicitly for the dynamical correlation and parametrized from extensive accurate multireference configuration interaction energies extrapolated to the complete basis set limit. The new function also mimics the complicated Σ/Π crossing that arises at linear geometries of the water molecule.

  8. A Highly Accurate and Efficient Analytical Approach to Bridge Deck Free Vibration Analysis

    Directory of Open Access Journals (Sweden)

    D.J. Gorman

    2000-01-01

    Full Text Available The superposition method is employed to obtain an accurate analytical-type solution for the free vibration frequencies and mode shapes of multi-span bridge decks. Free edge conditions are imposed on the long edges running in the direction of the deck. Inter-span support is of the simple (knife-edge) type. The analysis is valid regardless of the number of spans or their individual lengths. Exact agreement is found when computed results are compared with known eigenvalues for bridge decks with all spans of equal length. Mode shapes and eigenvalues are presented for typical bridge decks of three and four spans. In each case torsional and non-torsional modes are studied.

  9. Highly accurate fluorogenic DNA sequencing with information theory-based error correction.

    Science.gov (United States)

    Chen, Zitian; Zhou, Wenxiong; Qiao, Shuo; Kang, Li; Duan, Haifeng; Xie, X Sunney; Huang, Yanyi

    2017-12-01

    Eliminating errors in next-generation DNA sequencing has proved challenging. Here we present error-correction code (ECC) sequencing, a method to greatly improve sequencing accuracy by combining fluorogenic sequencing-by-synthesis (SBS) with an information theory-based error-correction algorithm. ECC embeds redundancy in sequencing reads by creating three orthogonal degenerate sequences, generated by alternate dual-base reactions. This is similar to encoding and decoding strategies that have proved effective in detecting and correcting errors in information communication and storage. We show that, when combined with a fluorogenic SBS chemistry with raw accuracy of 98.1%, ECC sequencing provides single-end, error-free sequences up to 200 bp. ECC approaches should enable accurate identification of extremely rare genomic variations in various applications in biology and medicine.

  10. High channel count microphone array accurately and precisely localizes ultrasonic signals from freely-moving mice.

    Science.gov (United States)

    Warren, Megan R; Sangiamo, Daniel T; Neunuebel, Joshua P

    2018-03-01

    An integral component in the assessment of vocal behavior in groups of freely interacting animals is the ability to determine which animal is producing each vocal signal. This process is facilitated by using microphone arrays with multiple channels. Here, we made important refinements to a state-of-the-art microphone array based system used to localize vocal signals produced by freely interacting laboratory mice. Key changes to the system included increasing the number of microphones as well as refining the methodology for localizing and assigning vocal signals to individual mice. We systematically demonstrate that the improvements in the methodology for localizing mouse vocal signals led to an increase in the number of signals detected as well as the number of signals accurately assigned to an animal. These changes facilitated the acquisition of larger and more comprehensive data sets that better represent the vocal activity within an experiment. Furthermore, this system will allow more thorough analyses of the role that vocal signals play in social communication. We expect that such advances will broaden our understanding of social communication deficits in mouse models of neurological disorders. Copyright © 2018 Elsevier B.V. All rights reserved.
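
The core localization primitive in such arrays is the time difference of arrival (TDOA) between microphone pairs, estimated from the cross-correlation peak; the signal, sampling rate, and delay below are synthetic:

```python
import numpy as np

fs = 250_000   # sampling rate (Hz), plausible for ultrasonic recordings
c = 343.0      # speed of sound in air (m/s)

# Synthetic 60 kHz pulse arriving at microphone B 25 samples after A.
t = np.arange(2048) / fs
pulse = np.sin(2 * np.pi * 60_000.0 * t) * np.exp(-(((t - 0.002) / 0.0005) ** 2))
mic_a = pulse
mic_b = np.roll(pulse, 25)

# Cross-correlate over candidate lags; the peak gives the TDOA estimate.
lags = np.arange(-100, 101)
scores = [float(np.dot(mic_b, np.roll(mic_a, k))) for k in lags]
est_delay = int(lags[int(np.argmax(scores))])
range_difference = est_delay / fs * c   # extra path length to mic B (m)
```

With N microphones there are N(N-1)/2 such pairwise range differences, each constraining the source to a hyperboloid; a least-squares intersection of these constraints yields a position estimate that can then be assigned to the nearest tracked animal.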

  11. Isotope specific resolution recovery image reconstruction in high resolution PET imaging

    Energy Technology Data Exchange (ETDEWEB)

    Kotasidis, Fotis A. [Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva, Switzerland and Wolfson Molecular Imaging Centre, MAHSC, University of Manchester, M20 3LJ, Manchester (United Kingdom); Angelis, Georgios I. [Faculty of Health Sciences, Brain and Mind Research Institute, University of Sydney, NSW 2006, Sydney (Australia); Anton-Rodriguez, Jose; Matthews, Julian C. [Wolfson Molecular Imaging Centre, MAHSC, University of Manchester, Manchester M20 3LJ (United Kingdom); Reader, Andrew J. [Montreal Neurological Institute, McGill University, Montreal QC H3A 2B4, Canada and Department of Biomedical Engineering, Division of Imaging Sciences and Biomedical Engineering, King's College London, St. Thomas’ Hospital, London SE1 7EH (United Kingdom); Zaidi, Habib [Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva (Switzerland); Geneva Neuroscience Centre, Geneva University, CH-1205 Geneva (Switzerland); Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, PO Box 30 001, Groningen 9700 RB (Netherlands)

    2014-05-15

    Purpose: Measuring and incorporating a scanner-specific point spread function (PSF) within image reconstruction has been shown to improve spatial resolution in PET. However, due to the short half-life of clinically used isotopes, other long-lived isotopes not used in clinical practice are used to perform the PSF measurements. As such, non-optimal PSF models that do not correspond to those needed for the data to be reconstructed are used within resolution modeling (RM) image reconstruction, usually underestimating the true PSF owing to the difference in positron range. In high resolution brain and preclinical imaging, this effect is of particular importance since the PSFs become more positron range limited and isotope-specific PSFs can help maximize the performance benefit from using resolution recovery image reconstruction algorithms. Methods: In this work, the authors used a printing technique to simultaneously measure multiple point sources on the High Resolution Research Tomograph (HRRT), and the authors demonstrated the feasibility of deriving isotope-dependent system matrices from fluorine-18 and carbon-11 point sources. Furthermore, the authors evaluated the impact of incorporating them within RM image reconstruction, using carbon-11 phantom and clinical datasets on the HRRT. Results: The results obtained using these two isotopes illustrate that even small differences in positron range can result in different PSF maps, leading to further improvements in contrast recovery when used in image reconstruction. The difference is more pronounced in the centre of the field-of-view where the full width at half maximum (FWHM) from the positron range has a larger contribution to the overall FWHM compared to the edge where the parallax error dominates the overall FWHM. 
Conclusions: Based on the proposed methodology, measured isotope-specific and spatially variant PSFs can be reliably derived and used for improved spatial resolution and variance performance in resolution recovery image reconstruction.

  12. Isotope specific resolution recovery image reconstruction in high resolution PET imaging

    International Nuclear Information System (INIS)

    Kotasidis, Fotis A.; Angelis, Georgios I.; Anton-Rodriguez, Jose; Matthews, Julian C.; Reader, Andrew J.; Zaidi, Habib

    2014-01-01

    Purpose: Measuring and incorporating a scanner-specific point spread function (PSF) within image reconstruction has been shown to improve spatial resolution in PET. However, due to the short half-life of clinically used isotopes, other long-lived isotopes not used in clinical practice are used to perform the PSF measurements. As such, non-optimal PSF models that do not correspond to those needed for the data to be reconstructed are used within resolution modeling (RM) image reconstruction, usually underestimating the true PSF owing to the difference in positron range. In high resolution brain and preclinical imaging, this effect is of particular importance since the PSFs become more positron range limited and isotope-specific PSFs can help maximize the performance benefit from using resolution recovery image reconstruction algorithms. Methods: In this work, the authors used a printing technique to simultaneously measure multiple point sources on the High Resolution Research Tomograph (HRRT), and the authors demonstrated the feasibility of deriving isotope-dependent system matrices from fluorine-18 and carbon-11 point sources. Furthermore, the authors evaluated the impact of incorporating them within RM image reconstruction, using carbon-11 phantom and clinical datasets on the HRRT. Results: The results obtained using these two isotopes illustrate that even small differences in positron range can result in different PSF maps, leading to further improvements in contrast recovery when used in image reconstruction. The difference is more pronounced in the centre of the field-of-view where the full width at half maximum (FWHM) from the positron range has a larger contribution to the overall FWHM compared to the edge where the parallax error dominates the overall FWHM. 
Conclusions: Based on the proposed methodology, measured isotope-specific and spatially variant PSFs can be reliably derived and used for improved spatial resolution and variance performance in resolution recovery image reconstruction.

  13. Isotope specific resolution recovery image reconstruction in high resolution PET imaging.

    Science.gov (United States)

    Kotasidis, Fotis A; Angelis, Georgios I; Anton-Rodriguez, Jose; Matthews, Julian C; Reader, Andrew J; Zaidi, Habib

    2014-05-01

    Measuring and incorporating a scanner-specific point spread function (PSF) within image reconstruction has been shown to improve spatial resolution in PET. However, due to the short half-life of clinically used isotopes, other long-lived isotopes not used in clinical practice are used to perform the PSF measurements. As such, non-optimal PSF models that do not correspond to those needed for the data to be reconstructed are used within resolution modeling (RM) image reconstruction, usually underestimating the true PSF owing to the difference in positron range. In high resolution brain and preclinical imaging, this effect is of particular importance since the PSFs become more positron range limited and isotope-specific PSFs can help maximize the performance benefit from using resolution recovery image reconstruction algorithms. In this work, the authors used a printing technique to simultaneously measure multiple point sources on the High Resolution Research Tomograph (HRRT), and the authors demonstrated the feasibility of deriving isotope-dependent system matrices from fluorine-18 and carbon-11 point sources. Furthermore, the authors evaluated the impact of incorporating them within RM image reconstruction, using carbon-11 phantom and clinical datasets on the HRRT. The results obtained using these two isotopes illustrate that even small differences in positron range can result in different PSF maps, leading to further improvements in contrast recovery when used in image reconstruction. The difference is more pronounced in the centre of the field-of-view where the full width at half maximum (FWHM) from the positron range has a larger contribution to the overall FWHM compared to the edge where the parallax error dominates the overall FWHM. Based on the proposed methodology, measured isotope-specific and spatially variant PSFs can be reliably derived and used for improved spatial resolution and variance performance in resolution recovery image reconstruction. The
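The resolution-modelling (RM) reconstruction described above can be illustrated with a minimal 1D MLEM sketch in which the isotope-specific PSF enters the system model as a convolution kernel. The Gaussian shape and the FWHM values below are illustrative assumptions, not the measured HRRT kernels; they merely mimic carbon-11 having a longer positron range (a wider PSF) than fluorine-18:

```python
import numpy as np

def gaussian_psf(n, fwhm):
    """Discrete Gaussian PSF kernel of length n with the given FWHM (in bins)."""
    sigma = fwhm / 2.355
    x = np.arange(n) - n // 2
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def mlem_rm(y, psf, n_iter=50):
    """1D MLEM with resolution modelling: the system model is convolution with psf."""
    forward = lambda x: np.convolve(x, psf, mode="same")
    back = lambda r: np.convolve(r, psf[::-1], mode="same")
    x = np.ones_like(y)
    sens = back(np.ones_like(y))           # sensitivity image
    for _ in range(n_iter):
        x *= back(y / np.maximum(forward(x), 1e-12)) / sens
    return x

# Illustrative FWHMs: the C-11 kernel is slightly wider than the F-18 one.
truth = np.zeros(64); truth[30] = 100.0    # point source
psf_c11 = gaussian_psf(21, fwhm=4.0)
y = np.convolve(truth, psf_c11, mode="same")           # noiseless C-11 data
x_matched = mlem_rm(y, psf_c11)                        # isotope-matched PSF
x_mismatched = mlem_rm(y, gaussian_psf(21, fwhm=3.0))  # F-18-like, too narrow
```

With the matched kernel the point source is recovered almost fully, while the narrower fluorine-18-like kernel leaves residual blur, mirroring the contrast-recovery gain the abstract reports.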

  14. System architecture for high speed reconstruction in time-of-flight positron tomography

    International Nuclear Information System (INIS)

    Campagnolo, R.E.; Bouvier, A.; Chabanas, L.; Robert, C.

    1985-06-01

    A new generation of Time Of Flight (TOF) positron tomograph with high resolution and high count rate capabilities is under development in our group. After a short review of the data acquisition process and image reconstruction in a TOF PET camera, we present the data acquisition system, which achieves a data transfer rate of 0.8 mega-events per second (or more if necessary) in list mode. We describe the reconstruction process, based on a five-stage pipeline architecture using home-made processors. The expected performance with this architecture is a reconstruction time of six seconds per image (256x256 pixels) of one million events. This time could be reduced to 4 seconds. We conclude with the future developments of the system

  15. Accurate Angle Estimator for High-Frame-rate 2-D Vector Flow Imaging

    DEFF Research Database (Denmark)

    Villagómez Hoyos, Carlos Armando; Stuart, Matthias Bo; Lindskov Hansen, Kristoffer

    2016-01-01

    This paper presents a novel approach for estimating 2-D flow angles using a high-frame-rate ultrasound method. The angle estimator features high accuracy and low standard deviation (SD) over the full 360° range. The method is validated on Field II simulations and phantom measurements using...

  16. Highly accurate prediction of food challenge outcome using routinely available clinical data.

    Science.gov (United States)

    DunnGalvin, Audrey; Daly, Deirdre; Cullinane, Claire; Stenke, Emily; Keeton, Diane; Erlewyn-Lajeunesse, Mich; Roberts, Graham C; Lucas, Jane; Hourihane, Jonathan O'B

    2011-03-01

    Serum specific IgE or skin prick tests are less useful at levels below accepted decision points. We sought to develop and validate a model to predict food challenge outcome by using routinely collected data in a diverse sample of children considered suitable for food challenge. The proto-algorithm was generated by using a limited data set from 1 service (phase 1). We retrospectively applied, evaluated, and modified the initial model by using an extended data set in another center (phase 2). Finally, we prospectively validated the model in a blind study in a further group of children undergoing food challenge for peanut, milk, or egg in the second center (phase 3). Allergen-specific models were developed for peanut, egg, and milk. Phase 1 (N = 429) identified 5 clinical factors associated with diagnosis of food allergy by food challenge. In phase 2 (N = 289), we examined the predictive ability of 6 clinical factors: skin prick test, serum specific IgE, total IgE minus serum specific IgE, symptoms, sex, and age. In phase 3 (N = 70), 97% of cases were accurately predicted as positive and 94% as negative. Our model showed an advantage in clinical prediction compared with serum specific IgE only, skin prick test only, and serum specific IgE and skin prick test (92% accuracy vs 57%, and 81%, respectively). Our findings have implications for the improved delivery of food allergy-related health care, enhanced food allergy-related quality of life, and economized use of health service resources by decreasing the number of food challenges performed. Copyright © 2011 American Academy of Allergy, Asthma & Immunology. Published by Mosby, Inc. All rights reserved.

  17. An accurate energy-range relationship for high-energy electron beams in arbitrary materials

    International Nuclear Information System (INIS)

    Sorcini, B.B.; Brahme, A.

    1994-01-01

    A general analytical energy-range relationship has been derived to relate the practical range, R_p, to the most probable energy, E_p, of incident electron beams in the range 1 to 50 MeV and above, for absorbers of any atomic number. In the present study only Monte Carlo data determined with the new ITS.3 code have been employed. The standard deviations of the mean deviation from the Monte Carlo data at any energy are about 0.10, 0.12, 0.04, 0.11, 0.04, 0.03, 0.02 mm for Be, C, H2O, Al, Cu, Ag and U, respectively, and the relative standard deviation of the mean is about 0.5% for all materials. The fitting program gives some priority to water-equivalent materials, which explains the low standard deviation for water. A small error in the fall-off slope can give a different value for R_p. We describe a new method which reduces the uncertainty in the R_p determination, by fitting an odd function to the descending portion of the depth-dose curve in order to accurately determine the tangent at the inflection point, and thereby the practical range. An approximate inverse relation is given expressing the most probable energy of an electron beam as a function of the practical range. The resultant relative standard error of the energy is less than 0.7%, and the maximum energy error ΔE_p is less than 0.3 MeV. (author)
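The tangent-at-inflection construction for the practical range R_p can be sketched numerically. The sketch below replaces the authors' odd-function fit with a simple finite-difference derivative on a synthetic sigmoid fall-off; all parameter values are illustrative, not taken from the ITS Monte Carlo data:

```python
import numpy as np

# Synthetic depth-dose fall-off: sigmoid plus a bremsstrahlung-like background.
z = np.linspace(0, 60, 1201)             # depth in mm
z_i, w, D0, bg = 40.0, 3.0, 100.0, 2.0   # inflection depth, width, dose scale, background
D = D0 / (1.0 + np.exp((z - z_i) / w)) + bg

# Tangent at the inflection point of the descending portion
# (finite differences here; the paper fits an odd function instead).
dD = np.gradient(D, z)
i = np.argmin(dD)                        # steepest descent = inflection point

# Practical range R_p: tangent extrapolated down to the background level.
R_p = z[i] + (bg - D[i]) / dD[i]
```

For this sigmoid the tangent construction gives R_p = z_i + 2w = 46 mm analytically, which the numerical estimate reproduces; the authors' odd-function fit serves the same purpose while being far less sensitive to noise in the fall-off slope.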

  18. Automatically high accurate and efficient photomask defects management solution for advanced lithography manufacture

    Science.gov (United States)

    Zhu, Jun; Chen, Lijun; Ma, Lantao; Li, Dejian; Jiang, Wei; Pan, Lihong; Shen, Huiting; Jia, Hongmin; Hsiang, Chingyun; Cheng, Guojie; Ling, Li; Chen, Shijie; Wang, Jun; Liao, Wenkui; Zhang, Gary

    2014-04-01

    Defect review is a time-consuming job, and human error makes the results inconsistent. Defects located in don't-care areas, such as dark areas, do not hurt yield and need not be reviewed; defects in critical areas, such as clear areas, can impact yield dramatically and demand closer attention. As integrated circuit dimensions shrink, an inspection typically detects thousands of mask defects or more, so traditional manual or simple classification approaches cannot meet efficiency and accuracy requirements. This paper focuses on an automatic defect management and classification solution using the image output of Lasertec inspection equipment and anchor-pattern-centric image processing technology. The system can handle large numbers of defects with fast and accurate classification. Our experiments include Die-to-Die and Single-Die modes, with classification accuracies of 87.4% and 93.3%, respectively. No critical or printable defects were missed in our test cases; the misclassification rates were 0.25% in Die-to-Die mode and 0.24% in Single-Die mode, which is encouraging and acceptable for a production line. The results can be exported and loaded back into the inspection machine for further review, helping users validate uncertain defects with clear, magnified images when the captured images do not provide enough information for a judgment. The system effectively reduces expensive inline defect review time. As a fully inline automated defect management solution, it is compatible with the current inspection approach and can be integrated with optical simulation, including scoring functions, to guide wafer-level defect inspection.

  19. Reconstructing the highly virulent Classical Swine Fever Virus strain Koslov

    DEFF Research Database (Denmark)

    Fahnøe, Ulrik; Pedersen, Anders Gorm; Nielsen, Jens

    -prone nature of the RNA-dependent RNA polymerase resulting in the majority of circulating forms being non-functional. However, since any infectious virus particle should necessarily be the offspring of a functional virus, we hypothesized that it should be possible to synthesize a highly virulent form...

  20. Three-dimensional reconstruction and modeling of middle ear biomechanics by high-resolution computed tomography and finite element analysis.

    Science.gov (United States)

    Lee, Chia-Fone; Chen, Peir-Rong; Lee, Wen-Jeng; Chen, Jyh-Horng; Liu, Tien-Chen

    2006-05-01

    To present a systematic and practical approach that uses high-resolution computed tomography to derive models of the middle ear for finite element analysis. This prospective study included 31 subjects with normal hearing and no previous otologic disorders. Temporal bone images obtained from 15 right ears and 16 left ears were used for evaluation and reconstruction. High-resolution computed tomography of the temporal bone was performed using simultaneous acquisition of 16 sections with a collimated slice thickness of 0.625 mm. All images were transferred to an Amira visualization system for three-dimensional reconstruction. The created three-dimensional model was translated into two commercial modeling packages, Patran and ANSYS, for finite element analysis. The characteristic dimensions of the model were measured and compared with previously published histologic section data. The results confirm that the geometric model created by the proposed method is accurate, except that the tympanic membrane is thicker than when measured by the histologic section method. No obvious difference in geometric dimensions between right and left ossicles was found (P > .05). The umbo and stapes displacements predicted by the finite element model are close to the bounds of the experimental curves of Nishihara, Huber, Gan, and Sun across the frequency range of 100 to 8000 Hz. The model includes a description of the geometry of the middle ear components and dynamic equations of vibration. The proposed method is quick, practical, low-cost, and, most importantly, noninvasive as compared with histologic section methods.

  1. Network Reconstruction From High-Dimensional Ordinary Differential Equations.

    Science.gov (United States)

    Chen, Shizhe; Shojaie, Ali; Witten, Daniela M

    2017-01-01

    We consider the task of learning a dynamical system from high-dimensional time-course data. For instance, we might wish to estimate a gene regulatory network from gene expression data measured at discrete time points. We model the dynamical system nonparametrically as a system of additive ordinary differential equations. Most existing methods for parameter estimation in ordinary differential equations estimate the derivatives from noisy observations. This is known to be challenging and inefficient. We propose a novel approach that does not involve derivative estimation. We show that the proposed method can consistently recover the true network structure even in high dimensions, and we demonstrate empirical improvement over competing approaches. Supplementary materials for this article are available online.

  2. Accurate on-chip measurement of the Seebeck coefficient of high mobility small molecule organic semiconductors

    Science.gov (United States)

    Warwick, C. N.; Venkateshvaran, D.; Sirringhaus, H.

    2015-09-01

    We present measurements of the Seebeck coefficient in two high mobility organic small molecules, 2,7-dioctyl[1]benzothieno[3,2-b][1]benzothiophene (C8-BTBT) and 2,9-didecyl-dinaphtho[2,3-b:2',3'-f]thieno[3,2-b]thiophene (C10-DNTT). The measurements are performed in a field effect transistor structure with high field effect mobilities of approximately 3 cm2/V s. This allows us to observe both the charge concentration and temperature dependence of the Seebeck coefficient. We find a strong logarithmic dependence upon charge concentration and a temperature dependence within the measurement uncertainty. Despite performing the measurements on highly polycrystalline evaporated films, we see an agreement in the Seebeck coefficient with modelled values from Shi et al. [Chem. Mater. 26, 2669 (2014)] at high charge concentrations. We attribute deviations from the model at lower charge concentrations to charge trapping.

  3. Accurate on-chip measurement of the Seebeck coefficient of high mobility small molecule organic semiconductors

    Directory of Open Access Journals (Sweden)

    C. N. Warwick

    2015-09-01

    Full Text Available We present measurements of the Seebeck coefficient in two high mobility organic small molecules, 2,7-dioctyl[1]benzothieno[3,2-b][1]benzothiophene (C8-BTBT and 2,9-didecyl-dinaphtho[2,3-b:2′,3′-f]thieno[3,2-b]thiophene (C10-DNTT. The measurements are performed in a field effect transistor structure with high field effect mobilities of approximately 3 cm2/V s. This allows us to observe both the charge concentration and temperature dependence of the Seebeck coefficient. We find a strong logarithmic dependence upon charge concentration and a temperature dependence within the measurement uncertainty. Despite performing the measurements on highly polycrystalline evaporated films, we see an agreement in the Seebeck coefficient with modelled values from Shi et al. [Chem. Mater. 26, 2669 (2014] at high charge concentrations. We attribute deviations from the model at lower charge concentrations to charge trapping.
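The logarithmic charge-concentration dependence reported above is straightforward to extract by linear least squares in ln(n). The data below are synthetic with illustrative coefficients, not the measured C8-BTBT/C10-DNTT values:

```python
import numpy as np

# Synthetic Seebeck data following the logarithmic dependence reported in the
# abstract: S(n) = A - B*ln(n). A, B, and the noise level are illustrative only.
rng = np.random.default_rng(2)
n = np.logspace(17, 19, 20)      # charge concentration (arbitrary units)
A, B = 2000.0, 40.0              # uV/K, hypothetical coefficients
S = A - B * np.log(n) + rng.normal(0, 5.0, n.size)

# A straight-line fit in ln(n) recovers the logarithmic slope.
b_fit, a_fit = np.polyfit(np.log(n), S, 1)
```

The fitted slope b_fit approximates -B; deviations from the straight line at low concentrations would correspond to the charge-trapping effect the authors describe.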

  4. Efficient and accurate nearest neighbor and closest pair search in high-dimensional space

    KAUST Repository

    Tao, Yufei; Yi, Ke; Sheng, Cheng; Kalnis, Panos

    2010-01-01

    Nearest Neighbor (NN) search in high-dimensional space is an important problem in many applications. From the database perspective, a good solution needs to have two properties: (i) it can be easily incorporated in a relational database, and (ii

  5. A highly accurate finite-difference method with minimum dispersion error for solving the Helmholtz equation

    KAUST Repository

    Wu, Zedong; Alkhalifah, Tariq Ali

    2018-01-01

    Numerical simulation of the acoustic wave equation in either isotropic or anisotropic media is crucial to seismic modeling, imaging and inversion. Actually, it represents the core computation cost of these highly advanced seismic processing methods

  6. High speed digital TDC for D0 vertex reconstruction

    International Nuclear Information System (INIS)

    Gao Guosheng; Partridge, R.

    1992-01-01

    A high speed digital TDC has been built as part of the Level 0 trigger for the D0 experiment at Fermilab. The digital TDC is used to make a fast determination of the primary vertex position by timing the arrival of beam jets detected in the Level 0 counters. The vertex position is then used by the Level 1 trigger to determine the proper sinθ weighting factors for calculating transverse energies. Commercial GaAs integrated circuits are used in the digital TDC to obtain a time resolution of σ_t = 226 ps
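The fast vertex determination amounts to converting the arrival-time difference between two counter arrays into a position along the beam axis. The sketch below uses an illustrative symmetric geometry, not the exact D0 layout:

```python
# Vertex position from the arrival-time difference between two Level 0
# counters placed symmetrically along the beam axis (illustrative geometry).
C = 299.792458  # speed of light in mm/ns

def vertex_z(t_minus, t_plus):
    """Arrival times (ns) at the -z and +z counters; a vertex displaced
    toward +z is seen earlier by the +z counter, giving a positive z."""
    return 0.5 * C * (t_minus - t_plus)

# If sigma_t = 226 ps is the resolution of the measured time difference,
# the vertex position is known to roughly 34 mm:
sigma_t = 0.226              # ns
sigma_z = 0.5 * C * sigma_t  # ~33.9 mm
```

The Level 1 trigger then uses this z estimate to pick the sinθ weighting factors for the transverse-energy sums.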

  7. High-Resolution Photoionization, Photoelectron and Photodissociation Studies. Determination of Accurate Energetic and Spectroscopic Database for Combustion Radicals and Molecules

    Energy Technology Data Exchange (ETDEWEB)

    Ng, Cheuk-Yiu [Univ. of California, Davis, CA (United States)

    2016-04-25

    The main goal of this research program was to obtain accurate thermochemical and spectroscopic data, such as ionization energies (IEs), 0 K bond dissociation energies, 0 K heats of formation, and spectroscopic constants for radicals and molecules and their ions of relevance to combustion chemistry. Two unique, generally applicable vacuum ultraviolet (VUV) laser photoion-photoelectron apparatuses have been developed in our group, which have been used for high-resolution photoionization, photoelectron, and photodissociation studies of many small molecules of combustion relevance.

  8. Reconstruction of highly contracted socket after irradiation with antral mucosa

    International Nuclear Information System (INIS)

    Tanabe, Yosihiko; Masaki, Michiyosi; Kato, Hisakazu

    1999-01-01

    We have repaired 3 cases of highly contracted socket after irradiation by lining the socket with antral mucosa, and obtained excellent results. Although this procedure requires rhinological skill to harvest the mucosa, it has the advantage of leaving no visible scar at the donor site. It is usually not difficult to obtain a sufficient quantity of mucous membrane from one antrum to line a whole socket. Lining the surface is also easy, since the antral mucosa is harvested in its original sac form: all that is needed is to make a 20 mm incision in the mucosa, place a silicone conformer inside it, and inlay it into the graft bed. Thus, once the mucous membrane has been obtained, the surgical procedure itself is quite simple. (author)

  9. Accurate, high-throughput typing of copy number variation using paralogue ratios from dispersed repeats.

    NARCIS (Netherlands)

    Armour, J.A.; Palla, R.; Zeeuwen, P.L.J.M.; Heijer, M. den; Schalkwijk, J.; Hollox, E.J.

    2007-01-01

    Recent work has demonstrated an unexpected prevalence of copy number variation in the human genome, and has highlighted the part this variation may play in predisposition to common phenotypes. Some important genes vary in number over a high range (e.g. DEFB4, which commonly varies between two and

  10. Online Reconstruction and Calibration with Feedback Loop in the ALICE High Level Trigger

    Directory of Open Access Journals (Sweden)

    Rohr David

    2016-01-01

    at the Large Hadron Collider (LHC) at CERN. The High Level Trigger (HLT) is an online computing farm, which reconstructs events recorded by the ALICE detector in real-time. The most computing-intensive task is the reconstruction of the particle trajectories. The main tracking devices in ALICE are the Time Projection Chamber (TPC) and the Inner Tracking System (ITS). The HLT uses a fast GPU-accelerated algorithm for the TPC tracking based on the Cellular Automaton principle and the Kalman filter. ALICE employs gaseous subdetectors which are sensitive to environmental conditions such as ambient pressure and temperature, and the TPC is one of these. A precise reconstruction of particle trajectories requires the calibration of these detectors. As our first topic, we present some recent optimizations to our GPU-based TPC tracking using the new GPU models we employ for the ongoing and upcoming data-taking period at the LHC. We also show our new approach to fast ITS standalone tracking. As our second topic, we present improvements to the HLT for facilitating online reconstruction, including a new flat data model and a new data flow chain. The calibration output is fed back to the reconstruction components of the HLT via a feedback loop. We conclude with an analysis of a first online calibration test under real conditions during the Pb-Pb run in November 2015, which was based on these new features.

  11. Designing sparse sensing matrix for compressive sensing to reconstruct high resolution medical images

    Directory of Open Access Journals (Sweden)

    Vibha Tiwari

    2015-12-01

    Full Text Available Compressive sensing theory enables faithful reconstruction of signals, sparse in domain $\Psi$, at a sampling rate less than the Nyquist criterion, while using a sampling or sensing matrix $\Phi$ which satisfies the restricted isometry property. The roles played by the sensing matrix $\Phi$ and the sparsity matrix $\Psi$ are vital for faithful reconstruction. If the sensing matrix is dense, it takes a large storage space and leads to high computational cost. In this paper, an effort is made to design a sparse sensing matrix with the least incurred computational cost while maintaining the quality of the reconstructed image. The design approach followed is based on a sparse block circulant matrix (SBCM) with a few modifications. The other sparse sensing matrix used consists of 15 ones in each column. The medical images used are acquired from US, MRI and CT modalities. Image quality measurement parameters are used to compare the performance of the reconstructed medical images using the various sensing matrices. It is observed that, since the Gram matrix of the dictionary matrix ($\Phi\Psi$) is close to the identity matrix in the case of the proposed modified SBCM, it helps to reconstruct medical images of very good quality.
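The column-sparse construction and the Gram-matrix criterion mentioned in the abstract can be sketched directly. The matrix below places 15 ones in each column at random rows; it is an illustrative stand-in for the paper's sparse block circulant matrix, and the sparsity basis is taken to be the identity:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 128, 256, 15   # measurements, signal length, ones per column

# Sparse sensing matrix: each column holds k ones at random rows
# (an illustrative substitute for the sparse block circulant construction).
Phi = np.zeros((m, n))
for j in range(n):
    Phi[rng.choice(m, size=k, replace=False), j] = 1.0
Phi /= np.sqrt(k)        # unit-norm columns

# Coherence check: the Gram matrix Phi^T Phi should stay close to the
# identity, i.e. the off-diagonal entries (column correlations) stay small.
G = Phi.T @ Phi
off_diag_max = np.abs(G - np.eye(n)).max()
```

Storing only k nonzeros per column makes the measurement y = Phi x computable with k additions per column, which is the storage and computation saving the paper targets.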

  12. Reconstruction of high-dimensional states entangled in orbital angular momentum using mutually unbiased measurements

    CSIR Research Space (South Africa)

    Giovannini, D

    2013-06-01

    Full Text Available: QELS_Fundamental Science, San Jose, California, United States, 9-14 June 2013. Reconstruction of High-Dimensional States Entangled in Orbital Angular Momentum Using Mutually Unbiased Measurements. D. Giovannini, J. Romero, J. Leach, A...

  13. Accelerated high-frame-rate mouse heart cine-MRI using compressed sensing reconstruction

    NARCIS (Netherlands)

    Motaal, Abdallah G.; Coolen, Bram F.; Abdurrachim, Desiree; Castro, Rui M.; Prompers, Jeanine J.; Florack, Luc M. J.; Nicolay, Klaas; Strijkers, Gustav J.

    2013-01-01

    We introduce a new protocol to obtain very high-frame-rate cinematographic (Cine) MRI movies of the beating mouse heart within a reasonable measurement time. The method is based on a self-gated accelerated fast low-angle shot (FLASH) acquisition and compressed sensing reconstruction. Key to our

  14. A high-speed computerized tomography image reconstruction using direct two-dimensional Fourier transform method

    International Nuclear Information System (INIS)

    Niki, Noboru; Mizutani, Toshio; Takahashi, Yoshizo; Inouye, Tamon.

    1983-01-01

    The necessity of developing real-time computerized tomography (CT), aiming at the dynamic observation of organs such as the heart, has lately been advocated. Its realization requires image reconstruction markedly faster than in present CTs. Although various reconstruction methods have been proposed so far, the only method practically employed at present is the filtered backprojection (FBP) method, which gives high-quality image reconstruction but takes much computing time. In the past, the two-dimensional Fourier transform (TFT) method was regarded as unsuitable for practical use because the image quality obtained was not good, despite being a promising method for high-speed reconstruction owing to its lower computing time. However, since it was revealed that the image quality of the TFT method depends greatly on the interpolation accuracy in two-dimensional Fourier space, the authors have developed a high-speed calculation algorithm that obtains high-quality images by exploiting the relationship between image quality and interpolation method. In this algorithm, the radial sampling points in Fourier space are increased by a factor of 2^β, and linear or spline interpolation is used. Comparison of this method with the present FBP method led to the conclusion that the image quality is almost the same for practical image matrices, the computation time of the TFT method is about 1/10 that of the FBP method, and the memory capacity is also reduced by about 20%. (Wakatsuki, Y.)
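The TFT method rests on the projection-slice theorem: the 1D Fourier transform of a parallel projection equals a radial slice of the image's 2D Fourier transform, so an image can be reconstructed by interpolating many such slices onto a Cartesian grid and applying one inverse 2D FFT. A minimal NumPy check of the theorem at angle 0 (the phantom and sizes are illustrative):

```python
import numpy as np

# Projection-slice theorem, the core of direct Fourier (TFT) reconstruction.
img = np.zeros((64, 64))
img[20:40, 25:35] = 1.0                  # simple rectangular phantom

proj = img.sum(axis=0)                   # parallel projection at angle 0
slice_from_proj = np.fft.fft(proj)       # 1D FFT of the projection
slice_from_img = np.fft.fft2(img)[0, :]  # kx-axis slice of the image's 2D FFT

# The two agree exactly: the projection's spectrum IS a slice of the 2D spectrum.
assert np.allclose(slice_from_proj, slice_from_img)
```

The interpolation step from the polar slices onto the Cartesian grid is exactly where the 2^β radial oversampling and the choice between linear and spline interpolation determine the final image quality.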

  15. Parallelization of an existing high energy physics event reconstruction software package

    International Nuclear Information System (INIS)

    Schiefer, R.; Francis, D.

    1996-01-01

    Software parallelization allows an efficient use of available computing power to increase the performance of applications. In a case study the authors have investigated the parallelization of high energy physics event reconstruction software in terms of costs (effort, computing resource requirements), benefits (performance increase) and the feasibility of a systematic parallelization approach. Guidelines facilitating a parallel implementation are proposed for future software development

  16. High-SNR spectrum measurement based on Hadamard encoding and sparse reconstruction

    Science.gov (United States)

    Wang, Zhaoxin; Yue, Jiang; Han, Jing; Li, Long; Jin, Yong; Gao, Yuan; Li, Baoming

    2017-12-01

    The denoising capabilities of the H-matrix and the cyclic S-matrix based on sparse reconstruction, employed in the Pixel of Focal Plane Coded Visible Spectrometer for spectrum measurement, are investigated, where the spectrum is sparse in a known basis. In the measurement process, the digital micromirror device plays an important role, implementing the Hadamard coding. In contrast with Hadamard transform spectrometry, this spectrometer may have the advantage of high efficiency based on shift invariance. Simulations and experiments show that the nonlinear solution with sparse reconstruction has a better signal-to-noise ratio than the linear solution, and that the H-matrix outperforms the cyclic S-matrix whether the reconstruction method is nonlinear or linear.

  17. Multiple-image hiding using super resolution reconstruction in high-frequency domains

    Science.gov (United States)

    Li, Xiao-Wei; Zhao, Wu-Xiang; Wang, Jun; Wang, Qiong-Hua

    2017-12-01

    In this paper, a robust multiple-image hiding method using computer-generated integral imaging and a modified super-resolution reconstruction algorithm is proposed. In our work, the host image is first transformed into frequency domains by cellular automata (CA); to preserve the quality of the stego-image, the secret images are embedded into the CA high-frequency domains. The proposed method has the following advantages: (1) robustness to geometric attacks because of the memory-distributed property of elemental images; (2) increased quality of the reconstructed secret images, as the scheme utilizes the modified super-resolution reconstruction algorithm. The simulation results show that the proposed multiple-image hiding method outperforms other similar hiding methods and is robust to attacks such as Gaussian noise and JPEG compression.

  18. Limited Angle Torque Motors Having High Torque Density, Used in Accurate Drive Systems

    Directory of Open Access Journals (Sweden)

    R. Obreja

    2011-01-01

    Full Text Available A torque motor is a special electric motor that is able to develop the highest possible torque in a certain volume. A torque motor usually has a pancake configuration and is directly coupled to a drive system (without a gearbox). A limited angle torque motor is a torque motor that has no rotary electromagnetic field — in certain papers it is referred to as a linear electromagnet. The main intention of the authors in this paper is to present a means for analyzing and designing a limited angle torque motor solely through the finite element method. Users nowadays require very high-performance limited angle torque motors with high torque density. It is therefore necessary to develop the highest possible torque in a relatively small volume. A way to design such motors is by using numerical methods based on the finite element method.

  19. Development and operation of a high-throughput accurate-wavelength lens-based spectrometer

    Energy Technology Data Exchange (ETDEWEB)

    Bell, Ronald E., E-mail: rbell@pppl.gov [Princeton Plasma Physics Laboratory, Princeton, New Jersey 08543 (United States)

    2014-11-15

    A high-throughput spectrometer for the 400–820 nm wavelength range has been developed for charge exchange recombination spectroscopy or general spectroscopy. A large 2160 mm⁻¹ grating is matched with fast f/1.8 200 mm lenses, which provide stigmatic imaging. A precision optical encoder measures the grating angle with an accuracy ≤0.075 arc sec. A high quantum efficiency, low-etaloning CCD detector allows operation at longer wavelengths. A patch panel allows input fibers to interface with interchangeable fiber holders that attach to a kinematic mount at the entrance slit. Computer-controlled hardware allows automated control of wavelength, timing, and f-number, as well as automated data collection and wavelength calibration.

  20. Approaches to the accurate characterization of high purity metal fluorides and fluoride glasses

    Science.gov (United States)

    Beary, E. S.; Paulsen, P. J.; Rains, T. C.; Ewing, K. J.; Jaganathan, J.; Aggarwal, I.

    1990-11-01

    The analytical challenges posed by the measurement of trace contaminants in high-purity metal fluorides require innovative chemical preparation procedures to enhance existing instrumental techniques. The instrumental techniques used to analyze these difficult matrices must be sensitive enough to detect extremely low levels of trace impurities, and the background interferences derived from the matrix (metal fluoride or glass) must be minimized. A survey of analytical techniques that have the necessary characteristics to analyze these materials will be given. In addition, means of controlling the chemical blank will be presented. Mass and atomic spectrometric techniques will be discussed, specifically graphite furnace atomic absorption spectrometry (GFAAS) and inductively coupled plasma mass spectrometry (ICP-MS). Analytical procedures using GFAAS and ICP-MS have been developed to determine sub-ppb (part-per-billion) levels of contaminants in high-purity fluoride materials.

  1. An experimental device for accurate ultrasounds measurements in liquid foods at high pressure

    International Nuclear Information System (INIS)

    Hidalgo-Baltasar, E; Taravillo, M; Baonza, V G; Sanz, P D; Guignon, B

    2012-01-01

    The use of high hydrostatic pressure (HHP) to ensure safe, high-quality products has increased markedly in the food industry during the last decade. Ultrasonic sensors can be employed to control such processes in the same way as they are currently used in processes carried out at atmospheric pressure. However, their installation, calibration, and use are particularly challenging in a high-pressure environment. Moreover, data on the acoustic properties of foods under pressure, and even of water, are quite scarce in the pressure range of interest for food treatment (above 200 MPa). The objective of this work was to establish a methodology for determining the speed of sound in foods under pressure. An ultrasonic sensor using the multiple-reflections method was fitted to lab-scale HHP equipment to determine the speed of sound in water between 253.15 and 348.15 K at pressures up to 700 MPa. The experimental speed-of-sound data were compared with values calculated from the equation of state of water (IAPWS-95 formulation), and this analysis validated the cell-path calibration. After this calibration procedure, the speed of sound in liquid foods could be determined with this sensor with a relative uncertainty between 0.22% and 0.32% (95% confidence level) over the whole pressure domain.
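
    The multiple-reflections measurement reduces to timing successive echoes over a calibrated cell path. A minimal sketch of the two steps described above (function names and numbers are illustrative, not taken from the paper):

```python
def calibrate_path_length(delta_t_s, c_reference_m_s):
    """Cell path from the delay between successive echoes at a reference
    state where the speed of sound is known (e.g. IAPWS-95 water at a
    known temperature and pressure): L = c_ref * dt / 2, since the pulse
    traverses the path twice between echoes."""
    return c_reference_m_s * delta_t_s / 2.0

def speed_of_sound(delta_t_s, path_length_m):
    """Speed of sound in a sample from the calibrated path: c = 2 L / dt."""
    return 2.0 * path_length_m / delta_t_s

# Calibrate at a reference state, then measure an unknown sample.
L = calibrate_path_length(2.70e-5, 1482.0)  # ~1482 m/s: water near 293 K
print(speed_of_sound(2.45e-5, L))
```

    In practice the path length itself varies slightly with pressure and temperature, which is why the calibration had to be validated against the IAPWS-95 equation of state over the whole measurement domain.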

  2. Highly accurate sequence imputation enables precise QTL mapping in Brown Swiss cattle.

    Science.gov (United States)

    Frischknecht, Mirjam; Pausch, Hubert; Bapst, Beat; Signer-Hasler, Heidi; Flury, Christine; Garrick, Dorian; Stricker, Christian; Fries, Ruedi; Gredler-Grandl, Birgit

    2017-12-29

    Within the last few years a large amount of genomic information has become available in cattle. Densities of genomic information vary from a few thousand variants up to whole genome sequence information. In order to combine genomic information from different sources and infer genotypes for a common set of variants, genotype imputation is required. In this study we evaluated the accuracy of imputation from high density chips to whole genome sequence data in Brown Swiss cattle. Using four popular imputation programs (Beagle, FImpute, Impute2, Minimac) and various compositions of reference panels, the accuracy of the imputed sequence variant genotypes was high and differences between the programs and scenarios were small. We imputed sequence variant genotypes for more than 1600 Brown Swiss bulls and performed genome-wide association studies for milk fat percentage at two stages of lactation. We found one and three quantitative trait loci for early and late lactation fat content, respectively. Known causal variants that were imputed from the sequenced reference panel were among the most significantly associated variants of the genome-wide association study. Our study demonstrates that whole-genome sequence information can be imputed at high accuracy in cattle populations. Using imputed sequence variant genotypes in genome-wide association studies may facilitate causal variant detection.
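
    Imputation accuracy of the kind evaluated here is commonly summarised per variant by the concordance of best-guess genotypes and the squared correlation between true and imputed allele dosages. A hedged sketch of these two measures (not the authors' evaluation code):

```python
import numpy as np

def imputation_accuracy(true_dosages, imputed_dosages):
    """Genotype concordance and dosage r^2 for genotypes coded as
    allele dosages 0/1/2 (imputed dosages may be fractional)."""
    t = np.asarray(true_dosages, dtype=float)
    i = np.asarray(imputed_dosages, dtype=float)
    concordance = float(np.mean(np.round(i) == t))
    r = np.corrcoef(t, i)[0, 1]
    return concordance, float(r * r)

conc, r2 = imputation_accuracy([0, 1, 2, 1, 0, 2],
                               [0.1, 1.0, 1.9, 1.2, 0.0, 2.0])
print(conc, round(r2, 3))
```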

  3. SINA: accurate high-throughput multiple sequence alignment of ribosomal RNA genes.

    Science.gov (United States)

    Pruesse, Elmar; Peplies, Jörg; Glöckner, Frank Oliver

    2012-07-15

    In the analysis of homologous sequences, the computation of multiple sequence alignments (MSAs) has become a bottleneck. This is especially troublesome for marker genes like the ribosomal RNA (rRNA), where millions of sequences are already publicly available and individual studies can easily produce hundreds of thousands of new sequences. Methods have been developed to cope with such numbers, but further improvements are needed to meet accuracy requirements. In this study, we present the SILVA Incremental Aligner (SINA), used to align the rRNA gene databases provided by the SILVA ribosomal RNA project. SINA uses a combination of k-mer searching and partial order alignment (POA) to maintain very high alignment accuracy while satisfying high-throughput performance demands. SINA was evaluated in comparison with the commonly used high-throughput MSA programs PyNAST and mothur. The three BRAliBase III benchmark MSAs could be reproduced with 99.3%, 97.6% and 96.1% accuracy. A larger benchmark MSA comprising 38 772 sequences could be reproduced with 98.9% and 99.3% accuracy using reference MSAs comprising 1000 and 5000 sequences, respectively. SINA achieved higher accuracy than PyNAST and mothur in all performed benchmarks. Alignment of up to 500 sequences using the latest SILVA SSU/LSU Ref datasets as reference MSA is offered at http://www.arb-silva.de/aligner. This page also links to Linux binaries, the user manual and a tutorial. SINA is made available under a personal use license.
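
    The k-mer search that SINA pairs with partial order alignment can be illustrated with a toy pre-filter that picks the reference sequence sharing the most k-mers with a query (an illustration of the idea only, not SINA's implementation):

```python
def kmers(seq, k=8):
    """Set of all overlapping substrings of length k."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def best_reference(query, references, k=8):
    """Reference sharing the most k-mers with the query; a cheap
    similarity search run before any expensive alignment step."""
    q = kmers(query, k)
    return max(references, key=lambda ref: len(q & kmers(ref, k)))

ref_a = "ACGTACGTACGTACGT"
ref_b = "TTTTCCCCGGGGAAAA"
print(best_reference("ACGTACGTACGTACGA", [ref_a, ref_b], k=4))
```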

  4. Micro-UFO (Untethered Floating Object): A Highly Accurate Microrobot Manipulation Technique

    Directory of Open Access Journals (Sweden)

    Hüseyin Uvet

    2018-03-01

    Full Text Available A new microrobot manipulation technique with high-precision (nanometre-level) positional accuracy for motion in a liquid environment using diamagnetic levitation is presented. Untethered manipulation of microrobots by means of externally applied magnetic forces has been emerging as a promising field of research, particularly due to its potential for medical and biological applications. The purpose of the presented method is to eliminate the friction force between the substrate surface and the microrobot. To achieve high-accuracy motion, the magnetic force required to levitate the microrobot was determined by finite element method (FEM) simulations in COMSOL (version 5.3, COMSOL Inc., Stockholm, Sweden) and verified by experimental results. The levitation height of the microrobot in the liquid was found analytically as a function of the lifter-magnet position and compared head-to-head with the experimental results. The stable working range of the microrobot is between 30 µm and 330 µm, as confirmed by both simulations and experiments. The microrobot can follow a given trajectory with high accuracy (<1 µm average error) at varied speeds and levitation heights. Owing to the nanometre-level positioning accuracy, the desired locomotion can be achieved along pre-specified (sinusoidal or circular) trajectories. During locomotion, a phase difference between the lifter magnet and the carrier magnet was observed, and its relation to the drag-force effect is discussed. Without using strong electromagnets or bulky permanent magnets, our manipulation approach can move the microrobot in three dimensions in a liquid environment.

  5. A high-throughput system for high-quality tomographic reconstruction of large datasets at Diamond Light Source.

    Science.gov (United States)

    Atwood, Robert C; Bodey, Andrew J; Price, Stephen W T; Basham, Mark; Drakopoulos, Michael

    2015-06-13

    Tomographic datasets collected at synchrotrons are becoming very large and complex, and, therefore, need to be managed efficiently. Raw images may have high pixel counts, and each pixel can be multidimensional and associated with additional data such as those derived from spectroscopy. In time-resolved studies, hundreds of tomographic datasets can be collected in sequence, yielding terabytes of data. Users of tomographic beamlines are drawn from various scientific disciplines, and many are keen to use tomographic reconstruction software that does not require a deep understanding of reconstruction principles. We have developed Savu, a reconstruction pipeline that enables users to rapidly reconstruct data to consistently create high-quality results. Savu is designed to work in an 'orthogonal' fashion, meaning that data can be converted between projection and sinogram space throughout the processing workflow as required. The Savu pipeline is modular and allows processing strategies to be optimized for users' purposes. In addition to the reconstruction algorithms themselves, it can include modules for identification of experimental problems, artefact correction, general image processing and data quality assessment. Savu is open source, open licensed and 'facility-independent': it can run on standard cluster infrastructure at any institution.
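
    The 'orthogonal' conversion between projection and sinogram space that Savu relies on is, for a dataset stored as a 3D array, just an axis swap. A minimal illustration of the concept (not Savu's code):

```python
import numpy as np

def projections_to_sinograms(projections):
    """View a projection stack (angle, detector_row, detector_column) as
    a sinogram stack (detector_row, angle, detector_column): one sinogram
    per detector row.  The inverse conversion is the same swap."""
    return np.swapaxes(np.asarray(projections), 0, 1)

stack = np.arange(24).reshape(2, 3, 4)   # 2 angles, 3 rows, 4 columns
sinos = projections_to_sinograms(stack)
print(sinos.shape)  # (3, 2, 4)
```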

  6. Computing the PSF with high-resolution reconstruction technique

    Science.gov (United States)

    Su, Xiaofeng; Chen, FanSheng; Yang, Xue; Xue, Yulong; Dong, YucCui

    2016-05-01

    The point spread function (PSF) is a very important characteristic of an imaging system, describing its filtering behaviour: the worse the PSF, the blurrier the image, and vice versa. In remote sensing image processing, an image can be restored using the PSF of the imaging system to obtain a clearer picture, so measuring the system PSF is essential. Usually the knife-edge method, the line spread function (LSF) method, or the streak (bar) target method is used to obtain the modulation transfer function (MTF), from which the PSF of the system is calculated. In the knife-edge method, the non-uniformity (NU) of the detector leads to unstable precision of the edge angle; the streak target gives a more stable MTF, but only at one frequency point in one direction, which is not sufficient for a high-precision PSF. In this paper, we use images of point targets directly, combined with the energy concentration, to calculate the PSF. First, we make a point-matrix target board and ensure that each point images onto a sub-pixel position on the detector array; then we use the centre of gravity to locate the point-target images and obtain the energy concentration; finally, we fuse the target images together using their sub-pixel positions to obtain a stable PSF of the system. Simulation results confirm the accuracy of the method.
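
    The centre-of-gravity step used to locate each point-target image at sub-pixel precision can be sketched as follows (illustrative, not the authors' code):

```python
import numpy as np

def centroid(image):
    """Sub-pixel location of a point-target image by centre of gravity:
    the intensity-weighted mean of the pixel coordinates (row, column)."""
    img = np.asarray(image, dtype=float)
    total = img.sum()
    rows, cols = np.indices(img.shape)
    return (rows * img).sum() / total, (cols * img).sum() / total

img = np.zeros((5, 5))
img[2, 2] = 2.0
img[2, 3] = 2.0
print(centroid(img))  # (2.0, 2.5): the spot centre lies between two pixels
```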

  7. Accurate Mass Measurements for Planetary Microlensing Events Using High Angular Resolution Observations

    Directory of Open Access Journals (Sweden)

    Jean-Philippe Beaulieu

    2018-04-01

    Full Text Available The microlensing technique is a unique method for hunting cold planets over a wide range of masses and separations, orbiting all varieties of host stars in the disk of our galaxy. It provides precise mass ratios and projected separations in units of the Einstein ring radius. In order to obtain the physical parameters (mass, distance, orbital separation) of the system, it is necessary to combine the results of light curve modeling with lens mass-distance relations and/or to perform a Bayesian analysis with a galactic model. A first mass-distance relation can be obtained from a constraint on the Einstein ring radius if the crossing time of the source over the caustic is measured. It can then be supplemented by secondary constraints such as parallax measurements, ideally using coinciding ground-based and space-borne observations; these are still subject to degeneracies, such as the orbital motion of the lens. A third mass-distance relation can be obtained from constraints on the lens luminosity using high angular resolution observations with 8 m class telescopes or the Hubble Space Telescope. The latter route, although quite inexpensive in telescope time, is very effective. When we must rely heavily on Bayesian analysis with limited constraints on the mass-distance relations, the physical parameters are typically determined to 30-40%. In a handful of cases, ground-space parallax is a powerful route to stronger constraints on masses. High angular resolution observations will be able to constrain the luminosity of the lenses in the majority of cases, and in favorable circumstances it is possible to derive physical parameters to 10% or better. Moreover, such constraints will be obtainable for most of the planets to be discovered by the Euclid and WFIRST satellites. We describe here the state-of-the-art approaches to measuring lens masses and distances, with an emphasis on high angular resolution observations, and discuss the challenges and recent results.
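
    As a concrete example of one such mass-distance constraint: when both the angular Einstein radius theta_E and the microlens parallax pi_E are measured, the lens mass follows directly from the standard relation M = theta_E / (kappa * pi_E), with kappa = 4G/(c^2 au) ≈ 8.144 mas per solar mass. The sketch and numbers below are illustrative:

```python
KAPPA_MAS_PER_MSUN = 8.144   # kappa = 4G / (c^2 * au), in mas / M_sun

def lens_mass_msun(theta_E_mas, pi_E):
    """Lens mass (solar masses) from the angular Einstein radius (mas)
    and the dimensionless microlens parallax: M = theta_E/(kappa*pi_E)."""
    return theta_E_mas / (KAPPA_MAS_PER_MSUN * pi_E)

print(lens_mass_msun(0.8144, 0.2))  # a 0.5 M_sun lens
```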

  8. Accurate screening for synthetic preservatives in beverage using high performance liquid chromatography with time-of-flight mass spectrometry

    International Nuclear Information System (INIS)

    Li Xiuqin; Zhang Feng; Sun Yanyan; Yong Wei; Chu Xiaogang; Fang Yanyan; Zweigenbaum, Jerry

    2008-01-01

    In this study, high performance liquid chromatography time-of-flight mass spectrometry (HPLC/TOF-MS) is applied to the qualitative and quantitative analysis of 18 synthetic preservatives in beverages. Identification by HPLC/TOF-MS is accomplished with the accurate mass (and the empirical formula generated from it) of the protonated molecules [M + H]+ or the deprotonated molecules [M - H]-, along with the accurate masses of their main fragment ions. In order to obtain sufficient sensitivity for quantitation (using the protonated or deprotonated molecule) together with the additional qualitative mass spectral information provided by the fragment ions, segmented fragmentor-voltage programs were designed for positive and negative ion modes, respectively. Accurate mass measurements are highly useful in complex sample analyses because they provide a high degree of specificity, often needed when other interferents are present in the matrix. The mass accuracy routinely obtained is better than 3 ppm. The 18 compounds behave linearly in the 0.005-5.0 mg kg⁻¹ concentration range, with correlation coefficients >0.996. Recoveries at the tested concentrations of 1.0-100 mg kg⁻¹ are 81-106%. The limits of detection are far below the required maximum residue levels (MRLs) for these preservatives in foodstuffs. The method is suitable for routine quantitative and qualitative analyses of synthetic preservatives in foodstuffs.
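
    The quoted mass accuracy is the relative deviation of the measured from the theoretical m/z, expressed in parts per million. A small illustration (the m/z values are made-up examples):

```python
def mass_error_ppm(measured_mz, theoretical_mz):
    """Mass accuracy in parts per million (ppm); identifications in
    accurate-mass workflows require this to be small (here routinely
    better than 3 ppm)."""
    return (measured_mz - theoretical_mz) / theoretical_mz * 1e6

print(round(mass_error_ppm(121.0299, 121.0295), 2))  # about 3.3 ppm
```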

  9. Highly Accurate and Precise Infrared Transition Frequencies of the H_3^+ Cation

    Science.gov (United States)

    Perry, Adam J.; Markus, Charles R.; Hodges, James N.; Kocheril, G. Stephen; McCall, Benjamin J.

    2016-06-01

    Calculation of ab initio potential energy surfaces for molecules to high accuracy is only manageable for a handful of molecular systems. Among them is the simplest polyatomic molecule, the H_3^+ cation. High accuracy has been pursued in a series of theoretical and experimental studies: Diniz, J.R. Mohallem, A. Alijah, M. Pavanello, L. Adamowicz, O.L. Polyansky, J. Tennyson, Phys. Rev. A 88, 032506 (2013); O.L. Polyansky, A. Alijah, N.F. Zobov, I.I. Mizus, R.I. Ovsyannikov, J. Tennyson, L. Lodi, T. Szidarovszky, A.G. Császár, Phil. Trans. R. Soc. A 370, 5014 (2012); J.N. Hodges, A.J. Perry, P.A. Jenkins II, B.M. Siller, B.J. McCall, J. Chem. Phys. 139, 164201 (2013); A.J. Perry, J.N. Hodges, C.R. Markus, G.S. Kocheril, B.J. McCall, J. Molec. Spectrosc. 317, 71-73 (2015).

  10. A highly accurate boundary integral equation method for surfactant-laden drops in 3D

    Science.gov (United States)

    Sorgentone, Chiara; Tornberg, Anna-Karin

    2018-05-01

    The presence of surfactants alters the dynamics of viscous drops immersed in an ambient viscous fluid. This is especially true at small scales, such as in applications of droplet-based microfluidics, where the interface dynamics become increasingly important. At such small scales, viscous forces dominate and inertial effects are often negligible. Considering Stokes flow, a numerical method based on a boundary integral formulation is presented for simulating 3D drops covered by an insoluble surfactant. The method can simulate drops with different viscosities and close interactions, automatically controlling the time step size and maintaining high accuracy even when substantial drop deformation occurs. To achieve this, the drop surfaces as well as the surfactant concentration on each surface are represented by spherical harmonics expansions. A novel reparameterization method is introduced to ensure a high-quality representation of the drops even under deformation, specialized quadrature methods are invoked for the singular and nearly singular integrals that appear in the formulation, and the adaptive time-stepping scheme for the coupled drop and surfactant evolution is designed with a preconditioned implicit treatment of the surfactant diffusion.

  11. Highly Accurate Tree Models Derived from Terrestrial Laser Scan Data: A Method Description

    Directory of Open Access Journals (Sweden)

    Jan Hackenberg

    2014-05-01

    Full Text Available This paper presents a method for fitting cylinders to a point cloud derived from a terrestrial laser scan of a tree. Utilizing high-quality scan data as input, the resulting models describe the branching structure of the tree and are capable of detecting branches with a diameter smaller than a centimeter. The cylinders are stored in a hierarchical tree-like data structure that encapsulates parent-child neighbor relations and incorporates the tree's direction of growth. This structure enables the efficient extraction of tree components, such as the stem or a single branch. The method was validated both by comparing the resulting cylinder models with ground-truth data and by analyzing the agreement between the input point clouds and the models. The tree models represented more than 99% of the input point cloud, with an average distance from the cylinder model to the point cloud within sub-millimeter accuracy. After validation, the method was applied to build two allometric models based on 24 tree point clouds as an example application. Computation terminated successfully in less than 30 min. For the model predicting total above-ground volume, the coefficient of determination was 0.965, showing the high potential of terrestrial laser scanning for forest inventories.
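
    Fitting a cylinder to a branch segment amounts to minimising the distance of the points from the candidate cylinder surface. A sketch of that residual computation (illustrative, not the authors' implementation):

```python
import numpy as np

def cylinder_residuals(points, axis_point, axis_dir, radius):
    """Signed distance of each point from the surface of a cylinder
    defined by a point on its axis, an axis direction and a radius;
    a least-squares fit minimises the sum of squares of these."""
    p = np.asarray(points, dtype=float) - np.asarray(axis_point, dtype=float)
    d = np.asarray(axis_dir, dtype=float)
    d = d / np.linalg.norm(d)
    radial = p - np.outer(p @ d, d)     # component perpendicular to the axis
    return np.linalg.norm(radial, axis=1) - radius

pts = [[2, 0, 5], [0, 2, -1], [-2, 0, 0]]    # points on a z-axis cylinder, r = 2
print(cylinder_residuals(pts, [0, 0, 0], [0, 0, 1], 2.0))  # all ~0
```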

  12. Accurate molecular diagnosis of phenylketonuria and tetrahydrobiopterin-deficient hyperphenylalaninemias using high-throughput targeted sequencing

    Science.gov (United States)

    Trujillano, Daniel; Perez, Belén; González, Justo; Tornador, Cristian; Navarrete, Rosa; Escaramis, Georgia; Ossowski, Stephan; Armengol, Lluís; Cornejo, Verónica; Desviat, Lourdes R; Ugarte, Magdalena; Estivill, Xavier

    2014-01-01

    Genetic diagnostics of phenylketonuria (PKU) and tetrahydrobiopterin (BH4)-deficient hyperphenylalaninemia (BH4DH) rely on methods that scan for known mutations or on laborious molecular tools based on Sanger sequencing. We have implemented a novel and much more efficient strategy based on high-throughput multiplex targeted resequencing of four genes (PAH, GCH1, PTS, and QDPR) that, when affected by loss-of-function mutations, cause PKU and BH4DH. We validated this approach in a cohort of 95 samples with previously known PAH, GCH1, PTS, and QDPR mutations and one control sample. Pooled barcoded DNA libraries were enriched using a custom NimbleGen SeqCap EZ Choice array and sequenced using a HiSeq2000 sequencer. The combination of several robust bioinformatics tools allowed us to detect all known pathogenic mutations (point mutations, short insertions/deletions, and large genomic rearrangements) in the 95 samples, without spurious calls in these genes in the control sample. We then used the same capture assay in a discovery cohort of 11 uncharacterized HPA patients using a MiSeq sequencer. In addition, we report the precise characterization of the breakpoints of four genomic rearrangements in PAH, including a novel deletion of 899 bp in intron 3. Our study is a proof of principle that high-throughput targeted resequencing is ready to replace classical molecular methods for the differential genetic diagnosis of hyperphenylalaninemias, allowing the establishment of specifically tailored treatments a few days after birth. PMID:23942198

  13. High resolution melting analysis: a rapid and accurate method to detect CALR mutations.

    Directory of Open Access Journals (Sweden)

    Cristina Bilbao-Sieyro

    Full Text Available The recent discovery of CALR mutations in essential thrombocythemia (ET) and primary myelofibrosis (PMF) patients without JAK2/MPL mutations has emerged as a relevant finding for the molecular diagnosis of these myeloproliferative neoplasms (MPN). We tested the feasibility of high-resolution melting (HRM) as a screening method for rapid detection of CALR mutations. CALR was studied in wild-type JAK2/MPL patients, including 34 with ET, 21 with persistent thrombocytosis suggestive of MPN, and 98 with suspected secondary thrombocytosis. CALR mutation analysis was performed by HRM and Sanger sequencing. We also compared the clinical features of CALR-mutated versus 45 JAK2/MPL-mutated subjects with ET. Nineteen samples showed HRM patterns distinct from wild-type; of these, 18 were mutations and one was a polymorphism, as confirmed by direct sequencing. CALR mutations were present in 44% of ET (15/34), 14% of persistent thrombocytosis suggestive of MPN (3/21), and none of the secondary thrombocytosis cases (0/98). Of the 18 mutants, 9 were 52 bp deletions, 8 were 5 bp insertions, and one was a complex insertion/deletion mutation. No mutations were found after sequencing analysis of 45 samples displaying wild-type HRM curves. The HRM technique was reproducible, with no false positives or false negatives detected, and the limit of detection was 3%. This study establishes a sensitive, reliable and rapid HRM method to screen for the presence of CALR mutations.

  14. The development and verification of a highly accurate collision prediction model for automated noncoplanar plan delivery

    International Nuclear Information System (INIS)

    Yu, Victoria Y.; Tran, Angelia; Nguyen, Dan; Cao, Minsong; Ruan, Dan; Low, Daniel A.; Sheng, Ke

    2015-01-01

    Purpose: Significant dosimetric benefits had been previously demonstrated in highly noncoplanar treatment plans. In this study, the authors developed and verified an individualized collision model for the purpose of delivering highly noncoplanar radiotherapy and tested the feasibility of total delivery automation with Varian TrueBeam developer mode. Methods: A hand-held 3D scanner was used to capture the surfaces of an anthropomorphic phantom and a human subject, which were positioned with a computer-aided design model of a TrueBeam machine to create a detailed virtual geometrical collision model. The collision model included gantry, collimator, and couch motion degrees of freedom. The accuracy of the 3D scanner was validated by scanning a rigid cubical phantom with known dimensions. The collision model was then validated by generating 300 linear accelerator orientations corresponding to 300 gantry-to-couch and gantry-to-phantom distances, and comparing the corresponding distance measurements to their corresponding models. The linear accelerator orientations reflected uniformly sampled noncoplanar beam angles to the head, lung, and prostate. The distance discrepancies between measurements on the physical and virtual systems were used to estimate treatment-site-specific safety buffer distances with 0.1%, 0.01%, and 0.001% probability of collision between the gantry and couch or phantom. Plans containing 20 noncoplanar beams to the brain, lung, and prostate optimized via an in-house noncoplanar radiotherapy platform were converted into XML script for automated delivery and the entire delivery was recorded and timed to demonstrate the feasibility of automated delivery. Results: The 3D scanner measured the dimension of the 14 cm cubic phantom within 0.5 mm. The maximal absolute discrepancy between machine and model measurements for gantry-to-couch and gantry-to-phantom was 0.95 and 2.97 cm, respectively. The reduced accuracy of gantry-to-phantom measurements was

  16. The development and verification of a highly accurate collision prediction model for automated noncoplanar plan delivery.

    Science.gov (United States)

    Yu, Victoria Y; Tran, Angelia; Nguyen, Dan; Cao, Minsong; Ruan, Dan; Low, Daniel A; Sheng, Ke

    2015-11-01

    Significant dosimetric benefits had been previously demonstrated in highly noncoplanar treatment plans. In this study, the authors developed and verified an individualized collision model for the purpose of delivering highly noncoplanar radiotherapy and tested the feasibility of total delivery automation with Varian TrueBeam developer mode. A hand-held 3D scanner was used to capture the surfaces of an anthropomorphic phantom and a human subject, which were positioned with a computer-aided design model of a TrueBeam machine to create a detailed virtual geometrical collision model. The collision model included gantry, collimator, and couch motion degrees of freedom. The accuracy of the 3D scanner was validated by scanning a rigid cubical phantom with known dimensions. The collision model was then validated by generating 300 linear accelerator orientations corresponding to 300 gantry-to-couch and gantry-to-phantom distances, and comparing the corresponding distance measurements to their corresponding models. The linear accelerator orientations reflected uniformly sampled noncoplanar beam angles to the head, lung, and prostate. The distance discrepancies between measurements on the physical and virtual systems were used to estimate treatment-site-specific safety buffer distances with 0.1%, 0.01%, and 0.001% probability of collision between the gantry and couch or phantom. Plans containing 20 noncoplanar beams to the brain, lung, and prostate optimized via an in-house noncoplanar radiotherapy platform were converted into XML script for automated delivery and the entire delivery was recorded and timed to demonstrate the feasibility of automated delivery. The 3D scanner measured the dimension of the 14 cm cubic phantom within 0.5 mm. The maximal absolute discrepancy between machine and model measurements for gantry-to-couch and gantry-to-phantom was 0.95 and 2.97 cm, respectively. The reduced accuracy of gantry-to-phantom measurements was attributed to phantom setup
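
    The site-specific safety buffers described above amount to choosing a margin that the model-versus-machine discrepancy exceeds only with a stated probability. The paper derives these from its 300 measured discrepancies; as a simplified illustration, an empirical quantile does the same job:

```python
import numpy as np

def safety_buffer(discrepancies_cm, collision_probability):
    """Buffer distance exceeded by the |model - machine| discrepancy
    only with the requested probability, taken as the corresponding
    upper empirical quantile of the measured discrepancies."""
    return float(np.quantile(np.abs(discrepancies_cm),
                             1.0 - collision_probability))

d = np.linspace(0.0, 1.0, 101)          # synthetic discrepancies, in cm
print(safety_buffer(d, 0.1))            # 90th percentile: 0.9 cm
```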

  17. Automated discrete electron tomography - Towards routine high-fidelity reconstruction of nanomaterials.

    Science.gov (United States)

    Zhuge, Xiaodong; Jinnai, Hiroshi; Dunin-Borkowski, Rafal E; Migunov, Vadim; Bals, Sara; Cool, Pegie; Bons, Anton-Jan; Batenburg, Kees Joost

    2017-04-01

    Electron tomography is an essential imaging technique for the investigation of morphology and 3D structure of nanomaterials. This method, however, suffers from well-known missing wedge artifacts due to a restricted tilt range, which limits the objectiveness, repeatability and efficiency of quantitative structural analysis. Discrete tomography represents one of the promising reconstruction techniques for materials science, potentially capable of delivering higher fidelity reconstructions by exploiting the prior knowledge of the limited number of material compositions in a specimen. However, the application of discrete tomography to practical datasets remains a difficult task due to the underlying challenging mathematical problem. In practice, it is often hard to obtain consistent reconstructions from experimental datasets. In addition, numerous parameters need to be tuned manually, which can lead to bias and non-repeatability. In this paper, we present the application of a new iterative reconstruction technique, named TVR-DART, for discrete electron tomography. The technique is capable of consistently delivering reconstructions with significantly reduced missing wedge artifacts for a variety of challenging data and imaging conditions, and can automatically estimate its key parameters. We describe the principles of the technique and apply it to datasets from three different types of samples acquired under diverse imaging modes. By further reducing the available tilt range and number of projections, we show that the proposed technique can still produce consistent reconstructions with minimized missing wedge artifacts. This new development promises to provide the electron microscopy community with an easy-to-use and robust tool for high-fidelity 3D characterization of nanomaterials. Copyright © 2017 Elsevier B.V. All rights reserved.
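
    The prior knowledge that a specimen contains only a few material compositions enters DART-type methods through a discretisation step that maps each voxel of a continuous reconstruction to the nearest admissible grey level. TVR-DART uses a smooth, parameter-estimating variant; the hard-threshold sketch below only illustrates the idea:

```python
import numpy as np

def segment_to_grey_levels(reconstruction, grey_levels):
    """Snap every voxel of a continuous reconstruction to the nearest of
    the known material grey levels (the hard segmentation step of
    DART-type discrete tomography)."""
    rec = np.asarray(reconstruction, dtype=float)
    levels = np.sort(np.asarray(grey_levels, dtype=float))
    nearest = np.abs(rec[..., None] - levels).argmin(axis=-1)
    return levels[nearest]

rec = np.array([[0.1, 0.6], [0.9, 0.4]])
print(segment_to_grey_levels(rec, [0.0, 1.0]))
```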

  18. A highly accurate absolute gravimetric network for Albania, Kosovo and Montenegro

    Science.gov (United States)

    Ullrich, Christian; Ruess, Diethard; Butta, Hubert; Qirko, Kristaq; Pavicevic, Bozidar; Murat, Meha

    2016-04-01

    The objective of this project is to establish a basic gravity network in Albania, Kosovo and Montenegro to enable further investigations in geodetic and geophysical issues. Therefore, for the first time in history, absolute gravity measurements were performed in these countries. The Norwegian mapping authority Kartverket is assisting the national mapping authorities in Kosovo (KCA) (Kosovo Cadastral Agency - Agjencia Kadastrale e Kosovës), Albania (ASIG) (Autoriteti Shtetëror i Informacionit Gjeohapësinor) and in Montenegro (REA) (Real Estate Administration of Montenegro - Uprava za nekretnine Crne Gore) in improving the geodetic frameworks. The gravity measurements are funded by Kartverket. The absolute gravimetric measurements were performed by BEV (Federal Office of Metrology and Surveying) with the absolute gravimeter FG5-242. As a national metrology institute (NMI), the Metrology Service of the BEV maintains the national standards for the realisation of the legal units of measurement and ensures their international equivalence and recognition. The laser and clock of the absolute gravimeter were calibrated before and after the measurements. The absolute gravimetric survey was carried out from September to October 2015. Finally, all 8 scheduled stations were successfully measured: three stations are located in Montenegro, two in Kosovo and three in Albania. The stations are distributed over the countries to establish a gravity network for each country. The vertical gradients were measured at all 8 stations with the relative gravimeter Scintrex CG5. The high quality of some absolute gravity stations makes them usable for gravity monitoring activities in the future. The measurement uncertainties of the absolute gravity measurements are around 2.5 µGal at all stations (1 µGal = 10^-8 m/s^2). In Montenegro, the large gravity difference of 200 mGal between the stations Zabljak and Podgorica can even be used for calibration of relative gravimeters


  19. A high-order time-accurate interrogation method for time-resolved PIV

    International Nuclear Information System (INIS)

    Lynch, Kyle; Scarano, Fulvio

    2013-01-01

    A novel method is introduced for increasing the accuracy and extending the dynamic range of time-resolved particle image velocimetry (PIV). The approach extends the concept of particle tracking velocimetry by multiple frames to the pattern tracking by cross-correlation analysis as employed in PIV. The working principle is based on tracking the patterned fluid element, within a chosen interrogation window, along its individual trajectory throughout an image sequence. In contrast to image-pair interrogation methods, the fluid trajectory correlation concept deals with variable velocity along curved trajectories and non-zero tangential acceleration during the observed time interval. As a result, the velocity magnitude and its direction are allowed to evolve in a nonlinear fashion along the fluid element trajectory. The continuum deformation (namely spatial derivatives of the velocity vector) is accounted for by adopting local image deformation. The principle offers important reductions of the measurement error based on three main points: by enlarging the temporal measurement interval, the relative error becomes reduced; secondly, the random and peak-locking errors are reduced by the use of least-squares polynomial fits to individual trajectories; finally, the introduction of high-order (nonlinear) fitting functions provides the basis for reducing the truncation error. Lastly, the instantaneous velocity is evaluated as the temporal derivative of the polynomial representation of the fluid parcel position in time. The principal features of this algorithm are compared with a single-pair iterative image deformation method. Synthetic image sequences are considered with steady flow (translation, shear and rotation) illustrating the increase of measurement precision. An experimental data set obtained by time-resolved PIV measurements of a circular jet is used to verify the robustness of the method on image sequences affected by camera noise and three-dimensional motions. In
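    The core step described above, fitting a low-order polynomial to the tracked fluid-parcel position over several frames and taking the instantaneous velocity as its temporal derivative, can be sketched as follows (the trajectory and frame times are made-up illustrative data):

```python
import numpy as np

# Positions of a tracked fluid parcel over 5 frames (hypothetical px, s).
t = np.array([0.0, 0.001, 0.002, 0.003, 0.004])   # frame times
x = 2.0 + 300.0 * t + 0.5 * 4.0e4 * t**2          # curved trajectory

# Least-squares 2nd-order polynomial fit to the trajectory; the velocity
# is the derivative of the fitted position, evaluated at the centre frame.
coeffs = np.polyfit(t, x, deg=2)
velocity = np.polyval(np.polyder(coeffs), 0.002)
print(velocity)   # analytic value: 300 + 4e4 * 0.002 = 380 px/s
```

    The polynomial fit averages random noise over the whole image sequence, which is the mechanism behind the reduction of random and peak-locking errors mentioned above.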

  20. Full field image reconstruction is suitable for high-pitch dual-source computed tomography.

    Science.gov (United States)

    Mahnken, Andreas H; Allmendinger, Thomas; Sedlmair, Martin; Tamm, Miriam; Reinartz, Sebastian D; Flohr, Thomas

    2012-11-01

    The field of view (FOV) in high-pitch dual-source computed tomography (DSCT) is limited by the size of the second detector. The goal of this study was to develop and evaluate a full-FOV image reconstruction technique for high-pitch DSCT. For reconstruction beyond the FOV of the second detector, raw data of the second system were extended to the full dimensions of the first system, using the partly existing data of the first system in combination with a very smooth transition weight function. During the weighted filtered backprojection, the data of the second system were applied with an additional weighting factor. This method was tested for different pitch values from 1.5 to 3.5 on a simulated phantom and on 25 high-pitch DSCT data sets acquired at pitch values of 1.6, 2.0, 2.5, 2.8, and 3.0. Images were reconstructed with FOV sizes of 260 × 260 and 500 × 500 mm. Image quality was assessed by 2 radiologists using a 5-point Likert scale and analyzed with repeated-measures analysis of variance. In phantom and patient data, full-FOV image quality depended on pitch. Where complete projection data from both tube-detector systems were available, image quality was unaffected by pitch changes. Full-FOV image quality was not compromised at a pitch value of 1.6 and remained fully diagnostic up to a pitch of 2.0. At higher pitch values, there was an increasing difference in image quality between limited- and full-FOV images (P = 0.0097). With this new image reconstruction technique, full-FOV image reconstruction can be used up to a pitch of 2.0.
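    The idea of a smooth transition weight between the region covered by both detectors and the region seen only by the larger one can be sketched with a cosine taper (the radii and taper shape below are assumptions for illustration; the paper's actual weight function is not specified here):

```python
import numpy as np

def transition_weight(r, r_inner, r_outer):
    """Smooth radial weight: 1 inside the small detector's FOV, 0 outside,
    with a cosine taper in between (one possible 'very smooth' transition)."""
    w = 0.5 * (1.0 + np.cos(np.pi * (r - r_inner) / (r_outer - r_inner)))
    w[r <= r_inner] = 1.0
    w[r >= r_outer] = 0.0
    return w

r = np.linspace(0.0, 250.0, 6)   # mm from isocentre
w = transition_weight(r, r_inner=130.0, r_outer=250.0)
# conceptually: blended = w * data_second_system + (1 - w) * data_first_system
print(w)
```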

  1. High-speed fan-beam reconstruction using direct two-dimensional Fourier transform method

    International Nuclear Information System (INIS)

    Niki, Noboru; Mizutani, Toshio; Takahashi, Yoshizo; Inouye, Tamon.

    1984-01-01

    Since the first development of X-ray computed tomography (CT), various efforts have been made to obtain high-quality, high-speed images. However, the development of high-resolution CT and of ultra-high-speed CT applicable to the heart is still desired. The X-ray beam scanning method was already changed from the parallel-beam system to the fan-beam system in order to greatly shorten the scanning time. Also, the direct filtered back projection (DFBP) method has been employed as a reconstruction method to process fan-beam projection data directly. Although the two-dimensional Fourier transform (TFT) method, which is significantly faster than the FBP method, was proposed, it has not been sufficiently examined for fan-beam projection data. Thus, the ITFT method was investigated, which first executes a rebinning algorithm to convert the fan-beam projection data to parallel-beam projection data and thereafter uses the two-dimensional Fourier transform. Although high speed is expected with this method, the reconstructed images might be degraded due to the adoption of the rebinning algorithm. Therefore, the effect of the interpolation error of the rebinning algorithm on the reconstructed images has been analyzed theoretically, and finally, numerical and visual evaluation based on simulation and actual data has shown that spline interpolation allows the acquisition of high-quality images with fewer errors. Computation time was reduced to 1/15 for an image matrix of 512 and to 1/30 for a doubled matrix. (Wakatsuki, Y.)
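    The direct Fourier (TFT) approach rests on the Fourier slice theorem: the 1D FFT of a parallel projection equals a central slice of the object's 2D FFT, so an image can be reconstructed by assembling such slices in frequency space and inverting. A minimal numerical check of the theorem (illustrative only; the rebinning and spline interpolation steps are not shown):

```python
import numpy as np

# Fourier slice theorem, the basis of direct Fourier (TFT) reconstruction:
# the 1D FFT of a parallel projection equals a central slice of the
# object's 2D FFT. Verified here for a 0-degree projection (column sums).
img = np.random.default_rng(0).random((64, 64))

projection = img.sum(axis=0)            # parallel projection onto the x-axis
slice_1d = np.fft.fft(projection)       # 1D FFT of the projection
central_row = np.fft.fft2(img)[0, :]    # k_y = 0 slice of the 2D FFT

ok = np.allclose(slice_1d, central_row)
print(ok)   # True
```

    For fan-beam data the projections must first be rebinned to parallel geometry, which is exactly where the interpolation error analyzed in the record arises.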

  2. Automated discrete electron tomography – Towards routine high-fidelity reconstruction of nanomaterials

    Energy Technology Data Exchange (ETDEWEB)

    Zhuge, Xiaodong [Computational Imaging, Centrum Wiskunde & Informatica, Science park 123, 1098XG Amsterdam (Netherlands); Jinnai, Hiroshi [Institute of Multidisciplinary Research for Advanced Materials, Tohoku University, Katahira 2-1-1, Aoba-ku, Sendai 980-8577 (Japan); Dunin-Borkowski, Rafal E.; Migunov, Vadim [Ernst Ruska-Centre for Microscopy and Spectroscopy with Electrons and Peter Grünberg Institute, Forschungszentrum Jülich, D-52425 Jülich (Germany); Bals, Sara [EMAT, University of Antwerp, Groenenborgerlaan 171, B-2020 Antwerp (Belgium); Cool, Pegie [Laboratory of Adsorption and Catalysis, Department of Chemistry, University of Antwerp, Universiteitsplein 1, 2610 Wilrijk (Belgium); Bons, Anton-Jan [European Technology Center, ExxonMobil Chemical Europe Inc., Hermeslaan 2, B-1831 Machelen (Belgium); Batenburg, Kees Joost [Computational Imaging, Centrum Wiskunde & Informatica, Science park 123, 1098XG Amsterdam (Netherlands)

    2017-04-15

    Electron tomography is an essential imaging technique for the investigation of morphology and 3D structure of nanomaterials. This method, however, suffers from well-known missing wedge artifacts due to a restricted tilt range, which limits the objectiveness, repeatability and efficiency of quantitative structural analysis. Discrete tomography represents one of the promising reconstruction techniques for materials science, potentially capable of delivering higher fidelity reconstructions by exploiting the prior knowledge of the limited number of material compositions in a specimen. However, the application of discrete tomography to practical datasets remains a difficult task due to the underlying challenging mathematical problem. In practice, it is often hard to obtain consistent reconstructions from experimental datasets. In addition, numerous parameters need to be tuned manually, which can lead to bias and non-repeatability. In this paper, we present the application of a new iterative reconstruction technique, named TVR-DART, for discrete electron tomography. The technique is capable of consistently delivering reconstructions with significantly reduced missing wedge artifacts for a variety of challenging data and imaging conditions, and can automatically estimate its key parameters. We describe the principles of the technique and apply it to datasets from three different types of samples acquired under diverse imaging modes. By further reducing the available tilt range and number of projections, we show that the proposed technique can still produce consistent reconstructions with minimized missing wedge artifacts. This new development promises to provide the electron microscopy community with an easy-to-use and robust tool for high-fidelity 3D characterization of nanomaterials. - Highlights: • Automated discrete electron tomography capable of consistently delivering reconstructions with significantly reduced missing wedge artifacts and requires significantly

  3. Automated discrete electron tomography – Towards routine high-fidelity reconstruction of nanomaterials

    International Nuclear Information System (INIS)

    Zhuge, Xiaodong; Jinnai, Hiroshi; Dunin-Borkowski, Rafal E.; Migunov, Vadim; Bals, Sara; Cool, Pegie; Bons, Anton-Jan; Batenburg, Kees Joost

    2017-01-01

    Electron tomography is an essential imaging technique for the investigation of morphology and 3D structure of nanomaterials. This method, however, suffers from well-known missing wedge artifacts due to a restricted tilt range, which limits the objectiveness, repeatability and efficiency of quantitative structural analysis. Discrete tomography represents one of the promising reconstruction techniques for materials science, potentially capable of delivering higher fidelity reconstructions by exploiting the prior knowledge of the limited number of material compositions in a specimen. However, the application of discrete tomography to practical datasets remains a difficult task due to the underlying challenging mathematical problem. In practice, it is often hard to obtain consistent reconstructions from experimental datasets. In addition, numerous parameters need to be tuned manually, which can lead to bias and non-repeatability. In this paper, we present the application of a new iterative reconstruction technique, named TVR-DART, for discrete electron tomography. The technique is capable of consistently delivering reconstructions with significantly reduced missing wedge artifacts for a variety of challenging data and imaging conditions, and can automatically estimate its key parameters. We describe the principles of the technique and apply it to datasets from three different types of samples acquired under diverse imaging modes. By further reducing the available tilt range and number of projections, we show that the proposed technique can still produce consistent reconstructions with minimized missing wedge artifacts. This new development promises to provide the electron microscopy community with an easy-to-use and robust tool for high-fidelity 3D characterization of nanomaterials. - Highlights: • Automated discrete electron tomography capable of consistently delivering reconstructions with significantly reduced missing wedge artifacts and requires significantly

  4. High Resolution Spatiotemporal Climate Reconstruction and Variability in East Asia during Little Ice Age

    Science.gov (United States)

    Lin, K. H. E.; Wang, P. K.; Lee, S. Y.; Liao, Y. C.; Fan, I. C.; Liao, H. M.

    2017-12-01

    The Little Ice Age (LIA) is one of the most prominent epochs in paleoclimate reconstruction of the Common Era. While the signals of the LIA were generally discovered across hemispheres, wide arrays of regional variability were found, and the reconstructed anomalies were sometimes inconsistent across studies using various proxy data or historical records. This inconsistency is mainly attributed to the limited coverage of data at a resolution fine enough to support high-resolution climate reconstruction of continuous spatiotemporal trends. The Qing dynasty of China (1644-1911 CE) existed in the coldest period of the LIA. Owing to a long-standing tradition that required local officials to record odd events and social or meteorological events, thousands of local chronicles were left. Zhang eds. (2004) took two decades to compile all these meteorological records in a compendium, which we then digitized and coded into our REACHS database system for reconstructing climate. There were in total 1,435 points (sites) in our database covering over 80,000 events in the period. After implementing a two-round coding check for data quality control (accuracy rate 87.2%), multiple indexes were retrieved for reconstructing annually and seasonally resolved temperature and precipitation series for North, Central, and South China. The reconstruction methods include frequency counting and grading, with multiple regression models used to test sensitivity and to calculate correlations among several reconstructed series. Validation was also conducted through comparison with instrumental data and with other reconstructed series in previous studies. Major research results reveal interannual (3-5 years), decadal (8-12 years), and interdecadal (≈30 years) variabilities with strong regional expressions across East China. The cooling effect was not homogeneously distributed in space and time. Flood and drought conditions repeated frequently, but the spatiotemporal pattern was variant, indicating likely
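    A toy version of the frequency-count-and-grading step, turning yearly chronicle record counts into a five-level wetness grade (the counts, bin edges, and grade convention here are invented for illustration and are not the study's actual index definition):

```python
import numpy as np

# Hypothetical yearly counts of drought vs. flood records for one region.
drought = np.array([5, 1, 0, 7, 2])
flood   = np.array([0, 2, 6, 1, 2])

# A simple frequency-based wetness grade (assumed convention:
# grade 1 = very dry, 5 = very wet).
ratio = (flood - drought) / np.maximum(flood + drought, 1)
grade = np.digitize(ratio, bins=[-0.6, -0.2, 0.2, 0.6]) + 1
print(grade)   # [1 4 5 1 3]
```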

  5. A simple, robust and efficient high-order accurate shock-capturing scheme for compressible flows: Towards minimalism

    Science.gov (United States)

    Ohwada, Taku; Shibata, Yuki; Kato, Takuma; Nakamura, Taichi

    2018-06-01

    Developed is a high-order accurate shock-capturing scheme for the compressible Euler/Navier-Stokes equations; the formal accuracy is 5th order in space and 4th order in time. The performance and efficiency of the scheme are validated in various numerical tests. The main ingredients of the scheme are nothing special; they are variants of the standard numerical flux, MUSCL, the usual Lagrange polynomial and the conventional Runge-Kutta method. The scheme can compute a boundary layer accurately with a rational resolution and capture a stationary contact discontinuity sharply without inner points. And yet it is endowed with high resistance against shock anomalies (carbuncle phenomenon, post-shock oscillations, etc.). A good balance between high robustness and low dissipation is achieved by blending three types of numerical fluxes according to the physical situation in an intuitively easy-to-understand way. The performance of the scheme is largely comparable to that of WENO5-Rusanov, while its computational cost is 30-40% less than that of the advanced scheme.
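    The blending idea can be illustrated on the 1D Burgers equation: a robust, dissipative flux is used near discontinuities and a low-dissipation flux in smooth regions, switched by a local sensor. The two-flux blend and sensor below are a simplified stand-in for the paper's three-flux construction:

```python
import numpy as np

# Flux blending for the 1D Burgers equation, f(u) = u^2 / 2.
def rusanov(uL, uR):            # robust, dissipative flux
    a = np.maximum(np.abs(uL), np.abs(uR))
    return 0.25 * (uL**2 + uR**2) - 0.5 * a * (uR - uL)

def central(uL, uR):            # low-dissipation flux
    return 0.25 * (uL**2 + uR**2)

def blended(uL, uR):
    # crude smoothness sensor: ~0 in smooth regions, ~1 at jumps
    theta = np.abs(uR - uL) / (np.abs(uL) + np.abs(uR) + 1e-12)
    theta = np.clip(theta, 0.0, 1.0)
    return theta * rusanov(uL, uR) + (1.0 - theta) * central(uL, uR)

print(blended(1.0, 1.0))    # smooth state: pure central flux
print(blended(1.0, -1.0))   # jump: pure dissipative flux
```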

  6. Highly accurate Michelson type wavelength meter that uses a rubidium stabilized 1560 nm diode laser as a wavelength reference

    International Nuclear Information System (INIS)

    Masuda, Shin; Kanoh, Eiji; Irisawa, Akiyoshi; Niki, Shoji

    2009-01-01

    We investigated the accuracy limitation of a wavelength meter installed in a vacuum chamber to enable us to develop a highly accurate meter based on a Michelson interferometer in the 1550 nm optical communication bands. We found that an error on the order of parts per million could not be avoided using the well-known wavelength compensation equations. Chromatic dispersion of the refractive index of air can almost be disregarded when a 1560 nm wavelength produced by a rubidium (Rb) stabilized distributed feedback (DFB) diode laser is used as a reference wavelength. We describe a novel dual-wavelength self-calibration scheme that maintains the high accuracy of the wavelength meter. The method uses the fundamental and second-harmonic wavelengths of an Rb-stabilized DFB diode laser. Consequently, a highly accurate Michelson type wavelength meter with an absolute accuracy of 5×10^-8 (10 MHz, 0.08 pm) over a wide wavelength range including the optical communication bands was achieved without the need for a vacuum chamber.
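    In a Michelson-type wavelength meter, the unknown wavelength follows from the ratio of interference fringe counts accumulated by the unknown and reference lasers over the same mirror travel. A sketch of that relation (the reference wavelength value and fringe counts are illustrative, not the instrument's calibrated values):

```python
# Michelson wavelength meter principle: the same optical path difference L
# satisfies L = n_ref * lambda_ref = n_unknown * lambda_unknown.
LAMBDA_REF_NM = 1560.48   # hypothetical Rb-stabilised reference wavelength

def unknown_wavelength(n_ref, n_unknown, lambda_ref=LAMBDA_REF_NM):
    """Unknown wavelength from the fringe-count ratio over one mirror scan."""
    return lambda_ref * n_ref / n_unknown

# A scan yielding 200000 reference fringes and 201355 unknown-laser fringes:
print(unknown_wavelength(200000, 201355))
```

    The accuracy of the result is tied directly to the accuracy of the reference wavelength, which is why the record stresses the Rb-stabilised reference and the self-calibration of the air's refractive index.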

  7. Simultaneous static and cine nonenhanced MR angiography using radial sampling and highly constrained back projection reconstruction.

    Science.gov (United States)

    Koktzoglou, Ioannis; Mistretta, Charles A; Giri, Shivraman; Dunkle, Eugene E; Amin, Parag; Edelman, Robert R

    2014-10-01

    To describe a pulse sequence for simultaneous static and cine nonenhanced magnetic resonance angiography (NEMRA) of the peripheral arteries. The peripheral arteries of 10 volunteers and 6 patients with peripheral arterial disease (PAD) were imaged with the proposed cine NEMRA sequence on a 1.5 Tesla (T) system. The impact of multi-shot imaging and highly constrained back projection (HYPR) reconstruction was examined. The propagation rate of signal along the length of the arterial tree in the cine nonenhanced MR angiograms was quantified. The cine NEMRA sequence simultaneously provided a static MR angiogram showing vascular anatomy as well as a cine display of arterial pulse wave propagation along the entire length of the peripheral arteries. Multi-shot cine NEMRA improved temporal resolution and reduced image artifacts. HYPR reconstruction improved image quality when temporal reconstruction footprints shorter than 100 ms were used. Signal propagation in cine NEMRA was slower in patients with PAD than in volunteers. Simultaneous static and cine NEMRA of the peripheral arteries is feasible. Multi-shot acquisition and HYPR reconstruction can be used to improve arterial conspicuity and temporal resolution. Copyright © 2013 Wiley Periodicals, Inc.
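    The essence of HYPR-style reconstruction is to modulate a high-SNR composite image (built from many time frames) by a smoothed ratio of the individual frame to the composite. An image-space sketch (the box blur and array sizes are illustrative simplifications of the actual filtered-backprojection weighting images):

```python
import numpy as np

def blur(img, k=3):
    """Simple box blur standing in for the low-resolution weighting image."""
    pad = np.pad(img, k // 2, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def hypr_frame(frame, composite, eps=1e-9):
    """HYPR-style frame: composite modulated by the smoothed frame ratio."""
    return composite * blur(frame) / (blur(composite) + eps)

composite = np.ones((8, 8))                  # high-SNR composite image
frame = np.zeros((8, 8)); frame[4, 4] = 1.0  # sparse low-count time frame
print(hypr_frame(frame, composite)[4, 4])
```

    The composite supplies the SNR while the frame ratio supplies the temporal information, which is how short temporal footprints can keep acceptable image quality.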

  8. Statistical list-mode image reconstruction for the high resolution research tomograph

    International Nuclear Information System (INIS)

    Rahmim, A; Lenox, M; Reader, A J; Michel, C; Burbar, Z; Ruth, T J; Sossi, V

    2004-01-01

    We have investigated statistical list-mode reconstruction applicable to a depth-encoding high resolution research tomograph. An image non-negativity constraint has been employed in the reconstructions and is shown to effectively remove the overestimation bias introduced by the sinogram non-negativity constraint. We have furthermore implemented a convergent subsetized (CS) list-mode reconstruction algorithm, based on previous work (Hsiao et al 2002 Conf. Rec. SPIE Med. Imaging 4684 10-19; Hsiao et al 2002 Conf. Rec. IEEE Int. Symp. Biomed. Imaging 409-12) on convergent histogram OSEM reconstruction. We have demonstrated that the first step of the convergent algorithm is exactly equivalent (unlike the histogram-mode case) to the regular subsetized list-mode EM algorithm, while the second and final step takes the form of additive updates in image space. We have shown that in terms of contrast, noise as well as FWHM width behaviour, the CS algorithm is robust and does not result in limit cycles. A hybrid algorithm based on the ordinary and the convergent algorithms is also proposed, and is shown to combine the advantages of the two algorithms (i.e. it is able to reach a higher image quality in fewer iterations while maintaining the convergent behaviour), making the hybrid approach a good alternative to the ordinary subsetized list-mode EM algorithm
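    One way to see the image non-negativity constraint discussed above: the EM update is multiplicative, so a non-negative start stays non-negative at every iteration. A tiny dense-matrix sketch of the update (the random system matrix and sizes are illustrative; real list-mode EM processes detected events directly rather than a sinogram matrix):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.random((20, 10))      # system matrix: 20 LORs x 10 voxels
x_true = rng.random(10)
y = A @ x_true                # noiseless measured data

x = np.ones(10)               # non-negative initial image
sens = A.sum(axis=0)          # sensitivity image, A^T 1
for _ in range(200):
    # multiplicative EM update: x stays non-negative by construction
    x *= (A.T @ (y / (A @ x))) / sens

print(np.linalg.norm(A @ x - y))   # data misfit shrinks on consistent data
```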

  9. Energy reconstruction of hadrons in highly granular combined ECAL and HCAL systems

    Science.gov (United States)

    Israeli, Y.

    2018-05-01

    This paper discusses the hadronic energy reconstruction of two combined electromagnetic and hadronic calorimeter systems using physics prototypes of the CALICE collaboration: the silicon-tungsten electromagnetic calorimeter (Si-W ECAL) and the scintillator-SiPM based analog hadron calorimeter (AHCAL); and the scintillator-tungsten electromagnetic calorimeter (ScECAL) and the AHCAL. These systems were operated in hadron beams at CERN and FNAL, permitting the study of the performance of combined ECAL and HCAL systems. Two techniques for the energy reconstruction are used: a standard reconstruction based on calibrated sub-detector energy sums, and one based on a software compensation algorithm making use of the local energy density information provided by the high granularity of the detectors. The software compensation-based algorithm improves the hadronic energy resolution by up to 30% compared to the standard reconstruction. The combined system data show energy resolutions comparable to those achieved for data with showers starting only in the AHCAL and therefore demonstrate the success of the inter-calibration of the different sub-systems, despite their different geometries and different readout technologies.
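    Software compensation in a nutshell: hits in dense (mostly electromagnetic) shower regions are weighted down relative to sparse hadronic deposits, equalizing the detector response. A sketch with an entirely hypothetical weight curve (the actual CALICE weights are fitted to test-beam data):

```python
import numpy as np

def compensated_energy(hit_energies, cell_volume_cm3=1.0):
    """Sum hit energies with density-dependent weights (hypothetical curve):
    high local energy density -> smaller weight."""
    e = np.asarray(hit_energies, dtype=float)
    rho = e / cell_volume_cm3          # local energy density per cell
    w = 1.0 / (1.0 + 0.5 * rho)        # assumed monotone weight function
    return float(np.sum(w * e))

dense  = [2.0, 2.0]        # GeV, compact EM-like deposits
sparse = [0.5] * 8         # same raw sum, spread-out hadronic deposits
print(compensated_energy(dense), compensated_energy(sparse))
```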

  10. Track reconstruction in the CMS experiment for the High Luminosity LHC

    CERN Document Server

    AUTHOR|(CDS)2087955; Innocente, Vincenzo

    Tracking is one of the crucial parts of event reconstruction because of its importance in the estimation of particle momenta, particle identification, and the estimation of decay vertices. This task is very challenging at the LHC, given the hundreds or even thousands of particles generated in each bunch crossing. Track reconstruction in CMS was developed following an iterative philosophy. It uses an adaptation of the combinatorial KF algorithm to allow pattern recognition and track fitting to occur in the same framework. For ttbar events under typical Run-1 pile-up conditions, the average track reconstruction efficiency for charged particles with transverse momenta of $p_T > 0.9$ GeV is 94% for $|\eta| < 0.9$ and 85% for $0.9 < |\eta| < 2.5$. During LS1, some developments were made in different aspects of tracking. In particular, I implemented the DAF algorithm to protect track reconstruction against wrong hit assignments in noisy environments or in high track density environments. The DAF a...
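    The DAF (deterministic annealing filter) mentioned above gives competing hits soft assignment weights that harden as an annealing temperature decreases. A sketch of the weight formula (the chi-square values and cut-off are illustrative, not CMS tuning):

```python
import numpy as np

def daf_weights(chi2, chi2_cut=9.0, T=1.0):
    """Soft assignment weights for competing hits on one layer:
    Boltzmann factors of the hit chi^2, with a cut-off term so that
    all hits can be down-weighted simultaneously (outlier rejection)."""
    a = np.exp(-np.asarray(chi2, dtype=float) / (2.0 * T))
    return a / (a.sum() + np.exp(-chi2_cut / (2.0 * T)))

chi2 = [0.5, 8.0]                  # one good hit, one likely outlier
print(daf_weights(chi2, T=1.0))    # soft, ambiguous weights
print(daf_weights(chi2, T=0.1))    # cooled: nearly hard assignment
```

    Annealing T downward turns the soft competition into an effectively hard hit assignment, which is what protects the fit in dense environments.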

  11. Heap: a highly sensitive and accurate SNP detection tool for low-coverage high-throughput sequencing data

    KAUST Repository

    Kobayashi, Masaaki

    2017-04-20

    Recent availability of large-scale genomic resources enables us to conduct so-called genome-wide association studies (GWAS) and genomic prediction (GP) studies, particularly with next-generation sequencing (NGS) data. The effectiveness of GWAS and GP depends not only on their mathematical models, but also on the quality and quantity of variants employed in the analysis. In NGS single nucleotide polymorphism (SNP) calling, conventional tools ideally require more reads for higher SNP sensitivity and accuracy. In this study, we aimed to develop a tool, Heap, that enables robustly sensitive and accurate calling of SNPs, particularly from low-coverage NGS data, which must be aligned to the reference genome sequences in advance. To reduce false positive SNPs, Heap determines genotypes and calls SNPs at each site, except for sites at both ends of reads or sites containing a minor allele supported by only one read. Performance comparison with existing tools showed that Heap achieved the highest F-scores with low coverage (7X) restriction-site associated DNA sequencing reads of sorghum and rice individuals. This will facilitate cost-effective GWAS and GP studies in this NGS era. Code and documentation of Heap are freely available from https://github.com/meiji-bioinf/heap (29 March 2017, date last accessed) and our web site (http://bioinf.mind.meiji.ac.jp/lab/en/tools.html (29 March 2017, date last accessed)).
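    The two filtering rules described for Heap can be sketched as follows (the data structures and thresholds here are hypothetical illustrations, not Heap's actual implementation): skip base calls at the ends of reads, and drop candidate alleles supported by only one read.

```python
from collections import Counter

def usable_site(pos_in_read, read_len, end_margin=1):
    """Reject base calls at the first/last positions of a read."""
    return end_margin <= pos_in_read < read_len - end_margin

def call_genotype(bases, min_minor_support=2):
    """Call alleles at a site, discarding a minor allele seen in <2 reads."""
    counts = Counter(bases).most_common()
    if len(counts) > 1 and counts[1][1] < min_minor_support:
        bases = [b for b in bases if b == counts[0][0]]  # drop singleton allele
    return sorted(set(bases))

print(call_genotype(list("AAAAAG")))   # ['A']       singleton G discarded
print(call_genotype(list("AAAGG")))    # ['A', 'G']  heterozygous call kept
```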

  12. A track reconstructing low-latency trigger processor for high-energy physics

    International Nuclear Information System (INIS)

    Cuveland, Jan de

    2009-01-01

    The detection and analysis of the large number of particles emerging from high-energy collisions between atomic nuclei is a major challenge in experimental heavy-ion physics. Efficient trigger systems help to focus the analysis on relevant events. A primary objective of the Transition Radiation Detector of the ALICE experiment at the LHC is to trigger on high-momentum electrons. In this thesis, a trigger processor is presented that employs massive parallelism to perform the required online event reconstruction within 2 μs to contribute to the Level-1 trigger decision. Its three-stage hierarchical architecture comprises 109 nodes based on FPGA technology. Ninety processing nodes receive data from the detector front-end at an aggregate net bandwidth of 2.16 Tbit/s via 1080 optical links. Using specifically developed components and interconnections, the system combines high bandwidth with minimum latency. The employed tracking algorithm three-dimensionally reassembles the track segments found in the detector's drift chambers based on explicit value comparisons, calculates the momentum of the originating particles from the course of the reconstructed tracks, and finally leads to a trigger decision. The architecture is capable of processing up to 20 000 track segments in less than 2 μs with high detection efficiency and reconstruction precision for high-momentum particles. As a result, this thesis shows how a trigger processor performing complex online track reconstruction within tight real-time requirements can be realized. The presented hardware has been built and is in continuous data taking operation in the ALICE experiment. (orig.)

  13. A track reconstructing low-latency trigger processor for high-energy physics

    Energy Technology Data Exchange (ETDEWEB)

    Cuveland, Jan de

    2009-09-17

    The detection and analysis of the large number of particles emerging from high-energy collisions between atomic nuclei is a major challenge in experimental heavy-ion physics. Efficient trigger systems help to focus the analysis on relevant events. A primary objective of the Transition Radiation Detector of the ALICE experiment at the LHC is to trigger on high-momentum electrons. In this thesis, a trigger processor is presented that employs massive parallelism to perform the required online event reconstruction within 2 μs to contribute to the Level-1 trigger decision. Its three-stage hierarchical architecture comprises 109 nodes based on FPGA technology. Ninety processing nodes receive data from the detector front-end at an aggregate net bandwidth of 2.16 Tbit/s via 1080 optical links. Using specifically developed components and interconnections, the system combines high bandwidth with minimum latency. The employed tracking algorithm three-dimensionally reassembles the track segments found in the detector's drift chambers based on explicit value comparisons, calculates the momentum of the originating particles from the course of the reconstructed tracks, and finally leads to a trigger decision. The architecture is capable of processing up to 20 000 track segments in less than 2 μs with high detection efficiency and reconstruction precision for high-momentum particles. As a result, this thesis shows how a trigger processor performing complex online track reconstruction within tight real-time requirements can be realized. The presented hardware has been built and is in continuous data taking operation in the ALICE experiment. (orig.)

  14. High quality 3D shape reconstruction via digital refocusing and pupil apodization in multi-wavelength holographic interferometry

    Science.gov (United States)

    Xu, Li

    Multi-wavelength holographic interferometry (MWHI) has good potential for evolving into a high-quality 3D shape reconstruction technique. Several challenges remain, including 1) the depth-of-field limitation, leading to axial inaccuracy for out-of-focus objects; and 2) smearing from shiny smooth objects onto their dark dull neighbors, generating fake measurements within the dark area. This research is motivated by the goal of developing an advanced optical metrology system that provides accurate 3D profiles for target objects of axial dimension larger than the depth of field, and for objects with dramatically different surface conditions. The idea of employing digital refocusing in MWHI has been proposed as a solution to the depth-of-field limitation. On the one hand, the traditional single-wavelength refocusing formula is revised to reduce sensitivity to wavelength error. Investigation of a real example demonstrates promising accuracy and repeatability of the reconstructed 3D profiles. On the other hand, a phase-contrast-based focus detection criterion is developed especially for MWHI, which overcomes the problem of phase unwrapping. The combination of these two innovations gives birth to a systematic strategy for acquiring high-quality 3D profiles. Following the first phase-contrast-based focus detection step, interferometric distance measurement by MWHI is implemented as a next step to conduct relative focus detection with high accuracy. This strategy results in a ±100 mm 3D profile with micron-level axial accuracy, which is not available in traditional extended focus image (EFI) solutions. Pupil apodization has been implemented to address the second challenge of smearing. The process of reflective rough surface inspection has been mathematically modeled, which explains the origin of stray light and the necessity of replacing the hard-edged pupil with one of gradually attenuating transmission (apodization). Metrics to optimize pupil types and
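    A key quantity behind multi-wavelength interferometric distance measurement is the synthetic (beat) wavelength of two nearby laser lines, which sets the unambiguous measurement range far beyond a single optical wavelength. A sketch (the 640/660 nm pair is an illustrative assumption, not the system's actual sources):

```python
# Synthetic wavelength of a two-wavelength interferometric measurement:
# Lambda = lambda1 * lambda2 / |lambda1 - lambda2|.
def synthetic_wavelength(lam1_um, lam2_um):
    return lam1_um * lam2_um / abs(lam1_um - lam2_um)

lam = synthetic_wavelength(0.640, 0.660)   # hypothetical 640 nm / 660 nm pair
print(lam)   # ~21.12 um: vastly larger unambiguous range than one wavelength
```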

  15. Multiview 3D sensing and analysis for high quality point cloud reconstruction

    Science.gov (United States)

    Satnik, Andrej; Izquierdo, Ebroul; Orjesek, Richard

    2018-04-01

    Multiview 3D reconstruction techniques enable digital reconstruction of 3D objects from the real world by fusing different viewpoints of the same object into a single 3D representation. This process is by no means trivial and the acquisition of high quality point cloud representations of dynamic 3D objects is still an open problem. In this paper, an approach for high fidelity 3D point cloud generation using low cost 3D sensing hardware is presented. The proposed approach runs in an efficient low-cost hardware setting based on several Kinect v2 scanners connected to a single PC. It performs autocalibration and runs in real-time exploiting an efficient composition of several filtering methods including Radius Outlier Removal (ROR), Weighted Median filter (WM) and Weighted Inter-Frame Average filtering (WIFA). The performance of the proposed method has been demonstrated through efficient acquisition of dense 3D point clouds of moving objects.
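    Radius Outlier Removal (ROR), the first of the filters named above, keeps a point only if it has enough neighbours within a given radius. A brute-force sketch (the radius, neighbour count, and toy cloud are illustrative; production code would use a spatial index):

```python
import numpy as np

def radius_outlier_removal(points, r=0.1, k=2):
    """Keep points with at least k neighbours within radius r (brute force)."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    neighbours = (d < r).sum(axis=1) - 1   # exclude the point itself
    return points[neighbours >= k]

cloud = np.array([[0.0, 0.0, 0.0],
                  [0.01, 0.0, 0.0],
                  [0.02, 0.0, 0.0],
                  [5.0, 5.0, 5.0]])        # last point is an isolated outlier
print(radius_outlier_removal(cloud, r=0.1, k=2))   # outlier removed
```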

  16. High performance interactive graphics for shower reconstruction in HPC, the DELPHI barrel electromagnetic calorimeter

    International Nuclear Information System (INIS)

    Stanescu, C.

    1990-01-01

    Complex software for shower reconstruction in the DELPHI barrel electromagnetic calorimeter, which deals for each event with large amounts of information due to the high spatial resolution of this detector, needs powerful verification tools. An interactive graphics program, running on the high-performance Whizzard 7555 graphics display system from Megatek, was developed to display the logical steps in the reconstruction of showers and their axes. The program allows both real-time operations on the image (rotation, translation and zoom) and the use of non-geometrical criteria to modify it, such as energy thresholds for the representation of the elements that compose the showers (or of the associated lego plots). For this purpose, graphics objects associated with user parameters were defined. Instancing and modelling features of the native graphics library were extensively used.

  17. High-order noise analysis for low dose iterative image reconstruction methods: ASIR, IRIS, and MBAI

    Science.gov (United States)

    Do, Synho; Singh, Sarabjeet; Kalra, Mannudeep K.; Karl, W. Clem; Brady, Thomas J.; Pien, Homer

    2011-03-01

    Iterative reconstruction techniques (IRTs) have been shown to suppress noise significantly in low-dose CT imaging. However, medical doctors hesitate to accept this new technology because the visual impression of IRT images differs from that of full-dose filtered back-projection (FBP) images. The most common noise measurements, such as the mean and standard deviation of a homogeneous region in the image, do not provide a sufficient characterization of noise statistics when the probability density function becomes non-Gaussian. In this study, we measure L-moments of intensity values of images acquired at 10% of normal dose and reconstructed by the IRT methods of two state-of-the-art clinical scanners (i.e., GE HDCT and Siemens DSCT Flash), keeping the dosage levels identical to each other. The high- and low-dose scans (i.e., 10% of high dose) were acquired from each scanner, and L-moments of noise patches were calculated for the comparison.
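    Sample L-moments are computed from order statistics via probability-weighted moments. A minimal sketch of the standard unbiased estimators (Hosking's PWM form, not code from the study), applied here to a small symmetric sample so that the higher L-moments vanish:

```python
import numpy as np

def sample_l_moments(data):
    # Unbiased sample L-moments l1..l4 via probability-weighted moments b0..b3
    x = np.sort(np.asarray(data, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((i - 1) / (n - 1) * x) / n
    b2 = np.sum((i - 1) * (i - 2) / ((n - 1) * (n - 2)) * x) / n
    b3 = np.sum((i - 1) * (i - 2) * (i - 3) / ((n - 1) * (n - 2) * (n - 3)) * x) / n
    l1 = b0                                   # location (mean)
    l2 = 2 * b1 - b0                          # scale
    l3 = 6 * b2 - 6 * b1 + b0                 # related to skewness
    l4 = 20 * b3 - 30 * b2 + 12 * b1 - b0     # related to kurtosis
    return l1, l2, l3, l4

l1, l2, l3, l4 = sample_l_moments([1.0, 2.0, 3.0, 4.0, 5.0])
```

    For a symmetric sample, l3 is zero; unlike conventional higher moments, L-moments remain well behaved for heavy-tailed (non-Gaussian) noise.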

  18. Three-dimensional reconstruction of highly complex microscopic samples using scanning electron microscopy and optical flow estimation.

    Directory of Open Access Journals (Sweden)

    Ahmadreza Baghaie

    Full Text Available The scanning electron microscope (SEM), one of the major research and industrial instruments for imaging micro-scale samples and surfaces, has gained extensive attention since its emergence. However, the acquired micrographs still remain two-dimensional (2D). In the current work a novel and highly accurate approach is proposed to recover the hidden third dimension by use of multi-view image acquisition of the microscopic samples combined with pre-/post-processing steps including sparse feature-based stereo rectification, nonlocal optical flow estimation for dense matching and finally depth estimation. Employing the proposed approach, three-dimensional (3D) reconstructions of highly complex microscopic samples were achieved to facilitate the interpretation of the topology and geometry of surface/shape attributes of the samples. As a byproduct of the proposed approach, high-definition 3D printed models of the samples can be generated as a tangible means of physical understanding. Extensive comparisons with the state-of-the-art reveal the strength and superiority of the proposed method in uncovering the details of highly complex microscopic samples.
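    Optical flow estimation for dense matching, in its simplest single-window Lucas-Kanade form, solves a 2x2 least-squares system built from image gradients. The sketch below estimates a known sub-pixel shift on a synthetic blob; the paper's method is a more sophisticated nonlocal variant, so this is only the underlying principle:

```python
import numpy as np

def lucas_kanade_shift(img1, img2):
    # Single-window Lucas-Kanade: least-squares solution of Ix*u + Iy*v = -It
    Ix = np.gradient(img1, axis=1)   # horizontal gradient
    Iy = np.gradient(img1, axis=0)   # vertical gradient
    It = img2 - img1                 # temporal difference
    G = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    b = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
    return np.linalg.solve(G, b)

# Synthetic smooth blob shifted horizontally by 0.5 pixel
yy, xx = np.mgrid[0:64, 0:64].astype(float)
img1 = np.exp(-((xx - 32.0) ** 2 + (yy - 32.0) ** 2) / 80.0)
img2 = np.exp(-((xx - 32.5) ** 2 + (yy - 32.0) ** 2) / 80.0)
u, v = lucas_kanade_shift(img1, img2)
```

    The recovered flow (u, v) is approximately (0.5, 0); in a dense-matching pipeline the same system is solved per pixel over local windows.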

  19. TH-E-17A-02: High-Pitch and Sparse-View Helical 4D CT Via Iterative Image Reconstruction Method Based On Tensor Framelet

    International Nuclear Information System (INIS)

    Guo, M; Nam, H; Li, R; Xing, L; Gao, H

    2014-01-01

    Purpose: 4D CT is routinely performed during radiation therapy treatment planning of thoracic and abdominal cancers. Compared with the cine mode, the helical mode is advantageous in temporal resolution. However, a low pitch (∼0.1) is often required for 4D CT imaging instead of the standard pitch (∼1) used for static imaging, since standard analytic image reconstruction requires low-pitch scanning in order to satisfy the data sufficiency condition when reconstructing each temporal frame individually. In comparison, the flexible iterative method enables the reconstruction of all temporal frames simultaneously, so that the image similarity among frames can be utilized to possibly perform high-pitch and sparse-view helical 4D CT imaging. The purpose of this work is to investigate such an exciting possibility for faster imaging with lower dose. Methods: A key for high-pitch and sparse-view helical 4D CT imaging is the simultaneous reconstruction of all temporal frames using the prior that temporal frames are continuous along the temporal direction. In this work, such a prior is regularized through a sparsity transform based on the spatiotemporal tensor framelet (TF), a multilevel and high-order extension of the total variation transform. Moreover, GPU-based fast parallel computing of the X-ray transform and its adjoint, together with the split Bregman method, is utilized to solve the 4D image reconstruction problem efficiently and accurately. Results: Simulation studies based on 4D NCAT phantoms were performed with various pitches (i.e., 0.1, 0.2, 0.5, and 1) and sparse views (i.e., 400 views per rotation instead of the standard >2000 views per rotation), using the 3D iterative individual reconstruction method based on 3D TF and the 4D iterative simultaneous reconstruction method based on 4D TF, respectively. Conclusion: The proposed TF-based simultaneous 4D image reconstruction method enables high-pitch and sparse-view helical 4D CT with lower dose and faster speed.

  20. Combined Anterior Cruciate Ligament Reconstruction and High Tibial Osteotomy in Anterior Cruciate Ligament-Deficient Varus Knees

    Directory of Open Access Journals (Sweden)

    Ayman M. Ebied

    2017-12-01

    Conclusion: The combined procedure of ACL reconstruction and high tibial osteotomy restored knee stability and reduced pain over the medial compartment. Although the combined procedure has a longer period of rehabilitation than an isolated ACL reconstruction, the elimination of lateral thrust and preservation of articular cartilage of the medial compartment are of paramount importance to the future of these knees.

  1. High Concentrations of Measles Neutralizing Antibodies and High-Avidity Measles IgG Accurately Identify Measles Reinfection Cases

    Science.gov (United States)

    Rota, Jennifer S.; Hickman, Carole J.; Mercader, Sara; Redd, Susan; McNall, Rebecca J.; Williams, Nobia; McGrew, Marcia; Walls, M. Laura; Rota, Paul A.; Bellini, William J.

    2016-01-01

    In the United States, approximately 9% of the measles cases reported from 2012 to 2014 occurred in vaccinated individuals. Laboratory confirmation of measles in vaccinated individuals is challenging since IgM assays can give inconclusive results. Although a positive reverse transcription (RT)-PCR assay result from an appropriately timed specimen can provide confirmation, negative results may not rule out a highly suspicious case. Detection of high-avidity measles IgG in serum samples provides laboratory evidence of a past immunologic response to measles from natural infection or immunization. High concentrations of measles neutralizing antibody have been observed by plaque reduction neutralization (PRN) assays among confirmed measles cases with high-avidity IgG, referred to here as reinfection cases (RICs). In this study, we evaluated the utility of measuring levels of measles neutralizing antibody to distinguish RICs from noncases by receiver operating characteristic curve analysis. Single and paired serum samples with high-avidity measles IgG from suspected measles cases submitted to the CDC for routine surveillance were used for the analysis. The RICs were confirmed by a 4-fold rise in PRN titer or by RT-quantitative PCR (RT-qPCR) assay, while the noncases were negative by both assays. Discrimination accuracy was high with serum samples collected ≥3 days after rash onset (area under the curve, 0.953; 95% confidence interval [CI], 0.854 to 0.993). Measles neutralizing antibody concentrations of ≥40,000 mIU/ml identified RICs with 90% sensitivity (95% CI, 74 to 98%) and 100% specificity (95% CI, 82 to 100%). Therefore, when serological or RT-qPCR results are unavailable or inconclusive, suspected measles cases with high-avidity measles IgG can be confirmed as RICs by measles neutralizing antibody concentrations of ≥40,000 mIU/ml. PMID:27335386
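    The threshold-based classification described above can be sketched directly. Given neutralizing-antibody titers and case labels (the values below are hypothetical, not the study data), sensitivity and specificity at the 40,000 mIU/ml cut-off are computed as:

```python
import numpy as np

# Hypothetical PRN titers (mIU/ml) and labels (1 = reinfection case, 0 = noncase)
titers = np.array([120000, 80000, 45000, 39000, 60000, 30000, 5000, 2000])
labels = np.array([1, 1, 1, 1, 1, 0, 0, 0])
threshold = 40000

pred = titers >= threshold                               # classify by cut-off
sens = np.sum(pred & (labels == 1)) / np.sum(labels == 1)   # true-positive rate
spec = np.sum(~pred & (labels == 0)) / np.sum(labels == 0)  # true-negative rate
```

    Sweeping the threshold and plotting sensitivity against 1 - specificity yields the ROC curve used in the study to select the cut-off.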

  2. Weighted simultaneous algebraic reconstruction technique for tomosynthesis imaging of objects with high-attenuation features

    International Nuclear Information System (INIS)

    Levakhina, Y. M.; Müller, J.; Buzug, T. M.; Duschka, R. L.; Vogt, F.; Barkhausen, J.

    2013-01-01

    Purpose: This paper introduces a nonlinear weighting scheme into the backprojection operation within the simultaneous algebraic reconstruction technique (SART). It is designed for tomosynthesis imaging of objects with high-attenuation features in order to reduce limited-angle artifacts. Methods: The algorithm estimates which projections potentially produce artifacts in a voxel. The contribution of those projections to the updating term is reduced. In order to identify those projections automatically, a four-dimensional backprojected space representation is used. Weighting coefficients are calculated based on a dissimilarity measure evaluated in this space. For each combination of an angular view direction and a voxel position, an individual weighting coefficient for the updating term is calculated. Results: The feasibility of the proposed approach is shown based on reconstructions of the following real three-dimensional tomosynthesis datasets: a mammography quality phantom, an apple with metal needles, a dried finger bone in water, and a human hand. Datasets have been acquired with a Siemens Mammomat Inspiration tomosynthesis device and reconstructed using SART with and without the suggested weighting. Out-of-focus artifacts are described using line profiles and measured using the standard deviation (STD) in the plane and below the plane which contains the artifact-causing features. The artifact distribution in the axial direction is measured using an artifact spread function (ASF). The volumes reconstructed with the weighting scheme demonstrate the reduction of out-of-focus artifacts, lower STD (meaning a reduction of artifacts), and narrower ASF compared to the nonweighted SART reconstruction. This is achieved successfully for different kinds of structures: point-like structures such as phantom features, long structures such as metal needles, and fine structures such as trabecular bone structures. Conclusions: Results indicate the feasibility of the proposed algorithm to reduce typical

  3. Weighted simultaneous algebraic reconstruction technique for tomosynthesis imaging of objects with high-attenuation features

    Energy Technology Data Exchange (ETDEWEB)

    Levakhina, Y. M. [Institute of Medical Engineering, University of Luebeck, Luebeck 23562, Germany and Graduate School for Computing in Medicine and Life Sciences, Luebeck 23562 (Germany); Mueller, J.; Buzug, T. M. [Institute of Medical Engineering, University of Luebeck, Luebeck 23562 (Germany); Duschka, R. L.; Vogt, F.; Barkhausen, J. [Clinic for Radiology, University Clinics Schleswig-Holstein, Luebeck 23562 (Germany)

    2013-03-15

    Purpose: This paper introduces a nonlinear weighting scheme into the backprojection operation within the simultaneous algebraic reconstruction technique (SART). It is designed for tomosynthesis imaging of objects with high-attenuation features in order to reduce limited-angle artifacts. Methods: The algorithm estimates which projections potentially produce artifacts in a voxel. The contribution of those projections to the updating term is reduced. In order to identify those projections automatically, a four-dimensional backprojected space representation is used. Weighting coefficients are calculated based on a dissimilarity measure evaluated in this space. For each combination of an angular view direction and a voxel position, an individual weighting coefficient for the updating term is calculated. Results: The feasibility of the proposed approach is shown based on reconstructions of the following real three-dimensional tomosynthesis datasets: a mammography quality phantom, an apple with metal needles, a dried finger bone in water, and a human hand. Datasets have been acquired with a Siemens Mammomat Inspiration tomosynthesis device and reconstructed using SART with and without the suggested weighting. Out-of-focus artifacts are described using line profiles and measured using the standard deviation (STD) in the plane and below the plane which contains the artifact-causing features. The artifact distribution in the axial direction is measured using an artifact spread function (ASF). The volumes reconstructed with the weighting scheme demonstrate the reduction of out-of-focus artifacts, lower STD (meaning a reduction of artifacts), and narrower ASF compared to the nonweighted SART reconstruction. This is achieved successfully for different kinds of structures: point-like structures such as phantom features, long structures such as metal needles, and fine structures such as trabecular bone structures. Conclusions: Results indicate the feasibility of the proposed algorithm to reduce typical
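    The per-(view, voxel) weighting of the SART updating term described above can be illustrated with a toy NumPy sketch. The matrix, data, and weights here are invented for illustration; with all weights equal to one the update reduces to plain SART:

```python
import numpy as np

A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])   # toy system matrix (rays x voxels)
b = np.array([2.0, 3.0, 3.0])    # simulated projection data
w = np.ones_like(A)              # per-(ray, voxel) weights; all ones = plain SART
row_sum = A.sum(axis=1)          # normalization over each ray
col_sum = A.sum(axis=0)          # normalization over each voxel
lam = 0.5                        # relaxation factor
x = np.zeros(3)
for _ in range(200):
    r = (b - A @ x) / row_sum                 # row-normalized residual
    x = x + lam * ((A * w).T @ r) / col_sum   # weighted backprojection update
```

    In the paper's scheme, entries of w would be lowered for projections identified (via the backprojected-space dissimilarity measure) as likely to cause artifacts in a given voxel, shrinking their contribution to the update.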

  4. A different interpretation of Einstein's viscosity equation provides accurate representations of the behavior of hydrophilic solutes to high concentrations.

    Science.gov (United States)

    Zavitsas, Andreas A

    2012-08-23

    Viscosities of aqueous solutions of many highly soluble hydrophilic solutes with hydroxyl and amino groups are examined with a focus on improving the concentration range over which Einstein's relationship between solution viscosity and solute volume, V, is applicable accurately. V is the hydrodynamic effective volume of the solute, including any water strongly bound to it and acting as a single entity with it. The widespread practice is to relate the relative viscosity of solute to solvent, η/η(0), to V/V(tot), where V(tot) is the total volume of the solution. For solutions that are not infinitely dilute, it is shown that the volume ratio must be expressed as V/V(0), where V(0) = V(tot) - V. V(0) is the volume of water not bound to the solute, the "free" water solvent. At infinite dilution, V/V(0) = V/V(tot). For the solutions examined, the proportionality constant between the relative viscosity and volume ratio is shown to be 2.9, rather than the 2.5 commonly used. To understand the phenomena relating to viscosity, the hydrodynamic effective volume of water is important. It is estimated to be between 54 and 85 cm(3). With the above interpretations of Einstein's equation, which are consistent with his stated reasoning, the relation between the viscosity and volume ratio remains accurate to much higher concentrations than those attainable with any of the other relations examined that express the volume ratio as V/V(tot).
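    The revised relation is simple enough to state in one line: eta/eta0 = 1 + 2.9 * V / V0 with V0 = Vtot - V, rather than the common form using V/Vtot with coefficient 2.5. A minimal sketch (the function name and sample volumes are illustrative):

```python
def relative_viscosity(V, Vtot, k=2.9):
    # eta/eta0 = 1 + k * V / V0, where V0 = Vtot - V is the "free" water volume
    V0 = Vtot - V
    return 1.0 + k * V / V0

eta_rel = relative_viscosity(10.0, 110.0)  # 10 cm3 effective solute volume in 110 cm3 solution
```

    At infinite dilution V0 approaches Vtot and the two forms coincide; at finite concentration the V/V0 form stays accurate to much higher concentrations.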

  5. Accelerated high-frame-rate mouse heart cine-MRI using compressed sensing reconstruction.

    Science.gov (United States)

    Motaal, Abdallah G; Coolen, Bram F; Abdurrachim, Desiree; Castro, Rui M; Prompers, Jeanine J; Florack, Luc M J; Nicolay, Klaas; Strijkers, Gustav J

    2013-04-01

    We introduce a new protocol to obtain very high-frame-rate cinematographic (Cine) MRI movies of the beating mouse heart within a reasonable measurement time. The method is based on a self-gated accelerated fast low-angle shot (FLASH) acquisition and compressed sensing reconstruction. Key to our approach is that we exploit the stochastic nature of the retrospective triggering acquisition scheme to produce an undersampled and random k-t space filling that allows for compressed sensing reconstruction and acceleration. As a standard, a self-gated FLASH sequence with a total acquisition time of 10 min was used to produce single-slice Cine movies of seven mouse hearts with 90 frames per cardiac cycle. Two times (2×) and three times (3×) k-t space undersampled Cine movies were produced from 2.5- and 1.5-min data acquisitions, respectively. The accelerated 90-frame Cine movies of mouse hearts were successfully reconstructed with a compressed sensing algorithm. The movies had high image quality and the undersampling artifacts were effectively removed. Left ventricular functional parameters, i.e. end-systolic and end-diastolic lumen surface areas and early-to-late filling rate ratio as a parameter to evaluate diastolic function, derived from the standard and accelerated Cine movies, were nearly identical. Copyright © 2012 John Wiley & Sons, Ltd.
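    The compressed sensing reconstruction step can be illustrated, in highly simplified form, by iterative soft-thresholding (ISTA) on a toy sparse-recovery problem. This is a generic l1-regularized solver on synthetic data, not the authors' k-t reconstruction, and all sizes and parameters below are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 100, 40, 4                      # signal length, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
y = A @ x_true                            # undersampled (noiseless) measurements

lam = 0.01                                # l1 regularization weight
L = np.linalg.norm(A, 2) ** 2             # Lipschitz constant of the data-term gradient
x = np.zeros(n)
for _ in range(500):
    z = x - (A.T @ (A @ x - y)) / L       # gradient step on 0.5*||Ax - y||^2
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold (shrinkage)
rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

    In the MRI setting, A would be the undersampled k-t Fourier operator and the sparsity would be imposed in a transform domain; the random undersampling produced by retrospective triggering is what makes this recovery well posed.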

  6. An eigenfunction method for reconstruction of large-scale and high-contrast objects.

    Science.gov (United States)

    Waag, Robert C; Lin, Feng; Varslot, Trond K; Astheimer, Jeffrey P

    2007-07-01

    A multiple-frequency inverse scattering method that uses eigenfunctions of a scattering operator is extended to image large-scale and high-contrast objects. The extension uses an estimate of the scattering object to form the difference between the scattering by the object and the scattering by the estimate of the object. The scattering potential defined by this difference is expanded in a basis of products of acoustic fields. These fields are defined by eigenfunctions of the scattering operator associated with the estimate. In the case of scattering objects for which the estimate is radial, symmetries in the expressions used to reconstruct the scattering potential greatly reduce the amount of computation. The range of parameters over which the reconstruction method works well is illustrated using calculated scattering by different objects. The method is applied to experimental data from a 48-mm diameter scattering object with tissue-like properties. The image reconstructed from measurements has, relative to a conventional B-scan formed using a low f-number at the same center frequency, significantly higher resolution and less speckle, implying that small, high-contrast structures can be demonstrated clearly using the extended method.

  7. High-throughput volumetric reconstruction for 3D wheat plant architecture studies

    Directory of Open Access Journals (Sweden)

    Wei Fang

    2016-09-01

    Full Text Available For many tiller crops, the plant architecture (PA), including the plant fresh weight, plant height, number of tillers, tiller angle and stem diameter, significantly affects the grain yield. In this study, we propose a method based on volumetric reconstruction for high-throughput three-dimensional (3D) wheat PA studies. The proposed methodology involves plant volumetric reconstruction from multiple images, plant model processing and phenotypic parameter estimation and analysis. This study was performed on 80 Triticum aestivum plants, and the results were analyzed. Comparing the automated measurements with manual measurements, the mean absolute percentage error (MAPE) in the plant height and the plant fresh weight was 2.71% (1.08 cm, with an average plant height of 40.07 cm) and 10.06% (1.41 g, with an average plant fresh weight of 14.06 g), respectively. The root mean square error (RMSE) was 1.37 cm and 1.79 g for the plant height and plant fresh weight, respectively. The correlation coefficients were 0.95 and 0.96 for the plant height and plant fresh weight, respectively. Additionally, the proposed methodology, including plant reconstruction, model processing and trait extraction, required only approximately 20 s on average per plant using parallel computing on a graphics processing unit (GPU), demonstrating that the methodology would be valuable for a high-throughput phenotyping platform.
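    The agreement statistics quoted above are standard. A short sketch of how MAPE and RMSE are computed from paired automated/manual measurements (the sample values are invented, not the study's data):

```python
import numpy as np

def mape(manual, automated):
    # Mean absolute percentage error, in percent, relative to the manual values
    return 100.0 * float(np.mean(np.abs((automated - manual) / manual)))

def rmse(manual, automated):
    # Root mean square error, in the measurement's own units
    return float(np.sqrt(np.mean((automated - manual) ** 2)))

manual = np.array([40.0, 50.0])      # invented manual plant heights (cm)
automated = np.array([44.0, 45.0])   # invented automated estimates (cm)
m = mape(manual, automated)
r = rmse(manual, automated)
```

    Note that MAPE is scale-free (useful for comparing height and weight errors), while RMSE keeps the trait's units.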

  8. [Drainage variants in reconstructive and restorative operations for high strictures and injuries of the biliary tract].

    Science.gov (United States)

    Toskin, K D; Starosek, V N; Grinchesku, A E

    1990-10-01

    The article deals with the authors' views on certain aspects of the problem of reconstructive and restorative surgery of the biliary tract. Original methods are suggested for external drainage (through the inferior surface of the right hepatic lobe in the region of the gallbladder bed and through the round ligament of the liver) in the formation of hepato-hepatico- and hepaticojejunoanastomoses. Problems of operative technique in the formation of the anastomoses are discussed. Thirty-nine operations for high strictures and injuries of the biliary tract have been carried out in the clinic over the recent decade; 25 were reconstructive and 14 restorative. Postoperative mortality was 28.2% (11 patients). Intoxication and hepatargia associated with cholangiolytic abscesses of the liver were the main causes of death.

  9. High resolution reconstruction of PET images using the iterative OSEM algorithm

    International Nuclear Information System (INIS)

    Doll, J.; Bublitz, O.; Werling, A.; Haberkorn, U.; Semmler, W.; Adam, L.E.; Pennsylvania Univ., Philadelphia, PA; Brix, G.

    2004-01-01

    Aim: Improvement of the spatial resolution in positron emission tomography (PET) by incorporation of the image-forming characteristics of the scanner into the process of iterative image reconstruction. Methods: All measurements were performed on the whole-body PET system ECAT EXACT HR+ in 3D mode. The acquired 3D sinograms were sorted into 2D sinograms by means of the Fourier rebinning (FORE) algorithm, which allows the use of 2D algorithms for image reconstruction. The scanner characteristics were described by a spatially variant line-spread function (LSF), which was determined from activated copper-64 line sources. This information was used to model the physical degradation processes in PET measurements during the course of 2D image reconstruction with the iterative OSEM algorithm. To assess the performance of the high-resolution OSEM algorithm, phantom measurements performed on a cylinder phantom, the hot-spot Jaszczak phantom, and the 3D Hoffman brain phantom as well as different patient examinations were analyzed. Results: The scanner characteristics could be described by a Gaussian-shaped LSF with a full-width at half-maximum increasing from 4.8 mm at the center to 5.5 mm at a radial distance of 10.5 cm. Incorporation of the LSF into the iteration formula resulted in a markedly improved resolution of 3.0 and 3.5 mm, respectively. The evaluation of phantom and patient studies showed that the high-resolution OSEM algorithm not only led to a better contrast resolution in the reconstructed activity distributions but also to an improved accuracy in the quantification of activity concentrations in small structures, without leading to an amplification of image noise or the occurrence of image artifacts. Conclusion: The spatial and contrast resolution of PET scans can be markedly improved by the presented image restoration algorithm, which is of special interest for the examination of both patients with brain disorders and small animals. (orig.)
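    OSEM is an ordered-subsets acceleration of the MLEM algorithm, whose multiplicative update is where a measured line-spread function can be folded into the system model. A minimal MLEM sketch on a toy two-pixel system (matrix and data invented for illustration; OSEM would cycle the same update over subsets of the rays):

```python
import numpy as np

A = np.array([[0.8, 0.2],
              [0.3, 0.7]])          # toy system matrix (LORs x pixels); resolution
                                    # modeling would build the LSF into these entries
y = np.array([1.0, 2.0])            # measured counts per LOR
sens = A.T @ np.ones(2)             # sensitivity image A^T 1
x = np.ones(2)                      # strictly positive starting image
for _ in range(1000):
    x = x / sens * (A.T @ (y / (A @ x)))   # MLEM multiplicative update
```

    The iterates stay nonnegative by construction, and for consistent positive data the sequence converges to the maximum-likelihood solution.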

  10. A Highly Accurate Regular Domain Collocation Method for Solving Potential Problems in the Irregular Doubly Connected Domains

    Directory of Open Access Journals (Sweden)

    Zhao-Qing Wang

    2014-01-01

    Full Text Available By embedding the irregular doubly connected domain into an annular regular region, the unknown functions can be approximated by barycentric Lagrange interpolation in the regular region. A highly accurate regular domain collocation method is proposed for solving potential problems on the irregular doubly connected domain in the polar coordinate system. The formulations of the regular domain collocation method are constructed by using the barycentric Lagrange interpolation collocation method on the regular domain in the polar coordinate system. The boundary conditions are discretized by barycentric Lagrange interpolation within the regular domain. An additional method is used to impose the boundary conditions. The least-squares method can be used to solve the resulting overdetermined system of equations. The function values of points in the irregular doubly connected domain can be calculated by barycentric Lagrange interpolation within the regular domain. Some numerical examples demonstrate the effectiveness and accuracy of the presented method.
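    The barycentric Lagrange interpolation at the heart of the method evaluates the interpolant through the second (true) barycentric formula, which is numerically stable and costs O(n) per evaluation point once the weights are known. A minimal 1D sketch (nodes and test function chosen for illustration):

```python
import numpy as np

def bary_weights(xs):
    # Barycentric weights w_j = 1 / prod_{k != j} (x_j - x_k)
    w = np.ones(len(xs))
    for j in range(len(xs)):
        for k in range(len(xs)):
            if k != j:
                w[j] /= xs[j] - xs[k]
    return w

def bary_interp(xs, fs, w, x):
    # Second (true) barycentric formula: p(x) = sum(w/(x-xs) * fs) / sum(w/(x-xs))
    d = x - xs
    if np.any(d == 0.0):                       # x coincides with a node
        return float(fs[int(np.argmin(np.abs(d)))])
    t = w / d
    return float(np.sum(t * fs) / np.sum(t))

xs = np.array([0.0, 1.0, 2.0, 3.0])
fs = xs ** 3 - 2.0 * xs          # a cubic is reproduced exactly by 4 nodes
w = bary_weights(xs)
val = bary_interp(xs, fs, w, 1.5)
```

    In the paper's 2D polar setting, the same formula is applied in tensor-product form over the radial and angular directions of the annular region.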

  11. Novel high accurate sensorless dual-axis solar tracking system controlled by maximum power point tracking unit of photovoltaic systems

    International Nuclear Information System (INIS)

    Fathabadi, Hassan

    2016-01-01

    Highlights: • A novel, highly accurate sensorless dual-axis solar tracker. • It has the advantages of both sensor-based and sensorless solar trackers. • It does not have the disadvantages of sensor-based and sensorless solar trackers. • A tracking error of only 0.11°, less than the tracking errors of other trackers. • An increase of 28.8–43.6% in energy efficiency, depending on the season. - Abstract: In this study, a novel, highly accurate sensorless dual-axis solar tracker controlled by the maximum power point tracking unit available in almost all photovoltaic systems is proposed. The maximum power point tracking controller continuously calculates the maximum output power of the photovoltaic module/panel/array, and uses the altitude and azimuth angle deviations to track the sun direction where the greatest value of the maximum output power is extracted. Unlike all other sensorless solar trackers, the proposed solar tracking system is a closed-loop system, which means it uses the actual direction of the sun at any time to track the sun direction; this is the contribution of this work. The proposed solar tracker has the advantages of both sensor-based and sensorless dual-axis solar trackers, but it does not have their disadvantages. All other sensorless solar trackers are open loop, i.e., they use offline estimated data about the sun's path in the sky obtained from solar map equations, so low accuracy, cloudy skies, and the need for new data at each new location are their drawbacks. A photovoltaic system has been built, and it is experimentally verified that the proposed solar tracking system tracks the sun direction with a tracking error of 0.11°, which is less than the tracking errors of other sensor-based and sensorless solar trackers. An increase of 28.8–43.6% in energy efficiency, depending on the season, is the main advantage of utilizing the proposed solar tracking system.
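    The closed-loop idea of steering by measured power rather than by a precomputed sun path can be sketched as a toy coordinate search. The power model, angles, and step size below are invented for illustration and are not the paper's controller:

```python
import math

def measured_power(alt, az, sun_alt=45.0, sun_az=180.0):
    # Invented PV power model: maximal when the panel faces the (hidden) sun direction
    return 100.0 * math.cos(math.radians(alt - sun_alt)) \
                 * math.cos(math.radians(az - sun_az))

alt, az, step = 10.0, 90.0, 1.0      # initial panel attitude and angular step
for _ in range(300):
    # Try staying put and the four single-axis moves; keep whichever yields most power
    candidates = [(alt, az), (alt + step, az), (alt - step, az),
                  (alt, az + step), (alt, az - step)]
    powers = [measured_power(a, z) for a, z in candidates]
    alt, az = candidates[powers.index(max(powers))]
```

    Because the controller climbs the actually measured power surface, it needs no solar map data and self-corrects at any location, which is the closed-loop advantage claimed above.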

  12. Highly sensitive capillary electrophoresis-mass spectrometry for rapid screening and accurate quantitation of drugs of abuse in urine.

    Science.gov (United States)

    Kohler, Isabelle; Schappler, Julie; Rudaz, Serge

    2013-05-30

    The combination of capillary electrophoresis (CE) and mass spectrometry (MS) is particularly well adapted to bioanalysis due to its high separation efficiency, selectivity, and sensitivity; its short analytical time; and its low solvent and sample consumption. For clinical and forensic toxicology, a two-step analysis is usually performed: first, a screening step for compound identification, and second, confirmation and/or accurate quantitation in cases of presumed positive results. In this study, a fast and sensitive CE-MS workflow was developed for the screening and quantitation of drugs of abuse in urine samples. A CE with a time-of-flight MS (CE-TOF/MS) screening method was developed using a simple urine dilution and on-line sample preconcentration with pH-mediated stacking. The sample stacking allowed for a high loading capacity (20.5% of the capillary length), leading to limits of detection as low as 2 ng mL(-1) for drugs of abuse. Compound quantitation of positive samples was performed by CE-MS/MS with a triple quadrupole MS equipped with an adapted triple-tube sprayer and an electrospray ionization (ESI) source. The CE-ESI-MS/MS method was validated for two model compounds, cocaine (COC) and methadone (MTD), according to the Guidance of the Food and Drug Administration. The quantitative performance was evaluated for selectivity, response function, the lower limit of quantitation, trueness, precision, and accuracy. COC and MTD detection in urine samples was determined to be accurate over the range of 10-1000 ng mL(-1) and 21-1000 ng mL(-1), respectively. Copyright © 2013 Elsevier B.V. All rights reserved.

  13. Accurate prediction of complex free surface flow around a high speed craft using a single-phase level set method

    Science.gov (United States)

    Broglia, Riccardo; Durante, Danilo

    2017-11-01

    This paper focuses on the analysis of a challenging free surface flow problem involving a surface vessel moving at high speeds, or planing. The investigation is performed using a general-purpose, high-Reynolds-number free surface solver developed at CNR-INSEAN. The methodology is based on a second order finite volume discretization of the unsteady Reynolds-averaged Navier-Stokes equations (Di Mascio et al. in A second order Godunov-type scheme for naval hydrodynamics, Kluwer Academic/Plenum Publishers, Dordrecht, pp 253-261, 2001; Proceedings of 16th international offshore and polar engineering conference, San Francisco, CA, USA, 2006; J Mar Sci Technol 14:19-29, 2009); air/water interface dynamics is accurately modeled by a non-standard level set approach (Di Mascio et al. in Comput Fluids 36(5):868-886, 2007a), known as the single-phase level set method. In this algorithm the governing equations are solved only in the water phase, whereas the numerical domain in the air phase is used for a suitable extension of the fluid dynamic variables. The level set function is used to track the free surface evolution; dynamic boundary conditions are enforced directly on the interface. This approach allows one to accurately predict the evolution of the free surface even in the presence of violent wave-breaking phenomena, maintaining the interface sharp, without any need to smear out the fluid properties across the two phases. This paper is aimed at the prediction of the complex free-surface flow field generated by a deep-V planing boat at medium and high Froude numbers (from 0.6 up to 1.2). In the present work, the planing hull is treated as a two-degree-of-freedom rigid object. The flow field is characterized by the presence of thin water sheets, several energetic breaking waves and plungings. The computational results include convergence of the trim angle, sinkage and resistance under grid refinement; high-quality experimental data are used for the purposes of validation, allowing to

  14. High performance graphics processor based computed tomography reconstruction algorithms for nuclear and other large scale applications.

    Energy Technology Data Exchange (ETDEWEB)

    Jimenez, Edward S. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Orr, Laurel J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Thompson, Kyle R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2013-09-01

    The goal of this work is to develop a fast computed tomography (CT) reconstruction algorithm based on graphics processing units (GPU) that achieves significant improvement over traditional central processing unit (CPU) based implementations. The main challenge in developing a CT algorithm that is capable of handling very large datasets is parallelizing the algorithm in such a way that data transfer does not hinder performance of the reconstruction algorithm. General-purpose computing on graphics processing units (GPGPU) is a new technology that the Science and Technology (S&T) community is starting to adopt in many fields where CPU-based computing is the norm. GPGPU programming requires a new approach to algorithm development that utilizes massively multi-threaded environments. Multi-threaded algorithms in general are difficult to optimize, since performance bottlenecks such as memory latencies occur that are non-existent in single-threaded algorithms. If an efficient GPU-based CT reconstruction algorithm can be developed, computational times could be improved by a factor of 20. Additionally, cost benefits will be realized as commodity graphics hardware could potentially replace expensive supercomputers and high-end workstations. This project will take advantage of the CUDA programming environment and attempt to parallelize the task in such a way that multiple slices of the reconstruction volume are computed simultaneously. This work will also take advantage of the GPU memory by utilizing asynchronous memory transfers, GPU texture memory, and (when possible) pinned host memory so that the memory transfer bottleneck inherent to GPGPU is amortized. Additionally, this work will take advantage of GPU-specific hardware (i.e. fast texture memory, pixel-pipelines, hardware interpolators, and varying memory hierarchy) that will allow for additional performance improvements.

  15. Zooming in: high resolution 3D reconstruction of differently stained histological whole slide images

    Science.gov (United States)

    Lotz, Johannes; Berger, Judith; Müller, Benedikt; Breuhahn, Kai; Grabe, Niels; Heldmann, Stefan; Homeyer, André; Lahrmann, Bernd; Laue, Hendrik; Olesch, Janine; Schwier, Michael; Sedlaczek, Oliver; Warth, Arne

    2014-03-01

Much insight into metabolic interactions, tissue growth, and tissue organization can be gained by analyzing differently stained histological serial sections. One opportunity unavailable to classic histology is three-dimensional (3D) examination and computer-aided analysis of tissue samples. In this case, registration is needed to reestablish spatial correspondence between adjacent slides that is lost during the sectioning process. Furthermore, the sectioning introduces various distortions like cuts, folding, tearing, and local deformations to the tissue, which need to be corrected in order to exploit the additional information arising from the analysis of neighboring slide images. In this paper we present a novel image registration based method for reconstructing a 3D tissue block implementing a zooming strategy around a user-defined point of interest. We efficiently align consecutive slides at increasingly fine resolution up to cell level. We use a two-step approach, where after a macroscopic, coarse alignment of the slides as preprocessing, a nonlinear, elastic registration is performed to correct local, non-uniform deformations. Being driven by the optimization of the normalized gradient field (NGF) distance measure, our method is suitable for differently stained and thus multi-modal slides. We applied our method to ultra-thin serial sections (2 μm) of a human lung tumor. In total, 170 slides, stained alternately with four different stains, were registered. Thorough visual inspection of virtual cuts through the reconstructed block perpendicular to the cutting plane shows accurate alignment of vessels and other tissue structures. This observation is confirmed by a quantitative analysis. Using nonlinear image registration, our method is able to correct locally varying deformations in tissue structures and exceeds the limitations of globally linear transformations.
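The driving term, the NGF distance, compares local gradient orientations rather than intensities, which is what makes it usable across different stains. A small NumPy sketch (the function name, the edge parameter eta, and the toy images are our assumptions, not the authors' implementation):

```python
import numpy as np

def ngf_distance(a, b, eta=1e-3):
    # Normalized gradient field (NGF) distance: measures misalignment of
    # gradient *orientations*, so images with different intensity mappings
    # (e.g. different stains) can still be compared. eta regularizes
    # near-flat regions; its value here is an arbitrary choice.
    def ngf(img):
        gy, gx = np.gradient(img.astype(float))
        norm = np.sqrt(gx ** 2 + gy ** 2 + eta ** 2)
        return gx / norm, gy / norm
    ax, ay = ngf(a)
    bx, by = ngf(b)
    dot = ax * bx + ay * by
    return float(np.mean(1.0 - dot ** 2))  # ~0 when gradients are (anti)parallel

ramp = np.tile(np.linspace(0.0, 1.0, 32), (32, 1))
d_same = ngf_distance(ramp, 2.0 * ramp + 5.0)   # same structure, remapped intensity
d_diff = ngf_distance(ramp, ramp.T)             # perpendicular structure
```

The intensity-remapped image yields a near-zero distance while the structurally different one does not, which is the multi-modality property the abstract relies on.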

  16. Impact of catheter reconstruction error on dose distribution in high dose rate intracavitary brachytherapy and evaluation of OAR doses

    International Nuclear Information System (INIS)

    Thaper, Deepak; Shukla, Arvind; Rathore, Narendra; Oinam, Arun S.

    2016-01-01

In high dose rate brachytherapy (HDR-B), current catheter reconstruction protocols are relatively slow and error prone. The purpose of this study is to evaluate the impact of catheter reconstruction error on dose distribution in CT-based intracavitary brachytherapy planning, and to evaluate its effect on organs at risk (OARs) such as the bladder, rectum and sigmoid, and on the target volume, i.e. the high-risk clinical target volume (HR-CTV).

  17. High resolution reconstruction of monthly precipitation of Iberian Peninsula using circulation weather types

    Science.gov (United States)

    Cortesi, N.; Trigo, R.; Gonzalez-Hidalgo, J. C.; Ramos, A. M.

    2012-06-01

Precipitation over the Iberian Peninsula (IP) is highly variable and shows large spatial contrasts between wet mountainous regions, to the north, and dry regions in the inland plains and southern areas. In this work, a high-density monthly precipitation dataset for the IP was coupled with a set of 26 atmospheric circulation weather types (Trigo and DaCamara, 2000) to reconstruct Iberian monthly precipitation from October to May at very high resolution, using 3030 precipitation series (an overall mean density of one station per 200 km²). A stepwise linear regression model with forward selection was used to develop monthly reconstructed precipitation series, calibrated and validated over the 1948-2003 period. Validation was conducted by means of a leave-one-out cross-validation over the calibration period. The results show good model performance for the selected months, with a mean coefficient of variation (CV) around 0.6 for the validation period; the models are particularly robust over the western and central sectors of the IP, while the predicted values in the Mediterranean and northern coastal areas are less accurate. For three long station records (Lisbon, Madrid and Valencia) we compare the model against the original data, as an example of how these models can be used to obtain monthly precipitation fields since the 1850s over most of the IP for this very high density network.
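The core statistical step, forward stepwise selection of weather-type predictors for a station's monthly precipitation, can be sketched as follows. The data here are synthetic and the function is a minimal stand-in; the study additionally applies leave-one-out cross-validation per station.

```python
import numpy as np

def forward_select(X, y, max_terms=3):
    # Forward stepwise selection for a linear regression: greedily add
    # the predictor (here, a weather-type frequency) that most reduces
    # the residual sum of squares; stop when nothing improves the fit.
    def sse(cols):
        A = np.column_stack([np.ones(len(y))] + [X[:, c] for c in cols])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        r = y - A @ beta
        return float(r @ r)

    selected, remaining = [], list(range(X.shape[1]))
    while remaining and len(selected) < max_terms:
        best = min(remaining, key=lambda c: sse(selected + [c]))
        if sse(selected + [best]) >= sse(selected) - 1e-9:
            break                      # no meaningful improvement
        selected.append(best)
        remaining.remove(best)
    return selected

# Synthetic demo: monthly precipitation driven by weather types 2 and 5.
rng = np.random.default_rng(0)
X = rng.standard_normal((120, 6))      # monthly frequencies of 6 types
y = 3.0 * X[:, 2] - 2.0 * X[:, 5] + 0.1 * rng.standard_normal(120)
selected = forward_select(X, y, max_terms=2)
```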

  18. Accurate and High-Coverage Immune Repertoire Sequencing Reveals Characteristics of Antibody Repertoire Diversification in Young Children with Malaria

    Science.gov (United States)

    Jiang, Ning

Accurately measuring immune repertoire sequence composition, diversity, and abundance is important in studying the repertoire response in infections, vaccinations, and cancer immunology. Using molecular identifiers (MIDs) to tag mRNA molecules is an effective method for improving the accuracy of immune repertoire sequencing (IR-seq). However, it is still difficult to apply IR-seq to small amounts of clinical sample material while achieving high coverage of the repertoire diversity. This is especially challenging in studying infections and vaccinations, where smaller B cell subpopulations, such as memory B cells or plasmablasts, are often of greatest interest for studying somatic mutation patterns and diversity changes. Here, we describe an approach to IR-seq based on the use of MIDs in combination with a clustering method that can reveal more than 80% of the antibody diversity in a sample and can be applied to as few as 1,000 B cells. We applied this to study the antibody repertoires of young children before and during an acute malaria infection. We discovered unexpectedly high levels of somatic hypermutation (SHM) in infants and revealed characteristics of antibody repertoire development in young children that would have a profound impact on immunization in children.

  19. Tensor-decomposed vibrational coupled-cluster theory: Enabling large-scale, highly accurate vibrational-structure calculations

    Science.gov (United States)

    Madsen, Niels Kristian; Godtliebsen, Ian H.; Losilla, Sergio A.; Christiansen, Ove

    2018-01-01

A new implementation of vibrational coupled-cluster (VCC) theory is presented, where all amplitude tensors are represented in the canonical polyadic (CP) format. The CP-VCC algorithm solves the non-linear VCC equations without ever constructing the amplitudes or error vectors in full dimension, but still formally includes the full parameter space of the VCC[n] model in question, resulting in the same vibrational energies as the conventional method. In a previous publication, we have described the non-linear-equation solver for CP-VCC calculations. In this work, we discuss the general algorithm for evaluating VCC error vectors in CP format, including the rank-reduction methods used during the summation of the many terms in the VCC amplitude equations. Benchmark calculations for studying the computational scaling and memory usage of the CP-VCC algorithm are performed on a set of molecules including thiadiazole and an array of polycyclic aromatic hydrocarbons. The results show that the reduced scaling and memory requirements of the CP-VCC algorithm allow for performing high-order VCC calculations on systems with up to 66 vibrational modes (anthracene), which are not possible using the conventional VCC method. This paves the way for obtaining highly accurate vibrational spectra and properties of larger molecules.
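The storage advantage of the CP format is easy to illustrate: an order-3 tensor of rank R is held as three factor matrices, and any entry is recovered as a sum over the rank index, so memory grows linearly rather than multiplicatively in the mode dimensions. A toy NumPy sketch with random factors (dimensions and rank are arbitrary choices, unrelated to the VCC amplitudes themselves):

```python
import numpy as np

# Canonical polyadic (CP) storage for an order-3 tensor: three factor
# matrices of shape (dim, rank) replace the full dim1*dim2*dim3 array.
rng = np.random.default_rng(3)
dims, rank = (10, 12, 14), 3
factors = [rng.random((d, rank)) for d in dims]

full = np.einsum("ir,jr,kr->ijk", *factors)    # expand CP -> full tensor
cp_size = sum(f.size for f in factors)         # (10 + 12 + 14) * 3 = 108
full_size = full.size                          # 10 * 12 * 14 = 1680
```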

  20. Improving Filtered Backprojection Reconstruction by Data-Dependent Filtering

    NARCIS (Netherlands)

    D.M. Pelt (Daniël); K.J. Batenburg (Joost)

    2014-01-01

Filtered backprojection, one of the most widely used reconstruction methods in tomography, requires a large number of low-noise projections to yield accurate reconstructions. In many applications of tomography, complete projection data of high quality cannot be obtained, because of

  1. Iodine and freeze-drying enhanced high-resolution MicroCT imaging for reconstructing 3D intraneural topography of human peripheral nerve fascicles.

    Science.gov (United States)

    Yan, Liwei; Guo, Yongze; Qi, Jian; Zhu, Qingtang; Gu, Liqiang; Zheng, Canbin; Lin, Tao; Lu, Yutong; Zeng, Zitao; Yu, Sha; Zhu, Shuang; Zhou, Xiang; Zhang, Xi; Du, Yunfei; Yao, Zhi; Lu, Yao; Liu, Xiaolin

    2017-08-01

The precise annotation and accurate identification of the topography of fascicles to the end organs are prerequisites for studying human peripheral nerves. In this study, we present a feasible imaging method that acquires 3D high-resolution (HR) topography of peripheral nerve fascicles using an iodine and freeze-drying (IFD) micro-computed tomography (microCT) method to greatly increase the contrast of fascicle images. The enhanced microCT imaging method can facilitate the reconstruction of high-contrast HR fascicle images, fascicle segmentation and extraction, feature analysis, and the tracing of fascicle topography to end organs, which define fascicle functions. The complex intraneural aggregation and distribution of fascicles are typically assessed using histological techniques or MR imaging to acquire coarse axial three-dimensional (3D) maps. However, the disadvantages of histological techniques (static, axial manual registration, and data instability) and MR imaging (low resolution) limit these applications in reconstructing the topography of nerve fascicles. Thus, enhanced microCT is a new technique for acquiring 3D intraneural topography of human peripheral nerve fascicles, both to improve our understanding of neurobiological principles and to guide accurate repair in the clinic. Additionally, the 3D microstructure data can be used as a biofabrication model, which in turn can be used to fabricate scaffolds to repair long nerve gaps. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. A High-Resolution Reconstruction of Late-Holocene Relative Sea Level in Rhode Island, USA

    Science.gov (United States)

    Stearns, R. B.; Engelhart, S. E.; Kemp, A.; Cahill, N.; Halavik, B. T.; Corbett, D. R.; Brain, M.; Hill, T. D.

    2017-12-01

Studies on the US Atlantic and Gulf coasts have utilized salt-marsh peats and the macro- and microfossils preserved within them to reconstruct high-resolution records of relative sea level (RSL). We followed this approach to investigate spatial and temporal RSL variability in southern New England, USA, by reconstructing 3,300 years of RSL change in lower Narragansett Bay, Rhode Island. After reconnaissance of lower Narragansett Bay salt marshes, we recovered a 3.4 m core at Fox Hill Marsh on Conanicut Island. We enumerated foraminiferal assemblages at 3 cm intervals throughout the length of the core and we assessed trends in δ13C at 5 cm resolution. We developed a composite chronology (average resolution of ±50 years for a 1 cm slice) using 30 AMS radiocarbon dates and historical chronological markers of known age (137Cs, heavy metals, Pb isotopes, pollen). We assessed core compaction (mechanical compression) by collecting compaction-free basal-peat samples and using a published decompaction model. We employed fossil foraminifera and bulk sediment δ13C to estimate paleomarsh elevation using a Bayesian transfer function trained by a previously-published regional modern foraminiferal dataset. We combined the proxy RSL reconstruction and local tide-gauge measurements from Newport, Rhode Island (1931 CE to present) and estimated past rates of RSL change using an Errors-in-Variables Integrated Gaussian Process (EIV-IGP) model. Both basal peats and the decompaction model suggest that our RSL record is not significantly compacted. RSL rose from -3.9 m at 1250 BCE reaching -0.4 m at 1850 CE (1 mm/yr). We removed a Glacial Isostatic Adjustment (GIA) contribution of 0.9 mm/yr based on a local GPS site to facilitate comparison to regional records. The detrended sea-level reconstruction shows multiple departures from stable sea level (0 mm/yr) over the last 3,300 years and agrees with prior reconstructions from the US Atlantic coast showing evidence for sea-level changes that

  3. Reduction of ring artefacts in high resolution micro-CT reconstructions

    International Nuclear Information System (INIS)

    Sijbers, Jan; Postnov, Andrei

    2004-01-01

High resolution micro-CT images are often corrupted by ring artefacts, prohibiting quantitative analysis and hampering post-processing. Removing or at least significantly reducing such artefacts is indispensable. However, since micro-CT systems are pushed to the extremes in the quest for the ultimate spatial resolution, ring artefacts can hardly be avoided. Moreover, as opposed to clinical CT systems, conventional correction schemes such as flat-field correction do not lead to satisfactory results. Therefore, in this note a simple but efficient and fast post-processing method is proposed that effectively reduces ring artefacts in reconstructed μ-CT images. (note)
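Ring artefacts arise from fixed per-detector-pixel offsets, which appear as stripes in the sinogram. One simple member of this family of corrections, sketched below with NumPy, subtracts each detector column's deviation from a smoothed mean detector profile; this is an illustrative stand-in, not the specific method proposed in the note.

```python
import numpy as np

def remove_stripes(sino, win=5):
    # Rings in the reconstruction correspond to stripes (fixed
    # per-detector offsets) in the sinogram. Correct by subtracting each
    # column's deviation from a smoothed version of the mean profile.
    col_mean = sino.mean(axis=0)
    smooth = np.convolve(col_mean, np.ones(win) / win, mode="same")
    return sino - (col_mean - smooth)

# Demo: smooth object plus one miscalibrated detector pixel.
angles, dets = 90, 64
base = np.ones((angles, 1)) * np.sin(np.linspace(0.0, np.pi, dets))[None, :]
bad = base.copy()
bad[:, 30] += 0.5                      # bad pixel -> stripe -> ring
err_before = np.abs(bad - base).max()
err_after = np.abs(remove_stripes(bad) - base).max()
```

The correction is not exact (the smoothing window slightly perturbs neighbouring columns), but the dominant stripe is strongly suppressed.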

  4. Challenges of particle flow reconstruction in the CMS High-Granularity Calorimeter at the High-Luminosity LHC

    CERN Document Server

    Chlebana, Frank

    2016-01-01

    The challenges of the High-Luminosity LHC (HL-LHC) are driven by the large number of overlapping proton-proton collisions (pileup) in each bunch-crossing and the extreme radiation dose to detectors positioned at high pseudorapidity. To overcome this challenge CMS is designing and implementing an endcap electromagnetic+hadronic sampling calorimeter employing silicon pad devices in the electromagnetic and front hadronic sections, comprising over 6 million channels, and highly-segmented plastic scintillators in the rear part of the hadronic section. This High-Granularity Calorimeter (HGCAL) will be the first of its kind used in a colliding beam experiment. Clustering deposits of energy over many cells and layers is a complex and challenging computational task, particularly in the high-pileup and high-event-rate environment of HL-LHC. These challenges and their solutions will be discussed in detail, as well as their implementation in the HGCAL offline reconstruction. Baseline detector performance results will be ...

  5. High-Speed Surface Reconstruction of Flying Birds Using Structured Light

    Science.gov (United States)

    Deetjen, Marc; Lentink, David

    2017-11-01

Birds fly effectively through complex environments, and in order to understand the strategies that enable them to do so, we need to determine the shape and movement of their wings. Previous studies show that even small perturbations in wing shape have dramatic aerodynamic effects, but these shape changes have not been quantified automatically at high temporal and spatial resolutions. Hence, we developed a custom 3D surface mapping method which uses a high-speed camera to view a grid of stripes projected onto a flying bird. Because the light is binary rather than grayscale, and each frame is separately analyzed, this method can function at any frame rate with sufficient light. The method is automated, non-invasive, and able to measure a volume by simultaneously reconstructing from multiple views. We use this technique to reconstruct the 3D shape of the surface of a parrotlet during flapping flight at 3200 fps. We then analyze key dynamic parameters such as wing twist and angle of attack, and compute aerodynamic parameters such as lift and drag. While this novel system is designed to quantify bird wing shape and motion, it is adaptable for tracking other objects such as quickly deforming fish, especially those which are difficult to reconstruct using other 3D tracking methods.

  6. Automatic cortical surface reconstruction of high-resolution T1 echo planar imaging data.

    Science.gov (United States)

    Renvall, Ville; Witzel, Thomas; Wald, Lawrence L; Polimeni, Jonathan R

    2016-07-01

    Echo planar imaging (EPI) is the method of choice for the majority of functional magnetic resonance imaging (fMRI), yet EPI is prone to geometric distortions and thus misaligns with conventional anatomical reference data. The poor geometric correspondence between functional and anatomical data can lead to severe misplacements and corruption of detected activation patterns. However, recent advances in imaging technology have provided EPI data with increasing quality and resolution. Here we present a framework for deriving cortical surface reconstructions directly from high-resolution EPI-based reference images that provide anatomical models exactly geometric distortion-matched to the functional data. Anatomical EPI data with 1mm isotropic voxel size were acquired using a fast multiple inversion recovery time EPI sequence (MI-EPI) at 7T, from which quantitative T1 maps were calculated. Using these T1 maps, volumetric data mimicking the tissue contrast of standard anatomical data were synthesized using the Bloch equations, and these T1-weighted data were automatically processed using FreeSurfer. The spatial alignment between T2(⁎)-weighted EPI data and the synthetic T1-weighted anatomical MI-EPI-based images was improved compared to the conventional anatomical reference. In particular, the alignment near the regions vulnerable to distortion due to magnetic susceptibility differences was improved, and sampling of the adjacent tissue classes outside of the cortex was reduced when using cortical surface reconstructions derived directly from the MI-EPI reference. The MI-EPI method therefore produces high-quality anatomical data that can be automatically segmented with standard software, providing cortical surface reconstructions that are geometrically matched to the BOLD fMRI data. Copyright © 2016 Elsevier Inc. All rights reserved.
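The synthesis step, turning a quantitative T1 map into conventional-looking T1-weighted contrast, can be sketched with a textbook inversion-recovery signal equation. The equation, the inversion time, and the tissue T1 values below are generic assumptions for illustration, not the paper's exact protocol.

```python
import numpy as np

def ir_signal(t1, ti):
    # Magnitude inversion-recovery signal |1 - 2 exp(-TI/T1)|, assuming
    # full relaxation between inversions: a standard Bloch-equation
    # result for synthesizing T1-weighted contrast from a T1 map.
    return abs(1.0 - 2.0 * np.exp(-ti / t1))

t1 = {"wm": 1.2, "gm": 1.8, "csf": 4.0}   # rough T1 values (s) at 7T
s = {tissue: ir_signal(v, ti=2.0) for tissue, v in t1.items()}
# With TI = 2 s the synthetic contrast orders wm > gm > csf, as in a
# typical T1-weighted anatomical image.
```

Applying `ir_signal` voxel-wise to a T1 map yields an image with anatomical-style contrast that is, by construction, in the distorted EPI space.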

  7. A highly accurate dynamic contact angle algorithm for drops on inclined surface based on ellipse-fitting.

    Science.gov (United States)

    Xu, Z N; Wang, S Y

    2015-02-01

To improve the accuracy of dynamic contact angle calculation for drops on an inclined surface, a large number of numerical drop profiles on inclined surfaces with different inclination angles, drop volumes, and contact angles are generated based on the finite difference method, and a least-squares ellipse-fitting algorithm is used to calculate the dynamic contact angle. The influences of the above three factors are systematically investigated. The results reveal that the dynamic contact angle errors, including the errors of the left and right contact angles, evaluated by the ellipse-fitting algorithm tend to increase with inclination angle, drop volume, and contact angle. If the drop volume and the solid substrate are fixed, the errors of the left and right contact angles increase with inclination angle. After extensive computation, the critical dimensionless drop volumes corresponding to the critical contact angle error are obtained. Based on the values of the critical volumes, a highly accurate dynamic contact angle algorithm is proposed and fully validated. Within nearly the whole hydrophobicity range, it can decrease the dynamic contact angle error of the inclined plane method to below a given threshold, even for different types of liquids.
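The core computation can be sketched as a least-squares conic fit to the drop profile followed by implicit differentiation at the contact point. The generic conic fit and the circular-cap test drop below are our simplifications (a dedicated ellipse fit would additionally enforce the ellipse constraint), not the paper's validated algorithm.

```python
import numpy as np

def fit_conic(x, y):
    # Least-squares conic fit: A x^2 + B x y + C y^2 + D x + E y = 1.
    M = np.column_stack([x * x, x * y, y * y, x, y])
    coef, *_ = np.linalg.lstsq(M, np.ones_like(x), rcond=None)
    return coef

def tangent_angle(coef, xc, yc):
    # Tangent direction at (xc, yc) via implicit differentiation:
    # dy/dx = -(2A x + B y + D) / (B x + 2C y + E).
    A, B, C, D, E = coef
    fx = 2 * A * xc + B * yc + D
    fy = B * xc + 2 * C * yc + E
    return np.degrees(np.arctan2(-fx, fy)) % 180.0

# Demo: a "drop" that is a unit-circle cap with centre (0, 0.5); geometry
# gives a 120-degree contact angle where it meets the baseline y = 0.
t = np.linspace(0.1, np.pi - 0.1, 50)
x, y = np.cos(t), 0.5 + np.sin(t)          # sampled drop profile
coef = fit_conic(x, y)
xc = np.sqrt(0.75)                         # right contact point, y = 0
theta_right = 180.0 - tangent_angle(coef, xc, 0.0)
```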

  8. Reconstruction of High Mass $t\\overline{t}$ Resonances in the Lepton+Jets Channel

    CERN Document Server

    The ATLAS Collaboration

    2009-01-01

    At the LHC, highly energetic pp collisions are expected to be the source of new experimental phenomenology. Top quarks will notably be produced with high transverse momenta for the very first time, leaving in the detector an unusual signature. Indeed, hadronic top quark decay products can be so close together in the detector that they are reconstructed as a single jet, and semi-leptonic top quark decays can no longer count on the presence of a truly isolated lepton for their identification. This note describes the use of new experimental techniques in the identification of these objects as part of a realistic analysis for a high mass tt resonance search with the ATLAS detector.

  9. A Track Reconstructing Low-latency Trigger Processor for High-energy Physics

    CERN Document Server

    AUTHOR|(CDS)2067518

    2009-01-01

    The detection and analysis of the large number of particles emerging from high-energy collisions between atomic nuclei is a major challenge in experimental heavy-ion physics. Efficient trigger systems help to focus the analysis on relevant events. A primary objective of the Transition Radiation Detector of the ALICE experiment at the LHC is to trigger on high-momentum electrons. In this thesis, a trigger processor is presented that employs massive parallelism to perform the required online event reconstruction within 2 µs to contribute to the Level-1 trigger decision. Its three-stage hierarchical architecture comprises 109 nodes based on FPGA technology. Ninety processing nodes receive data from the detector front-end at an aggregate net bandwidth of 2.16 Tbps via 1080 optical links. Using specifically developed components and interconnections, the system combines high bandwidth with minimum latency. The employed tracking algorithm three-dimensionally reassembles the track segments found in the detector's dr...

  10. Mechanical versus kinematical shortening reconstructions of the Zagros High Folded Zone (Kurdistan region of Iraq)

    Science.gov (United States)

    Frehner, Marcel; Reif, Daniel; Grasemann, Bernhard

    2012-06-01

    This paper compares kinematical and mechanical techniques for the palinspastic reconstruction of folded cross sections in collision orogens. The studied area and the reconstructed NE-SW trending, 55.5 km long cross section is located in the High Folded Zone of the Zagros fold-and-thrust belt in the Kurdistan region of Iraq. The present-day geometry of the cross section has been constructed from field as well as remote sensing data. In a first step, the structures and the stratigraphy are simplified and summarized in eight units trying to identify the main geometric and mechanical parameters. In a second step, the shortening is kinematically estimated using the dip domain method to 11%-15%. Then the same cross section is used in a numerical finite element model to perform dynamical unfolding simulations taking various rheological parameters into account. The main factor allowing for an efficient dynamic unfolding is the presence of interfacial slip conditions between the mechanically strong units. Other factors, such as Newtonian versus power law viscous rheology or the presence of a basement, affect the numerical simulations much less strongly. If interfacial slip is accounted for, fold amplitudes are reduced efficiently during the dynamical unfolding simulations, while welded layer interfaces lead to unrealistic shortening estimates. It is suggested that interfacial slip and decoupling of the deformation along detachment horizons is an important mechanical parameter that controlled the folding processes in the Zagros High Folded Zone.

  11. High resolution reconstruction of monthly autumn and winter precipitation of Iberian Peninsula for last 150 years.

    Science.gov (United States)

    Cortesi, N.; Trigo, R.; González-Hidalgo, J. C.; Ramos, A.

    2012-04-01

Precipitation over the Iberian Peninsula (IP) presents large interannual variability and large spatial contrasts between wet mountainous regions in the north and dry regions in the southern plains. Unlike other European regions, the IP was poorly monitored for precipitation during the 19th century. Here we present a new approach to fill this gap. A set of 26 atmospheric circulation weather types (Trigo, R.M. and DaCamara, C.C., 2000), derived from a recent SLP dataset of the EMULATE (European and North Atlantic daily to multidecadal climate variability) project, was used to reconstruct Iberian monthly precipitation from October to March during 1851-1947. Principal component regression analysis was chosen to develop the monthly precipitation reconstruction back to 1851, calibrated over the 1948-2003 period for the 3030 monthly precipitation series of the high-density homogenized MOPREDAS (Monthly Precipitation Database for Spain and Portugal) database. Validation was conducted over 1920-1947 at 15 key site locations. Results show high model performance for the selected months, with a mean coefficient of variation (CV) around 0.6 during the validation period. Lower CV values were achieved in the western area of the IP. Trigo, R. M., and DaCamara, C.C., 2000: "Circulation weather types and their impact on the precipitation regime in Portugal". Int. J. Climatol., 20, 1559-1581.

  12. Accurate determination of non-metallic impurities in high purity tetramethylammonium hydroxide using inductively coupled plasma tandem mass spectrometry

    Science.gov (United States)

    Fu, Liang; Xie, Hualin; Shi, Shuyun; Chen, Xiaoqing

    2018-06-01

The content of non-metallic impurities in high-purity tetramethylammonium hydroxide (HPTMAH) aqueous solution has an important influence on the yield, electrical properties and reliability of integrated circuits during chip etching and cleaning. Therefore, an efficient analytical method to directly quantify the content of non-metallic impurities in HPTMAH aqueous solutions is necessary. The present study aimed to develop a novel method that can accurately determine seven non-metallic impurities (B, Si, P, S, Cl, As, and Se) in an aqueous solution of HPTMAH by inductively coupled plasma tandem mass spectrometry (ICP-MS/MS). The samples were measured using a direct injection method. In the MS/MS mode, oxygen and hydrogen were used as reaction gases in the octopole reaction system (ORS) to eliminate mass spectral interferences during the analytical process. The detection limits of B, Si, P, S, Cl, As, and Se were 0.31, 0.48, 0.051, 0.27, 3.10, 0.008, and 0.005 μg L-1, respectively. The samples were analyzed by the developed method, and sector field inductively coupled plasma mass spectrometry (SF-ICP-MS) was used for comparative analysis. The values of these seven elements measured using ICP-MS/MS were consistent with those measured by SF-ICP-MS. The proposed method can be utilized to analyze non-metallic impurities in HPTMAH aqueous solution.

  13. A standardized framework for accurate, high-throughput genotyping of recombinant and non-recombinant viral sequences.

    Science.gov (United States)

    Alcantara, Luiz Carlos Junior; Cassol, Sharon; Libin, Pieter; Deforche, Koen; Pybus, Oliver G; Van Ranst, Marc; Galvão-Castro, Bernardo; Vandamme, Anne-Mieke; de Oliveira, Tulio

    2009-07-01

Human immunodeficiency virus type-1 (HIV-1), hepatitis B and C and other rapidly evolving viruses are characterized by extremely high levels of genetic diversity. To facilitate diagnosis and the development of prevention and treatment strategies that efficiently target the diversity of these viruses, and other pathogens such as human T-lymphotropic virus type-1 (HTLV-1), human herpes virus type-8 (HHV8) and human papillomavirus (HPV), we developed a rapid high-throughput genotyping system. The method involves the alignment of a query sequence with a carefully selected set of pre-defined reference strains, followed by phylogenetic analysis of multiple overlapping segments of the alignment using a sliding window. Each segment of the query sequence is assigned the genotype and sub-genotype of the reference strain with the highest bootstrap (>70%) and bootscanning (>90%) scores. Results from all windows are combined and displayed graphically using color-coded genotypes. The new Virus-Genotyping Tools provide accurate classification of recombinant and non-recombinant viruses and are currently being assessed for their diagnostic utility. They have been incorporated into several HIV drug resistance algorithms, including the Stanford database (http://hivdb.stanford.edu) and two European databases (http://www.umcutrecht.nl/subsite/spread-programme/ and http://www.hivrdb.org.uk/), and have been successfully used to genotype a large number of sequences in these and other databases. The tools are PHP/Java web applications and are freely accessible on a number of servers, including: http://bioafrica.mrc.ac.za/rega-genotype/html/, http://lasp.cpqgm.fiocruz.br/virus-genotype/html/, http://jose.med.kuleuven.be/genotypetool/html/.
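The sliding-window assignment at the heart of the method can be sketched in a few lines. The real tool builds a phylogenetic tree per window and requires bootstrap and bootscanning support; plain mismatch counting and the toy sequences below are deliberate simplifications.

```python
# Sliding-window genotype assignment sketch: label each window of the
# query with the closest reference over that window. Sequences are toys;
# the real tool uses per-window phylogenies with bootstrap thresholds.
def window_genotypes(query, refs, win=6, step=3):
    calls = []
    for start in range(0, len(query) - win + 1, step):
        q = query[start:start + win]
        def mismatches(name):
            return sum(a != b for a, b in zip(q, refs[name][start:start + win]))
        calls.append(min(refs, key=mismatches))
    return calls

refs = {"A": "AAAAAAAAAAAA", "B": "CCCCCCCCCCCC"}
query = "AAAAAACCCCCC"   # toy recombinant: A-like left half, B-like right
calls = window_genotypes(query, refs)
```

A change in the per-window call along the sequence, as in this toy recombinant, is exactly the signal that flags a recombination breakpoint.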

  14. Reconstruction of high temporal resolution Thomson scattering data during a modulated electron cyclotron resonance heating using conditional averaging

    International Nuclear Information System (INIS)

    Kobayashi, T.; Yoshinuma, M.; Ohdachi, S.; Ida, K.; Itoh, K.; Moon, C.; Yamada, I.; Funaba, H.; Yasuhara, R.; Tsuchiya, H.; Yoshimura, Y.; Igami, H.; Shimozuma, T.; Kubo, S.; Tsujimura, T. I.; Inagaki, S.

    2016-01-01

    This paper provides a software application of the sampling scope concept for fusion research. The time evolution of Thomson scattering data is reconstructed with a high temporal resolution during a modulated electron cyclotron resonance heating (MECH) phase. The amplitude profile and the delay time profile of the heat pulse propagation are obtained from the reconstructed signal for discharges having on-axis and off-axis MECH depositions. The results are found to be consistent with the MECH deposition.
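The sampling-scope idea can be demonstrated numerically: samples taken far more slowly than the modulation can still resolve one modulation cycle at high effective time resolution, provided each sample is folded onto its phase within the (known) modulation period and the phase bins are averaged over many cycles. All numbers below are synthetic illustrations, not LHD data.

```python
import numpy as np

# Conditional averaging / "sampling scope": fold slow samples of a
# periodically modulated signal onto the modulation phase and average.
period = 1.0                                  # modulation period (s), known
t = np.arange(0.0, 200.0, 0.0317)             # slow, incommensurate sampling
rng = np.random.default_rng(1)
sig = np.sin(2 * np.pi * t / period) + 0.05 * rng.standard_normal(t.size)

nbins = 50
phase = t % period                            # position within one cycle
bins = np.minimum((phase / period * nbins).astype(int), nbins - 1)
recon = np.array([sig[bins == b].mean() for b in range(nbins)])
```

Averaging ~126 samples per phase bin suppresses the noise while the effective time resolution (period / nbins) is far finer than the raw sampling interval.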

  15. Reconstruction of high temporal resolution Thomson scattering data during a modulated electron cyclotron resonance heating using conditional averaging

    Energy Technology Data Exchange (ETDEWEB)

    Kobayashi, T., E-mail: kobayashi.tatsuya@LHD.nifs.ac.jp; Yoshinuma, M.; Ohdachi, S. [National Institute for Fusion Science, Toki 509-5292 (Japan); SOKENDAI (The Graduate University for Advanced Studies), Toki 509-5292 (Japan); Ida, K. [National Institute for Fusion Science, Toki 509-5292 (Japan); SOKENDAI (The Graduate University for Advanced Studies), Toki 509-5292 (Japan); Research Center for Plasma Turbulence, Kyushu University, Kasuga 816-8580 (Japan); Itoh, K. [National Institute for Fusion Science, Toki 509-5292 (Japan); Research Center for Plasma Turbulence, Kyushu University, Kasuga 816-8580 (Japan); Moon, C.; Yamada, I.; Funaba, H.; Yasuhara, R.; Tsuchiya, H.; Yoshimura, Y.; Igami, H.; Shimozuma, T.; Kubo, S.; Tsujimura, T. I. [National Institute for Fusion Science, Toki 509-5292 (Japan); Inagaki, S. [Research Center for Plasma Turbulence, Kyushu University, Kasuga 816-8580 (Japan); Research Institute for Applied Mechanics, Kyushu University, Kasuga 816-8580 (Japan)

    2016-04-15

    This paper provides a software application of the sampling scope concept for fusion research. The time evolution of Thomson scattering data is reconstructed with a high temporal resolution during a modulated electron cyclotron resonance heating (MECH) phase. The amplitude profile and the delay time profile of the heat pulse propagation are obtained from the reconstructed signal for discharges having on-axis and off-axis MECH depositions. The results are found to be consistent with the MECH deposition.

  16. Highly accelerated acquisition and homogeneous image reconstruction with rotating RF coil array at 7T-A phantom based study.

    Science.gov (United States)

    Li, Mingyan; Zuo, Zhentao; Jin, Jin; Xue, Rong; Trakic, Adnan; Weber, Ewald; Liu, Feng; Crozier, Stuart

    2014-03-01

    Parallel imaging (PI) is widely used to accelerate acquisition by exploiting the spatial sensitivities of phased array coils (PACs). By employing a time-division multiplexing technique, a single-channel rotating radiofrequency coil (RRFC) provides an alternative means of reducing scan time. Strategically combining these two concepts could provide enhanced acceleration and efficiency. In this work, the imaging acceleration capability and homogeneous image reconstruction strategy of a 4-element rotating radiofrequency coil array (RRFCA) were numerically investigated and experimentally validated at 7T with a homogeneous phantom. Each coil of the RRFCA was capable of acquiring a large number of sensitivity profiles, leading to better acceleration performance, as illustrated by geometry factor maps with lower maxima and more uniform distributions compared to 4- and 8-element stationary arrays. A reconstruction algorithm, rotating SENSitivity Encoding (rotating SENSE), was proposed for image reconstruction. Additionally, by optimally choosing the angular sampling positions and transmit profiles under the rotating scheme, phantom images could be faithfully reconstructed. The results indicate that the proposed technique provides homogeneous reconstructions with overall higher and more uniform signal-to-noise ratio (SNR) distributions at high reduction factors. It is hoped that, with the high imaging acceleration and homogeneous reconstruction capability of the RRFCA, the proposed method will facilitate human imaging at ultra-high-field MRI. Copyright © 2013 Elsevier Inc. All rights reserved.
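For context, conventional Cartesian SENSE unfolding, which rotating SENSE generalizes by using sensitivities acquired at many angular positions, can be sketched in a few lines. The 1D toy below, with made-up Gaussian coil sensitivities and a reduction factor R = 2, is an assumption-laden illustration, not the paper's algorithm.

```python
import numpy as np

def sense_unfold(aliased, sens, R):
    """Cartesian SENSE unfolding for reduction factor R.

    aliased : (n_coils, N//R) aliased coil images (1D for simplicity)
    sens    : (n_coils, N) coil sensitivity profiles
    Returns the unfolded length-N image.
    """
    n_coils, n_alias = aliased.shape
    N = sens.shape[1]
    out = np.zeros(N)
    for p in range(n_alias):
        pos = p + n_alias * np.arange(R)   # the R pixels folded onto p
        S = sens[:, pos]                   # (n_coils, R) encoding matrix
        out[pos] = np.linalg.lstsq(S, aliased[:, p], rcond=None)[0]
    return out

# Simulate: 4 coils, length-64 object, R = 2 undersampling (fold-over model).
rng = np.random.default_rng(1)
N, R = 64, 2
obj = rng.standard_normal(N)
x = np.arange(N)
centers = np.array([8, 24, 40, 56])
sens = np.exp(-(((x[None, :] - centers[:, None]) / 24.0) ** 2))
coil_imgs = sens * obj                                    # fully sampled coil images
aliased = coil_imgs[:, :N // R] + coil_imgs[:, N // R:]   # R = 2 fold-over
recon = sense_unfold(aliased, sens, R)
# recon matches obj wherever the coil sensitivities are well conditioned.
```

With more sensitivity profiles per physical coil (as the rotating array provides), the per-pixel encoding matrix gains rows, which is what lowers the geometry factor.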

  17. Resolution-recovery-embedded image reconstruction for a high-resolution animal SPECT system.

    Science.gov (United States)

    Zeraatkar, Navid; Sajedi, Salar; Farahani, Mohammad Hossein; Arabi, Hossein; Sarkar, Saeed; Ghafarian, Pardis; Rahmim, Arman; Ay, Mohammad Reza

    2014-11-01

    The small-animal High-Resolution SPECT (HiReSPECT) is a dedicated dual-head gamma camera recently designed and developed in our laboratory for imaging of murine models. Each detector is composed of an array of 1.2 × 1.2 mm² (pitch) pixelated CsI(Na) crystals. Two position-sensitive photomultiplier tubes (H8500) are coupled to each head's crystal. In this paper, we report on a resolution-recovery-embedded image reconstruction code applicable to the system and present the experimental results achieved using different phantoms and mouse scans. Collimator-detector response functions (CDRFs) were measured via a pixel-driven method using capillary sources at finite distances from the head within the field of view (FOV). CDRFs were then fitted by independent Gaussian functions. Thereafter, linear interpolations were applied to the standard deviation (σ) values of the fitted Gaussians, yielding a continuous map of CDRF at varying distances from the head. A rotation-based maximum-likelihood expectation maximization (MLEM) method was used for reconstruction. A fast rotation algorithm was developed to rotate the image matrix according to the desired angle by means of pre-generated rotation maps. The experiments demonstrated improved resolution utilizing our resolution-recovery-embedded image reconstruction. While the full-width at half-maximum (FWHM) radial and tangential resolution measurements of the system were over 2 mm in nearly all positions within the FOV without resolution recovery, reaching around 2.5 mm in some locations, they fell below 1.8 mm everywhere within the FOV using the resolution-recovery algorithm. The noise performance of the system was also acceptable; the standard deviation of the average counts per voxel in the reconstructed images was 6.6% and 8.3% without and with resolution recovery, respectively. Copyright © 2014 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
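The resolution-recovery idea, folding a distance-dependent Gaussian CDRF into the MLEM system model, can be sketched in 1D. The system matrix, the linear growth of σ with position, and the two-point phantom below are illustrative assumptions, not the HiReSPECT model.

```python
import numpy as np

def mlem(y, A, n_iter=500):
    """1D MLEM with the detector response (a Gaussian CDRF) folded into
    the system matrix A, so resolution recovery happens during fitting."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                  # sensitivity: backprojection of ones
    for _ in range(n_iter):
        proj = A @ x                      # forward projection
        proj[proj == 0] = 1e-12           # guard against division by zero
        x *= (A.T @ (y / proj)) / sens    # multiplicative EM update
    return x

# Gaussian system matrix whose width grows with position, loosely
# mimicking a distance-dependent collimator-detector response.
n = 64
pos = np.arange(n)
sigma = 1.0 + 0.02 * pos                  # σ linearly interpolated over the FOV
A = np.exp(-0.5 * ((pos[:, None] - pos[None, :]) / sigma[None, :]) ** 2)
A /= A.sum(axis=0, keepdims=True)         # normalize each source's response

truth = np.zeros(n)
truth[20], truth[45] = 100.0, 80.0        # two point sources
y = A @ truth                             # blurred, noiseless projection
x = mlem(y, A)
# The recovered peaks are far sharper than the blurred projection y.
```

Modeling σ as a fitted, interpolated function of distance is exactly what lets the same machinery be reused at every location in the FOV.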

  18. A Reconstruction Approach to High-Order Schemes Including Discontinuous Galerkin for Diffusion

    Science.gov (United States)

    Huynh, H. T.

    2009-01-01

    We introduce a new approach to high-order accuracy for the numerical solution of diffusion problems by solving the equations in differential form using a reconstruction technique. The approach has the advantages of simplicity and economy. It results in several new high-order methods including a simplified version of discontinuous Galerkin (DG). It also leads to new definitions of common value and common gradient quantities at each interface shared by the two adjacent cells. In addition, the new approach clarifies the relations among the various choices of new and existing common quantities. Fourier stability and accuracy analyses are carried out for the resulting schemes. Extensions to the case of quadrilateral meshes are obtained via tensor products. For the two-point boundary value problem (steady state), it is shown that these schemes, which include most popular DG methods, yield exact common interface quantities as well as exact cell average solutions for nearly all cases.

  19. Accuracy of applicator tip reconstruction in MRI-guided interstitial 192Ir-high-dose-rate brachytherapy of liver tumors

    International Nuclear Information System (INIS)

    Wybranski, Christian; Eberhardt, Benjamin; Fischbach, Katharina; Fischbach, Frank; Walke, Mathias; Hass, Peter; Röhl, Friedrich-Wilhelm; Kosiek, Ortrud; Kaiser, Mandy; Pech, Maciej; Lüdemann, Lutz; Ricke, Jens

    2015-01-01

    Background and purpose: To evaluate the reconstruction accuracy of brachytherapy (BT) applicator tips in vitro and in vivo in MRI-guided 192Ir high-dose-rate (HDR) BT of inoperable liver tumors. Materials and methods: The reconstruction accuracy of plastic BT applicators, visualized by nitinol inserts, was assessed in MRI phantom measurements and in MRI 192Ir-HDR-BT treatment planning datasets of 45 patients employing CT co-registration and vector decomposition. Conspicuity, short-term dislocation, and reconstruction errors were assessed in the clinical data. The clinical effect of applicator reconstruction accuracy was determined in follow-up MRI data. Results: Applicator reconstruction accuracy was 1.6 ± 0.5 mm in the phantom measurements. In the clinical MRI datasets applicator conspicuity was rated good/optimal in ⩾72% of cases. 16/129 applicators showed deviations between the MRI and CT acquisitions that were not time dependent (p > 0.1). Reconstruction accuracy was 5.5 ± 2.8 mm, and the average image co-registration error was 3.1 ± 0.9 mm. Vector decomposition revealed no preferred direction of the reconstruction errors. In the follow-up data the deviation between the planned dose distribution and the irradiation effect was 6.9 ± 3.3 mm, matching the mean co-registration error (6.5 ± 2.5 mm; p > 0.1). Conclusion: Applicator reconstruction accuracy in vitro conforms to the AAPM TG 56 standard. Nitinol inserts are feasible for applicator visualization and yield good conspicuity in MRI treatment planning data. No preferred direction of reconstruction errors was found in vivo.
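The vector-decomposition analysis can be illustrated with a toy calculation: given co-registered tip coordinates from MRI and CT, the per-applicator error vectors are decomposed into Cartesian components, and a mean error vector near zero indicates no preferred direction. All coordinates below are synthetic.

```python
import numpy as np

def error_decomposition(p_mri, p_ct):
    """Decompose applicator-tip reconstruction errors into Cartesian
    components. p_mri, p_ct: (n, 3) tip coordinates in a common frame
    (i.e. after co-registration), in mm."""
    e = p_mri - p_ct                      # per-applicator error vectors
    magnitude = np.linalg.norm(e, axis=1)
    mean_vec = e.mean(axis=0)             # ≈ 0 if no preferred direction
    return magnitude, mean_vec

rng = np.random.default_rng(2)
p_ct = rng.uniform(0, 100, (129, 3))                 # hypothetical tip positions
p_mri = p_ct + rng.normal(0, 3.0, (129, 3))          # isotropic ~3 mm errors
mag, mean_vec = error_decomposition(p_mri, p_ct)
# With isotropic errors, the mean vector stays small relative to the mean
# error magnitude, i.e. the errors show no preferred direction.
```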

  20. Return-to-activity after anatomical reconstruction of acute high-grade acromioclavicular separation.

    Science.gov (United States)

    Saier, T; Plath, J E; Beitzel, K; Minzlaff, P; Feucht, J M; Reuter, S; Martetschläger, F; Imhoff, Andreas B; Aboalata, M; Braun, S

    2016-04-02

    To evaluate return-to-activity (RtA) after anatomical reconstruction of acute high-grade acromioclavicular joint (ACJ) separation. A total of 42 patients with anatomical reconstruction of acute high-grade ACJ separation (Rockwood type V) were surveyed to determine RtA at a mean follow-up (f-u) of 31 months. Sports disciplines, intensity, level of competition, participation in overhead and/or contact sports, as well as activity scales (DASH Sport Module, Tegner Activity Scale) were evaluated. Functional outcome evaluation included the Constant score and QuickDASH. All patients (42/42) participated in sporting activities at f-u. Neither participation in overhead/contact sports nor level of activity declined significantly (n.s.). 62% of patients (n = 26) reported subjective sports-specific ACJ integrity to be at least the same as before the trauma. Sporting intensity (hours/week: 7.3 h to 5.4 h, p = .004) and level of competition (p = .02) were reduced. Where activity changed, in 50% of cases reasons other than clinical symptoms/impairment were given for the modified behavior. QuickDASH (mean 6, range 0-54, SD 11) and the DASH Sport Module (mean 6, range 0-56, SD 13) revealed only minor disabilities at f-u. Over time the Constant score improved significantly to an excellent level (mean 94, range 86-100, SD 4; p < .001). Functional outcome was not correlated with RtA (n.s.). All patients participated in sporting activities after anatomical reconstruction of high-grade (Rockwood type V) ACJ separation. With a high functional outcome, no significant change in activity level (Tegner) or participation in overhead and/or contact sports was observed. There was no correlation between functional outcome and RtA. As a limitation, reductions in time spent on sporting activities and in level of competition were observed, but in 50% of cases these were not related to ACJ symptoms/impairment. Unrelated to successfully re-established integrity and function of the ACJ it should be considered that

  1. Detector response restoration in image reconstruction of high resolution positron emission tomography

    International Nuclear Information System (INIS)

    Liang, Z.

    1994-01-01

    A mathematical method was studied to model the detector response of high-spatial-resolution positron emission tomography systems consisting of close-packed small crystals, and to restore the resolution deteriorated by crystal penetration and/or nonuniform sampling across the field of view (FOV). The simulated detector system had 600 bismuth germanate crystals of 3.14 mm width and 30 mm length packed on a single ring of 60 cm diameter. The space between crystals was filled with lead. Each crystal was in coincidence with 200 opposite crystals so that the FOV had a radius of 30 cm. The detector response was modeled based on the attenuating properties of the crystals and the septa, as well as the geometry of the detector system. The modeled detector-response function was used to restore the projections from the sinogram of the ring-detector system. The restored projections had a uniform sampling of 1.57 mm across the FOV. Crystal penetration and/or nonuniform sampling were compensated in the projections. A penalized maximum-likelihood algorithm was employed to accomplish the restoration. The restored projections were then filtered and backprojected to reconstruct the image. A chest phantom with a few small circular "cold" objects located at the center and near the periphery of the FOV was computer generated and used to test the restoration. The reconstructed images from the restored projections demonstrated resolution improvement away from the FOV center, while preserving the resolution near the center.

  2. High-speed computation of the EM algorithm for PET image reconstruction

    International Nuclear Information System (INIS)

    Rajan, K.; Patnaik, L.M.; Ramakrishna, J.

    1994-01-01

    PET image reconstruction based on the EM algorithm has several attractive advantages over conventional convolution backprojection algorithms. However, two major drawbacks have impeded its routine use: the long computational time due to slow convergence, and the large memory required for storage of the image, the projection data, and the probability matrix. In this study, the authors attempt to solve these two problems by parallelizing the EM algorithm on a multiprocessor system. The authors have implemented an extended hypercube (EH) architecture for high-speed computation of the EM algorithm using commercially available fast floating-point digital signal processor (DSP) chips as the processing elements (PEs). The authors discuss and compare the performance of the EM algorithm on a 386/387 machine, a CD 4360 mainframe, and the EH system. The results show that the computational speed of an EH system using DSP chips as PEs executing the EM image reconstruction algorithm is about 130 times better than that of the CD 4360 mainframe. The EH topology is expandable to a larger number of PEs.
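The property such a parallelization exploits is that the EM (MLEM) backprojection is a sum over projection bins, so it can be split across processing elements and the partial sums combined. A minimal NumPy sketch of that decomposition (plain arrays standing in for the DSP-based PEs; the chunking is illustrative, not the hypercube routing):

```python
import numpy as np

def em_update(x, y, A):
    """One MLEM update: x <- x * A^T(y / Ax) / A^T 1."""
    proj = A @ x
    proj = np.where(proj == 0, 1e-12, proj)   # guard against division by zero
    return x * (A.T @ (y / proj)) / A.sum(axis=0)

def em_update_split(x, y, A, n_pe=4):
    """The same update with the backprojection split row-wise across
    n_pe hypothetical processing elements; the partial sums are then
    combined, which is the decomposition a network of PEs exploits."""
    proj = A @ x
    proj = np.where(proj == 0, 1e-12, proj)
    ratio = y / proj
    chunks = np.array_split(np.arange(A.shape[0]), n_pe)
    back = sum(A[idx].T @ ratio[idx] for idx in chunks)   # one term per PE
    return x * back / A.sum(axis=0)

rng = np.random.default_rng(3)
A = rng.random((40, 16))             # toy probability matrix
y = A @ (rng.random(16) * 10)        # consistent projection data
x0 = np.ones(16)
x_serial = em_update(x0, y, A)
x_parallel = em_update_split(x0, y, A)
# Both updates agree; only the order of summation differs.
```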

  3. Simultaneous reconstruction of thermal degradation properties for anisotropic scattering fibrous insulation after high temperature thermal exposures

    International Nuclear Information System (INIS)

    Zhao, Shuyuan; Zhang, Wenjiao; He, Xiaodong; Li, Jianjun; Yao, Yongtao; Lin, Xiu

    2015-01-01

    To probe the thermal degradation behavior of fibrous insulation in long-term service, an inverse analysis model was developed to simultaneously reconstruct the thermal degradation properties of fibers after thermal exposures from experimental thermal response data, using the measured infrared spectral transmittance and X-ray phase analysis data as direct inputs. To account for the possible influence of fiber degradation after thermal exposure on conduction heat transfer, we introduced a new parameter in the thermal conductivity model. The effect of microstructures on the thermal degradation parameters was evaluated. It was found that after high temperature thermal exposure the decay rate of the radiation intensity passing through the material was weakened, and the probability of photons being scattered while traveling through the medium decreased. The fibrous medium scattered more radiation into the forward directions. The shortened heat transfer path due to possible mechanical degradation, along with the increased phonon mean free path caused by devitrification after severe heat treatment, made the coupled solid/gas thermal conductivities increase with the heat treatment temperature. - Highlights: • A new model is developed to probe conductive and radiative property degradation of fibers. • To characterize mechanical degradation, a new parameter is introduced in the model. • Thermal degradation properties are reconstructed from experiments by the L–M (Levenberg–Marquardt) algorithm. • The effect of microstructures on the thermal degradation parameters is evaluated. • The analysis provides a powerful tool to quantify thermal degradation of the fiber medium
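Reconstruction of parameters by the L–M algorithm named in the highlights can be sketched with a toy inverse problem; the forward model, parameter names, and noise level below are illustrative assumptions, not the authors' coupled conduction/radiation model.

```python
import numpy as np

def numerical_jacobian(f, p, eps=1e-6):
    """Forward-difference Jacobian of a residual function f at p."""
    r0 = f(p)
    J = np.empty((r0.size, p.size))
    for j in range(p.size):
        dp = p.copy()
        dp[j] += eps
        J[:, j] = (f(dp) - r0) / eps
    return J

def levenberg_marquardt(f, p0, n_iter=100, lam=1e-3):
    """Minimal Levenberg-Marquardt: damped Gauss-Newton steps with an
    adaptive damping factor lam."""
    p = np.asarray(p0, dtype=float)
    for _ in range(n_iter):
        r = f(p)
        J = numerical_jacobian(f, p)
        step = np.linalg.solve(J.T @ J + lam * np.eye(p.size), -J.T @ r)
        if np.sum(f(p + step) ** 2) < np.sum(r ** 2):
            p, lam = p + step, lam * 0.5   # accept step, trust the model more
        else:
            lam *= 2.0                     # reject step, damp harder
    return p

# Hypothetical forward model of a thermal response with an effective
# conductivity k and a degradation parameter d (illustrative only).
def forward(params, t):
    k, d = params
    return 1.0 - np.exp(-k * t) * (1.0 + d * t)

t = np.linspace(0.1, 5.0, 50)
rng = np.random.default_rng(4)
data = forward((0.8, 0.3), t) + 0.005 * rng.standard_normal(t.size)
p_hat = levenberg_marquardt(lambda p: forward(p, t) - data, [0.5, 0.1])
# p_hat recovers (k, d) close to the true (0.8, 0.3) from the noisy response.
```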

  4. A pseudo-discrete algebraic reconstruction technique (PDART) prior image-based suppression of high density artifacts in computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Pua, Rizza; Park, Miran; Wi, Sunhee; Cho, Seungryong, E-mail: scho@kaist.ac.kr

    2016-12-21

    We propose a hybrid metal artifact reduction (MAR) approach for computed tomography (CT) that is computationally more efficient than a fully iterative reconstruction method, but at the same time achieves superior image quality to the interpolation-based in-painting techniques. Our proposed MAR method, an image-based artifact subtraction approach, utilizes an intermediate prior image reconstructed via PDART to recover the background information underlying the high density objects. For comparison, prior images generated by total-variation minimization (TVM) algorithm, as a realization of fully iterative approach, were also utilized as intermediate images. From the simulation and real experimental results, it has been shown that PDART drastically accelerates the reconstruction to an acceptable quality of prior images. Incorporating PDART-reconstructed prior images in the proposed MAR scheme achieved higher quality images than those by a conventional in-painting method. Furthermore, the results were comparable to the fully iterative MAR that uses high-quality TVM prior images. - Highlights: • An accelerated reconstruction method, PDART, is proposed for exterior problems. • With a few iterations, soft prior image was reconstructed from the exterior data. • PDART framework has enabled an efficient hybrid metal artifact reduction in CT.
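The details of PDART are not given here, but the row-action flavor of algebraic reconstruction it builds on can be sketched with plain ART (Kaczmarz) on a toy consistent system; the matrices below are synthetic stand-ins for a projector.

```python
import numpy as np

def art(A, y, n_sweeps=200, relax=1.0):
    """Algebraic reconstruction technique (Kaczmarz): cyclically project
    the estimate onto each measurement hyperplane a_i . x = y_i."""
    x = np.zeros(A.shape[1])
    row_norm2 = np.einsum('ij,ij->i', A, A)   # ||a_i||^2 per row
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            x += relax * (y[i] - A[i] @ x) / row_norm2[i] * A[i]
    return x

rng = np.random.default_rng(5)
A = rng.standard_normal((30, 10))   # toy, consistent "projection" system
truth = rng.random(10)
y = A @ truth
x = art(A, y)
# For a consistent system, the sweeps converge to the exact solution.
```

A few such sweeps from exterior projection data is the kind of cheap prior-image generation the hybrid MAR scheme relies on.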

  5. Low- Versus High-Intensity Plyometric Exercise During Rehabilitation After Anterior Cruciate Ligament Reconstruction.

    Science.gov (United States)

    Chmielewski, Terese L; George, Steven Z; Tillman, Susan M; Moser, Michael W; Lentz, Trevor A; Indelicato, Peter A; Trumble, Troy N; Shuster, Jonathan J; Cicuttini, Flavia M; Leeuwenburgh, Christiaan

    2016-03-01

    Plyometric exercise is used during rehabilitation after anterior cruciate ligament (ACL) reconstruction to facilitate the return to sports participation. However, clinical outcomes have not been examined, and high loads on the lower extremity could be detrimental to knee articular cartilage. To compare the immediate effect of low- and high-intensity plyometric exercise during rehabilitation after ACL reconstruction on knee function, articular cartilage metabolism, and other clinically relevant measures. Randomized controlled trial; Level of evidence, 2. Twenty-four patients who underwent unilateral ACL reconstruction (mean, 14.3 weeks after surgery; range, 12.1-17.7 weeks) were assigned to 8 weeks (16 visits) of low- or high-intensity plyometric exercise consisting of running, jumping, and agility activities. Groups were distinguished by the expected magnitude of vertical ground-reaction forces. Testing was conducted before and after the intervention. Primary outcomes were self-reported knee function (International Knee Documentation Committee [IKDC] subjective knee form) and a biomarker of articular cartilage degradation (urine concentrations of crosslinked C-telopeptide fragments of type II collagen [uCTX-II]). Secondary outcomes included additional biomarkers of articular cartilage metabolism (urinary concentrations of the neoepitope of type II collagen cleavage at the C-terminal three-quarter-length fragment [uC2C], serum concentrations of the C-terminal propeptide of newly formed type II collagen [sCPII]) and inflammation (tumor necrosis factor-α), functional performance (maximal vertical jump and single-legged hop), knee impairments (anterior knee laxity, average knee pain intensity, normalized quadriceps strength, quadriceps symmetry index), and psychosocial status (kinesiophobia, knee activity self-efficacy, pain catastrophizing). The change in each measure was compared between groups. Values before and after the intervention were compared with the groups

  6. Maximum likelihood phylogenetic reconstruction from high-resolution whole-genome data and a tree of 68 eukaryotes.

    Science.gov (United States)

    Lin, Yu; Hu, Fei; Tang, Jijun; Moret, Bernard M E

    2013-01-01

    The rapid accumulation of whole-genome data has renewed interest in the study of the evolution of genomic architecture under events such as rearrangements, duplications, and losses. Comparative genomics, evolutionary biology, and cancer research all require tools to elucidate the mechanisms, history, and consequences of those evolutionary events, while phylogenetics could use whole-genome data to enhance its picture of the Tree of Life. Current approaches in the area of phylogenetic analysis are limited to very small collections of closely related genomes using low-resolution data (typically a few hundred syntenic blocks); moreover, these approaches typically do not include duplication and loss events. We describe a maximum likelihood (ML) approach for phylogenetic analysis that takes into account genome rearrangements as well as duplications, insertions, and losses. Our approach can handle high-resolution genomes (with 40,000 or more markers) and can use genomes with very different numbers of markers in the same analysis. Because our approach uses a standard ML reconstruction program (RAxML), it scales up to large trees. We present the results of extensive testing on both simulated and real data showing that our approach returns very accurate results very quickly. In particular, we analyze a dataset of 68 high-resolution eukaryotic genomes, with 3,000 to 42,000 genes, from the eGOB database; the analysis, including bootstrapping, takes just 3 hours on a desktop system and returns a tree in agreement with all well-supported branches, while also suggesting resolutions for some disputed placements.

  7. Task-based statistical image reconstruction for high-quality cone-beam CT

    Science.gov (United States)

    Dang, Hao; Webster Stayman, J.; Xu, Jennifer; Zbijewski, Wojciech; Sisniega, Alejandro; Mow, Michael; Wang, Xiaohui; Foos, David H.; Aygun, Nafi; Koliatsos, Vassilis E.; Siewerdsen, Jeffrey H.

    2017-11-01

    Task-based analysis of medical imaging performance underlies many ongoing efforts in the development of new imaging systems. In statistical image reconstruction, regularization is often formulated in terms that encourage smoothness and/or sharpness (e.g. a linear, quadratic, or Huber penalty) but without explicit formulation of the task. We propose an alternative regularization approach in which a spatially varying penalty is determined that maximizes task-based imaging performance at every location in a 3D image. We apply the method to model-based image reconstruction (MBIR, viz. penalized weighted least-squares, PWLS) in cone-beam CT (CBCT) of the head, focusing on the task of detecting a small, low-contrast intracranial hemorrhage (ICH), and we test the performance of the algorithm in the context of a recently developed CBCT prototype for point-of-care imaging of brain injury. Theoretical predictions of local spatial resolution and noise are computed via an optimization in which regularization (specifically, the quadratic penalty strength) is allowed to vary throughout the image to maximize the local task-based detectability index (d′). Simulation studies and test-bench experiments were performed using an anthropomorphic head phantom. Three PWLS implementations were tested: a conventional (constant) penalty; a certainty-based penalty derived to enforce a constant point-spread function (PSF); and the task-based penalty derived to maximize local detectability at each location. Conventional (constant) regularization exhibited a fairly strong degree of spatial variation in d′, and the certainty-based method achieved a uniform PSF, but each exhibited a reduction in detectability compared to the task-based method, which improved detectability by up to ~15%. The improvement was strongest in areas of high attenuation (skull base), where the conventional and certainty-based methods tended to over-smooth the data. The task-driven reconstruction method presents a
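The contrast between a constant and a spatially varying quadratic penalty can be sketched with a toy 1D PWLS problem. The closed-form solve and the hand-picked penalty map below are illustrative assumptions, not the paper's detectability-driven optimization.

```python
import numpy as np

def pwls(A, y, w, beta):
    """Penalized weighted least-squares with a first-difference roughness
    penalty whose strength beta may vary spatially; a scalar beta gives
    the conventional, spatially constant penalty."""
    n = A.shape[1]
    D = np.diff(np.eye(n), axis=0)                       # first differences
    beta = np.broadcast_to(np.asarray(beta, float), (n - 1,))
    R = D.T @ (beta[:, None] * D)                        # roughness operator
    W = np.diag(w)
    return np.linalg.solve(A.T @ W @ A + R, A.T @ W @ y)

rng = np.random.default_rng(6)
n = 40
A = np.eye(n)                                    # identity system: denoising
truth = np.where(np.arange(n) < 20, 1.0, 3.0)    # low-contrast step "lesion"
y = truth + 0.2 * rng.standard_normal(n)
x_const = pwls(A, y, np.ones(n), 5.0)            # constant penalty everywhere
beta_map = np.full(n - 1, 5.0)
beta_map[15:25] = 0.1                            # relax the penalty near the edge
x_task = pwls(A, y, np.ones(n), beta_map)        # spatially varying penalty
# Relaxing the penalty locally preserves the step (better local detectability)
# while smoothing the background as strongly as the constant penalty does.
```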

  8. Improving High-Throughput Sequencing Approaches for Reconstructing the Evolutionary Dynamics of Upper Paleolithic Human Groups

    DEFF Research Database (Denmark)

    Seguin-Orlando, Andaine

    …the development and testing of innovative molecular approaches aiming at improving the amount of informative HTS data one can recover from ancient DNA extracts. We have characterized important ligation and amplification biases in the sequencing library building and enrichment steps, which can impede further… …been mainly driven by the development of High-Throughput DNA Sequencing (HTS) technologies, but also by the implementation of novel molecular tools tailored to the manipulation of ultra-short and damaged DNA molecules. Our ability to retrieve traces of genetic material has tremendously improved, pushing… …that impact on the overall efficacy of the method. In a second part, we implemented some of these molecular tools in the processing of five Upper Paleolithic human samples from the Kostenki and Sunghir sites in Western Eurasia, in order to reconstruct the deep genomic history of European populations…

  9. Registration-based approach for reconstruction of high-resolution in utero fetal MR brain images.

    Science.gov (United States)

    Rousseau, Francois; Glenn, Orit A; Iordanova, Bistra; Rodriguez-Carranza, Claudia; Vigneron, Daniel B; Barkovich, James A; Studholme, Colin

    2006-09-01

    This paper describes a novel approach to forming high-resolution MR images of the human fetal brain. It addresses the key problem of fetal motion by proposing a registration-refined compounding of multiple sets of orthogonal fast two-dimensional MRI slices, which are currently acquired for clinical studies, into a single high-resolution MRI volume. A robust multiresolution slice alignment is applied iteratively to the data to correct motion of the fetus that occurs between two-dimensional acquisitions. This is combined with an intensity correction step and a super-resolution reconstruction step, to form a single high isotropic resolution volume of the fetal brain. Experimental validation on synthetic image data with known motion types and underlying anatomy, together with retrospective application to sets of clinical acquisitions, is included. Results indicate that this method promises a unique route to acquiring high-resolution MRI of the fetal brain in vivo allowing comparable quality to that of neonatal MRI. Such data provide a highly valuable window into the process of normal and abnormal brain development, which is directly applicable in a clinical setting.
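The compounding step can be reduced to a toy 1D shift-and-add: once the motion is known (here assumed exactly, standing in for the registration step), several shifted low-resolution acquisitions interleave onto a single high-resolution grid. This is a sketch of the principle only, not the paper's slice-to-volume pipeline.

```python
import numpy as np

def shift_and_add(lowres_stacks, shifts, factor):
    """Toy super-resolution compounding: place each low-resolution sample
    back on the high-resolution grid at its (known, registration-corrected)
    shift, and average wherever samples coincide."""
    n_hi = lowres_stacks.shape[1] * factor
    acc = np.zeros(n_hi)
    hits = np.zeros(n_hi)
    for stack, s in zip(lowres_stacks, shifts):
        idx = np.arange(stack.size) * factor + s   # positions on the HR grid
        acc[idx] += stack
        hits[idx] += 1
    hits[hits == 0] = 1                            # leave unsampled bins at 0
    return acc / hits

# High-resolution 1D "anatomy", sampled by `factor` shifted low-res scans.
factor = 4
hi = np.sin(np.linspace(0, 4 * np.pi, 128))
stacks = np.stack([hi[s::factor] for s in range(factor)])   # shifted decimations
recon = shift_and_add(stacks, shifts=range(factor), factor=factor)
# The interleaved stacks exactly tile the high-resolution grid.
```

In the real problem the shifts are unknown and must be estimated by the iterative slice alignment, and the forward model also includes slice-profile blurring, which is why a full super-resolution solver is needed.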

  10. Low Overpotential and High Current CO2 Reduction with Surface Reconstructed Cu Foam Electrodes.

    KAUST Repository

    Min, Shixiong

    2016-06-23

    While recent reports have demonstrated that oxide-derived Cu-based electrodes exhibit high selectivity for CO2 reduction at low overpotential, the low catalytic current density (<2 mA/cm2 at -0.45 V vs. RHE) still largely limits their application to large-scale fuel synthesis. Here we report an extremely high current density for CO2 reduction at low overpotential using a Cu foam electrode prepared by air-oxidation and subsequent electroreduction. Apart from possessing three-dimensional (3D) open frameworks, the resulting Cu foam electrodes prepared at higher temperatures exhibit enhanced electrochemically active surface area and distinct surface structures. In particular, the Cu foam electrode prepared at 500 °C exhibits an extremely high geometric current density of ~9.4 mA/cm2 in CO2-saturated 0.1 M KHCO3 aqueous solution, achieving ~39% CO and ~23% HCOOH Faradaic efficiencies at -0.45 V vs. RHE. The high activity and significant selectivity enhancement are attributable to the formation of abundant grain-boundary-supported active sites and preferable (100) and (111) facets resulting from reconstruction of the Cu surface facets. This work demonstrates that the structural integration of Cu foam with open 3D frameworks and favorable surface structures is a promising strategy for developing an advanced Cu electrocatalyst that can operate at high current density and low overpotential for CO2 reduction.

  11. Isotope specific resolution recovery image reconstruction in high resolution PET imaging

    OpenAIRE

    Kotasidis Fotis A.; Angelis Georgios I.; Anton-Rodriguez Jose; Matthews Julian C.; Reader Andrew J.; Zaidi Habib

    2014-01-01

    Purpose: Measuring and incorporating a scanner-specific point spread function (PSF) within image reconstruction has been shown to improve spatial resolution in PET. However, due to the short half-life of clinically used isotopes, other long-lived isotopes not used in clinical practice are used to perform the PSF measurements. As such, non-optimal PSF models that do not correspond to those needed for the data to be reconstructed are used within resolution modeling (RM) image reconstruction, usuall...

  12. A distributed multi-GPU system for high speed electron microscopic tomographic reconstruction

    International Nuclear Information System (INIS)

    Zheng, Shawn Q.; Branlund, Eric; Kesthelyi, Bettina; Braunfeld, Michael B.; Cheng, Yifan; Sedat, John W.; Agard, David A.

    2011-01-01

    Full resolution electron microscopic tomographic (EMT) reconstruction of large-scale tilt series requires significant computing power. The desire to perform multiple cycles of iterative reconstruction and realignment dramatically increases the pressing need to improve reconstruction performance. This has motivated us to develop a distributed multi-GPU (graphics processing unit) system to provide the required computing power for rapid constrained, iterative reconstructions of very large three-dimensional (3D) volumes. The participating GPUs reconstruct segments of the volume in parallel, and subsequently, the segments are assembled to form the complete 3D volume. Owing to its power and versatility, the CUDA (NVIDIA, USA) platform was selected for GPU implementation of the EMT reconstruction. For a system containing 10 GPUs provided by 5 GTX295 cards, 10 cycles of SIRT reconstruction for a tomogram of 4096² × 512 voxels from an input tilt series containing 122 projection images of 4096² pixels (single precision float) takes a total of 1845 s, of which 1032 s are for computation with the remainder being system overhead. The same system takes only 39 s in total to reconstruct 1024² × 256 voxels from 122 1024²-pixel projections. While the system overhead is non-trivial, performance analysis indicates that adding extra GPUs to the system would lead to steadily enhanced overall performance. Therefore, this system can be easily expanded to generate superior computing power for very large tomographic reconstructions and especially to empower iterative cycles of reconstruction and realignment. -- Highlights: → A distributed multi-GPU system has been developed for electron microscopic tomography (EMT). → This system allows for rapid constrained, iterative reconstruction of very large volumes. → This system can be easily expanded to generate superior computing power for large-scale iterative EMT realignment.
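The decomposition the system exploits is that backprojection is independent per output voxel, so the volume can be split into segments, one per GPU, and the per-segment results assembled afterwards. A NumPy sketch of that equivalence (toy matrices, not CUDA):

```python
import numpy as np

def backproject_segment(A, y, cols):
    """Backproject the data y into one contiguous segment of the volume
    (a subset of columns of the system matrix A); each GPU can compute
    such a segment independently."""
    return A[:, cols].T @ y

rng = np.random.default_rng(7)
A = rng.random((100, 64))                     # toy projector: 100 rays, 64 voxels
y = rng.random(100)                           # toy projection data
segments = np.array_split(np.arange(64), 10)  # e.g. 10 participating GPUs
assembled = np.concatenate([backproject_segment(A, y, s) for s in segments])
# Assembling the per-segment results reproduces the full backprojection A^T y.
```

Each SIRT cycle alternates a forward projection, which needs the full current volume and hence a per-cycle assembly, with this segment-parallel backprojection; that assembly is part of the system overhead the abstract mentions.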

  13. A distributed multi-GPU system for high speed electron microscopic tomographic reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Zheng, Shawn Q.; Branlund, Eric; Kesthelyi, Bettina; Braunfeld, Michael B.; Cheng, Yifan; Sedat, John W. [The Howard Hughes Medical Institute and the W.M. Keck Advanced Microscopy Laboratory, Department of Biochemistry and Biophysics, University of California, San Francisco, 600, 16th Street, Room S412D, CA 94158-2517 (United States); Agard, David A., E-mail: agard@msg.ucsf.edu [The Howard Hughes Medical Institute and the W.M. Keck Advanced Microscopy Laboratory, Department of Biochemistry and Biophysics, University of California, San Francisco, 600, 16th Street, Room S412D, CA 94158-2517 (United States)

    2011-07-15

    Full resolution electron microscopic tomographic (EMT) reconstruction of large-scale tilt series requires significant computing power. The desire to perform multiple cycles of iterative reconstruction and realignment dramatically increases the pressing need to improve reconstruction performance. This has motivated us to develop a distributed multi-GPU (graphics processing unit) system to provide the required computing power for rapid constrained, iterative reconstructions of very large three-dimensional (3D) volumes. The participating GPUs reconstruct segments of the volume in parallel, and subsequently, the segments are assembled to form the complete 3D volume. Owing to its power and versatility, the CUDA (NVIDIA, USA) platform was selected for GPU implementation of the EMT reconstruction. For a system containing 10 GPUs provided by 5 GTX295 cards, 10 cycles of SIRT reconstruction for a tomogram of 4096² × 512 voxels from an input tilt series containing 122 projection images of 4096² pixels (single precision float) takes a total of 1845 s, of which 1032 s are for computation with the remainder being system overhead. The same system takes only 39 s in total to reconstruct 1024² × 256 voxels from 122 1024²-pixel projections. While the system overhead is non-trivial, performance analysis indicates that adding extra GPUs to the system would lead to steadily enhanced overall performance. Therefore, this system can be easily expanded to generate superior computing power for very large tomographic reconstructions and especially to empower iterative cycles of reconstruction and realignment. -- Highlights: → A distributed multi-GPU system has been developed for electron microscopic tomography (EMT). → This system allows for rapid constrained, iterative reconstruction of very large volumes. → This system can be easily expanded to generate superior computing power for large-scale iterative EMT realignment.

  14. Rapid Classification and Identification of Multiple Microorganisms with Accurate Statistical Significance via High-Resolution Tandem Mass Spectrometry.

    Science.gov (United States)

    Alves, Gelio; Wang, Guanghui; Ogurtsov, Aleksey Y; Drake, Steven K; Gucek, Marjan; Sacks, David B; Yu, Yi-Kuo

    2018-06-05

    Rapid and accurate identification and classification of microorganisms is of paramount importance to public health and safety. With the advance of mass spectrometry (MS) technology, the speed of identification can be greatly improved. However, the increasing number of microbes sequenced is complicating correct microbial identification even in a simple sample due to the large number of candidates present. To properly untwine candidate microbes in samples containing one or more microbes, one needs to go beyond apparent morphology or simple "fingerprinting"; to correctly prioritize the candidate microbes, one needs to have accurate statistical significance in microbial identification. We meet these challenges by using peptide-centric representations of microbes to better separate them and by augmenting our earlier analysis method that yields accurate statistical significance. Here, we present an updated analysis workflow that uses tandem MS (MS/MS) spectra for microbial identification or classification. We have demonstrated, using 226 MS/MS publicly available data files (each containing from 2500 to nearly 100,000 MS/MS spectra) and 4000 additional MS/MS data files, that the updated workflow can correctly identify multiple microbes at the genus and often the species level for samples containing more than one microbe. We have also shown that the proposed workflow computes accurate statistical significances, i.e., E values for identified peptides and unified E values for identified microbes. Our updated analysis workflow MiCId, a freely available software for Microorganism Classification and Identification, is available for download at https://www.ncbi.nlm.nih.gov/CBBresearch/Yu/downloads.html .
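
The E-value idea used for prioritizing candidates can be sketched with a toy calculation (the exponential null model and all numbers below are entirely hypothetical; MiCId's actual statistics are far more involved): the E-value is the expected number of random matches at or above a score, i.e. the candidate count times the null survival probability.

```python
import numpy as np

def e_value(score, n_candidates, null_rate=1.0):
    """Toy E-value: E = N * P(S >= s) under a hypothetical
    exponential null score distribution P(S >= s) = exp(-rate * s)."""
    return n_candidates * np.exp(-null_rate * score)

# A hit with score 15 searched against one million candidate peptides:
e = e_value(score=15.0, n_candidates=1_000_000)
```

Small E-values (well below 1) indicate matches unlikely to arise by chance, which is what lets identified peptides, and in turn microbes, be ranked with a defensible significance.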

  15. Regularization design for high-quality cone-beam CT of intracranial hemorrhage using statistical reconstruction

    Science.gov (United States)

    Dang, H.; Stayman, J. W.; Xu, J.; Sisniega, A.; Zbijewski, W.; Wang, X.; Foos, D. H.; Aygun, N.; Koliatsos, V. E.; Siewerdsen, J. H.

    2016-03-01

    Intracranial hemorrhage (ICH) is associated with pathologies such as hemorrhagic stroke and traumatic brain injury. Multi-detector CT is the current front-line imaging modality for detecting ICH (fresh blood contrast 40-80 HU, down to 1 mm). Flat-panel detector (FPD) cone-beam CT (CBCT) offers a potential alternative with a smaller scanner footprint, greater portability, and lower cost potentially well suited to deployment at the point of care outside standard diagnostic radiology and emergency room settings. Previous studies have suggested reliable detection of ICH down to 3 mm in CBCT using high-fidelity artifact correction and penalized weighted least-squares (PWLS) image reconstruction with a post-artifact-correction noise model. However, ICH reconstructed by traditional image regularization exhibits nonuniform spatial resolution and noise due to interaction between the statistical weights and regularization, which potentially degrades the detectability of ICH. In this work, we propose three regularization methods designed to overcome these challenges. The first two compute spatially varying certainty for uniform spatial resolution and noise, respectively. The third computes spatially varying regularization strength to achieve uniform "detectability," combining both spatial resolution and noise in a manner analogous to a delta-function detection task. Experiments were conducted on a CBCT test-bench, and image quality was evaluated for simulated ICH in different regions of an anthropomorphic head. The first two methods improved the uniformity in spatial resolution and noise compared to traditional regularization. The third exhibited the highest uniformity in detectability among all methods and best overall image quality. The proposed regularization provides a valuable means to achieve uniform image quality in CBCT of ICH and is being incorporated in a CBCT prototype for ICH imaging.
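
The interaction between statistical weights and the regularizer can be sketched with a 1D PWLS toy (gradient descent with a step size chosen below the stability limit; `pwls_denoise_1d` and all parameter values are illustrative, not the paper's 3D CBCT implementation). The spatially varying `beta` array is where a certainty-based regularization design would plug in:

```python
import numpy as np

def pwls_denoise_1d(y, w, beta, n_iters=500, step=0.05):
    """Penalized weighted least-squares (PWLS) denoising in 1D.

    Minimizes  sum_j w_j (x_j - y_j)^2 + sum_j beta_j (x_{j+1} - x_j)^2
    by gradient descent. Spatially varying statistical weights w_j
    interact with the penalty, motivating spatially varying beta_j.
    """
    x = y.copy()
    for _ in range(n_iters):
        grad = 2.0 * w * (x - y)            # data-fidelity term
        d = np.diff(x)                      # finite differences x_{j+1}-x_j
        grad[:-1] += -2.0 * beta * d        # d/dx_j of beta_j * d_j^2
        grad[1:]  +=  2.0 * beta * d        # d/dx_{j+1} of beta_j * d_j^2
        x = x - step * grad
    return x

rng = np.random.default_rng(1)
truth = np.where(np.arange(64) < 32, 0.0, 1.0)      # step edge "lesion"
w = np.where(np.arange(64) < 32, 1.0, 4.0)          # nonuniform weights
y = truth + 0.2 * rng.standard_normal(64)
beta = np.full(63, 2.0)                             # uniform penalty here
x = pwls_denoise_1d(y, w, beta)
```

With uniform `beta`, the higher-weight half is smoothed less than the lower-weight half; the certainty-based designs in the abstract adjust `beta` spatially to equalize resolution or noise.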

  16. Development of a salt-tolerant interface for a high performance liquid chromatography/inductively coupled plasma mass spectrometry system and its application to accurate quantification of DNA samples.

    Science.gov (United States)

    Takasaki, Yuka; Sakagawa, Shinnosuke; Inagaki, Kazumi; Fujii, Shin-Ichiro; Sabarudin, Akhmad; Umemura, Tomonari; Haraguchi, Hiroki

    2012-02-03

    Accurate quantification of DNA is highly important in various fields. Determination of phosphorus by ICP-MS is one of the most effective methods for accurate quantification of DNA due to the fixed stoichiometry of phosphate in this molecule. In this paper, a smart and reliable method for accurate quantification of DNA fragments and oligodeoxythymidylic acids by hyphenated HPLC/ICP-MS equipped with a highly efficient interface device is presented. The interface was constructed from a home-made capillary-attached micronebulizer and a temperature-controllable cyclonic spray chamber (IsoMist). As the separation column for DNA samples, a home-made methacrylate-based weak anion-exchange monolith was employed. Parameters including the composition of the mobile phase, the gradient program, the inner and outer diameters of the capillary, and the temperature of the spray chamber were optimized for the best separation and most accurate quantification of DNA samples. The proposed system offers several advantages, such as total sample consumption for the analysis of small sample amounts, salt tolerance for hyphenated analysis, and high accuracy and precision in quantitative analysis. Using the proposed system, samples of a 20 bp DNA ladder (20, 40, 60, 80, 100, 120, 140, 160, 180, 200, 300, 400, 500 base pairs) and oligodeoxythymidylic acids (dT(12-18)) were rapidly separated and accurately quantified. Copyright © 2011 Elsevier B.V. All rights reserved.

  17. Phylogeographic reconstruction of a bacterial species with high levels of lateral gene transfer

    Science.gov (United States)

    Pearson, T.; Giffard, P.; Beckstrom-Sternberg, S.; Auerbach, R.; Hornstra, H.; Tuanyok, A.; Price, E.P.; Glass, M.B.; Leadem, B.; Beckstrom-Sternberg, J. S.; Allan, G.J.; Foster, J.T.; Wagner, D.M.; Okinaka, R.T.; Sim, S.H.; Pearson, O.; Wu, Z.; Chang, J.; Kaul, R.; Hoffmaster, A.R.; Brettin, T.S.; Robison, R.A.; Mayo, M.; Gee, J.E.; Tan, P.; Currie, B.J.; Keim, P.

    2009-01-01

    Background: Phylogeographic reconstruction of some bacterial populations is hindered by low diversity coupled with high levels of lateral gene transfer. A comparison of recombination levels and diversity at seven housekeeping genes for eleven bacterial species, most of which are commonly cited as having high levels of lateral gene transfer, shows that the relative contribution of homologous recombination versus mutation for Burkholderia pseudomallei is over two times higher than for Streptococcus pneumoniae and is thus the highest value yet reported in bacteria. Despite the potential for homologous recombination to increase diversity, B. pseudomallei exhibits a relative lack of diversity at these loci. In these situations, whole-genome genotyping of orthologous shared single nucleotide polymorphism loci, discovered using next-generation sequencing technologies, can provide very large data sets capable of estimating core phylogenetic relationships. We compared and searched 43 whole genome sequences of B. pseudomallei and its closest relatives for single nucleotide polymorphisms in orthologous shared regions to use in phylogenetic reconstruction. Results: Bayesian phylogenetic analyses of >14,000 single nucleotide polymorphisms yielded completely resolved trees for these 43 strains with high levels of statistical support. These results enable a better understanding of a separate analysis of population differentiation among >1,700 B. pseudomallei isolates as defined by sequence data from seven housekeeping genes. We analyzed this larger data set for population structure and allele sharing that can be attributed to lateral gene transfer. Our results suggest that despite an almost panmictic population, we can detect two distinct populations of B. pseudomallei that conform to biogeographic patterns found in many plant and animal species: separation along Wallace's Line, a biogeographic boundary between Southeast Asia and Australia. Conclusion: We describe an

  18. Phylogeographic reconstruction of a bacterial species with high levels of lateral gene transfer

    Directory of Open Access Journals (Sweden)

    Kaul Rajinder

    2009-11-01

    Full Text Available Abstract Background Phylogeographic reconstruction of some bacterial populations is hindered by low diversity coupled with high levels of lateral gene transfer. A comparison of recombination levels and diversity at seven housekeeping genes for eleven bacterial species, most of which are commonly cited as having high levels of lateral gene transfer, shows that the relative contribution of homologous recombination versus mutation for Burkholderia pseudomallei is over two times higher than for Streptococcus pneumoniae and is thus the highest value yet reported in bacteria. Despite the potential for homologous recombination to increase diversity, B. pseudomallei exhibits a relative lack of diversity at these loci. In these situations, whole-genome genotyping of orthologous shared single nucleotide polymorphism loci, discovered using next-generation sequencing technologies, can provide very large data sets capable of estimating core phylogenetic relationships. We compared and searched 43 whole genome sequences of B. pseudomallei and its closest relatives for single nucleotide polymorphisms in orthologous shared regions to use in phylogenetic reconstruction. Results Bayesian phylogenetic analyses of >14,000 single nucleotide polymorphisms yielded completely resolved trees for these 43 strains with high levels of statistical support. These results enable a better understanding of a separate analysis of population differentiation among >1,700 B. pseudomallei isolates as defined by sequence data from seven housekeeping genes. We analyzed this larger data set for population structure and allele sharing that can be attributed to lateral gene transfer. Our results suggest that despite an almost panmictic population, we can detect two distinct populations of B. pseudomallei that conform to biogeographic patterns found in many plant and animal species: separation along Wallace's Line, a biogeographic boundary between Southeast Asia and Australia.

  19. Real-time image reconstruction and display system for MRI using a high-speed personal computer.

    Science.gov (United States)

    Haishi, T; Kose, K

    1998-09-01

    A real-time NMR image reconstruction and display system was developed using a high-speed personal computer and optimized for the 32-bit multitasking Microsoft Windows 95 operating system. The system was operated at various CPU clock frequencies by changing the motherboard clock frequency and the processor/bus frequency ratio. When the Pentium CPU was used at the 200 MHz clock frequency, the reconstruction time for one 128 x 128 pixel image was 48 ms and that for the image display on the enlarged 256 x 256 pixel window was about 8 ms. NMR imaging experiments were performed with three fast imaging sequences (FLASH, multishot EPI, and one-shot EPI) to demonstrate the ability of the real-time system. It was concluded that in most cases, a high-speed PC would be the best choice for the image reconstruction and display system for real-time MRI. Copyright 1998 Academic Press.

  20. Combination of various data analysis techniques for efficient track reconstruction in very high multiplicity events

    Science.gov (United States)

    Siklér, Ferenc

    2017-08-01

    A novel combination of established data analysis techniques for reconstructing charged particles in high energy collisions is proposed. It uses all information available in a collision event while keeping competing choices open as long as possible. Suitable track candidates are selected by transforming measured hits to a binned, three- or four-dimensional track parameter space. This is accomplished by the use of templates that take advantage of the translational and rotational symmetries of the detectors. Track candidates and their corresponding hits (the nodes) form a usually highly connected network, a bipartite graph, in which multiple hit-to-track assignments (edges) are allowed. In order to obtain a manageable problem, the graph is cut into very many minigraphs by removing a few of its vulnerable components (edges and nodes). Finally, the hits are distributed among the track candidates by exploring a deterministic decision tree. A depth-limited search is performed, maximizing the number of hits on tracks and minimizing the sum of track-fit χ2. Simplified but realistic models of LHC silicon trackers, including the relevant physics processes, are used to test and study the performance (efficiency, purity, timing) of the proposed method in the case of single or many simultaneous proton-proton collisions (high pileup), and for single heavy-ion collisions at the highest available energies.
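
The binned parameter-space candidate selection step can be sketched in 2D with straight-line tracks y = m·x + c (a Hough/template-style vote; the two tracks, bin ranges, and hit positions below are invented for illustration, far simpler than the three- or four-dimensional space of the paper):

```python
import numpy as np

# Hits from two straight "tracks" y = m*x + c at unit x spacing.
tracks = [(0.5, 1.0), (-0.25, 3.0)]            # (slope, intercept) pairs
xs = np.arange(1.0, 9.0)
hits = [(x, m * x + c) for (m, c) in tracks for x in xs]

m_bins = np.linspace(-1.0, 1.0, 41)            # bin centers for slope
c_bins = np.linspace(-5.0, 5.0, 81)            # bin centers for intercept
votes = np.zeros((len(m_bins), len(c_bins)), dtype=int)

# Each hit votes, for every slope bin, in the intercept bin it implies.
for (x, y) in hits:
    for i, m in enumerate(m_bins):
        c = y - m * x                          # intercept implied by (hit, slope)
        j = np.argmin(np.abs(c_bins - c))
        if abs(c_bins[j] - c) <= (c_bins[1] - c_bins[0]) / 2:
            votes[i, j] += 1

# Track candidates: bins whose vote count reaches the hits-per-track count.
cand = np.argwhere(votes >= len(xs))
```

In the full method these candidates and their hits then form the bipartite graph on which the decision-tree assignment operates.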

  1. A distributed multi-GPU system for high speed electron microscopic tomographic reconstruction.

    Science.gov (United States)

    Zheng, Shawn Q; Branlund, Eric; Kesthelyi, Bettina; Braunfeld, Michael B; Cheng, Yifan; Sedat, John W; Agard, David A

    2011-07-01

    Full resolution electron microscopic tomographic (EMT) reconstruction of large-scale tilt series requires significant computing power. The desire to perform multiple cycles of iterative reconstruction and realignment dramatically increases the pressing need to improve reconstruction performance. This has motivated us to develop a distributed multi-GPU (graphics processing unit) system to provide the required computing power for rapid constrained, iterative reconstructions of very large three-dimensional (3D) volumes. The participating GPUs reconstruct segments of the volume in parallel, and subsequently, the segments are assembled to form the complete 3D volume. Owing to its power and versatility, the CUDA (NVIDIA, USA) platform was selected for GPU implementation of the EMT reconstruction. For a system containing 10 GPUs provided by 5 GTX295 cards, 10 cycles of SIRT reconstruction for a tomogram of 4096² × 512 voxels from an input tilt series containing 122 projection images of 4096² pixels (single precision float) takes a total of 1845 s, of which 1032 s are for computation with the remainder being the system overhead. The same system takes only 39 s total to reconstruct 1024² × 256 voxels from 122 1024² pixel projections. While the system overhead is non-trivial, performance analysis indicates that adding extra GPUs to the system would lead to steadily enhanced overall performance. Therefore, this system can be easily expanded to generate superior computing power for very large tomographic reconstructions and especially to empower iterative cycles of reconstruction and realignment. Copyright © 2011 Elsevier B.V. All rights reserved.

  2. Flip-avoiding interpolating surface registration for skull reconstruction.

    Science.gov (United States)

    Xie, Shudong; Leow, Wee Kheng; Lee, Hanjing; Lim, Thiam Chye

    2018-03-30

    Skull reconstruction is an important and challenging task in craniofacial surgery planning, forensic investigation and anthropological studies. Existing methods typically reconstruct approximating surfaces that regard corresponding points on the target skull as soft constraints, thus incurring non-zero error even for non-defective parts and high overall reconstruction error. This paper proposes a novel geometric reconstruction method that non-rigidly registers an interpolating reference surface that regards corresponding target points as hard constraints, thus achieving low reconstruction error. To overcome the shortcoming of interpolating a surface, a flip-avoiding method is used to detect and exclude conflicting hard constraints that would otherwise cause surface patches to flip and self-intersect. Comprehensive test results show that our method is more accurate and robust than existing skull reconstruction methods. By incorporating symmetry constraints, it can produce more symmetric and normal results than other methods in reconstructing defective skulls with a large number of defects. It is robust against severe outliers such as radiation artifacts in computed tomography due to dental implants. In addition, test results also show that our method outperforms thin-plate spline for model resampling, which enables the active shape model to yield more accurate reconstruction results. As the reconstruction accuracy of defective parts varies with the use of different reference models, we also study the implication of reference model selection for skull reconstruction. Copyright © 2018 John Wiley & Sons, Ltd.

  3. Accurate measurement of junctional conductance between electrically coupled cells with dual whole-cell voltage-clamp under conditions of high series resistance.

    Science.gov (United States)

    Hartveit, Espen; Veruki, Margaret Lin

    2010-03-15

    Accurate measurement of the junctional conductance (G(j)) between electrically coupled cells can provide important information about the functional properties of coupling. With the development of tight-seal, whole-cell recording, it became possible to use dual, single-electrode voltage-clamp recording from pairs of small cells to measure G(j). Experiments that require reduced perturbation of the intracellular environment can be performed with high-resistance pipettes or the perforated-patch technique, but an accompanying increase in series resistance (R(s)) compromises voltage-clamp control and reduces the accuracy of G(j) measurements. Here, we present a detailed analysis of methodologies available for accurate determination of steady-state G(j) and related parameters under conditions of high R(s), using continuous or discontinuous single-electrode voltage-clamp (CSEVC or DSEVC) amplifiers to quantify the parameters of different equivalent electrical circuit model cells. Both types of amplifiers can provide accurate measurements of G(j), with errors less than 5% for a wide range of R(s) and G(j) values. However, CSEVC amplifiers need to be combined with R(s)-compensation or mathematical correction for the effects of nonzero R(s) and finite membrane resistance (R(m)). R(s)-compensation is difficult for higher values of R(s) and leads to instability that can damage the recorded cells. Mathematical correction for R(s) and R(m) yields highly accurate results, but depends on accurate estimates of R(s) throughout an experiment. DSEVC amplifiers display very accurate measurements over a larger range of R(s) values than CSEVC amplifiers and have the advantage that knowledge of R(s) is unnecessary, suggesting that they are preferable for long-duration experiments and/or recordings with high R(s). Copyright (c) 2009 Elsevier B.V. All rights reserved.
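
The mathematical correction for nonzero R(s) and finite R(m) can be sketched on a passive two-cell equivalent circuit (NumPy; the parameter values are illustrative and the correction below is a sketch of the idea, not the authors' exact formulas):

```python
import numpy as np

def simulate_pair(U1, U2, Rs1, Rs2, Rm1, Rm2, Gj):
    """Steady-state pipette currents for two electrically coupled cells
    under dual whole-cell voltage clamp (passive equivalent circuit).

    Node equations for the true membrane potentials V1, V2:
        (U1 - V1)/Rs1 = V1/Rm1 + Gj*(V1 - V2)
        (U2 - V2)/Rs2 = V2/Rm2 + Gj*(V2 - V1)
    """
    A = np.array([[1/Rs1 + 1/Rm1 + Gj, -Gj],
                  [-Gj, 1/Rs2 + 1/Rm2 + Gj]])
    V1, V2 = np.linalg.solve(A, np.array([U1 / Rs1, U2 / Rs2]))
    return (U1 - V1) / Rs1, (U2 - V2) / Rs2     # pipette currents I1, I2

Rs, Rm, Gj_true = 20e6, 500e6, 2e-9             # 20 MOhm, 500 MOhm, 2 nS
dU = 10e-3                                      # 10 mV step applied to cell 1
I1, I2 = simulate_pair(dU, 0.0, Rs, Rs, Rm, Rm, Gj_true)

# Naive estimate ignoring Rs (holding both cells at 0 mV gives zero
# baseline currents in this passive model): underestimates the true Gj.
Gj_naive = -I2 / dU

# Correction: recover the membrane potentials from the measured currents
# and known Rs, then isolate the junctional current Ij = Gj*(V1 - V2).
V1 = dU - I1 * Rs
V2 = 0.0 - I2 * Rs
Ij = V2 / Rm - I2                               # from cell 2's node equation
Gj_corr = Ij / (V1 - V2)
```

This shows why the correction requires accurate R(s) and R(m) estimates: both enter directly in the recovery of V1, V2 and of the junctional current.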

  4. Characterization of a high resolution and high sensitivity pre-clinical PET scanner with 3D event reconstruction

    CERN Document Server

    Rissi, M; Bolle, E; Dorholt, O; Hines, K E; Rohne, O; Skretting, A; Stapnes, S; Volgyes, D

    2012-01-01

    COMPET is a preclinical PET scanner aiming towards a high sensitivity, a high resolution and MRI compatibility by implementing a novel detector geometry. In this approach, long scintillating LYSO crystals are used to absorb the gamma-rays. To determine the point of interaction (P01) between gamma-ray and crystal, the light exiting the crystals on one of the long sides is collected with wavelength shifters (WLS) perpendicularly arranged to the crystals. This concept has two main advantages: (1) The parallax error is reduced to a minimum and is equal for the whole field of view (FOV). (2) The P01 and its energy deposit is known in all three dimension with a high resolution, allowing for the reconstruction of Compton scattered gamma-rays. Point (1) leads to a uniform point source resolution (PSR) distribution over the whole FOV, and also allows to place the detector close to the object being imaged. Both points (1) and (2) lead to an increased sensitivity and allow for both high resolution and sensitivity at the...

  5. Radio reconstruction of the mass of ultra-high cosmic rays

    Energy Technology Data Exchange (ETDEWEB)

    Dorosti, Qader [Institut fuer Kernphysik (IKP), KIT (Germany)

    2015-07-01

    Detection of ultra-high energy cosmic rays can reveal the processes of the most violent sources in the Universe, which have yet to be identified. Interaction of cosmic rays with the Earth's atmosphere results in cascades of secondary particles, i.e. air showers. Many of these particles are electrons and positrons, which interact with the geomagnetic field and induce radio emission. Detection of air showers along with detection of the induced radio emission can furnish a precise measurement of the direction, energy and mass of ultra-high energy cosmic rays. The Auger Engineering Radio Array consists of 124 radio stations measuring radio emission from air showers in order to reconstruct the energy, direction and mass of cosmic rays. In this contribution, we present a method which employs a reduced hyperbolic model to describe the shape of the radio wave front. We have found that the parameters of the reduced hyperbolic model are sensitive to the mass of cosmic rays. The obtained results are presented in this talk.
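
The wavefront-fitting idea can be sketched with the simpler planar (rather than reduced hyperbolic) model, where arrival times are linear in the station positions and the fit is ordinary least squares (the station layout and shower geometry below are invented for illustration):

```python
import numpy as np

C = 299792458.0                          # speed of light, m/s

# Ground-plane station positions (m). Plane-wave arrival model:
#   t_i = t0 + p . x_i,  with horizontal slowness
#   p = sin(theta)/C * (cos(phi), sin(phi)).
rng = np.random.default_rng(2)
stations = rng.uniform(-500.0, 500.0, size=(20, 2))
theta, phi = np.radians(35.0), np.radians(120.0)   # zenith, azimuth
p_true = np.sin(theta) / C * np.array([np.cos(phi), np.sin(phi)])
t0_true = 1e-6
times = t0_true + stations @ p_true

# Linear least squares for (t0, px, py) from the station arrival times.
M = np.column_stack([np.ones(len(stations)), stations])
coef, *_ = np.linalg.lstsq(M, times, rcond=None)
t0_fit, p_fit = coef[0], coef[1:]

# Arrival direction recovered from the fitted slowness vector.
theta_fit = np.arcsin(np.clip(np.linalg.norm(p_fit) * C, -1.0, 1.0))
phi_fit = np.arctan2(p_fit[1], p_fit[0])
```

The reduced hyperbolic model in the abstract adds curvature terms to this planar description; it is those extra shape parameters that carry the mass sensitivity.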

  6. A new iterative reconstruction technique for attenuation correction in high-resolution positron emission tomography

    International Nuclear Information System (INIS)

    Knesaurek, K.; Machac, J.; Vallabhajosula, S.; Buchsbaum, M.S.

    1996-01-01

    A new iterative reconstruction technique (NIRT) for positron emission computed tomography (PET), which uses transmission data for nonuniform attenuation correction, is described. Utilizing general inverse problem theory, a cost functional which includes a noise term was derived. The cost functional was minimized using a weighted-least-squares maximum a posteriori conjugate gradient (CG) method. The procedure involves a change in the Hessian of the cost function by adding an additional term. Two phantoms were used in a real data acquisition. The first was a cylinder phantom filled with uniformly distributed activity of 74 MBq of fluorine-18. Two different inserts were placed in the phantom. The second was a Hoffman brain phantom filled with uniformly distributed activity of 7.4 MBq of 18F. The resulting reconstructed images were used to test and compare the new iterative reconstruction technique with a standard filtered backprojection (FBP) method. The results confirmed that NIRT, based on the conjugate gradient method, converges rapidly and provides good reconstructed images. In comparison with standard results obtained by the FBP method, the images reconstructed by NIRT showed better noise properties. The noise was measured as rms% noise and was less, by a factor of 1.75, in images reconstructed by NIRT than in the same images reconstructed by FBP. The distance between the Hoffman brain slice created from the MRI image and the corresponding slice reconstructed by FBP was 0.526, while the same distance for the slice reconstructed by NIRT was 0.328. The NIRT method suppressed the propagation of noise without visible loss of resolution in the reconstructed PET images. (orig.)

  7. A stable and high-order accurate discontinuous Galerkin based splitting method for the incompressible Navier-Stokes equations

    Science.gov (United States)

    Piatkowski, Marian; Müthing, Steffen; Bastian, Peter

    2018-03-01

    In this paper we consider discontinuous Galerkin (DG) methods for the incompressible Navier-Stokes equations in the framework of projection methods. In particular we employ symmetric interior penalty DG methods within the second-order rotational incremental pressure correction scheme. The major focus of the paper is threefold: i) We propose a modified upwind scheme based on the Vijayasundaram numerical flux that has favourable properties in the context of DG. ii) We present a novel postprocessing technique in the Helmholtz projection step based on H(div) reconstruction of the pressure correction that is computed locally, is a projection in the discrete setting and ensures that the projected velocity satisfies the discrete continuity equation exactly. As a consequence it also provides local mass conservation of the projected velocity. iii) Numerical results demonstrate the properties of the scheme for different polynomial degrees applied to two-dimensional problems with known solution as well as large-scale three-dimensional problems. In particular we address second-order convergence in time of the splitting scheme as well as its long-time stability.

  8. Fast and accurate methods for the performance testing of highly-efficient c-Si photovoltaic modules using a 10 ms single-pulse solar simulator and customized voltage profiles

    International Nuclear Information System (INIS)

    Virtuani, A; Rigamonti, G; Friesen, G; Chianese, D; Beljean, P

    2012-01-01

    Performance testing of highly efficient, highly capacitive c-Si modules with pulsed solar simulators requires particular care. These devices usually require a steady-state solar simulator or pulse durations longer than 100–200 ms in order to avoid measurement artifacts. The aim of this work was to validate an alternative method for the testing of highly capacitive c-Si modules using a 10 ms single-pulse solar simulator. Our approach attempts to reconstruct a quasi-steady-state I–V (current–voltage) curve of a highly capacitive device during one single 10 ms flash by applying customized voltage profiles, in place of a conventional V ramp, to the terminals of the device under test. The most promising results were obtained by using V profiles which we name 'dragon-back' (DB) profiles. When compared to the reference I–V measurement (obtained by using a multi-flash approach with approximately 20 flashes), the DB V-profile method provides excellent results, with differences in the estimation of Pmax (as well as of Isc, Voc and FF) below ±0.5%. For the testing of highly capacitive devices the method is accurate, fast (two flashes, possibly one, required), cost-effective and has proven its validity with several technologies, making it particularly interesting for in-line testing. (paper)

  9. Accurate mass measurements of very short-lived nuclei. Prerequisites for high-accuracy investigations of superallowed β-decays

    International Nuclear Information System (INIS)

    Herfurth, F.; Kellerbauer, A.; Sauvan, E.; Ames, F.; Engels, O.; Audi, G.; Lunney, D.; Beck, D.; Blaum, K.; Kluge, H.J.; Scheidenberger, C.; Sikler, G.; Weber, C.; Bollen, G.; Schwarz, S.; Moore, R.B.; Oinonen, M.

    2002-01-01

    Mass measurements of 34Ar, 73-78Kr, and 74,76Rb were performed with the Penning-trap mass spectrometer ISOLTRAP. Very accurate Q_EC-values are needed for investigations of the Ft-values of 0+ → 0+ nuclear β-decays used to test the standard model predictions for weak interactions. The necessary accuracy on the Q_EC-value requires the masses of mother and daughter nuclei to be measured with δm/m ≤ 3 × 10^-8. For most of the measured nuclides presented here this has been reached. The 34Ar mass has been measured with a relative accuracy of 1.1 × 10^-8. The Q_EC-value of the 34Ar 0+ → 0+ decay can now be determined with an uncertainty of about 0.01%. Furthermore, 74Rb is the shortest-lived nuclide ever investigated in a Penning trap. (orig.)

  10. Accurate dipole moment curve and non-adiabatic effects on the high resolution spectroscopic properties of the LiH molecule

    Science.gov (United States)

    Diniz, Leonardo G.; Kirnosov, Nikita; Alijah, Alexander; Mohallem, José R.; Adamowicz, Ludwik

    2016-04-01

    A very accurate dipole moment curve (DMC) for the ground X1Σ+ electronic state of the 7LiH molecule is reported. It is calculated with the use of all-particle explicitly correlated Gaussian functions with shifted centers. The DMC (the most accurate to our knowledge) and the corresponding highly accurate potential energy curve are used to calculate the transition energies, the transition dipole moments, and the Einstein coefficients for the rovibrational transitions with ΔJ = -1 and Δv ≤ 5. The importance of non-adiabatic effects in determining these properties is evaluated using the model of a vibrational R-dependent effective reduced mass in the rovibrational calculations introduced earlier (Diniz et al., 2015). The results of the present calculations are used to assess the quality of the two complete linelists of 7LiH available in the literature.

  11. High resolution electron exit wave reconstruction from a diffraction pattern using Gaussian basis decomposition

    International Nuclear Information System (INIS)

    Borisenko, Konstantin B; Kirkland, Angus I

    2014-01-01

    We describe an algorithm to reconstruct the electron exit wave of a weak-phase object from a single diffraction pattern. The algorithm uses analytic formulations describing the diffraction intensities through a representation of the object exit wave in a Gaussian basis. The reconstruction is achieved by solving an overdetermined system of non-linear equations using an easily parallelisable global multi-start search with Levenberg-Marquardt optimisation and analytic derivatives.

  12. Bracing of the Reconstructed and Osteoarthritic Knee during High Dynamic Load Tasks.

    Science.gov (United States)

    Hart, Harvi F; Crossley, Kay M; Collins, Natalie J; Ackland, David C

    2017-06-01

    Lateral compartment osteoarthritis accompanied by abnormal knee biomechanics is frequently reported in individuals with knee osteoarthritis after anterior cruciate ligament reconstruction (ACLR). The aim of this study was to evaluate changes in knee biomechanics produced by an adjusted and an unadjusted varus knee brace during high dynamic loading activities in individuals with lateral knee osteoarthritis after ACLR and valgus malalignment. Nineteen participants who had undergone ACLR 5 to 20 yr previously and had symptomatic and radiographic lateral knee osteoarthritis with valgus malalignment were assessed. Quantitative motion analysis experiments were conducted during hopping, stair ascent, and stair descent under three test conditions: (i) no brace, (ii) unadjusted brace with sagittal plane support and neutral frontal plane alignment, and (iii) adjusted brace with sagittal plane support and varus realignment (valgus to neutral). Sagittal, frontal, and transverse plane knee kinematics, external joint moment, and angular impulse data were calculated. Relative to an unbraced knee, braced conditions significantly increased knee flexion and adduction angles during hopping (P = 0.003 and P = 0.005, respectively), stair ascent, and stair descent, and increased knee flexion moments during stair ascent (P = 0.008) and stair descent (P = 0.006). There were no significant differences between the adjusted and the unadjusted brace conditions (P > 0.05). A knee brace, with or without varus realignment, can modulate knee kinematics and external joint moments during hopping, stair ascent, and stair descent in individuals with predominant lateral knee osteoarthritis after ACLR. Longer-term use of a brace may have implications for slowing osteoarthritis progression.

  13. PET image reconstruction with rotationally symmetric polygonal pixel grid based highly compressible system matrix

    International Nuclear Information System (INIS)

    Yu Yunhan; Xia Yan; Liu Yaqiang; Wang Shi; Ma Tianyu; Chen Jing; Hong Baoyu

    2013-01-01

    To achieve a maximum compression of the system matrix in positron emission tomography (PET) image reconstruction, we proposed a polygonal image pixel division strategy in accordance with rotationally symmetric PET geometry. Geometrical definitions and indexing rules for polygonal pixels were established. Image conversion from the polygonal pixel structure to the conventional rectangular pixel structure was implemented using a conversion matrix. A set of test images were analytically defined in the polygonal pixel structure, converted to conventional rectangular pixel based images, and correctly displayed, which verified the correctness of the polygonal pixel definition and the conversion procedure. A compressed system matrix for PET image reconstruction was generated using the tap model and tested by forward-projecting three different distributions of radioactive sources to the sinogram domain and comparing them with theoretical predictions. On a practical small animal PET scanner, a compression ratio of 12.6:1 in system matrix size was achieved with the polygonal pixel structure, compared with the conventional rectangular pixel based tap-mode matrix. OS-EM iterative image reconstruction algorithms with the polygonal and conventional Cartesian pixel grids were developed. A hot-rod phantom was scanned and reconstructed on both grids at reasonable computational cost. The resolution of the reconstructed images was 1.35 mm in both cases. We conclude that it is feasible to reconstruct and display images in a polygonal image pixel structure based on a compressed system matrix in PET image reconstruction. (authors)
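
The OS-EM update used in reconstructions like this one has a compact general form; below is a minimal, grid-agnostic numpy sketch with a toy 4 x 2 system matrix (purely illustrative; it does not reproduce the authors' polygonal-grid or tap-model code):

```python
import numpy as np

def os_em(A, y, n_iter=20, n_subsets=2):
    """OS-EM sketch: A is the system matrix (detector bins x pixels),
    y the measured sinogram counts. Toy dimensions only."""
    n_bins, n_pix = A.shape
    x = np.ones(n_pix)                        # uniform initial image
    subsets = np.array_split(np.arange(n_bins), n_subsets)
    for _ in range(n_iter):
        for s in subsets:
            As = A[s]
            proj = As @ x                     # forward projection of current image
            ratio = np.where(proj > 0, y[s] / proj, 0.0)
            sens = As.sum(axis=0)             # subset sensitivity image
            x = np.where(sens > 0, x * (As.T @ ratio) / sens, x)
    return x

# Tiny toy problem: 4 detector bins, 2 pixels, known activity [3, 1].
A = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5], [0.2, 0.8]])
x_true = np.array([3.0, 1.0])
y = A @ x_true                                # noiseless "measured" data
x_hat = os_em(A, y, n_iter=20, n_subsets=2)
```

With noiseless, consistent data the toy problem converges to the true activity; a real reconstruction iterates over many subsets of a large sparse (here, compressed) system matrix.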

  14. Opera: reconstructing optimal genomic scaffolds with high-throughput paired-end sequences.

    Science.gov (United States)

    Gao, Song; Sung, Wing-Kin; Nagarajan, Niranjan

    2011-11-01

    Scaffolding, the problem of ordering and orienting contigs, typically using paired-end reads, is a crucial step in the assembly of high-quality draft genomes. Even as sequencing technologies and mate-pair protocols have improved significantly, scaffolding programs still rely on heuristics, with no guarantees on the quality of the solution. In this work, we explored the feasibility of an exact solution for scaffolding and present the first tractable solution for this problem (Opera). We also describe a graph contraction procedure that allows the solution to scale to large scaffolding problems and demonstrate this by scaffolding several large real and synthetic datasets. In comparisons with existing scaffolders, Opera simultaneously produced longer and more accurate scaffolds, demonstrating the utility of an exact approach. Opera also incorporates an exact quadratic programming formulation to precisely compute gap sizes (Availability: http://sourceforge.net/projects/operasf/ ).

  15. Improved electromagnetic tracking for catheter path reconstruction with application in high-dose-rate brachytherapy.

    Science.gov (United States)

    Lugez, Elodie; Sadjadi, Hossein; Joshi, Chandra P; Akl, Selim G; Fichtinger, Gabor

    2017-04-01

    Electromagnetic (EM) catheter tracking has recently been introduced in order to enable prompt and uncomplicated reconstruction of catheter paths in various clinical interventions. However, EM tracking is prone to measurement errors which can compromise the outcome of the procedure. Minimizing catheter tracking errors is therefore paramount to improve the path reconstruction accuracy. An extended Kalman filter (EKF) was employed to combine the nonlinear kinematic model of an EM sensor inside the catheter, with both its position and orientation measurements. The formulation of the kinematic model was based on the nonholonomic motion constraints of the EM sensor inside the catheter. Experimental verification was carried out in a clinical HDR suite. Ten catheters were inserted with mean curvatures varying from 0 to [Formula: see text] in a phantom. A miniaturized Ascension (Burlington, Vermont, USA) trakSTAR EM sensor (model 55) was threaded within each catheter at various speeds ranging from 7.4 to [Formula: see text]. The nonholonomic EKF was applied on the tracking data in order to statistically improve the EM tracking accuracy. A sample reconstruction error was defined at each point as the Euclidean distance between the estimated EM measurement and its corresponding ground truth. A path reconstruction accuracy was defined as the root mean square of the sample reconstruction errors, while the path reconstruction precision was defined as the standard deviation of these sample reconstruction errors. The impacts of sensor velocity and path curvature on the nonholonomic EKF method were determined. Finally, the nonholonomic EKF catheter path reconstructions were compared with the reconstructions provided by the manufacturer's filters under default settings, namely the AC wide notch and the DC adaptive filter. With a path reconstruction accuracy of 1.9 mm, the nonholonomic EKF surpassed the performance of the manufacturer's filters (2.4 mm) by 21% and the raw EM
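
The accuracy and precision metrics defined in this abstract (root mean square and standard deviation of the sample reconstruction errors) are straightforward to compute; the error values below are hypothetical, not the study's data:

```python
import numpy as np

# Hypothetical sample reconstruction errors (mm): Euclidean distances between
# each estimated EM measurement and its ground-truth position along a path.
errors = np.array([1.2, 2.1, 1.8, 0.9, 2.5, 1.6])

accuracy = np.sqrt(np.mean(errors**2))   # RMS of the sample errors
precision = np.std(errors)               # spread of the sample errors
```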

  16. Accurate prediction of retention in hydrophilic interaction chromatography by back calculation of high pressure liquid chromatography gradient profiles.

    Science.gov (United States)

    Wang, Nu; Boswell, Paul G

    2017-10-20

    Gradient retention times are difficult to project from the underlying retention factor (k) vs. solvent composition (φ) relationships. A major reason for this difficulty is that gradients produced by HPLC pumps are imperfect - gradient delay, gradient dispersion, and solvent mis-proportioning are all difficult to account for in calculations. However, we recently showed that a gradient "back-calculation" methodology can measure these imperfections and take them into account. In RPLC, when the back-calculation methodology was used, error in projected gradient retention times was as low as could be expected based on repeatability in the k vs. φ relationships. HILIC, however, presents a new challenge: the selectivity of HILIC columns drifts strongly over time. Retention is repeatable over the short term, but selectivity frequently drifts over the course of weeks. In this study, we set out to understand whether the issue of selectivity drift can be avoided by doing our experiments quickly, and whether there are any other factors that make it difficult to predict gradient retention times from isocratic k vs. φ relationships when gradient imperfections are taken into account with the back-calculation methodology. While in past reports the error in retention projections was >5%, the back-calculation methodology brought our error down to ∼1%. This result was 6-43 times more accurate than projections made using ideal gradients and 3-5 times more accurate than the same retention projections made using offset gradients (i.e., gradients that only took gradient delay into account). Still, the error remained higher in our HILIC projections than in RPLC. Based on the shape of the back-calculated gradients, we suspect the higher error is a result of prominent gradient distortion caused by strong, preferential water uptake from the mobile phase into the stationary phase during the gradient - a factor our model did not properly take into account. It appears that, at least with the stationary phase
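
Projecting a gradient retention time from an isocratic k vs. φ relationship amounts to solving the fundamental equation of gradient elution, ∫ dt / (t0·k(φ(t))) = 1. The sketch below uses an idealized linear gradient and a linear solvent strength (LSS) model with made-up parameters; the back-calculation methodology described above effectively replaces the ideal φ(t) with the measured one:

```python
import numpy as np

def gradient_retention_time(k_w, S, t0, phi0, phi_end, t_grad, dt=1e-4):
    """Numerically integrate dt / (t0 * k(phi(t))) until it reaches 1,
    using an LSS model ln k = ln k_w - S*phi and an ideal linear gradient.
    All parameters are hypothetical."""
    t, frac = 0.0, 0.0
    while frac < 1.0:
        phi = phi0 + (phi_end - phi0) * min(t / t_grad, 1.0)
        k = k_w * np.exp(-S * phi)
        frac += dt / (t0 * k)
        t += dt
    return t + t0          # add the dead time to get the elution time

# Hypothetical solute: k = 100 in pure weak solvent, S = 10, t0 = 1 min,
# linear gradient from 0 to 100% strong solvent over 20 min.
t_r = gradient_retention_time(k_w=100.0, S=10.0, t0=1.0,
                              phi0=0.0, phi_end=1.0, t_grad=20.0)
```

For these parameters the integral has the closed form t_r = 2·ln(51) + t0 ≈ 8.86 min, which the numerical solution reproduces.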

  17. Vacuum-assisted closure downgrades reconstructive demands in high-risk patients with severe lower extremity injuries.

    Science.gov (United States)

    Kakagia, D; Karadimas, E; Drosos, G; Ververidis, A; Kazakos, D; Lazarides, M; Verettas, D

    2009-01-01

    Primary soft tissue reconstruction in complex leg injuries is mandatory in order to protect exposed tissues; however, it may be precluded by the patient's clinical status or by local wound conditions. This retrospective study aims to evaluate the use of negative pressure as an adjunct to delayed soft tissue reconstruction in patients with complex lower limb trauma. Forty-two patients with 49 complex lower limb injuries were treated with vacuum-assisted closure (VAC) 48 hours after bone fixation, vascular repair and surgical debridement. Wound swab cultures were obtained before and after every VAC application. Duration of therapy, wound flora, final reconstructive technique required, outcome and follow-up period were retrieved from medical records. Twenty-four male and eighteen female patients were recruited, with a mean age of 47 years. All were treated with VAC therapy for 15-42 days. Reconstruction was delayed due to the patients' critical condition, advanced age, medical co-morbidities, heavily exuding wounds and questionable viability of soft tissues. Patients were followed up for 90-895 days. Two wounds healed spontaneously, 6 were managed with delayed direct suture, 31 with split thickness skin grafts and 9 required local cutaneous, fasciocutaneous or muscular flaps. One patient died due to fat embolism. Wound bacterial flora progressively decreased in all but one patient. Scar formation was aesthetically acceptable to the patients, while function depended on the initial injury. Negative pressure is a safe and effective adjunct to delayed soft tissue reconstruction in high-risk patients with severe lower extremity injuries, minimizing reconstructive requirements and therefore postoperative morbidity.

  18. Scalable implementations of accurate excited-state coupled cluster theories: application of high-level methods to porphyrin based systems

    Energy Technology Data Exchange (ETDEWEB)

    Kowalski, Karol; Krishnamoorthy, Sriram; Olson, Ryan M.; Tipparaju, Vinod; Apra, Edoardo

    2011-11-30

    The development of reliable tools for excited-state simulations is emerging as an extremely powerful computational chemistry capability for understanding complex processes in the broad class of light harvesting systems and optoelectronic devices. Over the last few years we have been developing equation of motion coupled cluster (EOMCC) methods capable of tackling these problems. In this paper we discuss the parallel performance of EOMCC codes which provide an accurate description of excited-state correlation effects. Two aspects are discussed in detail: (1) a new algorithm for the iterative EOMCC methods based on novel task scheduling algorithms, and (2) parallel algorithms for the non-iterative methods describing the effect of triply excited configurations. We demonstrate that the most computationally intensive non-iterative part can take advantage of 210,000 cores of the Cray XT5 system at OLCF. In particular, we demonstrate the importance of non-iterative many-body methods for achieving an experimental level of accuracy for several porphyrin-based systems.

  19. A Modified ELISA Accurately Measures Secretion of High Molecular Weight Hyaluronan (HA) by Graves' Disease Orbital Cells

    Science.gov (United States)

    Krieger, Christine C.

    2014-01-01

    Excess production of hyaluronan (hyaluronic acid [HA]) in the retro-orbital space is a major component of Graves' ophthalmopathy, and regulation of HA production by orbital cells is a major research area. In most previous studies, HA was measured by ELISAs that used HA-binding proteins for detection and rooster comb HA as standards. We show that the binding efficiency of HA-binding protein in the ELISA is a function of HA polymer size. Using gel electrophoresis, we show that HA secreted from orbital cells is primarily composed of polymers with molecular weights greater than 500,000. We modified a commercially available ELISA by using HA of 1 million molecular weight as the standard to accurately measure HA of this size. We demonstrated that IL-1β-stimulated HA secretion is at least 2-fold greater than previously reported, and that activation of the TSH receptor by M22, an activating antibody from a patient with Graves' disease, led to a more than 3-fold increase in HA production in both fibroblasts/preadipocytes and adipocytes. These effects were not consistently detected with the commercial ELISA using rooster comb HA as standard and suggest that fibroblasts/preadipocytes may play a more prominent role in HA remodeling in Graves' ophthalmopathy than previously appreciated. PMID:24302624

  20. A Novel Edge-Map Creation Approach for Highly Accurate Pupil Localization in Unconstrained Infrared Iris Images

    Directory of Open Access Journals (Sweden)

    Vineet Kumar

    2016-01-01

    Full Text Available Iris segmentation in the iris recognition systems is a challenging task under noncooperative environments. Iris segmentation is the process of detecting the pupil, the iris's outer boundary, and the eyelids in the iris image. In this paper, we propose a pupil localization method for locating the pupils in non-close-up and frontal-view iris images that are captured under near-infrared (NIR) illumination and contain noise such as specular and lighting reflection spots, eyeglasses, nonuniform illumination, low contrast, and occlusions by the eyelids, eyelashes, and eyebrow hair. In the proposed method, first, a novel edge-map is created from the iris image, based on combining the conventional thresholding and edge detection based segmentation techniques, and then the general circular Hough transform (CHT) is used to find the pupil circle parameters in the edge-map. Our main contribution in this research is a novel edge-map creation technique, which drastically reduces the false edges in the edge-map of the iris image and makes the pupil localization in noisy NIR images more accurate, fast, robust, and simple. The proposed method was tested with three iris databases: CASIA-Iris-Thousand (version 4.0), CASIA-Iris-Lamp (version 3.0), and MMU (version 2.0). The average accuracy of the proposed method is 99.72% and the average time cost per image is 0.727 sec.
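
The voting step of a general circular Hough transform can be sketched directly; this is a plain-numpy illustration on a synthetic edge-map (it does not reproduce the paper's edge-map creation technique):

```python
import numpy as np

def hough_circle(edge_points, shape, radii):
    """Vote in a (rows, cols, radii) accumulator for circle centers.
    edge_points: array of (y, x) edge pixels from a binary edge-map."""
    acc = np.zeros((shape[0], shape[1], len(radii)), dtype=int)
    thetas = np.linspace(0, 2 * np.pi, 180, endpoint=False)
    for y, x in edge_points:
        for ri, r in enumerate(radii):
            # Each edge pixel votes for all centers at distance r from it.
            cy = np.round(y - r * np.sin(thetas)).astype(int)
            cx = np.round(x - r * np.cos(thetas)).astype(int)
            ok = (cy >= 0) & (cy < shape[0]) & (cx >= 0) & (cx < shape[1])
            np.add.at(acc, (cy[ok], cx[ok], ri), 1)
    cy, cx, ri = np.unravel_index(acc.argmax(), acc.shape)
    return cy, cx, radii[ri]

# Synthetic "pupil": edge pixels on a circle of radius 10 centered at (24, 30).
angles = np.linspace(0, 2 * np.pi, 60, endpoint=False)
pts = np.stack([np.round(24 + 10 * np.sin(angles)),
                np.round(30 + 10 * np.cos(angles))], axis=1).astype(int)
center_y, center_x, radius = hough_circle(pts, (48, 64), radii=[8, 9, 10, 11, 12])
```

The accumulator peak recovers the circle parameters; the paper's contribution is that a cleaner edge-map feeds far fewer false votes into this step.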

  1. Statistical reconstruction for cone-beam CT with a post-artifact-correction noise model: application to high-quality head imaging

    International Nuclear Information System (INIS)

    Dang, H; Stayman, J W; Sisniega, A; Xu, J; Zbijewski, W; Siewerdsen, J H; Wang, X; Foos, D H; Aygun, N; Koliatsos, V E

    2015-01-01

    Non-contrast CT reliably detects fresh blood in the brain and is the current front-line imaging modality for intracranial hemorrhage such as that occurring in acute traumatic brain injury (contrast ∼40–80 HU, size  >  1 mm). We are developing flat-panel detector (FPD) cone-beam CT (CBCT) to facilitate such diagnosis in a low-cost, mobile platform suitable for point-of-care deployment. Such a system may offer benefits in the ICU, urgent care/concussion clinic, ambulance, and sports and military theatres. However, current FPD-CBCT systems face significant challenges that confound low-contrast, soft-tissue imaging. Artifact correction can overcome major sources of bias in FPD-CBCT but imparts noise amplification in filtered backprojection (FBP). Model-based reconstruction improves soft-tissue image quality compared to FBP by leveraging a high-fidelity forward model and image regularization. In this work, we develop a novel penalized weighted least-squares (PWLS) image reconstruction method with a noise model that includes accurate modeling of the noise characteristics associated with the two dominant artifact corrections (scatter and beam-hardening) in CBCT and utilizes modified weights to compensate for noise amplification imparted by each correction. Experiments included real data acquired on a FPD-CBCT test-bench and an anthropomorphic head phantom emulating intra-parenchymal hemorrhage. The proposed PWLS method demonstrated superior noise-resolution tradeoffs in comparison to FBP and PWLS with conventional weights (viz. at matched 0.50 mm spatial resolution, CNR = 11.9 compared to CNR = 5.6 and CNR = 9.9, respectively) and substantially reduced image noise especially in challenging regions such as skull base. The results support the hypothesis that with high-fidelity artifact correction and statistical reconstruction using an accurate post-artifact-correction noise model, FPD-CBCT can achieve image quality allowing reliable detection of
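
The PWLS estimator minimizes a weighted data-fit term plus a regularizer, x̂ = argmin (y − Ax)ᵀW(y − Ax) + β·xᵀLx. For a linear model and a quadratic penalty this has a closed form; the toy numpy sketch below only illustrates how statistical weights suppress a corrupted (e.g. artifact-corrected) measurement, and is not the paper's CBCT implementation:

```python
import numpy as np

def pwls_solve(A, y, w, beta, L):
    """Closed-form minimizer of (y - Ax)^T W (y - Ax) + beta * x^T L x,
    with W = diag(w) a statistical weighting matrix."""
    W = np.diag(w)
    return np.linalg.solve(A.T @ W @ A + beta * L, A.T @ W @ y)

# Toy 1D problem: 3 pixels, 4 measurements; the last ray is corrupted
# and therefore gets a small statistical weight.
A = np.array([[1., 0., 0.],
              [0., 1., 0.],
              [0., 0., 1.],
              [1., 1., 1.]])
x_true = np.array([2., 3., 4.])
y = A @ x_true
y[3] += 5.0                                   # corrupted measurement
w = np.array([1., 1., 1., 0.01])              # down-weight the corrupted ray
Lap = np.array([[ 1., -1.,  0.],
                [-1.,  2., -1.],
                [ 0., -1.,  1.]])             # 1D roughness penalty
x_weighted = pwls_solve(A, y, w, beta=0.01, L=Lap)
x_unweighted = pwls_solve(A, y, np.ones(4), beta=0.01, L=Lap)
```

The weighted solution stays near the true image while the uniformly weighted one is biased by the corrupted ray, which is the intuition behind modifying the weights after each artifact correction.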

  2. Reconstruction of a high-resolution late holocene arctic paleoclimate record from Colville River delta sediments.

    Energy Technology Data Exchange (ETDEWEB)

    Schreiner, Kathryn Melissa; Lowry, Thomas Stephen

    2013-10-01

    This work was partially supported by the Sandia National Laboratories, Laboratory Directed Research and Development (LDRD) fellowship program in conjunction with Texas A&M University (TAMU). The research described herein is the work of Kathryn M. Schreiner (Katie) and her advisor, Thomas S. Bianchi, and represents a concise description of Katie's dissertation, which was submitted to the TAMU Office of Graduate Studies in May 2013 in partial fulfillment of the requirements for her Doctor of Philosophy degree. High Arctic permafrost soils contain a massive amount of organic carbon, accounting for twice as much carbon as what is currently stored as carbon dioxide in the atmosphere. However, with current warming trends this sink is in danger of thawing and potentially releasing large amounts of carbon as both carbon dioxide and methane into the atmosphere. It is difficult to make predictions about the future of this sink without knowing how it has reacted to past temperature and climate changes. This project investigated long term, fine scale particulate organic carbon (POC) delivery by the high-Arctic Colville River into Simpsons Lagoon in the near-shore Beaufort Sea. Modern POC was determined to be a mixture of three sources (riverine soils, coastal erosion, and marine). Downcore POC measurements were performed in a core close to the Colville River output and a core close to intense coastal erosion. Inputs of the three major sources were found to vary throughout the last two millennia, and in the Colville River core covary significantly with Alaskan temperature reconstructions.
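
Apportioning POC among three sources from bulk measurements is commonly done with an end-member mixing model: two tracers plus a mass balance give three linear equations in the three source fractions. The end-member and sample values below are hypothetical, chosen only to show the form of the calculation:

```python
import numpy as np

# Hypothetical end-member tracer values (columns: riverine soil, coastal
# erosion, marine) for two tracers, e.g. delta-13C and N/C; the third row
# enforces that the three fractions sum to 1.
M = np.array([[-27.0, -26.0, -21.0],   # tracer 1 in each end-member
              [0.05,   0.08,  0.13],   # tracer 2 in each end-member
              [1.0,    1.0,   1.0]])   # mass balance
sample = np.array([-24.8, 0.083, 1.0]) # measured bulk sample values

fractions = np.linalg.solve(M, sample) # [f_riverine, f_coastal, f_marine]
```

For these values the sample resolves into 45% riverine soil, 22% coastal erosion, and 33% marine POC; real applications propagate tracer uncertainties as well.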

  3. A highly accurate predictive-adaptive method for lithium-ion battery remaining discharge energy prediction in electric vehicle applications

    International Nuclear Information System (INIS)

    Liu, Guangming; Ouyang, Minggao; Lu, Languang; Li, Jianqiu; Hua, Jianfeng

    2015-01-01

    importance of an accurate E_RDE (remaining discharge energy) prediction method in real-world applications

  4. Adult Reconstructive Surgery: A High-Risk Profession for Work-Related Injuries.

    Science.gov (United States)

    Alqahtani, Saad M; Alzahrani, Mohammad M; Tanzer, Michael

    2016-06-01

    Adult reconstructive surgery is an orthopedic subspecialty characterized by surgical tasks that are physical, repetitive, and require some degree of stamina from the surgeon. This can result in strain and/or injury to the surgeon's musculoskeletal system. This study investigates the prevalence of work-related injuries among arthroplasty surgeons. A modified version of the physical discomfort survey was sent via email to surgeon members of the Hip Society, the International Hip Society, and the Canadian Orthopedic Arthroplasty. One hundred and eighty-three surgeons completed the survey. Overall, 66.1% of the arthroplasty surgeons reported that they had experienced a work-related injury. The most common injuries were low back pain (28%), lateral epicondylitis of the elbow (14%), shoulder tendonitis (14%), lumbar disc herniation (13%), and wrist arthritis (12%). Overall, 27% of surgeons took time off from work because of the injury. As the number of disorders diagnosed increased, there was a significant increase in the incidence of requiring time off work because of the disorder (P < .05). Factors that increased the risk of the surgeon requiring time off because of the disorder were age >55 years, practicing for >20 years, and performing >100 total hip arthroplasty procedures per year (P < .05). In addition, 31% of the orthopedic surgeons surveyed required surgery for their injury. Although most studies concentrate on the importance of patient safety and thus the quality of the health care system, the surgeon's safety is also considered an integral part of this system's quality. This study highlights a high prevalence of musculoskeletal work-related injuries among arthroplasty surgeons and indicates the need for the identification of preventive measures directed toward improving the operative surgical environment and work ergonomics for surgeons. Copyright © 2015 Elsevier Inc. All rights reserved.

  5. Alkenone-based reconstructions reveal four-phase Holocene temperature evolution for High Arctic Svalbard

    Science.gov (United States)

    van der Bilt, Willem G. M.; D'Andrea, William J.; Bakke, Jostein; Balascio, Nicholas L.; Werner, Johannes P.; Gjerde, Marthe; Bradley, Raymond S.

    2018-03-01

    Situated at the crossroads of major oceanic and atmospheric circulation patterns, the Arctic is a key component of Earth's climate system. Compounded by sea-ice feedbacks, even modest shifts in the region's heat budget drive large climate responses. This is highlighted by the observed amplified response of the Arctic to global warming. Assessing the imprint and signature of underlying forcing mechanisms requires paleoclimate records, allowing us to expand our knowledge beyond the short instrumental period and contextualize ongoing warming. However, such datasets are scarce and sparse in the Arctic, limiting our ability to address these issues. Here, we present two quantitative Holocene-length paleotemperature records from the High Arctic Svalbard archipelago, situated in the climatically sensitive Arctic North Atlantic. Temperature estimates are based on U37K unsaturation ratios from sediment cores of two lakes. Our data reveal a dynamic Holocene temperature evolution, with reconstructed summer lake water temperatures spanning a range of ∼6-8 °C, and characterized by four phases. The Early Holocene was marked by an early onset (∼10.5 ka cal. BP) of insolation-driven Hypsithermal conditions, likely compounded by strengthening oceanic heat transport. This warm interval was interrupted by cooling between ∼10.5-8.3 ka cal. BP that we attribute to cooling effects from the melting Northern Hemisphere ice sheets. Temperatures declined throughout the Middle Holocene, following a gradual trend that was accentuated by two cooling steps between ∼7.8-7 ka cal. BP and around ∼4.4-4.3 ka cal. BP. These transitions coincide with a strengthening influence of Arctic water and sea-ice in the adjacent Fram Strait. During the Late Holocene (past 4 ka), temperature change decoupled from the still-declining insolation, and fluctuated around comparatively cold mean conditions. By showing that Holocene Svalbard temperatures were governed by an alternation of forcings, this study
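
Alkenone paleothermometry converts the measured unsaturation index to temperature through a linear calibration. The sketch below inverts the classic Prahl and Wakeham (1988) culture calibration (U37K' = 0.034·T + 0.039); lake-specific calibrations such as those needed for Svalbard differ, so this only illustrates the form of the conversion:

```python
def uk37_to_temperature(uk37_prime):
    """Invert the linear Prahl & Wakeham (1988) culture calibration
    U37K' = 0.034*T + 0.039. Lacustrine calibrations use different
    coefficients, so this is illustrative only."""
    return (uk37_prime - 0.039) / 0.034

# Hypothetical downcore index values spanning ~0.2 index units.
temps = [uk37_to_temperature(u) for u in (0.25, 0.35, 0.45)]
```

Note that a ∼0.2 change in the index maps to a ∼6 °C temperature change, comparable to the ∼6-8 °C Holocene range reported above.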

  6. Five hundred years of gridded high-resolution precipitation reconstructions over Europe and the connection to large-scale circulation

    Energy Technology Data Exchange (ETDEWEB)

    Pauling, Andreas [University of Bern, Institute of Geography, Bern (Switzerland); Luterbacher, Juerg; Wanner, Heinz [University of Bern, Institute of Geography, Bern (Switzerland); National Center of Competence in Research (NCCR) in Climate, Bern (Switzerland); Casty, Carlo [University of Bern, Climate and Environmental Physics Institute, Bern (Switzerland)

    2006-03-15

    We present seasonal precipitation reconstructions for European land areas (30°W to 40°E / 30-71°N; given on a 0.5° x 0.5° grid) covering the period 1500-1900 together with gridded reanalysis from 1901 to 2000 (Mitchell and Jones 2005). Principal component regression techniques were applied to develop this dataset. A large variety of long instrumental precipitation series, precipitation indices based on documentary evidence and natural proxies (tree-ring chronologies, ice cores, corals and a speleothem) that are sensitive to precipitation signals were used as predictors. Transfer functions were derived over the 1901-1983 calibration period and applied to 1500-1900 in order to reconstruct the large-scale precipitation fields over Europe. The performance (quality estimation based on unresolved variance within the calibration period) of the reconstructions varies over centuries, seasons and space. Highest reconstructive skill was found for winter over central Europe and the Iberian Peninsula. Precipitation variability over the last half millennium reveals both large interannual and decadal fluctuations. Applying running correlations, we found major non-stationarities in the relation between large-scale circulation and regional precipitation. For several periods during the last 500 years, we identified key atmospheric modes for southern Spain/northern Morocco and central Europe as representations of two precipitation regimes. Using scaled composite analysis, we show that precipitation extremes over central Europe and southern Spain are linked to distinct pressure patterns. Due to its high spatial and temporal resolution, this dataset allows detailed studies of regional precipitation variability for all seasons, impact studies on different time and space scales, comparisons with high-resolution climate models as well as analysis of connections with regional temperature reconstructions. (orig.)
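
Principal component regression, the technique behind this reconstruction, regresses the target on the leading principal components of the predictor network. Below is a minimal numpy sketch on synthetic data (one common "circulation" signal driving six proxies; all values synthetic, not the paper's proxy network):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "proxy network": 80 years x 6 predictors, all driven by one
# common signal plus noise (analogous to a large-scale precipitation mode).
signal = rng.normal(size=80)
X = np.outer(signal, rng.normal(size=6)) + 0.1 * rng.normal(size=(80, 6))
y = 2.0 * signal + 0.1 * rng.normal(size=80)      # target to reconstruct

# Principal component regression with k leading components.
Xc = X - X.mean(axis=0)                            # center predictors
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 1
scores = Xc @ Vt[:k].T                             # PC scores as predictors
design = np.column_stack([np.ones(len(y)), scores])
coef, *_ = np.linalg.lstsq(design, y, rcond=None)  # calibrate transfer function
y_hat = design @ coef                              # reconstructed target
```

In practice the transfer function is calibrated on the instrumental period (here, the full synthetic series) and then applied to the proxy values of the pre-instrumental period.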

  7. Improvements of the ALICE high level trigger for LHC Run 2 to facilitate online reconstruction, QA, and calibration

    Energy Technology Data Exchange (ETDEWEB)

    Rohr, David [Frankfurt Institute for Advanced Studies, Frankfurt (Germany); Collaboration: ALICE-Collaboration

    2016-07-01

    ALICE is one of the four major experiments at the Large Hadron Collider (LHC) at CERN. Its main goal is the study of matter under extreme pressure and temperature as produced in heavy ion collisions at LHC. The ALICE High Level Trigger (HLT) is an online compute farm of around 200 nodes that performs a real time event reconstruction of the data delivered by the ALICE detectors. The HLT employs a fast FPGA based cluster finder algorithm as well as a GPU based track reconstruction algorithm and it is designed to process the maximum data rate expected from the ALICE detectors in real time. We present new features of the HLT for LHC Run 2 that started in 2015. A new fast standalone track reconstruction algorithm for the Inner Tracking System (ITS) enables the HLT to compute and report to LHC the luminous region of the interactions in real time. We employ a new dynamically reconfigurable histogram component that allows the visualization of characteristics of the online reconstruction using the full set of events measured by the detectors. This improves our monitoring and QA capabilities. During Run 2, we plan to deploy online calibration, starting with the calibration of the TPC (Time Projection Chamber) detector's drift time. First proof of concept tests were successfully performed using data-replay on our development cluster and during the heavy ion period at the end of 2015.

  8. High-Throughput Accurate Single-Cell Screening of Euglena gracilis with Fluorescence-Assisted Optofluidic Time-Stretch Microscopy.

    Directory of Open Access Journals (Sweden)

    Baoshan Guo

    Full Text Available The development of reliable, sustainable, and economical sources of alternative fuels is an important but challenging goal for the world. As an alternative to liquid fossil fuels, algal biofuel is expected to play a key role in alleviating global warming since algae absorb atmospheric CO2 via photosynthesis. Among the various algae considered for fuel production, Euglena gracilis is an attractive microalgal species as it is known to produce wax ester (good for biodiesel and aviation fuel) within lipid droplets. To date, while there exist many techniques for inducing microalgal cells to produce and accumulate lipid with high efficiency, few analytical methods are available for characterizing a population of such lipid-accumulated microalgae, including E. gracilis, with high throughput, high accuracy, and single-cell resolution simultaneously. Here we demonstrate high-throughput, high-accuracy, single-cell screening of E. gracilis with fluorescence-assisted optofluidic time-stretch microscopy, a method that combines the strengths of microfluidic cell focusing, optical time-stretch microscopy, and the fluorescence detection used in conventional flow cytometry. Specifically, our fluorescence-assisted optofluidic time-stretch microscope consists of an optical time-stretch microscope and a fluorescence analyzer on top of a hydrodynamically focusing microfluidic device and can detect fluorescence from every E. gracilis cell in a population and simultaneously obtain its image with a high throughput of 10,000 cells/s. With the multi-dimensional information acquired by the system, we classify nitrogen-sufficient (ordinary) and nitrogen-deficient (lipid-accumulated) E. gracilis cells with a low false positive rate of 1.0%. This method holds promise for evaluating cultivation techniques and selective breeding for microalgae-based biofuel production.

  9. An accurate online calibration system based on combined clamp-shape coil for high voltage electronic current transformers.

    Science.gov (United States)

    Li, Zhen-hua; Li, Hong-bin; Zhang, Zhi

    2013-07-01

    Electronic transformers are widely used in power systems because of their wide bandwidth and good transient performance. However, as an emerging technology, the failure rate of electronic transformers is higher than that of traditional transformers. As a result, the calibration period needs to be shortened. Traditional calibration methods require that the power to the transmission line be cut off, which results in complicated operation and power-off losses. This paper proposes an online calibration system which can calibrate electronic current transformers without a power cut. In this work, the high accuracy standard current transformer and the online operation method are the key techniques. Based on the clamp-shape iron-core coil and the clamp-shape air-core coil, a combined clamp-shape coil is designed as the standard current transformer. By analyzing the output characteristics of the two coils, the accuracy of the combined clamp-shape coil can be verified, so the accuracy of the online calibration system can be guaranteed. Moreover, by employing the earth potential working method and using two insulating rods to connect the combined clamp-shape coil to the high voltage bus, the operation becomes simple and safe. Tests in the China National Center for High Voltage Measurement and field experiments show that the proposed system has a high accuracy of up to 0.05 class.
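
Calibrating a current transformer against a standard reduces to computing the ratio error and checking it against the accuracy class limit. A minimal sketch with hypothetical currents (a simplification: real class definitions also bound phase displacement):

```python
def ratio_error_percent(i_measured, i_true):
    """Ratio error of a current transformer, in percent, relative to the
    reading of the standard (here, the combined clamp-shape coil)."""
    return 100.0 * (i_measured - i_true) / i_true

# A 0.05 accuracy class requires the ratio error to stay within +/-0.05%
# at rated current. Currents below are hypothetical, in amperes.
err = ratio_error_percent(1000.4, 1000.0)
within_class = abs(err) <= 0.05
```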

  10. An accurate online calibration system based on combined clamp-shape coil for high voltage electronic current transformers

    International Nuclear Information System (INIS)

    Li, Zhen-hua; Li, Hong-bin; Zhang, Zhi

    2013-01-01

    Electronic transformers are widely used in power systems because of their wide bandwidth and good transient performance. However, as an emerging technology, the failure rate of electronic transformers is higher than that of traditional transformers. As a result, the calibration period needs to be shortened. Traditional calibration methods require that the power to the transmission line be cut off, which results in complicated operation and power-off losses. This paper proposes an online calibration system which can calibrate electronic current transformers without a power cut. In this work, the high accuracy standard current transformer and the online operation method are the key techniques. Based on the clamp-shape iron-core coil and the clamp-shape air-core coil, a combined clamp-shape coil is designed as the standard current transformer. By analyzing the output characteristics of the two coils, the accuracy of the combined clamp-shape coil can be verified, so the accuracy of the online calibration system can be guaranteed. Moreover, by employing the earth potential working method and using two insulating rods to connect the combined clamp-shape coil to the high voltage bus, the operation becomes simple and safe. Tests in the China National Center for High Voltage Measurement and field experiments show that the proposed system has a high accuracy of up to 0.05 class.

  11. Use of a Recursive-Rule eXtraction algorithm with J48graft to achieve highly accurate and concise rule extraction from a large breast cancer dataset

    Directory of Open Access Journals (Sweden)

    Yoichi Hayashi

    To assist physicians in the diagnosis of breast cancer and thereby improve survival, a highly accurate computer-aided diagnostic system is necessary. Although various machine learning and data mining approaches have been devised to increase diagnostic accuracy, most current methods are inadequate. The recently developed Recursive-Rule eXtraction (Re-RX) algorithm provides a hierarchical, recursive consideration of discrete variables prior to analysis of continuous data, and can generate classification rules trained on both discrete and continuous attributes. The objective of this study was to extract highly accurate, concise, and interpretable classification rules for diagnosis using the Re-RX algorithm with J48graft, a class for generating a grafted C4.5 decision tree. We used the Wisconsin Breast Cancer Dataset (WBCD), for which nine research groups have provided 10 kinds of highly accurate concrete classification rules. We compared the accuracy and characteristics of the rule set generated for the WBCD using the Re-RX algorithm with J48graft against five rule sets obtained using 10-fold cross validation (CV). We trained the Re-RX algorithm with J48graft on the WBCD and report the average classification accuracies of 10 runs of 10-fold CV for the training and test datasets, the number of extracted rules, and the average number of antecedents per rule. Compared with other rule extraction algorithms, the Re-RX algorithm with J48graft produced a lower average number of rules for diagnosing breast cancer, which is a substantial advantage. It also provided the lowest average number of antecedents per rule. These features are expected to greatly aid physicians in making accurate and concise diagnoses for patients with breast cancer. Keywords: Breast cancer diagnosis, Rule extraction, Re-RX algorithm, J48graft, C4.5
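    The core Re-RX idea — recurse on discrete attributes first, and only fit continuous thresholds inside subgroups that remain impure — can be sketched on a toy dataset. This is a simplified illustration written for this summary, with invented attributes and a naive single-threshold split; it is not the published Re-RX/J48graft implementation:

```python
# Toy sketch of recursive rule extraction: discrete splits first, then one
# continuous threshold for subgroups that are still impure. The data are
# hypothetical and assumed separable by a single threshold.

def extract_rules(rows, discrete_keys, cont_key, path=()):
    """rows: list of (feature_dict, label); returns [(conditions, label)]."""
    labels = {label for _, label in rows}
    if len(labels) == 1:                       # pure subgroup -> emit one rule
        return [(path, labels.pop())]
    if discrete_keys:                          # recurse on next discrete attribute
        key, rest = discrete_keys[0], discrete_keys[1:]
        rules = []
        for value in sorted({f[key] for f, _ in rows}):
            subset = [(f, lab) for f, lab in rows if f[key] == value]
            rules += extract_rules(subset, rest, cont_key,
                                   path + ((key, "==", value),))
        return rules
    # no discrete attributes left: one threshold on the continuous attribute
    xs = sorted((f[cont_key], label) for f, label in rows)
    for i in range(1, len(xs)):
        if xs[i][1] != xs[i - 1][1]:           # first label change
            cut = (xs[i - 1][0] + xs[i][0]) / 2
            return [(path + ((cont_key, "<=", cut),), xs[0][1]),
                    (path + ((cont_key, ">", cut),), xs[-1][1])]
    return [(path, xs[0][1])]

# Hypothetical two-attribute dataset: one discrete, one continuous.
data = [({"grade": "low",  "size": 1.0}, "benign"),
        ({"grade": "low",  "size": 2.0}, "benign"),
        ({"grade": "high", "size": 1.0}, "benign"),
        ({"grade": "high", "size": 5.0}, "malignant")]
for conditions, label in extract_rules(data, ["grade"], "size"):
    print(conditions, "->", label)
```

The output is a short if-then rule list of the kind the abstract emphasizes: few rules, few antecedents per rule.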

  12. A Dual-Mode Large-Arrayed CMOS ISFET Sensor for Accurate and High-Throughput pH Sensing in Biomedical Diagnosis.

    Science.gov (United States)

    Huang, Xiwei; Yu, Hao; Liu, Xu; Jiang, Yu; Yan, Mei; Wu, Dongping

    2015-09-01

    Existing ISFET-based DNA sequencing detects hydrogen ions released during the polymerization of DNA strands on microbeads, which are scattered into a microwell array above the ISFET sensor with unknown distribution. However, false pH detection occurs at empty microwells due to crosstalk from neighboring microbeads. In this paper, a dual-mode CMOS ISFET sensor is proposed for accurate pH detection toward DNA sequencing. Dual-mode sensing (optical and chemical) is realized by integrating a CMOS image sensor (CIS) with the ISFET pH sensor, fabricated in a standard 0.18-μm CIS process. By accurately determining microbead physical locations with the CIS pixels through contact imaging, the dual-mode sensor can attribute a local pH reading to one DNA slice at one location-determined microbead, improving pH detection accuracy. Moreover, toward high-throughput DNA sequencing, a correlated-double-sampling readout that supports large arrays in both modes is deployed to reduce pixel-to-pixel nonuniformity such as threshold-voltage mismatch. The proposed CMOS dual-mode sensor is experimentally shown to produce a well-correlated pH map and optical image for microbeads, with a pH sensitivity of 26.2 mV/pH, a fixed pattern noise (FPN) reduction from 4% to 0.3%, and a readout speed of 1200 frames/s. A dual-mode CMOS ISFET sensor with suppressed FPN for accurate large-arrayed pH sensing is thus demonstrated with state-of-the-art measured results toward accurate and high-throughput DNA sequencing, and has great potential for future personal genome diagnostics with high accuracy and low cost.
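    The correlated-double-sampling step mentioned above has a simple arithmetic core: subtracting each pixel's reset sample from its signal sample cancels static per-pixel offsets such as threshold-voltage mismatch. A minimal sketch with invented numbers:

```python
# Sketch of correlated double sampling (CDS). Per-pixel subtraction of the
# reset frame removes fixed offsets, one source of fixed pattern noise.
# Frame values are illustrative, not measured data.

def cds(reset_frame, signal_frame):
    """Per-pixel difference removes static per-pixel offsets."""
    return [[s - r for r, s in zip(rrow, srow)]
            for rrow, srow in zip(reset_frame, signal_frame)]

# Two pixels share the same true signal (10) but different static offsets.
reset  = [[2.0, 5.0]]
signal = [[12.0, 15.0]]    # offset + true signal
print(cds(reset, signal))  # [[10.0, 10.0]] -> offset mismatch cancelled
```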

  13. Isotope specific resolution recovery image reconstruction in high resolution PET imaging

    NARCIS (Netherlands)

    Kotasidis, Fotis A.; Angelis, Georgios I.; Anton-Rodriguez, Jose; Matthews, Julian C.; Reader, Andrew J.; Zaidi, Habib

    Purpose: Measuring and incorporating a scanner-specific point spread function (PSF) within image reconstruction has been shown to improve spatial resolution in PET. However, due to the short half-life of clinically used isotopes, other long-lived isotopes not used in clinical practice are used to

  14. Online detector response calculations for high-resolution PET image reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Pratx, Guillem [Department of Radiation Oncology, Stanford University, Stanford, CA 94305 (United States); Levin, Craig, E-mail: cslevin@stanford.edu [Departments of Radiology, Physics and Electrical Engineering, and Molecular Imaging Program at Stanford, Stanford University, Stanford, CA 94305 (United States)

    2011-07-07

    Positron emission tomography systems are best described by a linear shift-varying model. However, image reconstruction often assumes simplified shift-invariant models to the detriment of image quality and quantitative accuracy. We investigated a shift-varying model of the geometrical system response based on an analytical formulation. The model was incorporated within a list-mode, fully 3D iterative reconstruction process in which the system response coefficients are calculated online on a graphics processing unit (GPU). The implementation requires less than 512 Mb of GPU memory and can process two million events per minute (forward and backprojection). For small detector volume elements, the analytical model compared well to reference calculations. Images reconstructed with the shift-varying model achieved higher quality and quantitative accuracy than those that used a simpler shift-invariant model. For an 8 mm sphere in a warm background, the contrast recovery was 95.8% for the shift-varying model versus 85.9% for the shift-invariant model. In addition, the spatial resolution was more uniform across the field-of-view: for an array of 1.75 mm hot spheres in air, the variation in reconstructed sphere size was 0.5 mm RMS for the shift-invariant model, compared to 0.07 mm RMS for the shift-varying model.
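    The iterative reconstruction referred to above is built around a system matrix whose (here shift-varying) coefficients relate image voxels to measurements. A toy MLEM iteration with an explicit 3-measurement, 2-pixel system matrix shows the multiplicative update; the numbers are invented, and this is not the GPU list-mode implementation of the paper:

```python
# Minimal MLEM sketch with an explicit system matrix A. Each iteration
# multiplies the image by the backprojected ratio of measured to estimated
# projections. Toy dimensions and values for illustration only.

def mlem(A, y, n_iter=500):
    m, n = len(A), len(A[0])
    x = [1.0] * n                              # flat initial image
    sens = [sum(A[i][j] for i in range(m)) for j in range(n)]
    for _ in range(n_iter):
        est = [sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
        ratio = [y[i] / est[i] for i in range(m)]
        # multiplicative update: backproject the measured/estimated ratio
        x = [x[j] / sens[j] * sum(A[i][j] * ratio[i] for i in range(m))
             for j in range(n)]
    return x

A = [[0.7, 0.3], [0.2, 0.8], [0.5, 0.5]]       # toy (shift-varying) response
x_true = [2.0, 4.0]
y = [sum(a * b for a, b in zip(row, x_true)) for row in A]  # noise-free data
print([round(v, 2) for v in mlem(A, y)])       # converges toward [2.0, 4.0]
```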

  15. A high-density SNP map for accurate mapping of seed fibre QTL in Brassica napus L.

    Directory of Open Access Journals (Sweden)

    Liezhao Liu

    A high-density genetic linkage map for the complex allotetraploid crop species Brassica napus (oilseed rape) was constructed in a late-generation recombinant inbred line (RIL) population, using genome-wide single nucleotide polymorphism (SNP) markers assayed by the Brassica 60 K Infinium BeadChip Array. The linkage map contains 9164 SNP markers covering 1832.9 cM; 7648 of the markers fall into 1232 bins. A subset of 2795 SNP markers, with an average distance of 0.66 cM between adjacent markers, was applied for QTL mapping of seed colour and the cell-wall fiber components acid detergent lignin (ADL), cellulose, and hemicellulose. After phenotypic analyses across four different environments, a total of 11 QTL were detected for seed colour and fiber traits. The high-density map considerably improved QTL resolution compared to the previous low-density maps. A previously identified major QTL with very strong effects on seed colour and ADL was pinpointed to a narrow genome interval on chromosome A09, while a minor QTL explaining 8.1% to 14.1% of variation for ADL was detected on chromosome C05. Five QTL accounting for 4.7% to 21.9% of the phenotypic variation for cellulose and three QTL accounting for 7.3% to 16.9% of the variation for hemicellulose were also detected. To our knowledge this is the first description of QTL for seed cellulose and hemicellulose in B. napus, representing interesting new targets for improving oil content. The high-density SNP genetic map enables navigation from interesting B. napus QTL to Brassica genome sequences, giving useful new information for understanding the genetics of key seed quality traits in rapeseed.

  16. Label-Driven Learning Framework: Towards More Accurate Bayesian Network Classifiers through Discrimination of High-Confidence Labels

    Directory of Open Access Journals (Sweden)

    Yi Sun

    2017-12-01

    Bayesian network classifiers (BNCs) have demonstrated competitive classification accuracy in a variety of real-world applications. However, BNCs are error-prone when discriminating among high-confidence labels. To address this issue, we propose the label-driven learning framework, which incorporates instance-based learning and ensemble learning. For each testing instance, high-confidence labels are first selected by a generalist classifier, e.g., the tree-augmented naive Bayes (TAN) classifier. Then, by focusing on these labels, conditional mutual information is redefined to more precisely measure mutual dependence between attributes, leading to a refined generalist with a more reasonable network structure. To enable finer discrimination, an expert classifier is tailored to each high-confidence label. Finally, the predictions of the refined generalist and the experts are aggregated. We extend TAN to LTAN (Label-driven TAN) by applying the proposed framework. Extensive experimental results demonstrate that LTAN delivers superior classification accuracy to not only several state-of-the-art single-structure BNCs but also some established ensemble BNCs, at the expense of reasonable computational overhead.
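    The quantity being redefined above, conditional mutual information I(X;Y|Z), can be computed directly from empirical counts. A small self-contained sketch (the framework's label-restricted variant differs; this is the standard definition):

```python
# Conditional mutual information I(X;Y|Z) in bits from a list of samples.
from collections import Counter
from math import log2

def cmi(samples):
    """samples: list of (x, y, z) tuples; returns I(X;Y|Z) in bits."""
    n = len(samples)
    cxyz = Counter(samples)
    cxz = Counter((x, z) for x, _, z in samples)
    cyz = Counter((y, z) for _, y, z in samples)
    cz = Counter(z for _, _, z in samples)
    return sum((c / n) * log2(c * cz[z] / (cxz[(x, z)] * cyz[(y, z)]))
               for (x, y, z), c in cxyz.items())

# X and Y perfectly dependent given Z -> 1 bit; independent -> 0 bits.
print(cmi([(0, 0, 0), (1, 1, 0), (0, 0, 0), (1, 1, 0)]))  # 1.0
print(cmi([(0, 0, 0), (0, 1, 0), (1, 0, 0), (1, 1, 0)]))  # 0.0
```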

  17. An application of a relational database system for high-throughput prediction of elemental compositions from accurate mass values.

    Science.gov (United States)

    Sakurai, Nozomu; Ara, Takeshi; Kanaya, Shigehiko; Nakamura, Yukiko; Iijima, Yoko; Enomoto, Mitsuo; Motegi, Takeshi; Aoki, Koh; Suzuki, Hideyuki; Shibata, Daisuke

    2013-01-15

    High-accuracy mass values detected by high-resolution mass spectrometry enable prediction of elemental compositions, and are thus used for metabolite annotation in metabolomic studies. Here, we report an application of a relational database that significantly improves the rate of elemental composition prediction. By searching a database of pre-calculated elemental compositions with fixed kinds and numbers of atoms, the approach eliminates the redundant evaluations of the same formula that occur in repeated calculations with other tools. When compared with HR2, one of the fastest tools available, our database search times were at least 109 times shorter. When a solid-state drive (SSD) was used, the search time was 488 times shorter at 5 ppm mass tolerance and 1833 times shorter at 0.1 ppm. Even when the HR2 search was performed with 8 threads on a high-spec Windows 7 PC, the database search times were at least 26 and 115 times shorter without and with the SSD, respectively. These improvements were even greater on a low-spec Windows XP PC. We constructed a web service, 'MFSearcher', to query the database in a RESTful manner. Available for free at http://webs2.kazusa.or.jp/mfsearcher. The web service is implemented in Java, MySQL, Apache and Tomcat, with all major browsers supported. sakurai@kazusa.or.jp Supplementary data are available at Bioinformatics online.
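    The precomputation idea is simple to sketch: store formulas sorted by monoisotopic mass once, then answer each query with a range lookup over a ppm window instead of re-enumerating candidate formulas. The three-entry table below uses approximate monoisotopic masses and is purely illustrative; the real MFSearcher table is vastly larger and lives in MySQL:

```python
# Sketch of a mass-indexed composition lookup with a ppm tolerance window.
from bisect import bisect_left, bisect_right

TABLE = sorted([
    (46.00548, "CH2O2"),     # formic acid (approx. monoisotopic mass)
    (60.02113, "C2H4O2"),    # acetic acid
    (180.06339, "C6H12O6"),  # glucose
])
MASSES = [m for m, _ in TABLE]

def search(mass: float, ppm: float):
    """Return all formulas whose stored mass lies within +/- ppm of `mass`."""
    tol = mass * ppm * 1e-6
    lo = bisect_left(MASSES, mass - tol)
    hi = bisect_right(MASSES, mass + tol)
    return [f for _, f in TABLE[lo:hi]]

print(search(180.0634, 5))   # ['C6H12O6']
print(search(100.0, 5))      # []
```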

  18. ExSTA: External Standard Addition Method for Accurate High-Throughput Quantitation in Targeted Proteomics Experiments.

    Science.gov (United States)

    Mohammed, Yassene; Pan, Jingxi; Zhang, Suping; Han, Jun; Borchers, Christoph H

    2018-03-01

    Targeted proteomics using MRM with stable-isotope-labeled internal-standard (SIS) peptides is the current method of choice for protein quantitation in complex biological matrices. Better quantitation can be achieved with the internal standard-addition method, where successive increments of synthesized natural form (NAT) of the endogenous analyte are added to each sample, a response curve is generated, and the endogenous concentration is determined at the x-intercept. Internal NAT-addition, however, requires multiple analyses of each sample, resulting in increased sample consumption and analysis time. To compare the following three methods, an MRM assay for 34 high-to-moderate abundance human plasma proteins is used: classical internal SIS-addition, internal NAT-addition, and external NAT-addition-generated in buffer using NAT and SIS peptides. Using endogenous-free chicken plasma, the accuracy is also evaluated. The internal NAT-addition outperforms the other two in precision and accuracy. However, the curves derived by internal vs. external NAT-addition differ by only ≈3.8% in slope, providing comparable accuracies and precision with good CV values. While the internal NAT-addition method may be "ideal", this new external NAT-addition can be used to determine the concentration of high-to-moderate abundance endogenous plasma proteins, providing a robust and cost-effective alternative for clinical analyses or other high-throughput applications. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
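    The standard-addition arithmetic described above reduces to a line fit: plot response against added NAT concentration and read the endogenous concentration off the x-intercept (equivalently, intercept divided by slope). A sketch with invented numbers, not data from the paper:

```python
# Standard-addition calculation: least-squares line through (added, response),
# endogenous concentration = intercept / slope (the negated x-intercept).

def endogenous_concentration(added, response):
    n = len(added)
    mx = sum(added) / n
    my = sum(response) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(added, response))
             / sum((x - mx) ** 2 for x in added))
    intercept = my - slope * mx
    return intercept / slope

added    = [0.0, 1.0, 2.0, 3.0]   # added NAT (arbitrary units)
response = [2.0, 3.0, 4.0, 5.0]   # measured response ratio
print(endogenous_concentration(added, response))  # 2.0
```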

  19. High-fidelity tissue engineering of patient-specific auricles for reconstruction of pediatric microtia and other auricular deformities.

    Directory of Open Access Journals (Sweden)

    Alyssa J Reiffel

    Autologous techniques for the reconstruction of pediatric microtia often result in suboptimal aesthetic outcomes and morbidity at the costal cartilage donor site. We therefore sought to combine digital photogrammetry with CAD/CAM techniques to develop collagen type I hydrogel scaffolds and their respective molds that would precisely mimic the normal anatomy of the patient-specific external ear as well as recapitulate the complex biomechanical properties of native auricular elastic cartilage, while avoiding the morbidity of traditional autologous reconstructions. Three-dimensional structures of normal pediatric ears were digitized and converted to virtual solids for mold design. Image-based synthetic reconstructions of these ears were fabricated from collagen type I hydrogels; half were seeded with bovine auricular chondrocytes. Cellular and acellular constructs were implanted subcutaneously in the dorsa of nude rats and harvested after 1 and 3 months. Gross inspection revealed that acellular implants had significantly decreased in size by 1 month. Cellular constructs retained their contour/projection from the animals' dorsa, even after 3 months. Post-harvest weight of cellular constructs was significantly greater than that of acellular constructs after 1 and 3 months. Safranin O staining revealed that cellular constructs developed a self-assembled perichondrial layer and copious neocartilage deposition. Verhoeff staining of 1-month cellular constructs revealed de novo elastic cartilage deposition, which was even more extensive and robust after 3 months. The equilibrium modulus and hydraulic permeability of cellular constructs were not significantly different from native bovine auricular cartilage after 3 months. We have developed high-fidelity, biocompatible, patient-specific tissue-engineered constructs for auricular reconstruction which largely mimic the native auricle both biomechanically and histologically, even after an extended

  20. Use of scanner characteristics in iterative image reconstruction for high-resolution positron emission tomography studies of small animals

    Energy Technology Data Exchange (ETDEWEB)

    Brix, G.; Doll, J.; Bellemann, M.E.; Trojan, H.; Haberkorn, U.; Schmidlin, P.; Ostertag, H. [Research Program "Radiological Diagnostics and Therapy", German Cancer Research Center (DKFZ), Heidelberg (Germany)]

    1997-07-01

    The purpose of this work was to improve the spatial resolution of a whole-body PET system for experimental studies of small animals by incorporating scanner characteristics into the process of iterative image reconstruction. The image-forming characteristics of the PET camera were characterized by a spatially variant line-spread function (LSF), which was determined from 49 activated copper-64 line sources positioned over a field of view (FOV) of 21.0 cm. During iterative image reconstruction, the forward projection of the estimated image was blurred with the LSF at each iteration step before the estimated projections were compared with the measured projections. Moreover, imaging studies of a rat and two nude mice were performed to evaluate the imaging properties of the approach in vivo. The spatial resolution of the scanner perpendicular to the direction of projection could be approximated by a one-dimensional Gaussian-shaped LSF with a full-width at half-maximum increasing from 6.5 mm at the centre to 6.7 mm at a radial distance of 10.5 cm. Incorporating this blurring kernel into the iteration formula resulted in a significantly improved spatial resolution of about 3.9 mm over the examined FOV. As demonstrated by the phantom and animal experiments, the high-resolution algorithm not only led to better contrast resolution in the reconstructed emission scans but also improved the accuracy of quantitating activity concentrations in small tissue structures, without amplifying image noise or mottle. The presented data-handling strategy incorporates the image restoration step directly into the process of algebraic image reconstruction and obviates the need for ill-conditioned "deconvolution" procedures performed on the projections or on the reconstructed image. In our experience, the proposed algorithm is of special interest in experimental studies of small animals. (orig./AJ). With 9 figs.
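    The blurred-forward-projection scheme described above can be sketched in one dimension: blur the forward projection with the LSF before comparing with the data, and apply the matched (here symmetric, so identical) blur during backprojection. The kernel and profile below are stand-ins, not the measured 6.5 mm Gaussian:

```python
# 1-D toy of MLEM with resolution modelling in the system matrix
# ('identity followed by LSF blur'). Illustrative kernel and data.

LSF = [0.25, 0.5, 0.25]

def blur(p):
    """Convolve with the LSF, zero-padded at the edges."""
    half = len(LSF) // 2
    padded = [0.0] * half + list(p) + [0.0] * half
    return [sum(LSF[k] * padded[i + k] for k in range(len(LSF)))
            for i in range(len(p))]

def mlem_with_lsf(y, n_iter=2000):
    """MLEM whose system model includes the LSF blur."""
    x = [1.0] * len(y)
    sens = blur([1.0] * len(y))                # sensitivity image
    for _ in range(n_iter):
        ratio = [yi / ei for yi, ei in zip(y, blur(x))]
        corr = blur(ratio)                     # symmetric kernel: transpose = blur
        x = [xi * c / s for xi, c, s in zip(x, corr, sens)]
    return x

measured = blur([1.0, 4.0, 1.0])               # a blurred 'hot spot' profile
print([round(v, 2) for v in mlem_with_lsf(measured)])  # recovers ~[1.0, 4.0, 1.0]
```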

  1. Use of scanner characteristics in iterative image reconstruction for high-resolution positron emission tomography studies of small animals

    International Nuclear Information System (INIS)

    Brix, G.; Doll, J.; Bellemann, M.E.; Trojan, H.; Haberkorn, U.; Schmidlin, P.; Ostertag, H.

    1997-01-01

    The purpose of this work was to improve the spatial resolution of a whole-body PET system for experimental studies of small animals by incorporating scanner characteristics into the process of iterative image reconstruction. The image-forming characteristics of the PET camera were characterized by a spatially variant line-spread function (LSF), which was determined from 49 activated copper-64 line sources positioned over a field of view (FOV) of 21.0 cm. During iterative image reconstruction, the forward projection of the estimated image was blurred with the LSF at each iteration step before the estimated projections were compared with the measured projections. Moreover, imaging studies of a rat and two nude mice were performed to evaluate the imaging properties of the approach in vivo. The spatial resolution of the scanner perpendicular to the direction of projection could be approximated by a one-dimensional Gaussian-shaped LSF with a full-width at half-maximum increasing from 6.5 mm at the centre to 6.7 mm at a radial distance of 10.5 cm. Incorporating this blurring kernel into the iteration formula resulted in a significantly improved spatial resolution of about 3.9 mm over the examined FOV. As demonstrated by the phantom and animal experiments, the high-resolution algorithm not only led to better contrast resolution in the reconstructed emission scans but also improved the accuracy of quantitating activity concentrations in small tissue structures, without amplifying image noise or mottle. The presented data-handling strategy incorporates the image restoration step directly into the process of algebraic image reconstruction and obviates the need for ill-conditioned "deconvolution" procedures performed on the projections or on the reconstructed image. In our experience, the proposed algorithm is of special interest in experimental studies of small animals. (orig./AJ). With 9 figs.

  2. Calibration of reconstruction parameters in atom probe tomography using a single crystallographic orientation

    International Nuclear Information System (INIS)

    Suram, Santosh K.; Rajan, Krishna

    2013-01-01

    The purpose of this work is to develop a methodology to estimate the APT reconstruction parameters when limited crystallographic information is available. Reliable spatial scaling of APT data currently requires identification of multiple crystallographic poles from the field desorption image for estimating the reconstruction parameters. This requirement limits the capacity of accurately reconstructing APT data for certain complex systems, such as highly alloyed systems and nanostructured materials wherein more than one pole is usually not observed within one grain. To overcome this limitation, we develop a quantitative methodology for calibrating the reconstruction parameters in an APT dataset by ensuring accurate inter-planar spacing and optimizing the curvature correction for the atomic planes corresponding to a single crystallographic orientation. We validate our approach on an aluminum dataset and further illustrate its capabilities by computing geometric reconstruction parameters for W and Al–Mg–Sc datasets. - Highlights: ► Quantitative approach is developed to accurately reconstruct APT data. ► Curvature of atomic planes in APT data is used to calibrate the reconstruction. ► APT reconstruction parameters are determined from a single crystallographic axis. ► Quantitative approach is demonstrated on W, Al and Al–Mg–Sc systems. ► Accurate APT reconstruction of complex materials is now possible
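    The spacing-based calibration reduces, in its simplest form, to choosing the depth-scale factor that makes the measured inter-planar spacing along one crystallographic direction match the known d-spacing. A one-line sketch with illustrative numbers (the published method additionally optimizes the curvature correction of the atomic planes):

```python
# Sketch: depth-scale correction from one measured inter-planar spacing.
# Numbers are illustrative; 0.2025 nm approximates the Al (002) d-spacing.

def calibrate_depth_scale(measured_spacing_nm: float, known_d_nm: float) -> float:
    """Multiplicative correction for the reconstructed depth coordinates."""
    return known_d_nm / measured_spacing_nm

scale = calibrate_depth_scale(0.180, 0.2025)
print(round(scale, 3))  # 1.125 -> stretch the depth axis by 12.5 %
```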

  3. Dual Super-Systolic Core for Real-Time Reconstructive Algorithms of High-Resolution Radar/SAR Imaging Systems

    Science.gov (United States)

    Atoche, Alejandro Castillo; Castillo, Javier Vázquez

    2012-01-01

    A high-speed dual super-systolic core for reconstructive signal processing (SP) operations consists of a double parallel systolic array (SA) machine in which each processing element of the array is also conceptualized as another SA in a bit-level fashion. In this study, we addressed the design of a high-speed dual super-systolic array (SSA) core for the enhancement/reconstruction of remote sensing (RS) imaging of radar/synthetic aperture radar (SAR) sensor systems. The selected reconstructive SP algorithms are efficiently transformed in their parallel representation and then, they are mapped into an efficient high performance embedded computing (HPEC) architecture in reconfigurable Xilinx field programmable gate array (FPGA) platforms. As an implementation test case, the proposed approach was aggregated in a HW/SW co-design scheme in order to solve the nonlinear ill-posed inverse problem of nonparametric estimation of the power spatial spectrum pattern (SSP) from a remotely sensed scene. We show how such dual SSA core, drastically reduces the computational load of complex RS regularization techniques achieving the required real-time operational mode. PMID:22736964

  4. Accurate and precise 40Ar/39Ar dating by high-resolution, multi-collection, mass spectrometry

    DEFF Research Database (Denmark)

    Storey, Michael; Rivera, Tiffany; Flude, Stephanie

    New generation, high resolution, multi-collector noble gas mass spectrometers equipped with ion-counting electron multipliers provide opportunities for improved accuracy and precision in 40Ar/39Ar dating. Here we report analytical protocols and age cross-calibration studies using a NU-Instruments multi-collector Noblesse noble gas mass spectrometer configured with a faraday detector and three ion-counting electron multipliers. The instrument has the capability to measure several noble gas isotopes simultaneously and to change measurement configurations instantaneously by the use of QUAD lenses (zoom optics). The Noblesse offers several advantages over previous generation noble gas mass spectrometers and is particularly suited for single crystal 40Ar/39Ar dating because of: (i) improved source sensitivity; (ii) ion-counting electron multipliers, which have much lower signal to noise ratios than...

  5. The Effects of High-Intensity versus Low-Intensity Resistance Training on Leg Extensor Power and Recovery of Knee Function after ACL-Reconstruction

    DEFF Research Database (Denmark)

    Bieler, Theresa; Sobol, Nanna Aue; Andersen, Lars L

    2014-01-01

    OBJECTIVE: Persistent weakness is a common problem after anterior cruciate ligament- (ACL-) reconstruction. This study investigated the effects of high-intensity (HRT) versus low-intensity (LRT) resistance training on leg extensor power and recovery of knee function after ACL-reconstruction. METH...

  6. Encoding negative events under stress: high subjective arousal is related to accurate emotional memory despite misinformation exposure.

    Science.gov (United States)

    Hoscheidt, Siobhan M; LaBar, Kevin S; Ryan, Lee; Jacobs, W Jake; Nadel, Lynn

    2014-07-01

    Stress at encoding affects memory processes, typically enhancing, or preserving, memory for emotional information. These effects have interesting implications for eyewitness accounts, which in real-world contexts typically involve encoding an aversive event under stressful conditions followed by potential exposure to misinformation. The present study investigated memory for a negative event encoded under stress and subsequent misinformation endorsement. Healthy young adults participated in a between-groups design with three experimental sessions conducted 48 h apart. Session one consisted of a psychosocial stress induction (or control task) followed by incidental encoding of a negative slideshow. During session two, participants were asked questions about the slideshow, during which a random subgroup was exposed to misinformation. Memory for the slideshow was tested during the third session. Assessment of memory accuracy across stress and no-stress groups revealed that stress induced just prior to encoding led to significantly better memory for the slideshow overall. The classic misinformation effect was also observed - participants exposed to misinformation were significantly more likely to endorse false information during memory testing. In the stress group, however, memory accuracy and misinformation effects were moderated by arousal experienced during encoding of the negative event. Misinformed-stress group participants who reported that the negative slideshow elicited high arousal during encoding were less likely to endorse misinformation for the most aversive phase of the story. Furthermore, these individuals showed better memory for components of the aversive slideshow phase that had been directly misinformed. 
Results from the current study provide evidence that stress and high subjective arousal elicited by a negative event act concomitantly during encoding to enhance emotional memory such that the most aversive aspects of the event are well remembered and

  7. An accurate calibration method for high pressure vibrating tube densimeters in the density interval (700 to 1600) kg·m-3

    International Nuclear Information System (INIS)

    Sanmamed, Yolanda A.; Dopazo-Paz, Ana; Gonzalez-Salgado, Diego; Troncoso, Jacobo; Romani, Luis

    2009-01-01

    A calibration procedure for vibrating tube densimeters for density measurement of liquids in the intervals (700 to 1600) kg·m-3, (283.15 to 323.15) K, and (0.1 to 60) MPa is presented. It is based on modelling the vibrating tube as a thick tube clamped at one end (cantilever) whose stress and thermal behaviour follows the ideas proposed in the Forced Path Mechanical Calibration (FPMC) model. Model parameters are determined using two calibration fluids with densities certified at atmospheric pressure (dodecane and tetrachloroethylene) and a third one with densities known as a function of pressure (water). The procedure is applied to the Anton Paar 512P densimeter, yielding density measurements with an expanded uncertainty of less than 0.2 kg·m-3 over the working intervals. This accuracy comes from the combination of several factors: the densimeter behaves linearly in the working density interval, the densities of both calibration fluids cover that interval and have very low uncertainty, and the mechanical behaviour of the tube is well characterized by the model. The main application of this method is the precise measurement of high-density fluids, for which most calibration procedures are inaccurate.
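    Behind any such procedure sits the basic vibrating-tube relation rho = A·tau² + B, with the two parameters fixed by two reference fluids at a given temperature and pressure. The sketch below shows only that two-point step with invented periods and densities; the FPMC model of the paper goes further and describes how the parameters vary with temperature and pressure:

```python
# Two-point calibration of the vibrating-tube relation rho = A*tau**2 + B.
# Periods (tau) and densities are invented illustrative values.

def calibrate(tau1, rho1, tau2, rho2):
    """Solve rho = A*tau**2 + B from two reference fluids."""
    A = (rho1 - rho2) / (tau1 ** 2 - tau2 ** 2)
    B = rho1 - A * tau1 ** 2
    return A, B

def density(tau, A, B):
    return A * tau ** 2 + B

A, B = calibrate(2.10, 997.0, 2.00, 746.0)   # 'water' and 'dodecane' points
print(round(density(2.05, A, B), 1))         # an unknown sample in between
```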

  8. Albuminuria and neck circumference are determinate factors of successful accurate estimation of glomerular filtration rate in high cardiovascular risk patients.

    Directory of Open Access Journals (Sweden)

    Po-Jen Hsiao

    Estimated glomerular filtration rate (eGFR) is used for the diagnosis of chronic kidney disease (CKD); eGFR models based on serum creatinine or cystatin C are the most widely used in clinical practice. Albuminuria and neck circumference are associated with CKD and may correlate with eGFR. We explored the correlations and modelling formulas among various indicators, such as serum creatinine, cystatin C, albuminuria, and neck circumference, for eGFR in a cross-sectional study. We reviewed the records of patients with high cardiovascular risk from 2010 to 2011 in Taiwan; 24-hour urine creatinine clearance was used as the standard. We utilized a decision tree to select variables and adopted a stepwise regression method to generate five models. Model 1 was based on serum creatinine alone and was adjusted for age and gender. Model 2 added serum cystatin C; models 3 and 4 added albuminuria and neck circumference, respectively. Model 5 simultaneously added both albuminuria and neck circumference. A total of 177 patients were recruited. In model 1, the bias was 2.01 and the precision was 14.04. In model 2, the bias was reduced to 1.86 with a precision of 13.48. The bias of model 3 was 1.49 with a precision of 12.89, and the bias of model 4 was 1.74 with a precision of 12.97. In model 5, the bias was further reduced to 1.40 with a precision of 12.53. In this study, the predictive ability of the eGFR models was improved by adding serum cystatin C compared to serum creatinine alone, and the bias was more markedly reduced by including albuminuria. Furthermore, the model combining albuminuria and neck circumference provided the best eGFR predictions among the five models. Neck circumference merits investigation in further studies.
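    Bias and precision figures of the kind quoted above are commonly computed as the mean and standard deviation of the estimated-minus-measured differences. A sketch with invented values (the paper may define precision slightly differently, e.g. via interquartile range):

```python
# Bias = mean(estimated - measured); precision = SD of the same differences.
from statistics import mean, stdev

def bias_precision(estimated, measured):
    diffs = [e - m for e, m in zip(estimated, measured)]
    return mean(diffs), stdev(diffs)

est  = [62.0, 55.0, 71.0, 48.0]   # hypothetical eGFR estimates
meas = [60.0, 54.0, 68.0, 49.0]   # hypothetical measured clearances
b, p = bias_precision(est, meas)
print(round(b, 2), round(p, 2))
```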

  9. Highly accurate nuclear and electronic stopping cross sections derived using Monte Carlo simulations to reproduce measured range data

    Science.gov (United States)

    Wittmaack, Klaus; Mutzke, Andreas

    2017-03-01

    We have examined and confirmed the previously unexplored concept of using Monte Carlo calculations in combination with measured projected ranges of ions implanted in solids to derive a quantitative description of nuclear interaction and electronic stopping. The study involved 98 ranges of 11B in Si between 1 keV and 8 MeV, contained in 12 sets from 10 different groups. Systematic errors of up to ±8% were removed to establish a refined database with 93 ranges featuring only statistical uncertainties (±1.8%). The Monte Carlo calculations could be set up to reproduce the refined ranges with a mean ratio of 1.002 ± 1.7%. The input parameters required for this very high level of agreement are as follows. Nuclear interaction is best described by the Kr-C potential, but in obligatory combination with the Lindhard-Scharff (LS) screening length. Up to 300 keV, the electronic stopping cross section is proportional to the projectile velocity, Se = kSe,LS, with k = 1.46 ± 0.01. At higher energies, Se falls progressively short of kSe,LS. Around the Bragg peak, i.e., between 0.8 and 10 MeV, Se is modeled by an adjustable function serving to tailor the peak shape properly. Calculated and measured isotope effects for ranges of 10B and 11B in Si agree within the experimental uncertainty (±0.25%). The range-based Se,R(E) reported here predicts the scarce experimental data derived from the energy loss in projectile transmission through thin Si foils to within 2% or better. By contrast, Se(E) data from available stopping power tables exhibit deviations from Se,R(E) of between -40% and +14%.
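The paper works backwards from measured ranges to stopping cross sections; the forward relation it inverts is the continuous-slowing-down range, R(E0) = ∫ dE / (Sn(E) + Se(E)). A toy sketch of that forward integral, with a velocity-proportional electronic term as described (Se ∝ √E) but entirely hypothetical prefactors and a placeholder nuclear curve rather than the actual Kr-C/Lindhard-Scharff forms:

```python
import numpy as np

def electronic_stopping(E, k=1.46):
    # Velocity-proportional regime: Se = k * Se_LS with Se_LS ~ sqrt(E).
    # The 0.5 prefactor is a made-up placeholder, not the real LS constant.
    return k * 0.5 * np.sqrt(E)

def nuclear_stopping(E):
    # Toy curve with the qualitative low-energy nuclear-stopping peak;
    # the paper instead uses the Kr-C potential with the LS screening length.
    return 3.0 * E / (1.0 + E) ** 2

def csda_range(E0, n=20000):
    # Continuous-slowing-down range: R(E0) = integral_0^E0 dE / (Sn + Se),
    # evaluated with the trapezoid rule.
    E = np.linspace(1e-6, E0, n)
    f = 1.0 / (nuclear_stopping(E) + electronic_stopping(E))
    return float(np.sum(0.5 * (f[:-1] + f[1:]) * np.diff(E)))
```

Fitting the stopping parameters so that ranges computed this way reproduce the measured ones is, schematically, what the Monte Carlo procedure above does with full collision kinematics.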

  10. A sparsity-based iterative algorithm for reconstruction of micro-CT images from highly undersampled projection datasets obtained with a synchrotron X-ray source

    Science.gov (United States)

    Melli, S. Ali; Wahid, Khan A.; Babyn, Paul; Cooper, David M. L.; Gopi, Varun P.

    2016-12-01

    Synchrotron X-ray Micro Computed Tomography (Micro-CT) is an imaging technique which is increasingly used for non-invasive in vivo preclinical imaging. However, it often requires a large number of projections from many different angles to reconstruct high-quality images, leading to high radiation doses and long scan times. To utilize this imaging technique further for in vivo imaging, we need to design reconstruction algorithms that reduce the radiation dose and scan time without degrading reconstructed image quality. This research is focused on using a combination of gradient-based Douglas-Rachford splitting and discrete wavelet packet shrinkage image denoising methods to design an algorithm for reconstruction of large-scale reduced-view synchrotron Micro-CT images with acceptable quality metrics. These quality metrics are computed by comparing the reconstructed images with a high-dose reference image reconstructed from 1800 equally spaced projections spanning 180°. Visual and quantitative performance assessment of a synthetic head phantom and a femoral cortical bone sample imaged in the biomedical imaging and therapy bending magnet beamline at the Canadian Light Source demonstrates that the proposed algorithm is superior to the existing reconstruction algorithms. Using the proposed reconstruction algorithm to reduce the number of projections in synchrotron Micro-CT is an effective way to reduce the overall radiation dose and scan time, which improves in vivo imaging protocols.
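The algorithm itself is a gradient-based Douglas-Rachford splitting with wavelet-packet shrinkage; as a minimal stand-in, the closely related proximal-gradient (ISTA) iteration on a toy underdetermined system, with plain soft-thresholding playing the role of the wavelet-packet denoiser (the projector, dimensions, and regularization weight are all illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "few projections" problem: m < n linear measurements of a sparse signal.
n, m, k = 120, 50, 6
A = rng.normal(size=(m, n)) / np.sqrt(m)   # stand-in for the CT projection operator
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = 5.0 * rng.normal(size=k)
y = A @ x_true

def soft(v, t):
    """Soft-thresholding: proximal operator of the l1 norm (here standing in
    for the paper's wavelet-packet shrinkage step)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, lam=0.02, iters=500):
    x = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the data-fit gradient
    for _ in range(iters):
        x = soft(x - (A.T @ (A @ x - y)) / L, lam / L)
    return x

x_hat = ista(A, y)
```

The real method differs in the splitting scheme and transform domain, but the alternation of a data-consistency gradient step with a sparsity-promoting shrinkage is the same structural idea.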

  11. Scanning SRXF analysis and isotopes of uranium series from bottom sediments of Siberian lakes for high-resolution climate reconstructions

    International Nuclear Information System (INIS)

    Goldberg, E.L.; Grachev, M.A.; Chebykin, E.P.; Phedorin, M.A.; Kalugin, I.A.; Khlystov, O.M.; Zolotarev, K.V.

    2005-01-01

    High-resolution scanning X-ray Fluorescence Analysis with Synchrotron Radiation (SRXFA) was applied to investigate the downcore distribution of elements in Lake Baikal and Lake Teletskoye. Physical modeling of river runoff taking into account the chemistry of U series isotopes and their concentrations in sediments allowed a decade-scale reconstruction of Holocene (0-11 ky) river input to Lake Baikal. Holocene moisture peaks in East Siberia are synchronous with abrupt spells in the Atlantic. The multi-element data from Lake Teletskoye were used to predict the function of geochemical response to climate change in plainland Altai and to reconstruct the trends of annual (winter) air temperatures and atmospheric precipitation for the past 500 years

  12. Energy Reconstruction in a High Granularity Semi-Digital Hadronic Calorimeter for ILC Experiments

    CERN Document Server

    Mannai, S; Cortina, E; Laktineh, I

    2016-01-01

    Abstract: The Semi-Digital Hadronic CALorimeter (SDHCAL) is one of the two hadronic calorimeter options proposed by the International Large Detector (ILD) project for the future International Linear Collider (ILC) experiments. It is a sampling calorimeter with 48 active layers made of Glass Resistive Plate Chambers (GRPCs) and their embedded electronics. A fine lateral segmentation is obtained thanks to pickup pads of 1 cm². This ensures the high granularity required for the application of the Particle Flow Algorithm (PFA) in order to improve the jet energy resolution in the ILC experiments. The performance of the SDHCAL technological prototype was tested successfully in several beam tests at CERN. The main point to be discussed here concerns the energy reconstruction in SDHCAL. Based on Monte Carlo simulation of the SDHCAL prototype using the GEANT4 package, we present different energy reconstruction methods to study the energy linearity and resolution of the detector response to single hadrons. In particula...

  13. Reconstruction of semileptonically decaying beauty hadrons produced in high energy pp collisions

    Energy Technology Data Exchange (ETDEWEB)

    Ciezarek, G. [Nikhef,Science Park 105, 1098 XG Amsterdam (Netherlands); Lupato, A. [INFN Padova,Viale dell’Università, 2, 35020 Legnaro PD (Italy); Rotondo, M. [Laboratori Nazionali di Frascati,Via Enrico Fermi, 40, 00044 Frascati RM (Italy); Vesterinen, M. [Physikalisches Institut Heidelberg,Klaus-Tschira-Gebäude, Im Neuenheimer Feld 226, 69120 Heidelberg (Germany)

    2017-02-06

    It is well known that in b-hadron decays with a single unreconstructible final state particle, the decay kinematics can be solved up to a quadratic ambiguity, without any knowledge of the b-hadron momentum. We present a method to infer the momenta of b-hadrons produced in hadron collider experiments using information from their reconstructed flight vectors. Our method is strictly agnostic to the decay itself, which implies that it can be validated with control samples of topologically similar decays to fully reconstructible final states. A multivariate regression algorithm based on the flight information provides a b-hadron momentum estimate with a resolution of around 60%, which is sufficient to select the correct solution to the quadratic equation in around 70% of cases. This will improve the ability of hadron collider experiments to make differential decay rate measurements with semileptonic b-hadron decays.
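The quadratic ambiguity in the opening sentence can be made concrete. With the b-hadron momentum constrained to lie along the reconstructed flight direction u, the conservation constraint (p_B − p_vis)² = m_miss² reduces to a quadratic in |p_B|. The sketch below (our notation, not the paper's; the paper's actual contribution is the regression that picks between the roots) returns both candidate magnitudes:

```python
import numpy as np

def b_momentum_candidates(p_vis, m_vis, m_B, m_miss, u):
    """Solve |p_B| from (p_B - p_vis)^2 = m_miss^2 with p_B along the unit
    flight vector u.  Returns the two quadratic roots (the ambiguity)."""
    E_vis = np.sqrt(m_vis**2 + p_vis @ p_vis)
    c = u @ p_vis                        # visible momentum along the flight line
    D = m_B**2 + m_vis**2 - m_miss**2
    # Squaring 2*E_vis*sqrt(a^2 + m_B^2) = D + 2*a*c gives the quadratic:
    a2 = 4.0 * (E_vis**2 - c**2)
    a1 = -4.0 * D * c
    a0 = 4.0 * E_vis**2 * m_B**2 - D**2
    disc = a1**2 - 4.0 * a2 * a0
    if disc < 0.0:                       # resolution effects can push disc < 0
        disc = 0.0
    r = np.sqrt(disc)
    return (-a1 - r) / (2.0 * a2), (-a1 + r) / (2.0 * a2)
```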

  14. An improved energy-range relationship for high-energy electron beams based on multiple accurate experimental and Monte Carlo data sets

    International Nuclear Information System (INIS)

    Sorcini, B.B.; Andreo, P.; Hyoedynmaa, S.; Brahme, A.; Bielajew, A.F.

    1995-01-01

    A theoretically based analytical energy-range relationship has been developed and calibrated against well established experimental and Monte Carlo calculated energy-range data. Only published experimental data with a clear statement of accuracy and method of evaluation have been used. Besides published experimental range data for different uniform media, new accurate experimental data on the practical range of high-energy electron beams in water for the energy range 10-50 MeV from accurately calibrated racetrack microtrons have been used. Largely due to the simultaneous pooling of accurate experimental and Monte Carlo data for different materials, the fit has resulted in an increased accuracy of the resultant energy-range relationship, particularly at high energies. Up-to-date Monte Carlo data from the latest versions of the codes ITS3 and EGS4 for absorbers of atomic numbers between 4 and 92 (Be, C, H2O, PMMA, Al, Cu, Ag, Pb and U) and incident electron energies between 1 and 100 MeV have been used as a complement where experimental data are sparse or missing. The standard deviation of the experimental data relative to the new relation is slightly larger than that of the Monte Carlo data. This is partly due to the fact that theoretically based stopping and scattering cross-sections are used both to account for the material dependence of the analytical energy-range formula and to calculate ranges with the Monte Carlo programs. For water the deviation from the traditional energy-range relation of ICRU Report 35 is only 0.5% at 20 MeV but as large as -2.2% at 50 MeV. An improved method for divergence and ionization correction in high-energy electron beams has also been developed to enable use of a wider range of experimental results. (Author)

  15. Fermi-surface reconstruction and the origin of high-temperature superconductivity

    International Nuclear Information System (INIS)

    Norman, M.R.

    2010-01-01

    lattice into a d9 configuration, with one localized hole in the 3d shell per copper site. Given the localized nature of this state, it was questioned whether a momentum-space picture was an appropriate description of the physics of the cuprates. In fact, this question relates to a long-standing debate in the physics community: Since the parent state is also an antiferromagnet, one can, in principle, map the Mott insulator to a band insulator with magnetic order. In this 'Slater' picture, Mott physics is less relevant than the magnetism itself. It is therefore unclear which of the two, magnetism or Mott physics, is more fundamentally tied to superconductivity in the cuprates. After twenty years of effort, definitive quantum oscillations that could be used to map the Fermi surface were finally observed in a high-temperature cuprate superconductor in 2007. This and subsequent studies reveal a profound rearrangement of the Fermi surface in underdoped cuprates. The cause of the reconstruction, and its implication for the origin of high-temperature superconductivity, is a subject of active debate.

  16. Identification and accurate quantification of structurally related peptide impurities in synthetic human C-peptide by liquid chromatography-high resolution mass spectrometry.

    Science.gov (United States)

    Li, Ming; Josephs, Ralf D; Daireaux, Adeline; Choteau, Tiphaine; Westwood, Steven; Wielgosz, Robert I; Li, Hongmei

    2018-06-04

    Peptides are an increasingly important group of biomarkers and pharmaceuticals. The accurate purity characterization of peptide calibrators is critical for the development of reference measurement systems for laboratory medicine and quality control of pharmaceuticals. The peptides used for these purposes are increasingly produced through peptide synthesis. Various approaches (for example mass balance, amino acid analysis, qNMR, and nitrogen determination) can be applied to accurately value assign the purity of peptide calibrators. However, all purity assessment approaches require a correction for structurally related peptide impurities in order to avoid biases. Liquid chromatography coupled to high resolution mass spectrometry (LC-hrMS) has become the key technique for the identification and accurate quantification of structurally related peptide impurities in intact peptide calibrator materials. In this study, LC-hrMS-based methods were developed and validated in-house for the identification and quantification of structurally related peptide impurities in a synthetic human C-peptide (hCP) material, which served as a study material for an international comparison looking at the competencies of laboratories to perform peptide purity mass fraction assignments. More than 65 impurities were identified, confirmed, and accurately quantified by using LC-hrMS. The total mass fraction of all structurally related peptide impurities in the hCP study material was estimated to be 83.3 mg/g with an associated expanded uncertainty of 3.0 mg/g (k = 2). The calibration hierarchy concept used for the quantification of individual impurities is described in detail.

  17. Reconstructing Global-scale Ionospheric Outflow With a Satellite Constellation

    Science.gov (United States)

    Liemohn, M. W.; Welling, D. T.; Jahn, J. M.; Valek, P. W.; Elliott, H. A.; Ilie, R.; Khazanov, G. V.; Glocer, A.; Ganushkina, N. Y.; Zou, S.

    2017-12-01

    The question of how many satellites it would take to accurately map the spatial distribution of ionospheric outflow is addressed in this study. Given an outflow spatial map, this image is then reconstructed from a limited number of virtual satellite pass extractions from the original values. An assessment is conducted of the goodness of fit as a function of number of satellites in the reconstruction, placement of the satellite trajectories relative to the polar cap and auroral oval, season and universal time (i.e., dipole tilt relative to the Sun), geomagnetic activity level, and interpolation technique. It is found that the accuracy of the reconstructions increases sharply from one to a few satellites, but then improves only marginally with additional spacecraft beyond four. Increased dwell time of the satellite trajectories in the auroral zone improves the reconstruction; therefore a high-but-not-exactly-polar orbit is most effective for this task. Local time coverage is also an important factor, shifting the auroral zone to different locations relative to the virtual satellite orbit paths. The expansion and contraction of the polar cap and auroral zone with geomagnetic activity influences the coverage of the key outflow regions, with different optimal orbit configurations for each level of activity. Finally, it is found that reconstructing each magnetic latitude band individually produces a better fit to the original image than a 2-D image reconstruction method (e.g., triangulation). A high-latitude, high-altitude constellation mission concept is presented that achieves acceptably accurate outflow reconstructions.
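The finding that per-latitude-band reconstruction beats 2-D triangulation can be mimicked on a toy outflow map: sample a synthetic (latitude × magnetic local time) field along a few fixed orbit planes, rebuild each band by periodic 1-D interpolation in local time, and compare against a naive fill. Everything below is synthetic; the study itself extracted virtual passes from model output:

```python
import numpy as np

# Synthetic outflow map on a (latitude band) x (magnetic local time) grid.
n_lat, n_mlt = 10, 24
mlt = np.arange(n_mlt)
truth = np.array([(1 + i / n_lat) * np.cos(2 * np.pi * mlt / n_mlt)
                  for i in range(n_lat)])

# Four virtual satellites in fixed orbit planes sample four MLT columns.
cols = np.array([0, 6, 12, 18])
samples = truth[:, cols]

# Reconstruct each latitude band separately by periodic interpolation in MLT
# (the banded 1-D approach the study found superior to 2-D triangulation).
recon = np.empty_like(truth)
for i in range(n_lat):
    xp = np.concatenate([cols, [cols[0] + n_mlt]])   # wrap around 24 h
    fp = np.concatenate([samples[i], [samples[i, 0]]])
    recon[i] = np.interp(mlt, xp, fp)

band_err = np.abs(recon - truth).mean()
fill_err = np.abs(truth - samples.mean(axis=1, keepdims=True)).mean()
```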

  18. Accurate CpG and non-CpG cytosine methylation analysis by high-throughput locus-specific pyrosequencing in plants.

    Science.gov (United States)

    How-Kit, Alexandre; Daunay, Antoine; Mazaleyrat, Nicolas; Busato, Florence; Daviaud, Christian; Teyssier, Emeline; Deleuze, Jean-François; Gallusci, Philippe; Tost, Jörg

    2015-07-01

    Pyrosequencing permits accurate quantification of DNA methylation of specific regions where the proportions of the C/T polymorphism induced by sodium bisulfite treatment of DNA reflect the DNA methylation level. The commercially available high-throughput locus-specific pyrosequencing instruments allow for the simultaneous analysis of 96 samples, but restrict the DNA methylation analysis to CpG dinucleotide sites, which can be limiting in many biological systems. In contrast to mammals, where DNA methylation occurs nearly exclusively on CpG dinucleotides, plant genomes harbor DNA methylation also in other sequence contexts including CHG and CHH motives, which cannot be evaluated by these pyrosequencing instruments due to software limitations. Here, we present a complete pipeline for accurate CpG and non-CpG cytosine methylation analysis at single-base resolution using high-throughput locus-specific pyrosequencing. The devised approach includes the design and validation of PCR amplification on bisulfite-treated DNA and pyrosequencing assays as well as the quantification of the methylation level at every cytosine from the raw peak intensities of the Pyrograms by two newly developed Visual Basic Applications. Our method presents accurate and reproducible results as exemplified by the cytosine methylation analysis of the promoter regions of two tomato genes (NOR and CNR) encoding transcription regulators of fruit ripening during different stages of fruit development. Our results confirmed a significant and temporally coordinated loss of DNA methylation on specific cytosines during the early stages of fruit development in both promoters, as previously shown by WGBS. The manuscript thus describes the first high-throughput locus-specific DNA methylation analysis in plants using pyrosequencing.
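The quantification step reduces, per cytosine, to a C/(C+T) peak-intensity ratio after bisulfite conversion: unconverted (methylated) cytosines read as C, converted ones as T. A minimal sketch of that ratio (the paper's Visual Basic applications additionally handle raw Pyrogram parsing, which is omitted here):

```python
def methylation_level(c_peak, t_peak):
    """Per-cytosine methylation fraction from bisulfite pyrosequencing peak
    heights: C peak = methylated, T peak = unmethylated after conversion."""
    total = c_peak + t_peak
    if total == 0:
        raise ValueError("no signal at this position")
    return c_peak / total

# Example: a cytosine with raw peak intensities 32 (C) and 68 (T) is 32% methylated.
print(f"{methylation_level(32, 68):.0%}")
```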

  19. Accurate Evaluation of Quantum Integrals

    Science.gov (United States)

    Galant, D. C.; Goorvitch, D.; Witteborn, Fred C. (Technical Monitor)

    1995-01-01

    Combining an appropriate finite difference method with Richardson's extrapolation results in a simple, highly accurate numerical method for solving the Schrödinger equation. Important results are that error estimates are provided, and that one can extrapolate expectation values rather than the wavefunctions to obtain highly accurate expectation values. We discuss the eigenvalues and the error growth in repeated Richardson's extrapolation, and show that expectation values calculated on a crude mesh can be extrapolated to high accuracy.
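The scheme described, finite differences plus Richardson's extrapolation, is easy to reproduce for a case with a known answer: the harmonic-oscillator ground state (exact energy 0.5 in units with ħ = m = ω = 1; this example is ours, not from the paper). The O(h²) eigenvalue error of the 3-point scheme cancels in the combination (4E(h/2) − E(h))/3:

```python
import numpy as np

def ground_energy(n):
    # 3-point finite-difference ground state of H = -1/2 d^2/dx^2 + x^2/2
    # on [-8, 8]; the discretization error is O(h^2).
    x = np.linspace(-8.0, 8.0, n)
    h = x[1] - x[0]
    H = (np.diag(1.0 / h**2 + 0.5 * x**2)
         + np.diag(-0.5 / h**2 * np.ones(n - 1), 1)
         + np.diag(-0.5 / h**2 * np.ones(n - 1), -1))
    return np.linalg.eigvalsh(H)[0]

e_h = ground_energy(161)           # h = 0.10
e_h2 = ground_energy(321)          # h = 0.05
e_rich = (4 * e_h2 - e_h) / 3.0    # Richardson: cancels the O(h^2) term

print(e_h, e_h2, e_rich)           # e_rich lands far closer to the exact 0.5
```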

  20. Highly accurate bound state calculations of the two-center molecular ions by using the universal variational expansion for three-body systems

    Science.gov (United States)

    Frolov, Alexei M.

    2018-03-01

    The universal variational expansion for non-relativistic three-body systems is explicitly constructed. This universal expansion can be used to perform highly accurate numerical computations of the bound state spectra of various three-body systems, including Coulomb three-body systems with arbitrary particle masses and electric charges. Our main interest lies in adiabatic three-body systems which contain one bound electron and two heavy nuclei of hydrogen isotopes: protium (p), deuterium (d), and tritium (t). We also consider the analogous (model) hydrogen ion ∞H2+ with two infinitely heavy nuclei.

  1. Prediction of collision cross section and retention time for broad scope screening in gradient reversed-phase liquid chromatography-ion mobility-high resolution accurate mass spectrometry

    DEFF Research Database (Denmark)

    Mollerup, Christian Brinch; Mardal, Marie; Dalsgaard, Petur Weihe

    2018-01-01

    Exact mass, retention time (RT), and collision cross section (CCS) are used as identification parameters in liquid chromatography coupled to ion mobility high resolution accurate mass spectrometry (LC-IM-HRMS). Targeted screening analyses are now more flexible and can be expanded for suspect... ...artificial neural networks (ANNs). Prediction was based on molecular descriptors, 827 RTs, and 357 CCS values from pharmaceuticals, drugs of abuse, and their metabolites. ANN models for the prediction of RT or CCS separately were examined, as was the potential to predict both from a single model...

  2. NEW FERMI-LAT EVENT RECONSTRUCTION REVEALS MORE HIGH-ENERGY GAMMA RAYS FROM GAMMA-RAY BURSTS

    Energy Technology Data Exchange (ETDEWEB)

    Atwood, W. B. [Santa Cruz Institute for Particle Physics, Department of Physics and Department of Astronomy and Astrophysics, University of California at Santa Cruz, Santa Cruz, CA 95064 (United States); Baldini, L. [Universita di Pisa and Istituto Nazionale di Fisica Nucleare, Sezione di Pisa, I-56127 Pisa (Italy); Bregeon, J.; Pesce-Rollins, M.; Sgro, C.; Tinivella, M. [Istituto Nazionale di Fisica Nucleare, Sezione di Pisa, I-56127 Pisa (Italy); Bruel, P. [Laboratoire Leprince-Ringuet, Ecole polytechnique, CNRS/IN2P3, Palaiseau (France); Chekhtman, A. [Center for Earth Observing and Space Research, College of Science, George Mason University, Fairfax, VA 22030 (United States); Cohen-Tanugi, J. [Laboratoire Univers et Particules de Montpellier, Universite Montpellier 2, CNRS/IN2P3, F-34095 Montpellier (France); Drlica-Wagner, A.; Omodei, N.; Rochester, L. S.; Usher, T. L. [W. W. Hansen Experimental Physics Laboratory, Kavli Institute for Particle Astrophysics and Cosmology, Department of Physics and SLAC National Accelerator Laboratory, Stanford University, Stanford, CA 94305 (United States); Granot, J. [Department of Natural Sciences, The Open University of Israel, 1 University Road, P.O. Box 808, Ra' anana 43537 (Israel); Longo, F. [Istituto Nazionale di Fisica Nucleare, Sezione di Trieste, I-34127 Trieste (Italy); Razzaque, S. [Department of Physics, University of Johannesburg, Auckland Park 2006 (South Africa); Zimmer, S., E-mail: melissa.pesce.rollins@pi.infn.it, E-mail: nicola.omodei@stanford.edu, E-mail: granot@openu.ac.il [Department of Physics, Stockholm University, AlbaNova, SE-106 91 Stockholm (Sweden)

    2013-09-01

    Based on the experience gained during the four and a half years of the mission, the Fermi-LAT Collaboration has undertaken a comprehensive revision of the event-level analysis going under the name of Pass 8. Although it is not yet finalized, we can test the improvements in the new event reconstruction with the special case of the prompt phase of bright gamma-ray bursts (GRBs), where the signal-to-noise ratio is large enough that loose selection cuts are sufficient to identify gamma rays associated with the source. Using the new event reconstruction, we have re-analyzed 10 GRBs previously detected by the Large Area Telescope (LAT) for which an X-ray/optical follow-up was possible and found four new gamma rays with energies greater than 10 GeV in addition to the seven previously known. Among these four is a 27.4 GeV gamma ray from GRB 080916C, which has a redshift of 4.35, thus making it the gamma ray with the highest intrinsic energy (≈147 GeV) detected from a GRB. We present here the salient aspects of the new event reconstruction and discuss the scientific implications of these new high-energy gamma rays, such as constraining extragalactic background light models, Lorentz invariance violation tests, the prompt emission mechanism, and the bulk Lorentz factor of the emitting region.

  3. Three-Dimensional Reconstruction of Cloud-to-Ground Lightning Using High-Speed Video and VHF Broadband Interferometer

    Science.gov (United States)

    Li, Yun; Qiu, Shi; Shi, Lihua; Huang, Zhengyu; Wang, Tao; Duan, Yantao

    2017-12-01

    The time resolved three-dimensional (3-D) spatial reconstruction of lightning channels using high-speed video (HSV) images and VHF broadband interferometer (BITF) data is first presented in this paper. Because VHF and optical radiation in the step formation process occur with a time separation of no more than 1 μs, the observation data of BITF and HSV at two different sites make it possible to reconstruct the time resolved 3-D channel of lightning. With the proposed procedures for 3-D reconstruction of leader channels, dart leaders as well as stepped leaders with complex multiple branches can be well reconstructed. The differences between 2-D and 3-D speeds of leader channels are analyzed by comparing the development of leader channels in 2-D and 3-D space. Since a return stroke (RS) usually follows the path of the previous leader channels, the 3-D speeds of the return strokes are first estimated by combining the 3-D structure of the preceding leaders with HSV image sequences. For the fourth RS, the ratios of the 3-D to 2-D RS speeds increase with height, and the largest ratio of the 3-D to 2-D return stroke speeds can reach 2.03, which is larger than the result for triggered lightning reported by Idone. Since BITF can detect lightning radiation in a 360° view, correlated BITF and HSV observations yield a higher 3-D detection probability than dual-station HSV observations, which is helpful for obtaining more events and a deeper understanding of the lightning process.
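Reconstructing a 3-D channel point from two sites amounts to triangulating two sight rays (an azimuth/elevation pair at each station). A generic closest-approach sketch of that geometric core, not the authors' specific procedure:

```python
import numpy as np

def los_unit(az_deg, el_deg):
    """Unit line-of-sight vector from azimuth (from +x toward +y) and elevation."""
    az, el = np.radians(az_deg), np.radians(el_deg)
    return np.array([np.cos(el) * np.cos(az),
                     np.cos(el) * np.sin(az),
                     np.sin(el)])

def triangulate(p1, d1, p2, d2):
    """Midpoint of the shortest segment between two sight rays p_i + t_i*d_i.
    At the closest approach, the connecting segment is orthogonal to both rays."""
    w = p2 - p1
    A = np.array([[d1 @ d1, -(d1 @ d2)],
                  [d1 @ d2, -(d2 @ d2)]])
    b = np.array([w @ d1, w @ d2])
    t1, t2 = np.linalg.solve(A, b)
    return 0.5 * ((p1 + t1 * d1) + (p2 + t2 * d2))
```

With real data, the residual distance between the two rays also serves as a consistency check on the HSV/BITF time pairing.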

  4. A Fiji multi-coral δ18O composite approach to obtaining a more accurate reconstruction of the last two-centuries of the ocean-climate variability in the South Pacific Convergence Zone region

    Science.gov (United States)

    Dassié, Emilie P.; Linsley, Braddock K.; Corrège, Thierry; Wu, Henry C.; Lemley, Gavin M.; Howe, Steve; Cabioch, Guy

    2014-12-01

    The limited availability of oceanographic data in the tropical Pacific Ocean prior to the satellite era makes coral-based climate reconstructions a key tool for extending the instrumental record back in time, thereby providing a much needed test for climate models and projections. We have generated a unique regional network consisting of five Porites coral δ18O time series from different locations in the Fijian archipelago. Our results indicate that using a minimum of three Porites coral δ18O records from Fiji is statistically sufficient to obtain a reliable signal for climate reconstruction, and that application of an approach used in tree ring studies is a suitable tool to determine this number. The coral δ18O composite indicates that while sea surface temperature (SST) variability is the primary driver of seasonal δ18O variability in these Fiji corals, annual average coral δ18O is more closely correlated to sea surface salinity (SSS), as previously reported. Our results highlight the importance of water mass advection in controlling Fiji coral δ18O and salinity variability at interannual and decadal time scales despite being located in the heavy rainfall region of the South Pacific Convergence Zone (SPCZ). The Fiji δ18O composite presents a secular freshening and warming trend since the 1850s coupled with changes in both interannual (IA) and decadal/interdecadal (D/I) variance. The changes in IA and D/I variance suggest a re-organization of climatic variability in the SPCZ region, beginning in the late 1800s, toward a period of more dominant interannual variability, which could correspond to a southeast expansion of the SPCZ.
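The "approach used in tree ring studies" for deciding how many records suffice is presumably the Expressed Population Signal (EPS) of Wigley et al. (1984). With a hypothetical mean inter-series correlation of 0.7 among the coral records (our illustrative value, not the paper's), three cores already clear the customary 0.85 threshold, consistent with the abstract's minimum of three:

```python
def eps(n, rbar):
    """Expressed Population Signal: how well a mean of n series with mean
    inter-series correlation rbar tracks the hypothetical population signal."""
    return n * rbar / (n * rbar + (1.0 - rbar))

# EPS as a function of the number of coral records, for rbar = 0.7 (hypothetical):
for n in range(1, 6):
    print(n, round(eps(n, 0.7), 3))
```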

  5. Real-time cardiovascular magnetic resonance at high temporal resolution: radial FLASH with nonlinear inverse reconstruction

    Directory of Open Access Journals (Sweden)

    Merboldt Klaus-Dietmar

    2010-07-01

    Full Text Available Abstract Background Functional assessments of the heart by dynamic cardiovascular magnetic resonance (CMR) commonly rely on (i) electrocardiographic (ECG) gating yielding pseudo real-time cine representations, (ii) balanced gradient-echo sequences referred to as steady-state free precession (SSFP), and (iii) breath holding or respiratory gating. Problems may therefore be due to the need for a robust ECG signal, the occurrence of arrhythmia and beat to beat variations, technical instabilities (e.g., SSFP "banding" artefacts), and limited patient compliance and comfort. Here we describe a new approach providing true real-time CMR with image acquisition times as short as 20 to 30 ms or rates of 30 to 50 frames per second. Methods The approach relies on a previously developed real-time MR method, which combines a strongly undersampled radial FLASH CMR sequence with image reconstruction by regularized nonlinear inversion. While iterative reconstructions are currently performed offline due to limited computer speed, online monitoring during scanning is accomplished using gridding reconstructions with a sliding window at the same frame rate but with lower image quality. Results Scans of healthy young subjects were performed at 3 T without ECG gating and during free breathing. The resulting images yield T1 contrast (depending on flip angle) with an opposed-phase or in-phase condition for water and fat signals (depending on echo time). They completely avoid (i) susceptibility-induced artefacts due to the very short echo times, (ii) radiofrequency power limitations due to excitations with flip angles of 10° or less, and (iii) the risk of peripheral nerve stimulation due to the use of normal gradient switching modes. For a section thickness of 8 mm, real-time images offer a spatial resolution and total acquisition time of 1.5 mm at 30 ms and 2.0 mm at 22 ms, respectively. Conclusions Though awaiting thorough clinical evaluation, this work describes a robust and

  6. Real-time cardiovascular magnetic resonance at high temporal resolution: radial FLASH with nonlinear inverse reconstruction.

    Science.gov (United States)

    Zhang, Shuo; Uecker, Martin; Voit, Dirk; Merboldt, Klaus-Dietmar; Frahm, Jens

    2010-07-08

    Functional assessments of the heart by dynamic cardiovascular magnetic resonance (CMR) commonly rely on (i) electrocardiographic (ECG) gating yielding pseudo real-time cine representations, (ii) balanced gradient-echo sequences referred to as steady-state free precession (SSFP), and (iii) breath holding or respiratory gating. Problems may therefore be due to the need for a robust ECG signal, the occurrence of arrhythmia and beat to beat variations, technical instabilities (e.g., SSFP "banding" artefacts), and limited patient compliance and comfort. Here we describe a new approach providing true real-time CMR with image acquisition times as short as 20 to 30 ms or rates of 30 to 50 frames per second. The approach relies on a previously developed real-time MR method, which combines a strongly undersampled radial FLASH CMR sequence with image reconstruction by regularized nonlinear inversion. While iterative reconstructions are currently performed offline due to limited computer speed, online monitoring during scanning is accomplished using gridding reconstructions with a sliding window at the same frame rate but with lower image quality. Scans of healthy young subjects were performed at 3 T without ECG gating and during free breathing. The resulting images yield T1 contrast (depending on flip angle) with an opposed-phase or in-phase condition for water and fat signals (depending on echo time). They completely avoid (i) susceptibility-induced artefacts due to the very short echo times, (ii) radiofrequency power limitations due to excitations with flip angles of 10 degrees or less, and (iii) the risk of peripheral nerve stimulation due to the use of normal gradient switching modes. For a section thickness of 8 mm, real-time images offer a spatial resolution and total acquisition time of 1.5 mm at 30 ms and 2.0 mm at 22 ms, respectively. 
Though awaiting thorough clinical evaluation, this work describes a robust and flexible acquisition and reconstruction technique for

  7. Method for high precision reconstruction of air shower Xmax using two-dimensional radio intensity profiles

    NARCIS (Netherlands)

    Buitink, S.; Corstanje, A.; Enriquez, J. E.; Falcke, H.; Hörandel, J. R.; Huege, T.; Nelles, A.; Rachen, J. P.; Schellart, P.; Scholten, O.; ter Veen, S.; Thoudam, S.; Trinh, Gia

    2014-01-01

    The mass composition of cosmic rays contains important clues about their origin. Accurate measurements are needed to resolve longstanding issues such as the transition from Galactic to extra-Galactic origin and the nature of the cutoff observed at the highest energies. Composition can be studied by

  8. Accurate evaluation of subband structure in a carrier accumulation layer at an n-type InAs surface: LDF calculation combined with high-resolution photoelectron spectroscopy

    Directory of Open Access Journals (Sweden)

    Takeshi Inaoka

    2012-12-01

    Full Text Available Adsorption on an n-type InAs surface often induces a gradual formation of a carrier-accumulation layer at the surface. By means of high-resolution photoelectron spectroscopy (PES), Betti et al. made a systematic observation of subbands in the accumulation layer in the formation process. Incorporating a highly nonparabolic (NP) dispersion of the conduction band into the local-density-functional (LDF) formalism, we examine the subband structure in the accumulation-layer formation process. Combining the LDF calculation with the PES experiment, we make an accurate evaluation of the accumulated-carrier density, the subband-edge energies, and the subband energy dispersion at each formation stage. Our theoretical calculation can reproduce the three observed subbands quantitatively. The subband dispersion, which deviates downward from that of the projected bulk conduction band with an increase in wave number, becomes significantly weaker in the formation process. Accurate evaluation of the NP subband dispersion at each formation stage is indispensable in making a quantitative analysis of collective electronic excitations and transport properties in the subbands.

  9. A colorimetric method for highly sensitive and accurate detection of iodide by finding the critical color in a color change process using silver triangular nanoplates

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Xiu-Hua; Ling, Jian, E-mail: lingjian@ynu.edu.cn; Peng, Jun; Cao, Qiu-E., E-mail: qecao@ynu.edu.cn; Ding, Zhong-Tao; Bian, Long-Chun

    2013-10-10

    Graphical abstract: -- Highlights: •Demonstrated a new colorimetric strategy for iodide detection by silver nanoplates. •The colorimetric strategy is to find the critical color in a color change process. •The colorimetric strategy is more accurate and sensitive than common colorimetry. •Discovered a new morphological transformation phenomenon of silver nanoplates. -- Abstract: In this contribution, we demonstrate a novel colorimetric method for highly sensitive and accurate detection of iodide using citrate-stabilized silver triangular nanoplates (silver TNPs). Very low concentrations of iodide can induce an appreciable color change of the silver TNP solution from blue to yellow through the fusion of silver TNPs into nanoparticles, as confirmed by UV–vis absorption spectroscopy and transmission electron microscopy (TEM). The principle of this colorimetric assay is not ordinary colorimetry but a new colorimetric strategy of finding the critical color in a color change process. With this strategy, 0.1 μM iodide can be recognized within 30 min by naked-eye observation, and concentrations down to 8.8 nM can be detected using a spectrophotometer. Furthermore, this highly sensitive colorimetric assay has good accuracy, stability and reproducibility compared with ordinary colorimetry. We believe this new colorimetric method opens up fresh insight into simple, rapid and reliable detection of iodide and can find future application in biochemical analysis or clinical diagnosis.

  10. A colorimetric method for highly sensitive and accurate detection of iodide by finding the critical color in a color change process using silver triangular nanoplates

    International Nuclear Information System (INIS)

    Yang, Xiu-Hua; Ling, Jian; Peng, Jun; Cao, Qiu-E.; Ding, Zhong-Tao; Bian, Long-Chun

    2013-01-01

    Graphical abstract: -- Highlights: •Demonstrated a new colorimetric strategy for iodide detection by silver nanoplates. •The colorimetric strategy is to find the critical color in a color change process. •The colorimetric strategy is more accurate and sensitive than common colorimetry. •Discovered a new morphological transformation phenomenon of silver nanoplates. -- Abstract: In this contribution, we demonstrate a novel colorimetric method for highly sensitive and accurate detection of iodide using citrate-stabilized silver triangular nanoplates (silver TNPs). Very low concentrations of iodide can induce an appreciable color change of the silver TNP solution from blue to yellow through the fusion of silver TNPs into nanoparticles, as confirmed by UV–vis absorption spectroscopy and transmission electron microscopy (TEM). The principle of this colorimetric assay is not ordinary colorimetry but a new colorimetric strategy of finding the critical color in a color change process. With this strategy, 0.1 μM iodide can be recognized within 30 min by naked-eye observation, and concentrations down to 8.8 nM can be detected using a spectrophotometer. Furthermore, this highly sensitive colorimetric assay has good accuracy, stability and reproducibility compared with ordinary colorimetry. We believe this new colorimetric method opens up fresh insight into simple, rapid and reliable detection of iodide and can find future application in biochemical analysis or clinical diagnosis.

  11. The AOLI Non-Linear Curvature Wavefront Sensor: High sensitivity reconstruction for low-order AO

    Science.gov (United States)

    Crass, Jonathan; King, David; Mackay, Craig

    2013-12-01

    Many adaptive optics (AO) systems in use today require bright reference objects to determine the effects of atmospheric distortions on incoming wavefronts. This requirement arises because Shack-Hartmann wavefront sensors (SHWFS) distribute incoming light from reference objects into a large number of sub-apertures. Bright natural reference objects occur infrequently across the sky, leading to the use of laser guide stars, which add complexity to wavefront measurement systems. The non-linear curvature wavefront sensor as described by Guyon et al. has been shown to offer a significant increase in sensitivity compared to a SHWFS. This facilitates much greater sky coverage using natural guide stars alone. This paper describes the current status of the non-linear curvature wavefront sensor being developed as part of an adaptive optics system for the Adaptive Optics Lucky Imager (AOLI) project. The sensor comprises two photon-counting EMCCD detectors from E2V Technologies, recording intensity at four near-pupil planes. These images are used with a reconstruction algorithm to determine the phase correction to be applied by an ALPAO 241-element deformable mirror. The overall system is intended to provide low-order correction for a Lucky Imaging-based multi-CCD imaging camera. We present the current optical design of the instrument, including methods to minimise inherent optical effects, principally chromaticity. Wavefront reconstruction methods are discussed, and strategies for their optimisation to run at the required real-time speeds are introduced. Finally, we discuss laboratory work with a demonstrator setup of the system.

  12. Fast reconstruction of high-qubit-number quantum states via low-rate measurements

    Science.gov (United States)

    Li, K.; Zhang, J.; Cong, S.

    2017-07-01

    Due to the exponential complexity of the resources required by quantum state tomography (QST), there is interest in approaches to identifying quantum states that require less effort and time. In this paper, we provide a tailored and efficient method for reconstructing mixed quantum states of up to 12 (or even more) qubits from an incomplete set of observables subject to noise. Our method is applicable to any pure or nearly pure state ρ and can be extended to many states of interest in quantum information processing, such as multiparticle entangled W states, Greenberger-Horne-Zeilinger states, and cluster states that are matrix product operators of low dimension. The method applies the quantum density-matrix constraints to a quantum compressive sensing optimization problem and exploits a modified quantum alternating direction method of multipliers (quantum-ADMM) to accelerate convergence. Our algorithm takes 8, 35, and 226 seconds, respectively, to reconstruct superposition-state density matrices of 10, 11, and 12 qubits with acceptable fidelity, using less than 1% of the expectation measurements. To our knowledge, this is the fastest such reconstruction achievable on a normal desktop computer. We further discuss applications of this method using experimental data for mixed states obtained in an ion-trap experiment with up to 8 qubits.
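
    The physicality constraints the method enforces can be illustrated in miniature for a single qubit, where positive semidefiniteness of the density matrix reduces to a condition on the Bloch vector. The sketch below is purely illustrative (the noisy estimate is invented) and is not the paper's quantum-ADMM:

    ```python
    # One-qubit illustration (not the paper's algorithm): a qubit state
    # rho = (I + r . sigma)/2 is a valid density matrix iff the Bloch vector r
    # lies in the unit ball, so enforcing physicality on a noisy estimate
    # amounts to shrinking r back onto that ball.
    import math

    def project_bloch(r):
        """Project a Bloch vector onto the unit ball (closest physical state)."""
        n = math.sqrt(sum(c * c for c in r))
        return list(r) if n <= 1.0 else [c / n for c in r]

    noisy = [0.9, 0.8, 0.3]      # hypothetical noisy estimate with |r| > 1 (unphysical)
    phys = project_bloch(noisy)
    print(round(math.sqrt(sum(c * c for c in phys)), 6))   # 1.0: now a valid (pure) state
    ```

    For many qubits no such closed form exists, which is why iterative schemes like ADMM alternate a data-fit step with a projection of this kind.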

  13. A Novel 2D Image Compression Algorithm Based on Two Levels DWT and DCT Transforms with Enhanced Minimize-Matrix-Size Algorithm for High Resolution Structured Light 3D Surface Reconstruction

    Science.gov (United States)

    Siddeq, M. M.; Rodrigues, M. A.

    2015-09-01

    Image compression techniques are widely used on 2D images, 2D video, 3D images and 3D video. There are many types of compression techniques, among the most popular of which are JPEG and JPEG2000. In this research, we introduce a new compression method based on applying a two-level discrete cosine transform (DCT) and a two-level discrete wavelet transform (DWT) in connection with novel compression steps for high-resolution images. The proposed image compression algorithm consists of four steps: (1) transform an image by a two-level DWT followed by a DCT to produce two matrices, the DC- and AC-Matrix, containing the low and high frequencies, respectively; (2) apply a second-level DCT to the DC-Matrix to generate two arrays, namely the nonzero-array and the zero-array; (3) apply the Minimize-Matrix-Size algorithm to the AC-Matrix and to the other high frequencies generated by the second-level DWT; (4) apply arithmetic coding to the output of the previous steps. A novel decompression algorithm, the Fast-Match-Search (FMS) algorithm, is used to reconstruct all high-frequency matrices. The FMS algorithm computes all compressed-data probabilities using a table of data, and then uses a binary search to find decompressed data inside the table. Thereafter, all decoded DC-values are combined with the decoded AC-coefficients in one matrix, followed by an inverse two-level DCT and two-level DWT. The technique is tested by compression and reconstruction of 3D surface patches. Additionally, it is compared with the JPEG and JPEG2000 algorithms via the 2D and 3D root-mean-square error following reconstruction. The results demonstrate that the proposed compression method has better visual properties than JPEG and JPEG2000 and is able to reconstruct surface patches in 3D more accurately.
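
    As a rough sketch of the transform front end described above, the fragment below applies a single-level Haar DWT to one row of pixels and a direct DCT-II to the resulting low band. The actual method uses two levels of each plus the Minimize-Matrix-Size and arithmetic coding stages; the pixel values here are invented:

    ```python
    # Hedged sketch: one Haar DWT level splits a signal into low/high bands,
    # then a DCT concentrates the low band's energy into few coefficients.
    import math

    def haar_dwt_1d(x):
        """One Haar level: scaled pairwise sums (low band) and differences (high band)."""
        low = [(x[2 * i] + x[2 * i + 1]) / math.sqrt(2) for i in range(len(x) // 2)]
        high = [(x[2 * i] - x[2 * i + 1]) / math.sqrt(2) for i in range(len(x) // 2)]
        return low, high

    def dct2(x):
        """Direct (O(n^2)) DCT-II; fine for a sketch."""
        n = len(x)
        return [sum(x[t] * math.cos(math.pi * k * (2 * t + 1) / (2 * n)) for t in range(n))
                for k in range(n)]

    row = [8.0, 8.0, 6.0, 6.0, 2.0, 2.0, 4.0, 4.0]   # toy image row
    low, high = haar_dwt_1d(row)
    dc_coeffs = dct2(low)   # analogue of the "DC-Matrix" coefficients
    print([round(h, 3) for h in high])   # smooth input -> high band is all zeros
    ```

    The all-zero high band is what makes such data cheap to encode, which is the point of separating the bands before coding.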

  14. High throughput and accurate serum proteome profiling by integrated sample preparation technology and single-run data independent mass spectrometry analysis.

    Science.gov (United States)

    Lin, Lin; Zheng, Jiaxin; Yu, Quan; Chen, Wendong; Xing, Jinchun; Chen, Chenxi; Tian, Ruijun

    2018-03-01

    Mass spectrometry (MS)-based serum proteome analysis is extremely challenging due to the serum's high complexity and dynamic range of protein abundances. A high-throughput and accurate serum proteomic profiling approach capable of analyzing large cohorts is urgently needed for biomarker discovery. Herein, we report a streamlined workflow for fast and accurate proteomic profiling from 1 μL of blood serum. The workflow combines an integrated technique for highly sensitive and reproducible sample preparation with a new data-independent acquisition (DIA)-based MS method. Compared with the standard data-dependent acquisition (DDA) approach, the optimized DIA method doubled the number of detected peptides and proteins with better reproducibility. Without protein immunodepletion and prefractionation, the single-run DIA analysis enables quantitative profiling of over 300 proteins with a 50 min gradient time. The quantified proteins span more than five orders of magnitude in abundance and include over 50 FDA-approved disease markers. The workflow allowed us to analyze 20 serum samples per day, with about 358 protein groups identified per sample. A proof-of-concept study on renal cell carcinoma (RCC) serum samples confirmed the feasibility of the workflow for large-scale serum proteomic profiling and disease-related biomarker discovery. Blood serum or plasma is the predominant specimen for clinical proteomic studies, while its analysis is extremely challenging because of its high complexity. Many past efforts in serum proteomics have aimed at maximizing protein identifications, whereas few have addressed throughput and reproducibility. Here, we establish a rapid, robust and highly reproducible DIA-based workflow for streamlined serum proteomic profiling from 1 μL of serum. The workflow does not require protein depletion or prefractionation, while still being able to detect disease-relevant proteins accurately. The workflow is promising for clinical application.

  15. Image Reconstruction. Chapter 13

    Energy Technology Data Exchange (ETDEWEB)

    Nuyts, J. [Department of Nuclear Medicine and Medical Imaging Research Center, Katholieke Universiteit Leuven, Leuven (Belgium); Matej, S. [Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, PA (United States)

    2014-12-15

    This chapter discusses how 2‑D or 3‑D images of tracer distribution can be reconstructed from a series of so-called projection images acquired with a gamma camera or a positron emission tomography (PET) system [13.1]. This is often called an ‘inverse problem’. The reconstruction is the inverse of the acquisition. The reconstruction is called an inverse problem because making software to compute the true tracer distribution from the acquired data turns out to be more difficult than the ‘forward’ direction, i.e. making software to simulate the acquisition. There are basically two approaches to image reconstruction: analytical reconstruction and iterative reconstruction. The analytical approach is based on mathematical inversion, yielding efficient, non-iterative reconstruction algorithms. In the iterative approach, the reconstruction problem is reduced to computing a finite number of image values from a finite number of measurements. That simplification enables the use of iterative instead of mathematical inversion. Iterative inversion tends to require more computer power, but it can cope with more complex (and hopefully more accurate) models of the acquisition process.
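
    The iterative approach can be sketched with the MLEM update commonly used in emission tomography. The toy 2x3 system matrix and counts below are invented, not taken from the chapter:

    ```python
    # Minimal MLEM sketch: x <- x * (A^T (y / A x)) / (A^T 1), applied to a
    # toy system of 2 projection bins and 3 pixels (illustrative values only).

    def mlem(A, y, n_iter=50):
        n_pix = len(A[0])
        x = [1.0] * n_pix                                   # uniform initial image
        sens = [sum(A[i][j] for i in range(len(A))) for j in range(n_pix)]
        for _ in range(n_iter):
            proj = [sum(A[i][j] * x[j] for j in range(n_pix)) for i in range(len(A))]
            ratio = [y[i] / proj[i] for i in range(len(A))]
            back = [sum(A[i][j] * ratio[i] for i in range(len(A))) for j in range(n_pix)]
            x = [x[j] * back[j] / sens[j] for j in range(n_pix)]
        return x

    A = [[1.0, 1.0, 0.0],
         [0.0, 1.0, 1.0]]        # which pixels each projection bin sees
    y = [3.0, 4.0]               # noiseless "measured" counts
    x_hat = mlem(A, y)
    print([round(v, 2) for v in x_hat])
    ```

    The update is multiplicative, so a nonnegative initial image stays nonnegative, and the reprojection A·x_hat converges toward the measured counts y.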

  16. Update on orbital reconstruction.

    Science.gov (United States)

    Chen, Chien-Tzung; Chen, Yu-Ray

    2010-08-01

    Orbital trauma is common and frequently complicated by ocular injuries. The recent literature on orbital fracture is analyzed with emphasis on epidemiological data assessment, surgical timing, method of approach and reconstruction materials. Computed tomographic (CT) scan has become a routine evaluation tool for orbital trauma, and mobile CT can be applied intraoperatively if necessary. Concomitant serious ocular injury should be carefully evaluated preoperatively. Patients presenting with nonresolving oculocardiac reflex, 'white-eyed' blowout fracture, or diplopia with a positive forced duction test and CT evidence of orbital tissue entrapment require early surgical repair. Otherwise, enophthalmos can be corrected by late surgery with a similar outcome to early surgery. The use of an endoscope-assisted approach for orbital reconstruction continues to grow, offering an alternative method. Advances in alloplastic materials have improved surgical outcome and shortened operating time. In this review of modern orbital reconstruction, several controversial issues such as surgical indication, surgical timing, method of approach and choice of reconstruction material are discussed. Preoperative fine-cut CT image and thorough ophthalmologic examination are key elements to determine surgical indications. The choice of surgical approach and reconstruction materials much depends on the surgeon's experience and the reconstruction area. Prefabricated alloplastic implants together with image software and stereolithographic models are significant advances that help to more accurately reconstruct the traumatized orbit. The recent evolution of orbit reconstruction improves functional and aesthetic results and minimizes surgical complications.

  17. Tracer test method and process data reconciliation based on VDI 2048. Comparison of two methods for highly accurate determination of feedwater massflow at NPP Beznau

    International Nuclear Information System (INIS)

    Hungerbuehler, T.; Langenstein, M.

    2007-01-01

    The feedwater mass flow is the key measured variable used to determine the thermal reactor output in a nuclear power plant. Usually this parameter is recorded via venturi nozzles or orifice plates. The problem with both measurement principles, however, is that an uncertainty below 1% cannot be reached. In order to make more accurate statements about the feedwater amounts recirculated in the water-steam cycle, tracer measurements that offer an accuracy of up to 0.2% are used. At NPP Beznau both methods were used in parallel to determine the feedwater flow rates in 2004 (unit 1) and 2005 (unit 2). Comparison of the results shows a high level of agreement between the results of the reconciliation and the results of the tracer measurements. As a result of the findings of this comparison, a high level of acceptance of process data reconciliation based on VDI 2048 was achieved. (orig.)
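
    The benefit of combining both methods can be sketched with the simplest form of data reconciliation: an inverse-variance weighted mean of two redundant measurements of the same flow. The numbers below are illustrative, not Beznau plant data, and VDI 2048 itself covers the fully constrained multivariate case:

    ```python
    # Hedged miniature of data reconciliation: two redundant measurements of
    # one mass flow, with different uncertainties, combined by
    # inverse-variance weighting (illustrative values only).

    def reconcile(values, sigmas):
        w = [1.0 / s ** 2 for s in sigmas]
        best = sum(wi * v for wi, v in zip(w, values)) / sum(w)
        sigma_best = (1.0 / sum(w)) ** 0.5
        return best, sigma_best

    orifice = (1000.0, 10.0)   # kg/s, ~1 % uncertainty
    tracer = (995.0, 2.0)      # kg/s, ~0.2 % uncertainty
    m, s = reconcile([orifice[0], tracer[0]], [orifice[1], tracer[1]])
    print(round(m, 2), round(s, 2))   # reconciled flow, dominated by the tracer
    ```

    Note that the reconciled uncertainty is smaller than either input uncertainty, which is why redundancy plus reconciliation beats any single instrument.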

  18. Ab initio study of the CO-N2 complex: a new highly accurate intermolecular potential energy surface and rovibrational spectrum

    DEFF Research Database (Denmark)

    Cybulski, Hubert; Henriksen, Christian; Dawes, Richard

    2018-01-01

    A new, highly accurate ab initio ground-state intermolecular potential-energy surface (IPES) for the CO-N2 complex is presented. Thousands of interaction energies calculated with the CCSD(T) method and Dunning's aug-cc-pVQZ basis set extended with midbond functions were fitted to an analytical function. The global minimum of the potential is characterized by an almost T-shaped structure and has an energy of -118.2 cm-1. The symmetry-adapted Lanczos algorithm was used to compute rovibrational energies (up to J = 20) on the new IPES. The RMSE with respect to experiment was found to be on the order of 0.038 cm-1, which confirms the very high accuracy of the potential. This level of agreement is among the best reported in the literature for weakly bound systems and considerably improves on those of previously published potentials.

  19. An integrated tool to study MHC region: accurate SNV detection and HLA genes typing in human MHC region using targeted high-throughput sequencing.

    Directory of Open Access Journals (Sweden)

    Hongzhi Cao

    The major histocompatibility complex (MHC) is one of the most variable and gene-dense regions of the human genome. Most studies of the MHC and associated regions focus on minor variants and HLA typing, many of which have been demonstrated to be associated with human disease susceptibility and metabolic pathways. However, the detection of variants in the MHC region, and diagnostic HLA typing, still lack a coherent, standardized, cost-effective and high-coverage protocol of clinical quality and reliability. In this paper, we present such a method for the accurate detection of minor variants and HLA types in the human MHC region, using high-throughput, high-coverage sequencing of target regions. A probe set was designed to template upon the 8 annotated human MHC haplotypes and to encompass the 5 megabases (Mb) of the extended MHC region. We deployed our probes on three genetically diverse human samples for probe-set evaluation, and the sequencing data show that ∼97% of the MHC region, and over 99% of the genes in the MHC region, are covered with sufficient depth and good evenness. 98% of genotypes called by this capture sequencing prove consistent with established HapMap genotypes. We have concurrently developed a one-step pipeline for calling any HLA type referenced in the IMGT/HLA database from these target-capture sequencing data, which shows over 96% typing accuracy when deployed at 4-digit resolution. This cost-effective and highly accurate approach to variant detection and HLA typing in the MHC region may lend further insight into immune-mediated disease studies and may find clinical utility in transplantation medicine research. This one-step pipeline is released for general evaluation and use by the scientific community.

  20. Three-dimensional optical reconstruction of vocal fold kinematics using high-speed video with a laser projection system

    Science.gov (United States)

    Luegmair, Georg; Mehta, Daryush D.; Kobler, James B.; Döllinger, Michael

    2015-01-01

    Vocal fold kinematics and their interaction with aerodynamic characteristics play a primary role in the acoustic sound production of the human voice. Investigating the temporal details of these kinematics using high-speed videoendoscopic imaging techniques has proven challenging, in part due to the limitations of quantifying complex vocal fold vibratory behavior using only two spatial dimensions. Thus, we propose an optical method of reconstructing the superior vocal fold surface in three spatial dimensions using a high-speed video camera and a laser projection system. Using stereo-triangulation principles, we extend the camera-laser projector method and present an efficient image processing workflow to generate three-dimensional vocal fold surfaces during phonation, captured at 4000 frames per second. Initial results are provided for airflow-driven vibration of an ex vivo vocal fold model, in which at least 75% of visible laser points contributed to the reconstructed surface. The method captures the vertical motion of the vocal folds with high accuracy, allowing for the computation of three-dimensional mucosal wave features such as vibratory amplitude, velocity, and asymmetry. PMID:26087485
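
    The geometry behind such camera-laser triangulation can be sketched as a ray-plane intersection: the camera pixel defines a viewing ray, the projected laser sheet a known plane, and the 3D surface point is their intersection. All calibration values below are made up for illustration:

    ```python
    # Hedged geometry sketch (not the paper's calibrated pipeline): intersect a
    # back-projected camera ray with a known laser plane to get a 3D point.

    def intersect_ray_plane(origin, direction, plane_n, plane_d):
        """Point p = origin + t*direction satisfying dot(plane_n, p) = plane_d."""
        denom = sum(n * d for n, d in zip(plane_n, direction))
        t = (plane_d - sum(n * o for n, o in zip(plane_n, origin))) / denom
        return [o + t * d for o, d in zip(origin, direction)]

    cam_origin = [0.0, 0.0, 0.0]      # camera centre (hypothetical calibration)
    pixel_ray = [0.1, 0.05, 1.0]      # back-projected ray for one laser spot
    laser_plane_n = [1.0, 0.0, 0.0]   # hypothetical laser sheet: the plane x = 2
    laser_plane_d = 2.0
    p = intersect_ray_plane(cam_origin, pixel_ray, laser_plane_n, laser_plane_d)
    print([round(c, 3) for c in p])   # [2.0, 1.0, 20.0]: a 3D surface point
    ```

    Repeating this for every detected laser spot in every frame yields the time-resolved surface the abstract describes.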

  1. Shredded banknotes reconstruction using AKAZE points.

    Science.gov (United States)

    Nabiyev, Vasif V; Yılmaz, Seçkin; Günay, Asuman; Muzaffer, Gül; Ulutaş, Güzin

    2017-09-01

    Shredded banknote reconstruction is a recent topic and can be viewed as solving a large-scale jigsaw puzzle. Problems such as the reconstruction of fragmented documents, photographs and historical artefacts are closely related to this topic. The high computational complexity of these problems increases the need for the development of new methods. Reconstruction of shredded banknotes consists of three main stages: (1) matching fragments with a reference banknote; (2) aligning the fragments by rotating them to the correct angles; (3) assembling the fragments. Existing methods can be successfully applied to synthetic banknote fragments created in a computer environment, but when the reconstruction of real banknotes is considered, different subproblems arise that make the existing methods inadequate. In this study, a keypoint-based method, AKAZE, was used to make the matching process effective. This is the first study that uses the AKAZE method in the reconstruction of shredded banknotes. A new method for fragment alignment has also been proposed. In this method, the convex hulls containing all correctly matched AKAZE keypoints were found on the reference banknote and the fragments, and the orientations of the fragments were estimated accurately by comparing these convex polygons. A new criterion was also developed to quantify the success rates of reconstructed banknotes. In addition, two different data sets including real and synthetic banknote fragments from different countries were created to test the success of the proposed method. Copyright © 2017 Elsevier B.V. All rights reserved.
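
    The alignment stage can be sketched in miniature: given keypoint matches between a fragment and the reference note, estimate the fragment's rotation. Below, a least-squares 2D rotation estimate stands in for the paper's convex-hull comparison, and the point sets are synthetic:

    ```python
    # Hedged sketch of fragment alignment from matched keypoints (a Kabsch-style
    # 2D rotation fit, not the paper's convex-hull method; synthetic data).
    import math

    def estimate_rotation(src, dst):
        """Angle that best rotates centred src points onto centred dst points."""
        cs = [sum(p[i] for p in src) / len(src) for i in (0, 1)]
        cd = [sum(p[i] for p in dst) / len(dst) for i in (0, 1)]
        s = [(x - cs[0], y - cs[1]) for x, y in src]
        d = [(x - cd[0], y - cd[1]) for x, y in dst]
        num = sum(sx * dy - sy * dx for (sx, sy), (dx, dy) in zip(s, d))
        den = sum(sx * dx + sy * dy for (sx, sy), (dx, dy) in zip(s, d))
        return math.atan2(num, den)

    theta = math.radians(30)
    ref = [(10.0, 0.0), (0.0, 5.0), (-4.0, -3.0), (6.0, 7.0)]   # reference keypoints
    frag = [(x * math.cos(theta) - y * math.sin(theta),
             x * math.sin(theta) + y * math.cos(theta)) for x, y in ref]
    print(round(math.degrees(estimate_rotation(ref, frag)), 3))  # ≈ 30.0
    ```

    Once the angle is known, the fragment can be rotated back and placed at the matched location on the reference note.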

  2. High-resolution Atmospheric pCO2 Reconstruction across the Paleogene Using Marine and Terrestrial δ13C records

    Science.gov (United States)

    Cui, Y.; Schubert, B.

    2016-02-01

    The early Paleogene (63 to 47 Ma) is considered to have had a greenhouse climate [1], with proxies suggesting atmospheric CO2 levels (pCO2) approximately 2× pre-industrial levels. However, proxy-based pCO2 reconstructions are limited and do not allow for assessment of changes in pCO2 at million to sub-million year time scales. It has recently been recognized that changes in C3 land-plant carbon isotope fractionation can be used as a proxy for pCO2 with quantifiable uncertainty [2]. Here, we present a high-resolution pCO2 reconstruction (n = 597) across the early Paleogene using published carbon isotope data from both terrestrial organic matter and marine carbonates. The minimum and maximum pCO2 values reconstructed using this method are broad (i.e., 170 +60/-40 ppmv to 2000 +4480/-1060 ppmv) and reflect the wide range of environments sampled. However, the large number of measurements allows for a robust estimate of average pCO2 during this time interval (approximately 400 +260/-120 ppmv), and indicates brief (sub-million-year) excursions to very high pCO2 during hyperthermal events (e.g., the PETM). By binning our high-resolution pCO2 data at 1-million-year intervals, we can compare our dataset to the other available pCO2 proxies. Our result is broadly consistent with pCO2 levels reconstructed using other proxies, with the exception of paleosol-based pCO2 estimates spanning 53 to 50 Ma. At this timescale, no proxy suggests pCO2 higher than 2000 ppmv, whereas the global surface ocean temperature is considered to be >10 °C warmer than today. Recent climate modeling suggests that low atmospheric pressure during this time period could help reconcile the apparent disconnect between pCO2 and temperature and contribute to the greenhouse climate [3]. References: 1. Huber, M., Caballero, R., 2011. Climate of the Past 7, 603-633. 2. Schubert, B.A., Jahren, A.H., 2015. Geology 43, 435-438. 3. Poulsen, C.J., Tabor, C., White, J.D., 2015. Science 348, 1238-1241.
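
    The C3-plant proxy invoked here [2] rests on a hyperbolic relation between carbon isotope discrimination and pCO2. Written symbolically (A, B and C denote the empirically fitted curve parameters of Schubert and Jahren; numeric values are deliberately omitted here), the proxy simply inverts the fractionation signal for pCO2:

    ```latex
    \Delta^{13}\mathrm{C} = \frac{A\,B\,\bigl(p\mathrm{CO}_2 + C\bigr)}{A + B\,\bigl(p\mathrm{CO}_2 + C\bigr)}
    \quad\Longrightarrow\quad
    p\mathrm{CO}_2 = \frac{A\,\Delta^{13}\mathrm{C}}{B\,\bigl(A - \Delta^{13}\mathrm{C}\bigr)} - C
    ```

    Because the curve flattens at high pCO2, the inversion's uncertainty grows with pCO2, which is consistent with the asymmetric error bars quoted in the abstract.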

  3. Atomic spectroscopy and highly accurate measurement: determination of fundamental constants; Spectroscopie atomique et mesures de grande precision: determination de constantes fondamentales

    Energy Technology Data Exchange (ETDEWEB)

    Schwob, C

    2006-12-15

    This document reviews the theoretical and experimental achievements of the author concerning highly accurate atomic spectroscopy applied to the determination of fundamental constants. A purely optical frequency measurement of the 2S-12D two-photon transitions in atomic hydrogen and deuterium has been performed. The experimental setup is described as well as the data analysis. Optimized values for the Rydberg constant and Lamb shifts have been deduced (R = 109737.31568516(84) cm⁻¹). An experiment devoted to the determination of the fine structure constant, with a target relative uncertainty of 10⁻⁹, began in 1999. This experiment is based on the fact that Bloch oscillations in a frequency-chirped optical lattice are a powerful tool to coherently transfer many photon momenta to the atoms. We have used this method to measure accurately the ratio h/m(Rb). The measured value of the fine structure constant is α⁻¹ = 137.03599884(91), with a relative uncertainty of 6.7×10⁻⁹. The future and perspectives of this experiment are presented. This document, presented before an academic board, will allow its author to supervise research work and particularly to tutor thesis students. (A.C.)
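
    The route from the recoil measurement h/m(Rb) to the fine structure constant is the standard mass-ratio chain (a textbook relation, not a result specific to this thesis): combining the Rydberg constant with the measured recoil ratio gives

    ```latex
    \alpha^2 = \frac{2 R_\infty}{c}\,\frac{m_{\mathrm{Rb}}}{m_e}\,\frac{h}{m_{\mathrm{Rb}}}
    ```

    Since R∞ and the mass ratio m_Rb/m_e are known to very high accuracy, the uncertainty of α is dominated by that of h/m(Rb), hence the experimental focus on the recoil measurement.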

  4. High-Speed Automatic Microscopy for Real Time Tracks Reconstruction in Nuclear Emulsion

    Science.gov (United States)

    D'Ambrosio, N.

    2006-06-01

    The Oscillation Project with Emulsion-tRacking Apparatus (OPERA) experiment will use a massive nuclear emulsion detector to search for νμ → ντ oscillation by identifying τ leptons through the direct detection of their decay topology. The feasibility of experiments using a large-mass emulsion detector is linked to the impressive progress under way in the development of automatic emulsion analysis. A new generation of scanning systems requires the development of fast automatic microscopes for emulsion scanning and image analysis to reconstruct the tracks of elementary particles. The paper presents the European Scanning System (ESS) developed in the framework of the OPERA collaboration.

  5. Combined duodenal and pancreatic major trauma in high risk patients: can a partial reconstruction be safe?

    Science.gov (United States)

    Toro, A; Li Destri, G; Mannino, M; Arcerito, M C; Ardiri, A; Politi, A; Bertino, G; Di Carlo, I

    2014-04-01

    Pancreatic trauma is an uncommon injury, occurring in only about 0.2% of blunt abdominal injuries, while duodenal injuries represent approximately 4% of all blunt abdominal injuries. When trauma to the pancreas and duodenum does not permit repair, pancreatoduodenectomy (PD) is mandatory. In the reconstructive phase, the use of ductal ligation as an alternative to standard pancreaticojejunostomy has been reported by some authors. We report a case of polytrauma with pancreatic and duodenal injury in which the initial diagnosis failed to recognize the catastrophic duodenal and pancreatic situation. The patient underwent PD, and the pancreatic stump was left in the abdominal cavity after ligation of the main pancreatic duct. This technique can minimize the morbidity and mortality of PD in patients with severe and extensive trauma involving other organs or systems.

  6. High efficient optical remote sensing images acquisition for nano-satellite: reconstruction algorithms

    Science.gov (United States)

    Liu, Yang; Li, Feng; Xin, Lei; Fu, Jie; Huang, Puming

    2017-10-01

    A large amount of data is one of the most obvious features of satellite-based remote sensing systems, and it is a burden for data processing and transmission. The theory of compressive sensing (CS) has been proposed for almost a decade, and numerous experiments show that CS performs favorably in data compression and recovery, so we apply CS theory to remote sensing image acquisition. In CS, the construction of a classical sensing matrix for all sparse signals has to satisfy the Restricted Isometry Property (RIP) strictly, which limits the practical application of CS to image compression. For remote sensing images, however, we know some inherent characteristics such as non-negativity and smoothness. Therefore, the goal of this paper is to present a novel measurement matrix that is not bound by the RIP. The new sensing matrix consists of two parts: a standard Nyquist sampling matrix for thumbnails and a conventional CS sampling matrix. Since most sun-synchronous satellites orbit the Earth in about 90 minutes and the revisit cycle is also short, many previously captured remote sensing images of the same place are available in advance. This motivates us to reconstruct remote sensing images through a deep learning approach using the measurements from the new framework. We therefore propose a novel deep convolutional neural network (CNN) architecture which takes undersampling measurements as input and outputs an intermediate reconstructed image. It is well known that the training procedure for such a network takes a long time; luckily, training needs to be done only once, which makes the approach attractive for a host of sparse recovery problems.
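
    The two-part measurement matrix described above can be sketched as a stack of block-averaging (thumbnail) rows on top of conventional random CS rows. The sizes, the ±1 random ensemble, and the seed below are illustrative choices, not the paper's design:

    ```python
    # Hedged sketch: a sensing matrix made of Nyquist thumbnail rows plus
    # conventional random compressive-sensing rows (toy sizes, synthetic signal).
    import random

    def thumbnail_rows(n, block):
        """Each row averages one block of `block` consecutive samples."""
        rows = []
        for b in range(n // block):
            row = [0.0] * n
            for j in range(b * block, (b + 1) * block):
                row[j] = 1.0 / block
            rows.append(row)
        return rows

    def random_rows(n, m, rng):
        """Conventional CS part: random +/-1 (Bernoulli) projections."""
        return [[rng.choice((-1.0, 1.0)) for _ in range(n)] for _ in range(m)]

    rng = random.Random(0)
    n = 16                                  # length of a flattened image patch
    Phi = thumbnail_rows(n, 4) + random_rows(n, 4, rng)
    x = [float(i % 5) for i in range(n)]    # toy "image" signal
    y = [sum(r[j] * x[j] for j in range(n)) for r in Phi]   # measurements
    print(len(Phi), len(y))                 # 8 rows: 4 thumbnail + 4 random
    ```

    The thumbnail half of y is directly viewable as a coarse image, while the random half carries the extra information a learned decoder can exploit.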

  7. A comparison of high-resolution pollen-inferred climate data from central Minnesota, USA, to 19th century US military fort climate data and tree-ring inferred climate reconstructions

    Science.gov (United States)

    St Jacques, J.; Cumming, B. F.; Sauchyn, D.; Vanstone, J. R.; Dickenson, J.; Smol, J. P.

    2013-12-01

    The pre-settlement calibration set gives much more credible reconstructions. We then compare the temperature reconstructions based upon the two calibration sets for AD 1116-2002. Significant signal flattening and bias exist when using the conventional modern pollen-climate calibration set rather than the pre-settlement pollen-climate calibration set, resulting in an overestimation of Little Ice Age monthly mean temperatures of 0.5-1.5 °C. Therefore, regional warming from anthropogenic global warming is significantly underestimated when using the conventional method of building pollen-climate calibration sets. We also compare the Lake Mina pollen-inferred effective moisture record to early 19th century climate data and to a four-century tree-ring-inferred moisture reconstruction based upon sites in Minnesota and the Dakotas. This comparison shows that regional tree-ring reconstructions are biased towards dry conditions and record wet periods poorly relative to high-resolution pollen reconstructions, giving a false impression of regional aridity. It also suggests that varve chronologies should be based upon cross-dating to ensure a more accurate chronology.

  8. The natural limb is best: joint preservation and reconstruction by distraction osteogenesis for high-grade juxta-articular osteosarcomas.

    Science.gov (United States)

    Tsuchiya, Hiroyuki; Abdel-Wanis, Mohamed E; Kitano, Shinji; Sakurakichi, Keisuke; Yamashiro, Teruhisa; Tomita, Katsuro

    2002-01-01

    This paper introduces an innovative technique of highly conservative limb-saving surgery for juxta-articular osteosarcoma. The technique consists of marginal tumour excision, joint preservation and reconstruction by distraction osteogenesis. Ten patients, with a mean age of 19.5 years and high-grade osteosarcoma, underwent this procedure. The distal femur and proximal tibia were affected in five patients each. After effective pre-operative chemotherapy, the tumour was excised with preservation of the epiphysis, the articular surface and the maximum amount of healthy soft tissue. This was followed by application of an external fixator. Bone transport was performed for seven patients and shortening-distraction for three. Limb function was rated excellent in seven patients, good in one and fair in two. At the final follow-up, three patients had died after a mean of 25.3 months, while seven patients remained free of disease with a mean follow-up of 55.4 months. Joint preservation and biological reconstruction through distraction osteogenesis can produce excellent and long-lasting functional results.

  9. Analysis of pixel systematics and space point reconstruction with DEPFET PXD5 matrices using high energy beam test data

    Energy Technology Data Exchange (ETDEWEB)

    Reuen, Lars

    2011-02-15

    To answer the current questions in particle physics, vertex detectors, the innermost sub-detector system of a multipurpose particle detector, must combine brilliant spatial resolution with as little sensor material as possible. These requirements are the driving force behind the newest generation of silicon pixel sensors like the DEPFET pixel, which incorporates the first amplification stage in the form of a transistor in the fully depleted sensor bulk, allowing for a high spatial resolution even with thinned-down sensors. A DEPFET pixel prototype system, built for the future TeV-scale linear collider ILC, was characterized in a high energy beam test at CERN with a spatial resolution and statistics that allowed for the first time in-pixel homogeneity measurements of DEPFET pixels. Yet, in the quest for higher precision, the sensor development must be accompanied by progress in position reconstruction algorithms. A study of three novel approaches to position reconstruction was undertaken. The results of the in-pixel beam test and the performance of the new methods, with an emphasis on δ-electrons, are presented here. (orig.)

  10. Analysis of pixel systematics and space point reconstruction with DEPFET PXD5 matrices using high energy beam test data

    International Nuclear Information System (INIS)

    Reuen, Lars

    2011-02-01

    To answer the current questions in particle physics, vertex detectors, the innermost sub-detector system of a multipurpose particle detector, must combine brilliant spatial resolution with as little sensor material as possible. These requirements are the driving force behind the newest generation of silicon pixel sensors like the DEPFET pixel, which incorporates the first amplification stage in the form of a transistor in the fully depleted sensor bulk, allowing for a high spatial resolution even with thinned-down sensors. A DEPFET pixel prototype system, built for the future TeV-scale linear collider ILC, was characterized in a high energy beam test at CERN with a spatial resolution and statistics that allowed for the first time in-pixel homogeneity measurements of DEPFET pixels. Yet, in the quest for higher precision, the sensor development must be accompanied by progress in position reconstruction algorithms. A study of three novel approaches to position reconstruction was undertaken. The results of the in-pixel beam test and the performance of the new methods, with an emphasis on δ-electrons, are presented here. (orig.)
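
    Position reconstruction in pixel detectors typically starts from a charge-weighted center-of-gravity estimate of the hit cluster, which more refined algorithms then correct. A generic sketch of that baseline estimator (not one of the three novel methods studied in the thesis):

```python
def center_of_gravity(positions, charges):
    """Charge-weighted cluster position: the baseline hit-reconstruction
    estimator that refined algorithms (e.g. eta-correction) improve on."""
    total = sum(charges)
    return sum(p * q for p, q in zip(positions, charges)) / total

# A particle crossing between two pixels at 0 and 50 um, sharing charge 3:1.
tip = center_of_gravity([0.0, 50.0], [300.0, 100.0])
print(tip)  # 12.5 um
```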

  11. Accurate mass analysis of ethanesulfonic acid degradates of acetochlor and alachlor using high-performance liquid chromatography and time-of-flight mass spectrometry

    Science.gov (United States)

    Thurman, E.M.; Ferrer, I.; Parry, R.

    2002-01-01

    Degradates of acetochlor and alachlor (ethanesulfonic acids, ESAs) were analyzed in both standards and in a groundwater sample using high-performance liquid chromatography-time-of-flight mass spectrometry with electrospray ionization. The negative pseudomolecular ion of the secondary amide of acetochlor ESA and alachlor ESA gave average masses of 256.0750 ± 0.0049 amu and 270.0786 ± 0.0064 amu respectively. Acetochlor and alachlor ESA gave similar masses of 314.1098 ± 0.0061 amu and 314.1153 ± 0.0048 amu; however, they could not be distinguished by accurate mass because they have the same empirical formula. On the other hand, they may be distinguished using positive-ion electrospray because of different fragmentation spectra, which did not occur using negative-ion electrospray.
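
    The reason accurate mass cannot separate the two degradates is that both ESAs share the empirical formula C14H21NO5S, so their [M-H]- ions have identical monoisotopic masses. A minimal sketch (monoisotopic masses from standard atomic mass tables; electron mass neglected):

```python
# Monoisotopic masses of the most abundant isotopes (standard tables, 5 dp).
MASS = {"C": 12.0, "H": 1.00783, "N": 14.00307, "O": 15.99491, "S": 31.97207}

def monoisotopic(formula):
    """Monoisotopic mass from an {element: count} composition."""
    return sum(MASS[el] * n for el, n in formula.items())

# Acetochlor ESA and alachlor ESA share the formula C14H21NO5S, which is
# why accurate mass alone cannot distinguish them.
esa = {"C": 14, "H": 21, "N": 1, "O": 5, "S": 1}
m_neg = monoisotopic(esa) - MASS["H"]  # [M-H]- pseudomolecular ion
print(round(m_neg, 2))  # 314.11, consistent with the measured ~314.11 amu
```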

  12. Charging and discharging tests for obtaining an accurate dynamic electro-thermal model of high power lithium-ion pack system for hybrid and EV applications

    DEFF Research Database (Denmark)

    Mihet-Popa, Lucian; Camacho, Oscar Mauricio Forero; Nørgård, Per Bromand

    2013-01-01

    This paper presents a battery test platform including two Li-ion batteries designed for hybrid and EV applications, and charging/discharging tests under different operating conditions carried out for developing an accurate dynamic electro-thermal model of a high power Li-ion battery pack system. The aim of the tests has been to study the impact of the battery degradation and to find out the dynamic characteristics of the cells including nonlinear open circuit voltage, series resistance and parallel transient circuit at different charge/discharge currents and cell temperature. An equivalent circuit model, based on the runtime battery model and the Thevenin circuit model, with parameters obtained from the tests and depending on SOC, current and temperature, has been implemented in MATLAB/Simulink and Power Factory. A good alignment between simulations and measurements has been found.
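
    The equivalent-circuit structure described above (open circuit voltage, series resistance, parallel RC transient branch) can be sketched in a few lines. The parameter values below are purely illustrative, not the ones fitted from these tests:

```python
import math

def thevenin_voltage(t, i_load, ocv, r_s, r_p, c_p):
    """Terminal voltage of a first-order Thevenin battery model under a
    constant discharge current i_load (A), starting from a relaxed cell:

        v(t) = OCV - i*R_s - i*R_p*(1 - exp(-t / (R_p*C_p)))
    """
    v_rc = i_load * r_p * (1.0 - math.exp(-t / (r_p * c_p)))  # RC transient
    return ocv - i_load * r_s - v_rc

# Illustrative (not measured) parameters: 3.7 V cell, 2 mOhm series
# resistance, 1 mOhm / 2000 F polarization branch, 10 A discharge.
v0 = thevenin_voltage(0.0, 10.0, 3.7, 0.002, 0.001, 2000.0)
v60 = thevenin_voltage(60.0, 10.0, 3.7, 0.002, 0.001, 2000.0)
print(v0, v60)  # instantaneous IR drop to 3.68 V, then relaxation to 3.67 V
```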

  13. Human eyeball model reconstruction and quantitative analysis.

    Science.gov (United States)

    Xing, Qi; Wei, Qi

    2014-01-01

    Determining the shape of the eyeball is important to diagnose eyeball diseases like myopia. In this paper, we present an automatic approach to precisely reconstruct the three dimensional geometric shape of the eyeball from MR images. The model development pipeline involved image segmentation, registration, B-Spline surface fitting and subdivision surface fitting, none of which required manual interaction. From the high resolution resultant models, geometric characteristics of the eyeball can be accurately quantified and analyzed. In addition to the eight metrics commonly used by existing studies, we proposed two novel metrics, Gaussian Curvature Analysis and Sphere Distance Deviation, to quantify the cornea shape and the whole eyeball surface respectively. The experiment results showed that the reconstructed eyeball models accurately represent the complex morphology of the eye. The ten metrics parameterize the eyeball among different subjects, which can potentially be used for eye disease diagnosis.
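
    A sphere-distance-deviation style metric can be sketched as the spread of surface points about a best-fit sphere. The version below is a simplified stand-in, not necessarily the authors' fitting procedure: it approximates the sphere center by the centroid and the radius by the mean center-to-point distance, which is adequate for well-sampled closed surfaces:

```python
import math

def sphere_distance_deviation(points):
    """RMS deviation of surface points from an approximate best-fit sphere
    (centroid center, mean-distance radius)."""
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    cz = sum(p[2] for p in points) / n
    dists = [math.dist(p, (cx, cy, cz)) for p in points]
    r = sum(dists) / n
    return math.sqrt(sum((d - r) ** 2 for d in dists) / n)

# Vertices of an octahedron of radius 12 mm: a perfect sphere sample.
octa = [(12, 0, 0), (-12, 0, 0), (0, 12, 0),
        (0, -12, 0), (0, 0, 12), (0, 0, -12)]
sdd = sphere_distance_deviation(octa)
print(sdd)  # 0.0 for points exactly on a sphere
```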

  14. Albumin-Bilirubin and Platelet-Albumin-Bilirubin Grades Accurately Predict Overall Survival in High-Risk Patients Undergoing Conventional Transarterial Chemoembolization for Hepatocellular Carcinoma.

    Science.gov (United States)

    Hansmann, Jan; Evers, Maximilian J; Bui, James T; Lokken, R Peter; Lipnik, Andrew J; Gaba, Ron C; Ray, Charles E

    2017-09-01

    To evaluate albumin-bilirubin (ALBI) and platelet-albumin-bilirubin (PALBI) grades in predicting overall survival in high-risk patients undergoing conventional transarterial chemoembolization for hepatocellular carcinoma (HCC). This single-center retrospective study included 180 high-risk patients (142 men, 59 y ± 9) between April 2007 and January 2015. Patients were considered high-risk based on laboratory abnormalities before the procedure (bilirubin > 2.0 mg/dL, albumin 1.2 mg/dL); presence of ascites, encephalopathy, portal vein thrombus, or transjugular intrahepatic portosystemic shunt; or Model for End-Stage Liver Disease score > 15. Serum albumin, bilirubin, and platelet values were used to determine ALBI and PALBI grades. Overall survival was stratified by ALBI and PALBI grades with substratification by Child-Pugh class (CPC) and Barcelona Liver Clinic Cancer (BCLC) stage using Kaplan-Meier analysis. C-index was used to determine discriminatory ability and survival prediction accuracy. Median survival for 79 ALBI grade 2 patients and 101 ALBI grade 3 patients was 20.3 and 10.7 months, respectively (P < .05). ALBI and PALBI grades are accurate survival metrics in high-risk patients undergoing conventional transarterial chemoembolization for HCC. Use of these scores allows for more refined survival stratification within CPC and BCLC stage. Copyright © 2017 SIR. Published by Elsevier Inc. All rights reserved.
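
    The ALBI grade used above is a simple linear function of serum bilirubin and albumin (Johnson et al., with the commonly cited cutoffs); a minimal sketch:

```python
import math

def albi_score(bilirubin_umol_l, albumin_g_l):
    """ALBI score: linear model of log10 bilirubin (umol/L) and albumin (g/L)."""
    return 0.66 * math.log10(bilirubin_umol_l) - 0.085 * albumin_g_l

def albi_grade(score):
    """Published cutoffs: <= -2.60 grade 1, <= -1.39 grade 2, else grade 3."""
    if score <= -2.60:
        return 1
    if score <= -1.39:
        return 2
    return 3

g1 = albi_grade(albi_score(10, 45))  # well-preserved liver function
g3 = albi_grade(albi_score(50, 25))  # poor liver function
print(g1, g3)  # 1 3
```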

  15. A novel method for accurate needle-tip identification in trans-rectal ultrasound-based high-dose-rate prostate brachytherapy.

    Science.gov (United States)

    Zheng, Dandan; Todor, Dorin A

    2011-01-01

    In real-time trans-rectal ultrasound (TRUS)-based high-dose-rate prostate brachytherapy, the accurate identification of needle-tip position is critical for treatment planning and delivery. Currently, needle-tip identification on ultrasound images can be subject to large uncertainty and errors because of ultrasound image quality and imaging artifacts. To address this problem, we developed a method based on physical measurements with simple and practical implementation to improve the accuracy and robustness of needle-tip identification. Our method uses measurements of the residual needle length and an off-line pre-established coordinate transformation factor, to calculate the needle-tip position on the TRUS images. The transformation factor was established through a one-time systematic set of measurements of the probe and template holder positions, applicable to all patients. To compare the accuracy and robustness of the proposed method and the conventional method (ultrasound detection), based on the gold-standard X-ray fluoroscopy, extensive measurements were conducted in water and gel phantoms. In water phantom, our method showed an average tip-detection accuracy of 0.7 mm compared with 1.6 mm of the conventional method. In gel phantom (more realistic and tissue-like), our method maintained its level of accuracy while the uncertainty of the conventional method was 3.4 mm on average with maximum values of over 10 mm because of imaging artifacts. A novel method based on simple physical measurements was developed to accurately detect the needle-tip position for TRUS-based high-dose-rate prostate brachytherapy. The method demonstrated much improved accuracy and robustness over the conventional method. Copyright © 2011 American Brachytherapy Society. Published by Elsevier Inc. All rights reserved.
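
    The core idea, mapping a measured residual needle length to a tip coordinate through a pre-established calibration, reduces to simple arithmetic. The function and offset below are hypothetical placeholders for the paper's one-time probe/template calibration, used only to illustrate the bookkeeping:

```python
def needle_tip_depth(total_length_mm, residual_length_mm, template_offset_mm):
    """Tip depth beyond the template plane, from the measured residual length.

    template_offset_mm is a hypothetical calibration constant standing in for
    the one-time probe/template coordinate transformation described above.
    """
    inserted = total_length_mm - residual_length_mm  # length past the template
    return inserted - template_offset_mm

# 240 mm needle with 180 mm remaining outside, 15 mm calibration offset.
depth = needle_tip_depth(240.0, 180.0, 15.0)
print(depth)  # 45.0 mm beyond the template plane
```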

  16. Photometric Lunar Surface Reconstruction

    Science.gov (United States)

    Nefian, Ara V.; Alexandrov, Oleg; Morattlo, Zachary; Kim, Taemin; Beyer, Ross A.

    2013-01-01

    Accurate photometric reconstruction of the Lunar surface is important in the context of upcoming NASA robotic missions to the Moon and in giving a more accurate understanding of the Lunar soil composition. This paper describes a novel approach for joint estimation of Lunar albedo, camera exposure time, and photometric parameters that utilizes an accurate Lunar-Lambertian reflectance model and previously derived Lunar topography of the area visualized during the Apollo missions. The method introduced here is used in creating the largest Lunar albedo map (16% of the Lunar surface) at the resolution of 10 meters/pixel.
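
    A Lunar-Lambertian reflectance model blends a Lambertian term with a Lommel-Seeliger term via a weight L (in practice a function of phase angle). A minimal sketch with L supplied by the caller, since the specific phase function used in the paper is not given here:

```python
def lunar_lambert(cos_i, cos_e, L, albedo=1.0):
    """Lunar-Lambertian reflectance: a weighted mix of Lambert and
    Lommel-Seeliger terms; cos_i, cos_e are incidence/emission cosines."""
    lommel = 2.0 * cos_i / (cos_i + cos_e)
    return albedo * ((1.0 - L) * cos_i + L * lommel)

r_lambert = lunar_lambert(0.8, 0.9, 0.0)  # L = 0: pure Lambert, equals cos(i)
r_ls = lunar_lambert(0.8, 0.9, 1.0)       # L = 1: pure Lommel-Seeliger
print(r_lambert, r_ls)
```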

  17. Cross-validation of two commercial methods for volumetric high-resolution dose reconstruction on a phantom for non-coplanar VMAT beams

    International Nuclear Information System (INIS)

    Feygelman, Vladimir; Stambaugh, Cassandra; Opp, Daniel; Zhang, Geoffrey; Moros, Eduardo G.; Nelms, Benjamin E.

    2014-01-01

    Background and purpose: Delta 4 (ScandiDos AB, Uppsala, Sweden) and ArcCHECK with 3DVH software (Sun Nuclear Corp., Melbourne, FL, USA) are commercial quasi-three-dimensional diode dosimetry arrays capable of volumetric measurement-guided dose reconstruction. A method to reconstruct dose for non-coplanar VMAT beams with 3DVH is described. The Delta 4 3D dose reconstruction on its own phantom for VMAT delivery has not been thoroughly evaluated previously, and we do so by comparison with 3DVH. Materials and methods: Reconstructed volumetric doses for VMAT plans delivered with different table angles were compared between the Delta 4 and 3DVH using gamma analysis. Results: The average γ (2% local dose-error normalization/2 mm) passing rate comparing the directly measured Delta 4 diode dose with 3DVH was 98.2 ± 1.6% (1SD). The average passing rate for the full volumetric comparison of the reconstructed doses on a homogeneous cylindrical phantom was 95.6 ± 1.5%. No dependence on the table angle was observed. Conclusions: The modified 3DVH algorithm is capable of 3D VMAT dose reconstruction on an arbitrary volume for the full range of table angles. Our comparison results between different dosimeters make a compelling case for the use of electronic arrays with high-resolution 3D dose reconstruction as a primary means of evaluating spatial dose distributions during IMRT/VMAT verification.
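
    The gamma criterion quoted above (2% local dose-error normalization / 2 mm) combines a dose-difference term and a distance-to-agreement term. A 1D brute-force sketch of the standard gamma index, with a toy dataset rather than the paper's measurements:

```python
import math

def gamma_index(ref_pos, ref_dose, eval_pos, eval_dose, dd=0.02, dta=2.0):
    """1D gamma at each reference point, with local dose-difference
    normalization (dd as a fraction, dta in mm)."""
    gammas = []
    for rp, rd in zip(ref_pos, ref_dose):
        best = float("inf")
        for ep, ed in zip(eval_pos, eval_dose):
            dose_term = ((ed - rd) / (dd * rd)) ** 2 if rd > 0 else float("inf")
            dist_term = ((ep - rp) / dta) ** 2
            best = min(best, math.sqrt(dose_term + dist_term))
        gammas.append(best)
    return gammas

pos = [0.0, 1.0, 2.0]                      # mm
g = gamma_index(pos, [100.0, 100.0, 100.0], pos, [100.0, 101.0, 103.0])
pass_rate = sum(x <= 1.0 for x in g) / len(g)
print(g, pass_rate)  # all points pass the 2%/2 mm criterion here
```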

  18. [Arthroscopic double-row reconstruction of high-grade subscapularis tendon tears].

    Science.gov (United States)

    Plachel, F; Pauly, S; Moroder, P; Scheibel, M

    2018-04-01

    Reconstruction of tendon integrity to maintain glenohumeral joint centration and hence to restore shoulder functional range of motion and to reduce pain. Isolated or combined full-thickness subscapularis tendon tears (≥upper two-thirds of the tendon) without both substantial soft tissue degeneration and cranialization of the humeral head. Chronic tears of the subscapularis tendon with higher grade muscle atrophy, fatty infiltration, and static decentration of the humeral head. After arthroscopic three-sided subscapularis tendon release, two double-loaded suture anchors are placed medially to the humeral footprint. Next to the suture passage, the suture limbs are tied and secured laterally with up to two knotless anchors creating a transosseous-equivalent repair. The affected arm is placed in a shoulder brace with 20° of abduction and slight internal rotation for 6 weeks postoperatively. Rehabilitation protocol including progressive physical therapy from a maximum protection phase to a minimum protection phase is required. Overhead activities are permitted after 6 months. While previous studies have demonstrated superior biomechanical properties and clinical results after double-row compared to single-row and transosseous fixation techniques, further mid- to long-term clinical investigations are needed to confirm these findings.

  19. Reconstruction mechanisms of tantalum oxide coatings with low concentrations of silver for high temperature tribological applications

    Energy Technology Data Exchange (ETDEWEB)

    Stone, D. S.; Bischof, M.; Aouadi, S. M., E-mail: samir.aouadi@unt.edu [Department of Material Science and Engineering, University of North Texas, Denton, Texas 76207 (United States); Gao, H.; Martini, A. [School of Engineering, University of California Merced, Merced, California 95343 (United States); Chantharangsi, C.; Paksunchai, C. [Department of Physics, King Mongkut's University of Technology Thonburi, Bangkok 10140 (Thailand)

    2014-11-10

    Silver tantalate (AgTaO{sub 3}) coatings have been found to exhibit outstanding tribological properties at elevated temperatures. To understand the mechanisms involved in the tribological behavior of the Ag-Ta-O system, tantalum oxide coatings with a small content of silver were produced to investigate the metastable nature of this self-lubricating material. The coatings were produced by unbalanced magnetron sputtering, ball-on-disk wear tested at 750 °C, and subsequently characterized by X-ray diffraction, Scanning Auger Nanoprobe, cross-sectional Scanning Electron Microscopy, and Transmission Electron Microscopy. Complementary molecular dynamic simulations were carried out to investigate changes in the chemical and structural properties at the interface due to sliding for films with varying silver content. Both the experimental characterization and the theoretical modeling showed that silver content affects friction and wear, through the role of silver in film reconstruction during sliding. The results suggest that the relative amount of silver may be used to tune film performance for a given application.

  20. Reconstruction mechanisms of tantalum oxide coatings with low concentrations of silver for high temperature tribological applications

    International Nuclear Information System (INIS)

    Stone, D. S.; Bischof, M.; Aouadi, S. M.; Gao, H.; Martini, A.; Chantharangsi, C.; Paksunchai, C.

    2014-01-01

    Silver tantalate (AgTaO 3 ) coatings have been found to exhibit outstanding tribological properties at elevated temperatures. To understand the mechanisms involved in the tribological behavior of the Ag-Ta-O system, tantalum oxide coatings with a small content of silver were produced to investigate the metastable nature of this self-lubricating material. The coatings were produced by unbalanced magnetron sputtering, ball-on-disk wear tested at 750 °C, and subsequently characterized by X-ray diffraction, Scanning Auger Nanoprobe, cross-sectional Scanning Electron Microscopy, and Transmission Electron Microscopy. Complementary molecular dynamic simulations were carried out to investigate changes in the chemical and structural properties at the interface due to sliding for films with varying silver content. Both the experimental characterization and the theoretical modeling showed that silver content affects friction and wear, through the role of silver in film reconstruction during sliding. The results suggest that the relative amount of silver may be used to tune film performance for a given application

  1. SU-F-J-74: High Z Geometric Integrity and Beam Hardening Artifact Assessment Using a Retrospective Metal Artifact Reduction (MAR) Reconstruction Algorithm

    International Nuclear Information System (INIS)

    Woods, K; DiCostanzo, D; Gupta, N

    2016-01-01

    Purpose: To test the efficacy of a retrospective metal artifact reduction (MAR) reconstruction algorithm for a commercial computed tomography (CT) scanner for radiation therapy purposes. Methods: High Z geometric integrity and artifact reduction analysis was performed with three phantoms using General Electric’s (GE) Discovery CT. The three phantoms included: a Computerized Imaging Reference Systems (CIRS) electron density phantom (Model 062) with a 6.5 mm diameter titanium rod insert, a custom spine phantom using Synthes Spine hardware submerged in water, and a dental phantom with various high Z fillings submerged in water. Each phantom was reconstructed using MAR and compared against the original scan. Furthermore, each scenario was tested using standard and extended Hounsfield Unit (HU) ranges. High Z geometric integrity was performed using the CIRS phantom, while the artifact reduction was performed using all three phantoms. Results: Geometric integrity of the 6.5 mm diameter rod was slightly overestimated for non-MAR scans for both standard and extended HU. With MAR reconstruction, the rod was underestimated for both standard and extended HU. For artifact reduction, the mean and standard deviation was compared in a volume of interest (VOI) in the surrounding material (water and water equivalent material, ∼0HU). Overall, the mean value of the VOI was closer to 0 HU for the MAR reconstruction compared to the non-MAR scan for most phantoms. Additionally, the standard deviations for all phantoms were greatly reduced using MAR reconstruction. Conclusion: GE’s MAR reconstruction algorithm improves image quality with the presence of high Z material with minimal degradation of its geometric integrity. High Z delineation can be carried out with proper contouring techniques. The effects of beam hardening artifacts are greatly reduced with MAR reconstruction. Tissue corrections due to these artifacts can be eliminated for simple high Z geometries and greatly

  2. SU-F-J-74: High Z Geometric Integrity and Beam Hardening Artifact Assessment Using a Retrospective Metal Artifact Reduction (MAR) Reconstruction Algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Woods, K; DiCostanzo, D; Gupta, N [Ohio State University Columbus, OH (United States)

    2016-06-15

    Purpose: To test the efficacy of a retrospective metal artifact reduction (MAR) reconstruction algorithm for a commercial computed tomography (CT) scanner for radiation therapy purposes. Methods: High Z geometric integrity and artifact reduction analysis was performed with three phantoms using General Electric’s (GE) Discovery CT. The three phantoms included: a Computerized Imaging Reference Systems (CIRS) electron density phantom (Model 062) with a 6.5 mm diameter titanium rod insert, a custom spine phantom using Synthes Spine hardware submerged in water, and a dental phantom with various high Z fillings submerged in water. Each phantom was reconstructed using MAR and compared against the original scan. Furthermore, each scenario was tested using standard and extended Hounsfield Unit (HU) ranges. High Z geometric integrity was performed using the CIRS phantom, while the artifact reduction was performed using all three phantoms. Results: Geometric integrity of the 6.5 mm diameter rod was slightly overestimated for non-MAR scans for both standard and extended HU. With MAR reconstruction, the rod was underestimated for both standard and extended HU. For artifact reduction, the mean and standard deviation was compared in a volume of interest (VOI) in the surrounding material (water and water equivalent material, ∼0HU). Overall, the mean value of the VOI was closer to 0 HU for the MAR reconstruction compared to the non-MAR scan for most phantoms. Additionally, the standard deviations for all phantoms were greatly reduced using MAR reconstruction. Conclusion: GE’s MAR reconstruction algorithm improves image quality with the presence of high Z material with minimal degradation of its geometric integrity. High Z delineation can be carried out with proper contouring techniques. The effects of beam hardening artifacts are greatly reduced with MAR reconstruction. Tissue corrections due to these artifacts can be eliminated for simple high Z geometries and greatly

  3. Virtual 3-D Facial Reconstruction

    Directory of Open Access Journals (Sweden)

    Martin Paul Evison

    2000-06-01

    Facial reconstructions in archaeology allow empathy with people who lived in the past and enjoy considerable popularity with the public. It is a common misconception that facial reconstruction will produce an exact likeness; a resemblance is the best that can be hoped for. Research at Sheffield University is aimed at the development of a computer system for facial reconstruction that will be accurate, rapid, repeatable, accessible and flexible. This research is described and prototypical 3-D facial reconstructions are presented. Interpolation models simulating obesity, ageing and ethnic affiliation are also described. Some strengths and weaknesses in the models, and their potential for application in archaeology are discussed.

  4. A high order compact least-squares reconstructed discontinuous Galerkin method for the steady-state compressible flows on hybrid grids

    Science.gov (United States)

    Cheng, Jian; Zhang, Fan; Liu, Tiegang

    2018-06-01

    In this paper, a class of new high order reconstructed DG (rDG) methods based on the compact least-squares (CLS) reconstruction [23,24] is developed for simulating the two dimensional steady-state compressible flows on hybrid grids. The proposed method combines the advantages of the DG discretization with the flexibility of the compact least-squares reconstruction, which exhibits its superior potential in enhancing the level of accuracy and reducing the computational cost compared to the underlying DG methods with respect to the same number of degrees of freedom. To be specific, a third-order compact least-squares rDG(p1p2) method and a fourth-order compact least-squares rDG(p2p3) method are developed and investigated in this work. In this compact least-squares rDG method, the low order degrees of freedom are evolved through the underlying DG(p1) method and DG(p2) method, respectively, while the high order degrees of freedom are reconstructed through the compact least-squares reconstruction, in which the constitutive relations are built by requiring the reconstructed polynomial and its spatial derivatives on the target cell to conserve the cell averages and the corresponding spatial derivatives on the face-neighboring cells. The large sparse linear system resulting from the compact least-squares reconstruction can be solved relatively efficiently when it is coupled with the temporal discretization in the steady-state simulations. A number of test cases are presented to assess the performance of the high order compact least-squares rDG methods, which demonstrates their potential to be an alternative approach for the high order numerical simulations of steady-state compressible flows.
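
    The reconstruction principle, choosing higher-order coefficients so the polynomial best reproduces the face-neighbor cell data in a least-squares sense, can be illustrated in 1D with a linear ansatz. This is a much-simplified analogue of the p1-to-p2 and p2-to-p3 reconstructions in the paper, not the method itself:

```python
def ls_slope(x_centers, u_bar, i):
    """Least-squares slope for a linear reconstruction in cell i from the
    cell averages of its face neighbors (1D).  The linear ansatz conserves
    the average of cell i by construction, mirroring the CLS constraints."""
    num = den = 0.0
    for j in (i - 1, i + 1):
        dx = x_centers[j] - x_centers[i]
        num += dx * (u_bar[j] - u_bar[i])
        den += dx * dx
    return num / den

x = [0.5, 1.5, 2.5, 3.5]           # cell centers, unit cells
u = [2 * xc + 1 for xc in x]       # averages of a linear field equal center values
slope = ls_slope(x, u, 1)
print(slope)  # recovers the exact slope 2 for a linear field
```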

  5. Optoelectronic properties of XIn2S4 (X = Cd, Mg) thiospinels through highly accurate all-electron FP-LAPW method coupled with modified approximations

    International Nuclear Information System (INIS)

    Yousaf, Masood; Dalhatu, S.A.; Murtaza, G.; Khenata, R.; Sajjad, M.; Musa, A.; Rahnamaye Aliabad, H.A.; Saeed, M.A.

    2015-01-01

    Highlights: • Highly accurate all-electron FP-LAPW+lo method is used. • New physical parameters are reported, important for the fabrication of optoelectronic devices. • A comparative study that involves the FP-LAPW+lo method and modified approximations. • Computed band gap values have good agreement with the experimental values. • Optoelectronic results of fundamental importance can be utilized for the fabrication of devices. - Abstract: We report the structural, electronic and optical properties of the thiospinels XIn 2 S 4 (X = Cd, Mg), using the highly accurate all-electron full potential linearized augmented plane wave plus local orbital method. In order to calculate the exchange and correlation energies, the method is coupled with modified techniques such as GGA+U and mBJ-GGA, which yield improved results as compared to the previous studies. The GGA+SOC approximation is also used for the first time on these compounds to examine the spin orbit coupling effect on the band structure. From the analysis of the structural parameters, robust character is predicted for both materials. Energy band structure profiles are largely the same for GGA, GGA+SOC, GGA+U and mBJ-GGA, confirming the indirect and direct band gap nature of CdIn 2 S 4 and MgIn 2 S 4 materials, respectively. We report the trend of band gap results as: (mBJ-GGA) > (GGA+U) > (GGA) > (GGA+SOC). Localized regions appearing in the valence bands for CdIn 2 S 4 tend to split by ≈1 eV in the case of GGA+SOC. Many new physical parameters are reported that can be important for the fabrication of optoelectronic devices. Optical spectra, namely the dielectric function (DF), refractive index n(ω), extinction coefficient k(ω), reflectivity R(ω), optical conductivity σ(ω), absorption coefficient α(ω) and electron loss function, are discussed. The optical absorption edge is noted at 1.401 and 1.782 eV for CdIn 2 S 4 and MgIn 2 S 4 , respectively. The prominent peaks in the electron energy spectrum

  6. 3D reconstruction and characterization of carbides in Ni-based high carbon alloy in a FIB-SEM system

    Energy Technology Data Exchange (ETDEWEB)

    Bala, Piotr [AGH Univ. of Science and Technology, Faculty of Metals Engineering and Industrial Computer Science, Krakow (Poland); AGH Univ. of Science and Technology, Academic Centre of Materials and Nanotechnology, Krakow (Poland); Tsyrulin, Katja; Jaksch, Heiner [Carl-Zeiss, Oberkochen (Germany); Stepien, Milena [AGH Univ. of Science and Technology, Academic Centre of Materials and Nanotechnology, Krakow (Poland)

    2015-07-15

    Dual beam focused ion beam scanning electron microscopes (FIB-SEMs) are well suited for characterizing micron and submicron size microstructural features in three dimensions throughout a serial-sectioning experiment. In this article, a FIB-SEM instrument was used to collect morphological, crystallographic, and chemical information for an Ni-Ta-Al-Cr alloy of high carbon content. The alloy has been designed to have excellent tribological properties at elevated temperatures. The morphology, spatial distribution, scale, and degree of interconnection of primary carbides in the Ni-Ta-Al-Cr-C alloy was assessed via serial sectioning in a casting cross-section. The 3D reconstructions showed that the primary carbides and dendrites were forming a dendrite surrounded by primary carbide network over the entire cross-section. Additionally, the morphology and spatial distribution of secondary carbides after heat treatment was determined.

  7. An accurate, specific, sensitive, high-throughput method based on a microsphere immunoassay for multiplex detection of three viruses and bacterial fruit blotch bacterium in cucurbits.

    Science.gov (United States)

    Charlermroj, Ratthaphol; Makornwattana, Manlika; Himananto, Orawan; Seepiban, Channarong; Phuengwas, Sudtida; Warin, Nuchnard; Gajanandana, Oraprapai; Karoonuthaisiri, Nitsara

    2017-09-01

    To employ a microsphere immunoassay (MIA) to simultaneously detect multiple plant pathogens (potyviruses, Watermelon silver mottle virus, Melon yellow spot virus, and Acidovorax avenae subsp. citrulli) in actual plant samples, several factors need to be optimized and rigorously validated. Here, a simple extraction method using a single extraction buffer was successfully selected to detect the four pathogens in various cucurbit samples (cucumber, cantaloupe, melon, and watermelon). The extraction method and assay performance were validated with inoculated and field cucurbit samples. The MIA showed 98-99% relative accuracy, 97-100% relative specificity and 92-100% relative sensitivity when compared to commercial ELISA kits and reverse transcription PCR. In addition, the MIA was also able to accurately detect multiple-infected field samples. The results demonstrate that one common extraction method for all tested cucurbit samples could be applied to detect multiple pathogens, avoiding the need for multiple protocols to be employed. This multiplex method can therefore be instrumental for high-throughput screening of multiple plant pathogens with many advantages such as a shorter assay time (2.5 h) with a single assay format, a lower cost of detection ($5 vs $19.7 for 4 pathogens/sample) and less labor requirement. Its multiplex capacity can also be expanded to detect up to 50 different pathogens upon the availability of specific antibodies. Copyright © 2017 Elsevier B.V. All rights reserved.
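
    The relative accuracy, sensitivity, and specificity quoted above are standard confusion-matrix ratios computed against a reference method (here ELISA or RT-PCR). A minimal sketch with toy data, not the study's results:

```python
def relative_metrics(candidate, reference):
    """Relative accuracy/sensitivity/specificity of a candidate assay
    against a reference method (both lists of booleans: detected or not)."""
    pairs = list(zip(candidate, reference))
    tp = sum(c and r for c, r in pairs)
    tn = sum((not c) and (not r) for c, r in pairs)
    fp = sum(c and (not r) for c, r in pairs)
    fn = sum((not c) and r for c, r in pairs)
    return {
        "accuracy": (tp + tn) / len(pairs),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }

ref = [True, True, True, False, False, False]   # reference assay calls
mia = [True, True, False, False, False, False]  # candidate assay calls
metrics = relative_metrics(mia, ref)
print(metrics)
```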

  8. Precision Timing of PSR J0437-4715: An Accurate Pulsar Distance, a High Pulsar Mass, and a Limit on the Variation of Newton's Gravitational Constant

    Science.gov (United States)

    Verbiest, J. P. W.; Bailes, M.; van Straten, W.; Hobbs, G. B.; Edwards, R. T.; Manchester, R. N.; Bhat, N. D. R.; Sarkissian, J. M.; Jacoby, B. A.; Kulkarni, S. R.

    2008-05-01

    Analysis of 10 years of high-precision timing data on the millisecond pulsar PSR J0437-4715 has resulted in a model-independent kinematic distance based on an apparent orbital period derivative, Ṗb, determined at the 1.5% level of precision (Dk = 157.0 ± 2.4 pc), making it one of the most accurate stellar distance estimates published to date. The discrepancy between this measurement and a previously published parallax distance estimate is attributed to errors in the DE200 solar system ephemerides. The precise measurement of Ṗb allows a limit on the variation of Newton's gravitational constant, |Ġ/G| ≤ 23 × 10⁻¹² yr⁻¹. We also constrain any anomalous acceleration along the line of sight to the pulsar to |a⊙/c| ≤ 1.5 × 10⁻¹⁸ s⁻¹ at 95% confidence, and derive a pulsar mass, mpsr = 1.76 ± 0.20 M⊙, one of the highest estimates so far obtained.
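
    The kinematic (Shklovskii) distance works because transverse motion produces an apparent orbital period derivative Ṗb = (μ² D / c) Pb, so D = c Ṗb / (μ² Pb). The parameter values below are approximate literature figures for PSR J0437-4715, used only to illustrate the arithmetic:

```python
# Shklovskii kinematic distance: D = c * Pb_dot / (mu**2 * Pb).
C = 2.998e8                            # speed of light, m/s
MAS_YR = 1e-3 / 206265.0 / 3.156e7     # mas/yr -> rad/s
PC = 3.086e16                          # parsec, m

mu = 140.9 * MAS_YR                    # total proper motion (approx.), rad/s
pb = 5.741 * 86400.0                   # orbital period, s
pb_dot = 3.73e-12                      # apparent orbital period derivative, s/s

d_pc = C * pb_dot / (mu ** 2 * pb) / PC
print(round(d_pc))  # ~156 pc with these rounded inputs, vs the published 157.0 pc
```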

  9. A non-rigid point matching method with local topology preservation for accurate bladder dose summation in high dose rate cervical brachytherapy

    International Nuclear Information System (INIS)

    Chen, Haibin; Liao, Yuliang; Zhen, Xin; Zhou, Linghong; Zhong, Zichun; Pompoš, Arnold; Hrycushko, Brian; Albuquerque, Kevin; Gu, Xuejun

    2016-01-01

    GEC-ESTRO guidelines for high dose rate cervical brachytherapy advocate the reporting of the D2cc (the minimum dose received by the maximally exposed 2cc volume) to organs at risk. Due to large interfractional organ motion, reporting of accurate cumulative D2cc over a multifractional course is a non-trivial task requiring deformable image registration and deformable dose summation. To efficiently and accurately describe the point-to-point correspondence of the bladder wall over all treatment fractions while preserving local topologies, we propose a novel graphics processing unit (GPU)-based non-rigid point matching algorithm. This is achieved by introducing local anatomic information into the iterative update of correspondence matrix computation in the ‘thin plate splines-robust point matching’ (TPS-RPM) scheme. The performance of the GPU-based TPS-RPM with local topology preservation algorithm (TPS-RPM-LTP) was evaluated using four numerically simulated synthetic bladders having known deformations, a custom-made porcine bladder phantom embedded with twenty-one fiducial markers, and 29 fractional computed tomography (CT) images from seven cervical cancer patients. Results show that TPS-RPM-LTP achieved excellent geometric accuracy, with landmark residual distance error (RDE) of 0.7 ± 0.3 mm for the numerical synthetic data with different scales of bladder deformation and structure complexity, and 3.7 ± 1.8 mm and 1.6 ± 0.8 mm for the porcine bladder phantom with large and small deformation, respectively. The RDE accuracy of the urethral orifice landmarks in patient bladders was 3.7 ± 2.1 mm. When compared to the original TPS-RPM, the TPS-RPM-LTP improved landmark matching by reducing landmark RDE by 50 ± 19%, 37 ± 11% and 28 ± 11% for the synthetic, porcine phantom and patient bladders, respectively. This was achieved with a computational time of less than 15 s in all cases.
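    The core of the RPM scheme referenced above is a soft correspondence matrix, updated from pairwise point distances and pushed toward a doubly stochastic form. A minimal sketch of that single step (the TPS warp and the local-topology term that defines TPS-RPM-LTP are omitted; all names and parameters here are illustrative):

```python
import numpy as np

def soft_correspondence(X, Y, T, n_iter=20):
    """One softassign-style correspondence update as used in RPM schemes:
    m[i, j] ~ exp(-||x_i - y_j||^2 / (2T)), made approximately doubly
    stochastic by alternating row/column normalization (Sinkhorn).
    X: (N, d) source points, Y: (M, d) target points, T: temperature."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    m = np.exp(-d2 / (2.0 * T))
    for _ in range(n_iter):
        m /= m.sum(axis=1, keepdims=True)   # rows sum to 1
        m /= m.sum(axis=0, keepdims=True)   # columns sum to 1
    m /= m.sum(axis=1, keepdims=True)       # finish on a row pass
    return m

rng = np.random.default_rng(0)
X = rng.standard_normal((30, 3))
Y = X + 0.01 * rng.standard_normal((30, 3))   # slightly perturbed copy
M = soft_correspondence(X, Y, T=0.05)
# At low temperature the matrix approaches the true permutation (identity here).
match = M.argmax(axis=1)
print((match == np.arange(30)).mean())
```

    In the full algorithm this update alternates with a thin-plate-spline fit to the current soft matches while the temperature T is annealed downward.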

  10. SNP Data Quality Control in a National Beef and Dairy Cattle System and Highly Accurate SNP Based Parentage Verification and Identification

    Directory of Open Access Journals (Sweden)

    Matthew C. McClure

    2018-03-01

    -matching genotypes per animal, SNP duplicates, sex and breed prediction mismatches, parentage and progeny validation results, and other situations. The Animal QC pipeline makes use of the ICBF800 SNP set where appropriate to identify errors in a computationally efficient yet still highly accurate manner.

  11. A non-rigid point matching method with local topology preservation for accurate bladder dose summation in high dose rate cervical brachytherapy.

    Science.gov (United States)

    Chen, Haibin; Zhong, Zichun; Liao, Yuliang; Pompoš, Arnold; Hrycushko, Brian; Albuquerque, Kevin; Zhen, Xin; Zhou, Linghong; Gu, Xuejun

    2016-02-07

    GEC-ESTRO guidelines for high dose rate cervical brachytherapy advocate the reporting of the D2cc (the minimum dose received by the maximally exposed 2cc volume) to organs at risk. Due to large interfractional organ motion, reporting of accurate cumulative D2cc over a multifractional course is a non-trivial task requiring deformable image registration and deformable dose summation. To efficiently and accurately describe the point-to-point correspondence of the bladder wall over all treatment fractions while preserving local topologies, we propose a novel graphics processing unit (GPU)-based non-rigid point matching algorithm. This is achieved by introducing local anatomic information into the iterative update of correspondence matrix computation in the 'thin plate splines-robust point matching' (TPS-RPM) scheme. The performance of the GPU-based TPS-RPM with local topology preservation algorithm (TPS-RPM-LTP) was evaluated using four numerically simulated synthetic bladders having known deformations, a custom-made porcine bladder phantom embedded with twenty-one fiducial markers, and 29 fractional computed tomography (CT) images from seven cervical cancer patients. Results show that TPS-RPM-LTP achieved excellent geometric accuracy, with landmark residual distance error (RDE) of 0.7 ± 0.3 mm for the numerical synthetic data with different scales of bladder deformation and structure complexity, and 3.7 ± 1.8 mm and 1.6 ± 0.8 mm for the porcine bladder phantom with large and small deformation, respectively. The RDE accuracy of the urethral orifice landmarks in patient bladders was 3.7 ± 2.1 mm. When compared to the original TPS-RPM, the TPS-RPM-LTP improved landmark matching by reducing landmark RDE by 50 ± 19%, 37 ± 11% and 28 ± 11% for the synthetic, porcine phantom and patient bladders, respectively. This was achieved with a computational time of less than 15 s in all cases.

  12. Can optical diagnosis of small colon polyps be accurate? Comparing standard scope without narrow banding to high definition scope with narrow banding.

    Science.gov (United States)

    Ashktorab, Hassan; Etaati, Firoozeh; Rezaeean, Farahnaz; Nouraie, Mehdi; Paydar, Mansour; Namin, Hassan Hassanzadeh; Sanderson, Andrew; Begum, Rehana; Alkhalloufi, Kawtar; Brim, Hassan; Laiyemo, Adeyinka O

    2016-07-28

    To study the accuracy of a high definition (HD) scope with narrow band imaging (NBI) vs a standard white light colonoscope without NBI (ST) in predicting the histology of colon polyps. The histopathologic diagnosis was reported by pathologists as part of routine care. Of the participants in the study, 55 (37%) were male, and the median (interquartile range) age was 56 (19-80) years. Demographic and clinical characteristics, past medical history of patients, and the data obtained by the two instruments were not significantly different, and the two methods detected similar numbers of polyps. In ST scope 89% of polyps were scope (P = 0.7). The ST scope had a positive predictive value (PPV) and positive likelihood ratio (PLR) of 86% and 4.0 for adenoma, compared to 74% and 2.6 for the HD scope. There was a trend toward higher sensitivity for the HD scope (68%) compared to the ST scope (53%), with almost the same specificity. The ST scope had a PPV and PLR of 38% and 1.8 for hyperplastic polyp (HPP), compared to 42% and 2.2 for the HD scope. The sensitivity and specificity of the two instruments for HPP diagnosis were similar. Our results indicated that the HD scope was more sensitive in the diagnosis of adenoma than the ST scope. Clinical diagnosis of HPP with either scope is less accurate than that of adenoma. Colonoscopy diagnosis is not yet fully matched with the pathologic diagnosis of colon polyps. However, with advances in both imaging and training, it may be possible to increase the sensitivity and specificity of the scopes and hence save money by eliminating the time and cost of immunohistochemistry/pathology.
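    The PPV and PLR figures above are related to sensitivity, specificity, and adenoma prevalence by standard formulas: PLR = sensitivity / (1 − specificity), and PPV follows from Bayes' theorem. A short sketch with illustrative numbers, not the study's raw counts:

```python
def positive_likelihood_ratio(sens, spec):
    """PLR: how much a positive result raises the odds of disease."""
    return sens / (1.0 - spec)

def ppv(sens, spec, prevalence):
    """Bayes' theorem: P(disease | positive test)."""
    tp = sens * prevalence
    fp = (1.0 - spec) * (1.0 - prevalence)
    return tp / (tp + fp)

# Illustrative values: 68% sensitivity, 83% specificity, in a population
# where 60% of removed polyps are adenomas.
plr = positive_likelihood_ratio(0.68, 0.83)
print(round(plr, 2), round(ppv(0.68, 0.83, 0.60), 2))
```

    Note that PPV, unlike PLR, depends on the prevalence in the screened population, which is why the two scopes' PPVs cannot be compared across cohorts with different adenoma rates.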

  13. PET reconstruction

    International Nuclear Information System (INIS)

    O'Sullivan, F.; Pawitan, Y.; Harrison, R.L.; Lewellen, T.K.

    1990-01-01

    In statistical terms, filtered backprojection can be viewed as smoothed Least Squares (LS). In this paper, the authors report on improving LS resolution by incorporating locally adaptive smoothers, imposing positivity, and using statistical methods for optimal selection of the resolution parameter. The resulting algorithm has high computational efficiency relative to more elaborate Maximum Likelihood (ML) type techniques (i.e., EM with sieves). Practical aspects of the procedure are discussed in the context of PET, and illustrations with computer-simulated and real tomograph data are presented. The relative recovery coefficients for a 9 mm sphere in a computer-simulated hot-spot phantom range from 0.3 to 0.6 as the number of counts ranges from 10,000 to 640,000. The authors also present results illustrating the relative efficacy of ML and LS reconstruction techniques.
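    The smoothed least-squares view can be written as x̂ = argmin ||Ax − b||² + λ||Dx||², with D a roughness penalty and λ the resolution parameter; positivity is then imposed on top. A toy 1D sketch (a blur matrix stands in for the tomographic projector, and clipping stands in for a proper constrained solver):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 80
# Toy forward model: a Gaussian blur matrix standing in for the projector.
A = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        A[i, j] = np.exp(-0.5 * ((i - j) / 2.0) ** 2)
A /= A.sum(axis=1, keepdims=True)

x_true = np.zeros(n); x_true[30:50] = 1.0          # hot-spot "phantom"
b = A @ x_true + 0.01 * rng.standard_normal(n)     # noisy data

# Second-difference roughness penalty D.
D = np.diff(np.eye(n), n=2, axis=0)

def smoothed_ls(lam):
    # Normal equations of ||Ax - b||^2 + lam * ||Dx||^2,
    # with positivity imposed crudely by clipping.
    x = np.linalg.solve(A.T @ A + lam * D.T @ D, A.T @ b)
    return np.clip(x, 0.0, None)

err0 = np.linalg.norm(smoothed_ls(1e-10) - x_true)   # nearly unregularized
err1 = np.linalg.norm(smoothed_ls(1e-3) - x_true)    # smoothed
print(err0, err1)
```

    With essentially no smoothing the ill-conditioned solve amplifies noise; a moderate λ trades a little resolution for a much smaller reconstruction error, which is exactly the resolution-parameter selection problem the abstract addresses.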

  14. Early one-stage surgical reconstruction of the extremely high vagina in patients with congenital adrenal hyperplasia.

    Science.gov (United States)

    Donahoe, P K; Gustafson, M L

    1994-02-01

    High vaginal atresia is a very rare anomaly seen in the most severely masculinized females with congenital adrenal hyperplasia. These children have a foreshortened vagina conjoining the urogenital sinus proximal to the external urethral sphincter. In the past, they have undergone early clitoral recession and labioscrotal reduction, followed by vaginal pull-through at 2 to 4 years of age. Cumulative experience with repair of this anomaly has led us to attempt earlier one-stage intervention and to develop techniques that circumvent previously encountered vaginal stenoses. One-stage reconstruction of three older children (ages 2 to 9 years) involved: closure of the urethrovaginal fistula, mobilization of the vagina from the rectum and urethra, use of bilateral buttock flaps to augment the anterior vaginal wall, augmentation of the posterior wall with an inverted perineal U flap, clitoral recession, and advancement of labioscrotal and clitoral shaft flaps inferiorly to create labia majora and minora (respectively). The introiti were quite capacious after employing such flaps, did not require postoperative dilatation, and were free of strictures or urethrovaginal fistulae during long-term follow-up. Three younger patients were seen for initial evaluation at 8 to 12 months of age, when early one-stage reconstruction was undertaken. Paradoxically, these repairs were technically less difficult and did not require buttock flap augmentation because an island of anterior perineal skin could be rotated in to reach the anterior vaginal wall. A nerve stimulator was used to identify the external urethral sphincter, while the vagina was aggressively mobilized and advanced forward beyond the site of fistula closure on the urethra to avert formation of a urethrovaginal fistula.

  15. An innovative seeding technique for photon conversion reconstruction at CMS

    International Nuclear Information System (INIS)

    Giordano, D; Sguazzoni, G

    2012-01-01

    The conversion of photons into electron-positron pairs in the detector material is a nuisance in the event reconstruction of high energy physics experiments, since the measurement of the electromagnetic component of interaction products is degraded. Nonetheless, this unavoidable detector effect can also be extremely useful. The reconstruction of photon conversions can be used to probe the detector material and to accurately measure soft photons that come from radiative decays in heavy flavor physics. In fact, a converted photon can be measured with very high momentum resolution by exploiting the excellent charged-track reconstruction of a tracking detector such as that of CMS at the LHC. The main issue is that photon conversion tracks are difficult to reconstruct for standard reconstruction algorithms: they are typically soft and very displaced from the primary interaction vertex. An innovative seeding technique that exploits the peculiar photon conversion topology, successfully applied in the CMS track reconstruction sequence, is presented. The performance of this technique and the substantial enhancement of photon conversion reconstruction efficiency are discussed. Application examples are given.

  16. Accurate solution algorithms for incompressible multiphase flows

    International Nuclear Information System (INIS)

    Rider, W.J.; Kothe, D.B.; Mosso, S.J.; Cerutti, J.H.; Hochstein, J.I.

    1994-01-01

    A number of advances in modeling multiphase incompressible flow are described. These advances include high-order Godunov projection methods, piecewise linear interface reconstruction and tracking, and the continuum surface force model. Examples are given.
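    Of the ingredients listed, piecewise linear interface reconstruction (PLIC) is the most self-contained to illustrate: in each cell the interface is represented as a line n·x = α, and α is chosen so that the cut volume matches the cell's volume fraction. A sketch of that positioning step for a unit square, using grid sampling and bisection rather than the analytic polygon cutting of a production PLIC code:

```python
import numpy as np

def cut_area(n, alpha, samples=400):
    """Fraction of the unit square with n . x <= alpha, estimated on a
    regular grid (adequate for a sketch; real codes cut polygons exactly)."""
    xs = (np.arange(samples) + 0.5) / samples
    X, Y = np.meshgrid(xs, xs)
    return float((n[0] * X + n[1] * Y <= alpha).mean())

def plic_alpha(n, frac, tol=1e-4):
    """Bisection for the line constant alpha with cut_area(n, alpha) = frac.
    cut_area is monotone in alpha, so bisection converges."""
    corners = [0.0, n[0], n[1], n[0] + n[1]]
    lo, hi = min(corners), max(corners)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if cut_area(n, mid) < frac:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

n = np.array([0.6, 0.8])        # interface normal (unit length)
alpha = plic_alpha(n, 0.3)      # position the line to enclose 30% of the cell
print(alpha, cut_area(n, alpha))
```

    In a full volume-of-fluid scheme the normal n is itself estimated from volume-fraction gradients in neighboring cells before this positioning step runs.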

  17. Reducing the effects of metal artefact using high keV monoenergetic reconstruction of dual energy CT (DECT) in hip replacements

    Energy Technology Data Exchange (ETDEWEB)

    Lewis, Mark [Norfolk and Norwich University Hospital, Norwich (United Kingdom); Norwich Radiology Academy, Norwich (United Kingdom); Reid, Karen [Norfolk and Norwich University Hospital, Norwich (United Kingdom); Toms, Andoni P. [Norfolk and Norwich University Hospital and University of East Anglia, Norwich (United Kingdom)

    2013-02-15

    The aim of this study was to determine whether high keV monoenergetic reconstruction of dual energy computed tomography (DECT) could be used to overcome the effects of beam hardening artefact that arise from preferential attenuation of low energy photons. Two phantoms were used: a Charnley total hip replacement set in gelatine and a Catphan 500. DECT datasets were acquired at 100, 200 and 400 mA (Siemens Definition Flash, 100 and 140 kVp) and reconstructed using a standard combined algorithm (1:1) and then as monoenergetic reconstructions at 10 keV intervals from 40 to 190 keV. Semi-automated segmentation with threshold inpainting was used to obtain the attenuation values and standard deviation (SD) of the streak artefact. High contrast line pair resolution and background noise were assessed using the Catphan 500. Streak artefact is progressively reduced with increasing keV monoenergetic reconstructions. Reconstruction of a 400 mA acquisition at 150 keV results in a reduction in the volume of streak artefact from 65 cm³ to 17 cm³ (74%). There was a decrease in the contrast to noise ratio (CNR) at higher tube voltages, with the peak CNR seen at 70-80 keV. High contrast spatial resolution was maintained at high keV values. Monoenergetic reconstruction of dual energy CT at increasing theoretical kilovoltages reduces the streak artefact produced by beam hardening from orthopaedic prostheses, accompanied by a modest increase in heterogeneity of background image attenuation and a decrease in contrast to noise ratio, but no deterioration in high contrast line pair resolution. (orig.)
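    The contrast-to-noise ratio used as a figure of merit above is simply the ROI/background mean difference divided by the background noise. A minimal sketch with illustrative HU values (not the phantom measurements):

```python
import numpy as np

def cnr(roi, background):
    """Contrast-to-noise ratio between a region of interest and background:
    |mean(roi) - mean(bg)| / std(bg)."""
    return abs(roi.mean() - background.mean()) / background.std()

rng = np.random.default_rng(0)
# Illustrative attenuation values (HU): soft-tissue ROI vs noisy background.
roi = rng.normal(60.0, 10.0, 1000)
bg = rng.normal(0.0, 15.0, 1000)
print(round(cnr(roi, bg), 1))
```

    The trade-off reported in the abstract is visible in this definition: higher-keV reconstructions suppress the streak (raising the usable ROI) but also flatten contrast, so CNR peaks at an intermediate energy (70-80 keV here).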

  18. Portable high-intensity focused ultrasound system with 3D electronic steering, real-time cavitation monitoring, and 3D image reconstruction algorithms: a preclinical study in pigs

    International Nuclear Information System (INIS)

    Choi, Jin Woo; Lee, Jae Young; Hwang, Eui Jin; Hwang, In Pyeong; Woo, Sung Min; Lee, Chang Joo; Park, Eun Joo; Choi, Byung Ihn

    2014-01-01

    The aim of this study was to evaluate the safety and accuracy of a new portable ultrasonography-guided high-intensity focused ultrasound (USg-HIFU) system with a 3-dimensional (3D) electronic steering transducer, a simultaneous ablation and imaging module, real-time cavitation monitoring, and 3D image reconstruction algorithms. To address the accuracy of the transducer, hydrophones in a water chamber were used to assess the generation of sonic fields. An animal study was also performed in five pigs by ablating in vivo thighs by single-point sonication (n=10) or volume sonication (n=10) and ex vivo kidneys by single-point sonication (n=10). Histological and statistical analyses were performed. In the hydrophone study, peak voltages were detected within 1.0 mm from the targets on the y- and z-axes and within 2.0-mm intervals along the x-axis (z-axis, direction of ultrasound propagation; y- and x-axes, perpendicular to the direction of ultrasound propagation). Twenty-nine of 30 HIFU sessions successfully created ablations at the target. The in vivo porcine thigh study showed only a small discrepancy (width, 0.5-1.1 mm; length, 3.0 mm) between the planning ultrasonograms and the pathological specimens. Inordinate thermal damage was not observed in the adjacent tissues or sonic pathways in the in vivo thigh and ex vivo kidney studies. Our study suggests that this new USg-HIFU system may be a safe and accurate technique for ablating soft tissues and encapsulated organs.

  19. Portable high-intensity focused ultrasound system with 3D electronic steering, real-time cavitation monitoring, and 3D image reconstruction algorithms: a preclinical study in pigs

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Jin Woo; Lee, Jae Young; Hwang, Eui Jin; Hwang, In Pyeong; Woo, Sung Min; Lee, Chang Joo; Park, Eun Joo; Choi, Byung Ihn [Dept. of Radiology and Institute of Radiation Medicine, Seoul National University Hospital, Seoul (Korea, Republic of)

    2014-10-15

    The aim of this study was to evaluate the safety and accuracy of a new portable ultrasonography-guided high-intensity focused ultrasound (USg-HIFU) system with a 3-dimensional (3D) electronic steering transducer, a simultaneous ablation and imaging module, real-time cavitation monitoring, and 3D image reconstruction algorithms. To address the accuracy of the transducer, hydrophones in a water chamber were used to assess the generation of sonic fields. An animal study was also performed in five pigs by ablating in vivo thighs by single-point sonication (n=10) or volume sonication (n=10) and ex vivo kidneys by single-point sonication (n=10). Histological and statistical analyses were performed. In the hydrophone study, peak voltages were detected within 1.0 mm from the targets on the y- and z-axes and within 2.0-mm intervals along the x-axis (z-axis, direction of ultrasound propagation; y- and x-axes, perpendicular to the direction of ultrasound propagation). Twenty-nine of 30 HIFU sessions successfully created ablations at the target. The in vivo porcine thigh study showed only a small discrepancy (width, 0.5-1.1 mm; length, 3.0 mm) between the planning ultrasonograms and the pathological specimens. Inordinate thermal damage was not observed in the adjacent tissues or sonic pathways in the in vivo thigh and ex vivo kidney studies. Our study suggests that this new USg-HIFU system may be a safe and accurate technique for ablating soft tissues and encapsulated organs.

  20. High-throughput image analysis of tumor spheroids: a user-friendly software application to measure the size of spheroids automatically and accurately.

    Science.gov (United States)

    Chen, Wenjin; Wong, Chung; Vosburgh, Evan; Levine, Arnold J; Foran, David J; Xu, Eugenia Y

    2014-07-08

    The increasing number of applications of three-dimensional (3D) tumor spheroids as an in vitro model for drug discovery requires their adaptation to large-scale screening formats in every step of a drug screen, including large-scale image analysis. Currently there is no ready-to-use and free image analysis software to meet this large-scale format. Most existing methods involve manually drawing the length and width of the imaged 3D spheroids, which is a tedious and time-consuming process. This study presents a high-throughput image analysis software application, SpheroidSizer, which measures the major and minor axial lengths of the imaged 3D tumor spheroids automatically and accurately, calculates the volume of each individual 3D tumor spheroid, and then outputs the results in two different forms in spreadsheets for easy manipulation in subsequent data analysis. The main advantage of this software is its powerful image analysis application that is adapted for large numbers of images. It provides a high-throughput computation and quality-control workflow. The estimated time to process 1,000 images is about 15 min on a minimally configured laptop, or around 1 min on a multi-core performance workstation. The graphical user interface (GUI) is also designed for easy quality control, and users can manually override the computer results. The key method used in this software is adapted from the active contour algorithm, also known as Snakes, which is especially suitable for images with uneven illumination and noisy backgrounds that often plague automated image processing in high-throughput screens. The complementary "Manual Initialize" and "Hand Draw" tools give SpheroidSizer the flexibility to deal with various types of spheroids and diverse-quality images. This high-throughput image analysis software remarkably reduces labor and speeds up the analysis process. Implementing this software is beneficial for 3D tumor spheroids to become a routine in vitro model.
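    SpheroidSizer's Snakes-based segmentation is not reproduced here, but the measurement it feeds (major/minor axial lengths and volume from a segmented spheroid) can be sketched with a moment-based stand-in: for a solid ellipse, the variance of pixel coordinates along a principal axis is (semi-axis)²/4, so axis lengths follow from the eigenvalues of the pixel covariance. The volume formula is a common spheroid-of-revolution estimate; all names are illustrative:

```python
import numpy as np

def axes_from_mask(mask):
    """Major/minor axial lengths of a solid blob from second moments.
    For a solid ellipse the variance along a principal axis is
    (semi_axis^2)/4, so the full axis length is 4 * sqrt(eigenvalue)."""
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs, ys]).astype(float)
    evals = np.sort(np.linalg.eigvalsh(np.cov(pts)))[::-1]
    major, minor = 4.0 * np.sqrt(evals)
    return major, minor

# Synthetic "spheroid": a solid ellipse with semi-axes 40 and 25 pixels.
h = w = 128
Y, X = np.mgrid[0:h, 0:w]
mask = ((X - 64) / 40.0) ** 2 + ((Y - 64) / 25.0) ** 2 <= 1.0

major, minor = axes_from_mask(mask)
# Spheroid-of-revolution volume estimate from the two axial lengths.
volume = (4.0 / 3.0) * np.pi * (major / 2) * (minor / 2) ** 2
print(major, minor)
```

    On this synthetic mask the recovered axes come out close to the true 80 and 50 pixels, up to pixelation error.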

  1. Measurements of angles of the normal auditory ossicles relative to the reference plane and image reconstruction technique for obtaining optimal sections of the ossicles in high-resolution multiplanar reconstruction using a multislice CT scanner

    International Nuclear Information System (INIS)

    Fujii, Naoko; Katada, Kazuhiro; Yoshioka, Satoshi; Takeuchi, Kenji; Takasu, Akihiko; Naito, Kensei

    2005-01-01

    Using high-resolution isotropic volume data obtained by 0.5 mm, 4-row multislice CT, cross-sectional observation of the auditory ossicles is possible from any desired direction without difficulty in high-resolution multiplanar reconstruction (HR-MPR) images, and distortion-free three-dimensional images of the ossicles are generated in three-dimensional CT (3D-CT) images. We measured the angles of fifty normal ossicles relative to the reference plane, which has been defined as a plane through the bilateral infraorbital margins to the middle portion of the external auditory canal. Based on the results of angle measurement, four optimal sections of the ossicles for efficient viewing of the ossicular chain were identified. To clarify the position of the angle measurements and the four sections, the ossicles and the reference plane were reconstructed in the 3D-CT images. Observation of the ossicles and the reference plane showed that the malleus was parallel to the incudal long process and perpendicular to the reference plane. The angle measurements showed that the mean angle of the tympanic portion of the facial nerve relative to the reference plane in the sagittal plane was 17 deg, and the mean angle of the stapedial crura relative to the reference plane in the sagittal plane was 6 deg. The mean angle of the stapes relative to the reference plane in the coronal plane was 44 deg, and the mean angle of the incudal long process relative to the stapes in the coronal plane was 89 deg. In 80% of ears, the stapes extended straight from the incudal long process. The image reconstruction technique for viewing the four sections of the ossicles was investigated. Firstly, the image of the malleal head and the incudal short process was identified in the axial plane. Secondly, an image of the malleus along the malleal manubrium was reconstructed in the coronal plane. Thirdly, the image of the incudal long process was seen immediately behind the malleus image.

  2. High-QE fast-readout wavefront sensor with analog phase reconstruction

    Science.gov (United States)

    Baker, Jeffrey T.; Loos, Gary C.; Restaino, Sergio R.; Percheron, Isabelle; Finkner, Lyle G.

    1998-09-01

    The contradiction inherent in high temporal bandwidth adaptive optics wavefront sensing at low-light-levels (LLL) has driven many researchers to consider the use of high bandwidth, high quantum efficiency (QE) CCD cameras with the lowest possible readout noise levels. Unfortunately, the performance of these relatively expensive and low production volume devices in the photon counting regime is inevitably limited by readout noise, no matter how arbitrarily close to zero that specification may be reduced. Our alternative approach is to optically couple a new and relatively inexpensive Ultra Blue Gen III image intensifier to an also relatively inexpensive high bandwidth CCD camera with only moderate QE and high read noise. The result is a high bandwidth, broad spectral response image intensifier with a gain of 55,000 at 560 nm. Use of an appropriately selected lenslet array together with coupling optics generates 16 × 16 Shack-Hartmann type subapertures on the image intensifier photocathode, which is imaged onto the fast CCD camera. An integral A/D converter in the camera sends the image data pixel by pixel to a computer data acquisition system for analysis, storage and display. Timing signals are used to decode which pixel is being read out, and the wavefront is calculated in an analog fashion using a least-squares fit to both x and y tilt data for all wavefront sensor subapertures. Finally, we present system level performance comparisons of these new concept wavefront sensors versus the more standard low noise CCD camera based designs in the low-light-level limit.
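    The least-squares reconstruction described above maps measured x/y subaperture tilts back to a wavefront. A minimal digital sketch of the same zonal least-squares idea, on a 16 × 16 grid as in the text (the forward-difference geometry, test wavefront, and noise level are illustrative, and the paper's analog implementation is not reproduced):

```python
import numpy as np

def slope_matrices(n):
    """Forward-difference operators mapping an n x n phase grid (flattened,
    row-major) to x- and y-slopes, as in zonal least-squares reconstruction."""
    I = np.eye(n * n)
    rows_x, rows_y = [], []
    for i in range(n):
        for j in range(n - 1):
            rows_x.append(I[i * n + j + 1] - I[i * n + j])
    for i in range(n - 1):
        for j in range(n):
            rows_y.append(I[(i + 1) * n + j] - I[i * n + j])
    return np.array(rows_x), np.array(rows_y)

n = 16                                   # 16 x 16 subapertures, as in the text
Gx, Gy = slope_matrices(n)
G = np.vstack([Gx, Gy])

# A known test wavefront (tilt plus a defocus-like term), zero-mean.
Y, X = np.mgrid[0:n, 0:n] / (n - 1.0)
phi = 0.5 * X + 0.2 * ((X - 0.5) ** 2 + (Y - 0.5) ** 2)
phi -= phi.mean()

slopes = G @ phi.ravel() + 1e-4 * np.random.default_rng(0).standard_normal(G.shape[0])
phi_hat, *_ = np.linalg.lstsq(G, slopes, rcond=None)
phi_hat -= phi_hat.mean()                # piston is unobservable; remove it
rms = np.sqrt(np.mean((phi_hat - phi.ravel()) ** 2))
print(rms)
```

    Piston (the overall phase offset) lies in the null space of the slope operator, which is why both the true and reconstructed wavefronts are referenced to zero mean before comparing.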

  3. Dynamic surface self-reconstruction is the key of highly active perovskite nano-electrocatalysts for water splitting

    Science.gov (United States)

    Fabbri, Emiliana; Nachtegaal, Maarten; Binninger, Tobias; Cheng, Xi; Kim, Bae-Jung; Durst, Julien; Bozza, Francesco; Graule, Thomas; Schäublin, Robin; Wiles, Luke; Pertoso, Morgan; Danilovic, Nemanja; Ayers, Katherine E.; Schmidt, Thomas J.

    2017-09-01

    The growing need to store increasing amounts of renewable energy has recently triggered substantial R&D efforts towards efficient and stable water electrolysis technologies. The oxygen evolution reaction (OER) occurring at the electrolyser anode is central to the development of a clean, reliable and emission-free hydrogen economy. The development of robust and highly active anode materials for OER is therefore a great challenge and has been the main focus of research. Among potential candidates, perovskites have emerged as promising OER electrocatalysts. In this study, by combining a scalable cutting-edge synthesis method with time-resolved X-ray absorption spectroscopy measurements, we were able to capture the dynamic local electronic and geometric structure during realistic operando conditions for highly active OER perovskite nanocatalysts. Ba0.5Sr0.5Co0.8Fe0.2O3-δ as nano-powder displays unique features that allow a dynamic self-reconstruction of the material’s surface during OER, that is, the growth of a self-assembled metal oxy(hydroxide) active layer. Therefore, besides showing outstanding performance at both the laboratory and industrial scale, we provide a fundamental understanding of the operando OER mechanism for highly active perovskite catalysts. This understanding significantly differs from design principles based on ex situ characterization techniques.

  4. Low Overpotential and High Current CO2 Reduction with Surface Reconstructed Cu Foam Electrodes

    KAUST Repository

    Min, Shixiong; Yang, Xiulin; Lu, Ang-Yu; Tseng, Chien-Chih; Hedhili, Mohamed N.; Li, Lain-Jong; Huang, Kuo-Wei

    2016-01-01

    for large-scale fuel synthesis. Here we report an extremely high current density for CO2 reduction at low overpotential using a Cu foam electrode prepared by air-oxidation and subsequent electroreduction. Apart from possessing three-dimensional (3D) open

  5. Reconstructing random media

    International Nuclear Information System (INIS)

    Yeong, C.L.; Torquato, S.

    1998-01-01

    We formulate a procedure to reconstruct the structure of general random heterogeneous media from limited morphological information by extending the methodology of Rintoul and Torquato [J. Colloid Interface Sci. 186, 467 (1997)] developed for dispersions. The procedure has the advantages that it is simple to implement and generally applicable to multidimensional, multiphase, and anisotropic structures. Furthermore, an extremely useful feature is that it can incorporate any type and number of correlation functions in order to provide as much morphological information as is necessary for accurate reconstruction. We consider a variety of one- and two-dimensional reconstructions, including periodic and random arrays of rods, various distributions of disks, Debye random media, and a Fontainebleau sandstone sample. We also use our algorithm to construct heterogeneous media from specified hypothetical correlation functions, including an exponentially damped, oscillating function as well as physically unrealizable ones. copyright 1998 The American Physical Society
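    The reconstruction procedure can be sketched in its simplest form: start from a random medium with the target volume fraction, then anneal pixel swaps that reduce the discrepancy between the current and target two-point correlation function S2(r). A 1D toy version (rod lengths, temperature schedule, and step counts are illustrative, not the paper's settings):

```python
import numpy as np

def two_point(field, rmax):
    """S2(r): probability that two points a distance r apart are both
    phase 1 (periodic boundaries), estimated for r = 0..rmax-1."""
    return np.array([np.mean(field * np.roll(field, r)) for r in range(rmax)])

rng = np.random.default_rng(0)
n, rmax = 256, 20

# Target medium: a few randomly placed rods (segments) of length 8.
target = np.zeros(n)
for s in rng.integers(0, n, 12):
    target[np.arange(s, s + 8) % n] = 1
S2_target = two_point(target, rmax)

# Start from a random medium with the same volume fraction, then anneal swaps.
field = rng.permutation(target)
energy0 = energy = np.sum((two_point(field, rmax) - S2_target) ** 2)
T = 1e-4
for step in range(4000):
    i = rng.choice(np.nonzero(field == 1)[0])
    j = rng.choice(np.nonzero(field == 0)[0])
    field[i], field[j] = 0, 1                     # trial swap
    e_new = np.sum((two_point(field, rmax) - S2_target) ** 2)
    if e_new < energy or rng.random() < np.exp(-(e_new - energy) / T):
        energy = e_new                            # accept
    else:
        field[i], field[j] = 1, 0                 # reject: undo the swap
    T *= 0.999
print(energy0, energy)
```

    Swapping (rather than flipping) pixels keeps the volume fraction fixed, and the same energy functional extends directly to multiple correlation functions by summing their squared discrepancies, which is the generality the abstract highlights.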

  6. An ultra short pulse reconstruction software applied to the GEMINI high power laser system

    Energy Technology Data Exchange (ETDEWEB)

    Galletti, Mario, E-mail: mario.gall22@gmail.com [INFN – LNF, Via Enrico Fermi 40, 00044 Frascati (Italy); Galimberti, Marco [Central Laser Facility, Rutherford Appleton Laboratory, Didcot (United Kingdom); Hooker, Chris [Central Laser Facility, Rutherford Appleton Laboratory, Didcot (United Kingdom); University of Oxford, Oxford (United Kingdom); Chekhlov, Oleg; Tang, Yunxin [Central Laser Facility, Rutherford Appleton Laboratory, Didcot (United Kingdom); Bisesto, Fabrizio Giuseppe [INFN – LNF, Via Enrico Fermi 40, 00044 Frascati (Italy); Curcio, Alessandro [INFN – LNF, Via Enrico Fermi 40, 00044 Frascati (Italy); Sapienza – University of Rome, P.le Aldo Moro, 2, 00185 Rome (Italy); Anania, Maria Pia [INFN – LNF, Via Enrico Fermi 40, 00044 Frascati (Italy); Giulietti, Danilo [Physics Department of the University and INFN, Pisa (Italy)

    2016-09-01

    The GRENOUILLE traces of Gemini pulses (15 J, 30 fs, PW, one shot per 20 s) were acquired in the Gemini Target Area PetaWatt at the Central Laser Facility (CLF), Rutherford Appleton Laboratory (RAL). The laser pulse parameters were characterized using two different types of algorithm, VideoFrog and GRenouille/FrOG (GROG), and the results were compared. The temporal and spectral parameters were in excellent agreement between the two algorithms. This experimental campaign showed that GROG, the algorithm developed here, performs as well as the VideoFrog algorithm on PetaWatt-class pulses. - Highlights: • Integration of the diagnostic tool on a high power laser. • Validation of the GROG algorithm against well-known commercially available software. • Complete characterization of the GEMINI ultra-short high power laser pulse.

  7. Prediction of collision cross section and retention time for broad scope screening in gradient reversed-phase liquid chromatography-ion mobility-high resolution accurate mass spectrometry.

    Science.gov (United States)

    Mollerup, Christian Brinch; Mardal, Marie; Dalsgaard, Petur Weihe; Linnet, Kristian; Barron, Leon Patrick

    2018-03-23

    Exact mass, retention time (RT), and collision cross section (CCS) are used as identification parameters in liquid chromatography coupled to ion mobility high resolution accurate mass spectrometry (LC-IM-HRMS). Targeted screening analyses are now more flexible and can be expanded for suspect and non-targeted screening. These allow for tentative identification of new compounds, and in-silico predicted reference values are used for improving confidence and filtering false-positive identifications. In this work, predictions of both RT and CCS values are performed with machine learning using artificial neural networks (ANNs). Prediction was based on molecular descriptors, 827 RTs, and 357 CCS values from pharmaceuticals, drugs of abuse, and their metabolites. ANN models for the prediction of RT or CCS separately were examined, and the potential to predict both from a single model was investigated for the first time. The optimized combined RT-CCS model was a four-layered multi-layer perceptron ANN, and the 95th prediction error percentiles were within 2 min RT error and 5% relative CCS error for the external validation set (n = 36) and the full RT-CCS dataset (n = 357). 88.6% (n = 733) of predicted RTs were within 2 min error for the full dataset. Overall, when using 2 min RT error and 5% relative CCS error, 91.9% (n = 328) of compounds were retained, while 99.4% (n = 355) were retained when using at least one of these thresholds. This combined prediction approach can therefore be useful for rapid suspect/non-targeted screening involving HRMS, and will support current workflows. Copyright © 2018 Elsevier B.V. All rights reserved.
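    The combined RT-CCS model above is a multi-layer perceptron with two outputs trained on molecular descriptors. The paper's four-layer network and curated dataset are not reproduced here; the sketch below trains a generic one-hidden-layer, two-output network on synthetic stand-in data purely to show the joint-prediction setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 300 "compounds" with 10 descriptors; "RT" and
# "CCS" are different nonlinear functions of the descriptors.
X = rng.standard_normal((300, 10))
y = np.stack([np.tanh(X[:, 0]) + 0.5 * X[:, 1],
              0.8 * X[:, 2] ** 2 + 0.3 * X[:, 3]], axis=1)

# One hidden layer, two outputs (RT and CCS predicted jointly).
W1 = 0.5 * rng.standard_normal((10, 32)); b1 = np.zeros(32)
W2 = 0.5 * rng.standard_normal((32, 2));  b2 = np.zeros(2)
lr = 0.05

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2

_, pred0 = forward(X)
loss0 = np.mean((pred0 - y) ** 2)
for _ in range(500):                        # full-batch gradient descent
    h, pred = forward(X)
    g = 2.0 * (pred - y) / len(X)           # dLoss/dpred (up to a constant)
    gW2 = h.T @ g; gb2 = g.sum(0)
    gh = g @ W2.T * (1 - h ** 2)            # back through tanh
    gW1 = X.T @ gh; gb1 = gh.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
_, pred = forward(X)
loss = np.mean((pred - y) ** 2)
print(loss0, loss)
```

    Sharing the hidden layer across both targets is the design choice the paper investigates: descriptor features useful for retention often overlap with those driving collision cross section, so a single model can serve both filters.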

  8. Offline Reconstruction Algorithms for the CMS High Granularity Calorimeter for HL-LHC

    CERN Document Server

    Chen, Z; Meschi, Emilio; Scott, Edward John Titman; Seez, Christopher

    2017-01-01

    The upgraded High Luminosity LHC, after the third Long Shutdown (LS3), will provide an instantaneous luminosity of $7.5 \\times 10^{34}$ cm$^{-2}$ s$^{-1}$ (levelled), at the price of extreme pileup of up to 200 interactions per crossing. Such extreme pileup poses significant challenges, in particular for forward calorimetry. As part of its HL-LHC upgrade program, the CMS collaboration is designing a High Granularity Calorimeter to replace the existing endcap calorimeters. It features unprecedented transverse and longitudinal segmentation for both electromagnetic and hadronic compartments. The electromagnetic and a large fraction of the hadronic portions will be based on hexagonal silicon sensors of 0.5 - 1 cm$^2$ cell size, with the remainder of the hadronic portion based on highly-segmented scintillators with SiPM readout. Offline clustering algorithms that make use of this extreme granularity require novel approaches to preserve the fine structure of showers and to be stable against pileup, wh...

  9. Towards accurate emergency response behavior

    International Nuclear Information System (INIS)

    Sargent, T.O.

    1981-01-01

    Nuclear reactor operator emergency response behavior has persisted as a training problem through lack of information. The industry needs an accurate definition of operator behavior in adverse stress conditions, and training methods which will produce the desired behavior. Newly assembled information from fifty years of research into human behavior in both high and low stress provides a more accurate definition of appropriate operator response, and supports training methods which will produce the needed control room behavior. The research indicates that operator response in emergencies is divided into two modes, conditioned behavior and knowledge based behavior. Methods which assure accurate conditioned behavior, and provide for the recovery of knowledge based behavior, are described in detail

  10. Software compensation in Particle Flow reconstruction

    CERN Document Server

    Lan Tran, Huong; Sefkow, Felix; Green, Steven; Marshall, John; Thomson, Mark; Simon, Frank

    2017-01-01

    The Particle Flow approach to calorimetry requires highly granular calorimeters and sophisticated software algorithms in order to reconstruct and identify individual particles in complex event topologies. The high spatial granularity, together with analog energy information, can be further exploited in software compensation. In this approach, the local energy density is used to discriminate electromagnetic and purely hadronic sub-showers within hadron showers in the detector to improve the energy resolution for single particles by correcting for the intrinsic non-compensation of the calorimeter system. This improvement in the single particle energy resolution also results in a better overall jet energy resolution by improving the energy measurement of identified neutral hadrons and improvements in the pattern recognition stage by a more accurate matching of calorimeter energies to tracker measurements. This paper describes the software compensation technique and its implementation in Particle Flow reconstruct...

  11. Flexible event reconstruction software chains with the ALICE High-Level Trigger

    International Nuclear Information System (INIS)

    Ram, D; Breitner, T; Szostak, A

    2012-01-01

    The ALICE High-Level Trigger (HLT) has a large high-performance computing cluster at CERN whose main objective is to perform real-time analysis on the data generated by the ALICE experiment and scale it down to at most 4 GB/sec, which is the current maximum mass-storage bandwidth available. Data flow in this cluster is controlled by a custom-designed software framework. It consists of a set of components which can communicate with each other via a common control interface. The software framework also supports the creation of different configurations based on the detectors participating in the HLT. These configurations define a logical data processing “chain” of detector data-analysis components. Data flows through this software chain in a pipelined fashion so that several events can be processed at the same time. An instance of such a chain can run and manage a few thousand physics analysis and data-flow components. The HLT software and the configuration scheme used in the 2011 heavy-ion runs of ALICE are discussed in this contribution.

  12. Reconstruction of the electron energy distribution function from probe characteristics at intermediate and high pressures

    International Nuclear Information System (INIS)

    Arslanbekov, R.R.; Kolokolov, N.B.; Kudryavtsev, A.A.; Khromov, N.A.

    1991-01-01

    Gorbunov et al. have developed a kinetic theory of the electron current drawn by a probe, which substantially extends the region of applicability of the probe method for determining the electron energy distribution function, enabling probes to be used at intermediate and high pressures (up to p ≤ 0.5 atm for monatomic gases). They showed that for λ_ε >> a + d (where a is the probe radius, d is the sheath thickness, and λ_ε is the electron energy relaxation length) the current density j_e(V) drawn by the probe is related to the unperturbed distribution function by an integral equation involving the distribution function. The kernel of the integral equation can be written as a function of the diffusion parameter. In the present paper the method of quadrature sums is employed in order to obtain the electron energy distribution function from probe characteristics at intermediate and high pressures. This technique enables the distribution function to be recovered from the integral equation when the diffusion parameter has an arbitrary energy dependence ψ_0(ε) in any given energy range. The effectiveness of the method is demonstrated by application to both model problems and experimental data

  13. Computed tomographic reconstruction of beam profiles with a multi-wire chamber

    International Nuclear Information System (INIS)

    Alonso, J.R.; Tobias, C.A.; Chu, W.T.

    1979-03-01

    MEDUSA (MEdical Dose Uniformity SAmpler), a 16 plane multi-wire proportional chamber, has been built to accurately measure beam profiles. The large number of planes allows for reconstruction of highly detailed beam intensity structures by means of Fourier convolution reconstruction techniques. This instrument is being used for verification and tuning of the Bevalac radiotherapy beams, but has potential applications in many beam profile monitoring situations

  14. High-resolution whole-brain DCE-MRI using constrained reconstruction: Prospective clinical evaluation in brain tumor patients

    International Nuclear Information System (INIS)

    Guo, Yi; Zhu, Yinghua; Lingala, Sajan Goud; Nayak, Krishna; Lebel, R. Marc; Shiroishi, Mark S.; Law, Meng

    2016-01-01

    Purpose: To clinically evaluate a highly accelerated T1-weighted dynamic contrast-enhanced (DCE) MRI technique that provides high spatial resolution and whole-brain coverage via undersampling and constrained reconstruction with multiple sparsity constraints. Methods: Conventional (rate-2 SENSE) and experimental DCE-MRI (rate-30) scans were performed 20 minutes apart in 15 brain tumor patients. The conventional clinical DCE-MRI had voxel dimensions 0.9 × 1.3 × 7.0 mm³, FOV 22 × 22 × 4.2 cm³, and the experimental DCE-MRI had voxel dimensions 0.9 × 0.9 × 1.9 mm³, and broader coverage 22 × 22 × 19 cm³. Temporal resolution was 5 s for both protocols. Time-resolved images and blood–brain barrier permeability maps were qualitatively evaluated by two radiologists. Results: The experimental DCE-MRI scans showed no loss of qualitative information in any of the cases, while achieving substantially higher spatial resolution and whole-brain spatial coverage. Average qualitative scores (from 0 to 3) were 2.1 for the experimental scans and 1.1 for the conventional clinical scans. Conclusions: The proposed DCE-MRI approach provides clinically superior image quality with higher spatial resolution and coverage than currently available approaches. These advantages may allow comprehensive permeability mapping in the brain, which is especially valuable in the setting of large lesions or multiple lesions spread throughout the brain.

  15. Spectrally accurate contour dynamics

    International Nuclear Information System (INIS)

    Van Buskirk, R.D.; Marcus, P.S.

    1994-01-01

    We present an exponentially accurate boundary integral method for calculating the equilibria and dynamics of piecewise constant distributions of potential vorticity. The method represents contours of potential vorticity as a spectral sum and solves the Biot-Savart equation for the velocity by spectrally evaluating a desingularized contour integral. We use the technique in both an initial-value code and a Newton continuation method. Our methods are tested by comparing the numerical solutions with known analytic results, and it is shown that for the same amount of computational work our spectral methods are more accurate than other contour dynamics methods currently in use

  16. Low- and high-order accurate boundary conditions: From Stokes to Darcy porous flow modeled with standard and improved Brinkman lattice Boltzmann schemes

    International Nuclear Information System (INIS)

    Silva, Goncalo; Talon, Laurent; Ginzburg, Irina

    2017-01-01

    and FEM is thoroughly evaluated in three benchmark tests, which are run throughout three distinctive permeability regimes. The first configuration is a horizontal porous channel, studied with a symbolic approach, where we construct the exact solutions of FEM and BF/IBF with different boundary schemes. The second problem refers to an inclined porous channel flow, which brings in as new challenge the formation of spurious boundary layers in LBM; that is, numerical artefacts that arise due to a deficient accommodation of the bulk solution by the low-accurate boundary scheme. The third problem considers a porous flow past a periodic square array of solid cylinders, which intensifies the previous two tests with the simulation of a more complex flow pattern. The ensemble of numerical tests provides guidelines on the effect of grid resolution and the TRT free collision parameter over the accuracy and the quality of the velocity field, spanning from Stokes to Darcy permeability regimes. It is shown that, with the use of the high-order accurate boundary schemes, the simple, uniform-mesh-based TRT-LBM formulation can even surpass the accuracy of FEM employing hardworking body-fitted meshes.

  17. Low- and high-order accurate boundary conditions: From Stokes to Darcy porous flow modeled with standard and improved Brinkman lattice Boltzmann schemes

    Energy Technology Data Exchange (ETDEWEB)

    Silva, Goncalo, E-mail: goncalo.nuno.silva@gmail.com [Irstea, Antony Regional Centre, HBAN, 1 rue Pierre-Gilles de Gennes CS 10030, 92761 Antony cedex (France); Talon, Laurent, E-mail: talon@fast.u-psud.fr [CNRS (UMR 7608), Laboratoire FAST, Batiment 502, Campus University, 91405 Orsay (France); Ginzburg, Irina, E-mail: irina.ginzburg@irstea.fr [Irstea, Antony Regional Centre, HBAN, 1 rue Pierre-Gilles de Gennes CS 10030, 92761 Antony cedex (France)

    2017-04-15

    and FEM is thoroughly evaluated in three benchmark tests, which are run throughout three distinctive permeability regimes. The first configuration is a horizontal porous channel, studied with a symbolic approach, where we construct the exact solutions of FEM and BF/IBF with different boundary schemes. The second problem refers to an inclined porous channel flow, which brings in as new challenge the formation of spurious boundary layers in LBM; that is, numerical artefacts that arise due to a deficient accommodation of the bulk solution by the low-accurate boundary scheme. The third problem considers a porous flow past a periodic square array of solid cylinders, which intensifies the previous two tests with the simulation of a more complex flow pattern. The ensemble of numerical tests provides guidelines on the effect of grid resolution and the TRT free collision parameter over the accuracy and the quality of the velocity field, spanning from Stokes to Darcy permeability regimes. It is shown that, with the use of the high-order accurate boundary schemes, the simple, uniform-mesh-based TRT-LBM formulation can even surpass the accuracy of FEM employing hardworking body-fitted meshes.

  18. Low resolution scans can provide a sufficiently accurate, cost- and time-effective alternative to high resolution scans for 3D shape analyses

    Directory of Open Access Journals (Sweden)

    Ariel E. Marcy

    2018-06-01

    Full Text Available Background Advances in 3D shape capture technology have made powerful shape analyses, such as geometric morphometrics, more feasible. While the highly accurate micro-computed tomography (µCT scanners have been the “gold standard,” recent improvements in 3D surface scanners may make this technology a faster, portable, and cost-effective alternative. Several studies have already compared the two devices but all use relatively large specimens such as human crania. Here we perform shape analyses on Australia’s smallest rodent to test whether a 3D scanner produces similar results to a µCT scanner. Methods We captured 19 delicate mouse (Pseudomys delicatulus crania with a µCT scanner and a 3D scanner for geometric morphometrics. We ran multiple Procrustes ANOVAs to test how variation due to scan device compared to other sources such as biologically relevant variation and operator error. We quantified operator error as levels of variation and repeatability. Further, we tested if the two devices performed differently at classifying individuals based on sexual dimorphism. Finally, we inspected scatterplots of principal component analysis (PCA scores for non-random patterns. Results In all Procrustes ANOVAs, regardless of factors included, differences between individuals contributed the most to total variation. The PCA plots reflect this in how the individuals are dispersed. Including only the symmetric component of shape increased the biological signal relative to variation due to device and due to error. 3D scans showed a higher level of operator error as evidenced by a greater spread of their replicates on the PCA, a higher level of multivariate variation, and a lower repeatability score. However, the 3D scan and µCT scan datasets performed identically in classifying individuals based on intra-specific patterns of sexual dimorphism. Discussion Compared to µCT scans, we find that even low resolution 3D scans of very small specimens are

  19. High collimated coherent illumination for reconstruction of digitally calculated holograms: design and experimental realization

    Science.gov (United States)

    Morozov, Alexander; Dubinin, German; Dubynin, Sergey; Yanusik, Igor; Kim, Sun Il; Choi, Chil-Sung; Song, Hoon; Lee, Hong-Seok; Putilin, Andrey; Kopenkin, Sergey; Borodin, Yuriy

    2017-06-01

    Future commercialization of glasses-free holographic real 3D displays requires not only appropriate image quality but also a slim design of the backlight unit and the whole display device to match market needs. While much research has aimed at solving the computational issues of forming Computer Generated Holograms for 3D holographic displays, less attention has been paid to developing backlight units suitable for 3D holographic display applications with the form factor of conventional 2D display systems. Here we report a coherent backlight unit for a 3D holographic display with thickness comparable to commercially available 2D displays (cell phones, tablets, laptops, etc.). The coherent backlight unit provides uniform, highly collimated and efficient illumination of the spatial light modulator. Realization of such a backlight unit is possible thanks to holographic optical elements, based on volume gratings, that construct a coherent collimated beam to illuminate the display plane. The design, recording and measurement of a 5.5 inch coherent backlight unit based on two holographic optical elements are presented in this paper.

  20. Reconstruction of the proximal humerus with a composite of extracorporeally irradiated bone and endoprosthesis following excision of high grade primary bone sarcomas.

    Science.gov (United States)

    Moran, Matthew; Stalley, Paul D

    2009-10-01

    Functional reconstruction of the shoulder joint following excision of a malignant proximal humeral tumour is a difficult proposition. Eleven patients with primary osteosarcoma or Ewing's sarcoma underwent reconstruction with a composite of extra-corporeally irradiated autograft with the addition of a long stemmed hemiarthroplasty. At a mean follow-up of 5.8 years two patients had died from disseminated disease and one patient had undergone amputation for local recurrence. The eight patients with a surviving limb were examined clinically and radiographically. The mean Toronto Extremity Salvage Score was 74 and Musculo-Skeletal Tumour Society score 66. Rotation was well preserved but abduction (mean 32 degrees) and flexion (40 degrees) were poor. There was a high rate of secondary surgery, with five out of eleven patients requiring re-operation for complications of reconstruction surgery. Radiographic estimate of graft remaining at follow up was 71%. There were no infections, revisions or radiographic failures. Whilst the reconstructions were durable in the medium term, the functional outcome was no better than with other reported reconstructive methods. The composite technique was especially useful in subtotal humeral resections, allowing preservation of the elbow joint even with very distal osteotomy. Bone stock is restored, which may be useful for future revision surgery in this young group of patients.

  1. Implicit vessel surface reconstruction for visualization and CFD simulation

    International Nuclear Information System (INIS)

    Schumann, Christian; Peitgen, Heinz-Otto; Neugebauer, Mathias; Bade, Ragnar; Preim, Bernhard

    2008-01-01

    Accurate and high-quality reconstructions of vascular structures are essential for vascular disease diagnosis and blood flow simulations. These applications necessitate a trade-off between accuracy and smoothness. An additional requirement for the volume grid generation for Computational Fluid Dynamics (CFD) simulations is a high triangle quality. We propose a method that produces an accurate reconstruction of the vessel surface with satisfactory surface quality. A point cloud representing the vascular boundary is generated based on a segmentation result. Thin vessels are subsampled to enable an accurate reconstruction. A signed distance field is generated using Multi-level Partition of Unity Implicits and subsequently polygonized using a surface tracking approach. To guarantee a high triangle quality, the surface is remeshed. Compared to other methods, our approach represents a good trade-off between accuracy and smoothness. For the tested data, the average surface deviation to the segmentation results is 0.19 voxel diagonals and the maximum equi-angle skewness values are below 0.75. The generated surfaces are considerably more accurate than those obtained using model-based approaches. Compared to other model-free approaches, the proposed method produces smoother results and thus better supports the perception and interpretation of the vascular topology. Moreover, the triangle quality of the generated surfaces is suitable for CFD simulations. (orig.)

  2. Qualitative and quantitative improvements of PET reconstruction on GPU architecture

    International Nuclear Information System (INIS)

    Autret, Awen

    2016-01-01

    In positron emission tomography, reconstructed images suffer from a high noise level and a low resolution. Iterative reconstruction processes require an estimation of the system response (scanner and patient), and the quality of the images depends on the accuracy of this estimate. Accurate and fast-to-compute models already exist for the attenuation, scattering, random coincidences and dead times. Thus, this thesis focuses on modeling the system components associated with the detector response and the positron range. A new multi-GPU parallelization of the reconstruction based on a cutting of the volume is also proposed to speed up the reconstruction by exploiting the computing power of such architectures. The proposed detector response model is based on a multi-ray approach that includes all the detector effects, such as the geometry and the scattering in the crystals. An evaluation study based on data obtained through Monte Carlo simulation (MCS) showed that this model provides reconstructed images with a better contrast-to-noise ratio and resolution compared with those of state-of-the-art methods. The proposed positron range model is based on a simplified MCS integrated into the forward projector during the reconstruction. A GPU implementation of this method allows running MCS three orders of magnitude faster than the same simulation on GATE, while providing similar results. An evaluation study shows that this model, integrated in the reconstruction, gives images with better contrast recovery and resolution while avoiding artifacts. (author)

  3. New var reconstruction algorithm exposes high var sequence diversity in a single geographic location in Mali.

    Science.gov (United States)

    Dara, Antoine; Drábek, Elliott F; Travassos, Mark A; Moser, Kara A; Delcher, Arthur L; Su, Qi; Hostelley, Timothy; Coulibaly, Drissa; Daou, Modibo; Dembele, Ahmadou; Diarra, Issa; Kone, Abdoulaye K; Kouriba, Bourema; Laurens, Matthew B; Niangaly, Amadou; Traore, Karim; Tolo, Youssouf; Fraser, Claire M; Thera, Mahamadou A; Djimde, Abdoulaye A; Doumbo, Ogobara K; Plowe, Christopher V; Silva, Joana C

    2017-03-28

    Encoded by the var gene family, highly variable Plasmodium falciparum erythrocyte membrane protein-1 (PfEMP1) proteins mediate tissue-specific cytoadherence of infected erythrocytes, resulting in immune evasion and severe malaria disease. Sequencing and assembling the 40-60 var gene complement for individual infections has been notoriously difficult, impeding molecular epidemiological studies and the assessment of particular var elements as subunit vaccine candidates. We developed and validated a novel algorithm, Exon-Targeted Hybrid Assembly (ETHA), to perform targeted assembly of var gene sequences, based on a combination of Pacific Biosciences and Illumina data. Using ETHA, we characterized the repertoire of var genes in 12 samples from uncomplicated malaria infections in children from a single Malian village and showed them to be as genetically diverse as vars from isolates from around the globe. The gene var2csa, a member of the var family associated with placental malaria pathogenesis, was present in each genome, as were vars previously associated with severe malaria. ETHA, a tool to discover novel var sequences from clinical samples, will aid the understanding of malaria pathogenesis and inform the design of malaria vaccines based on PfEMP1. ETHA is available at: https://sourceforge.net/projects/etha/ .

  4. Accuracy improvement of CT reconstruction using tree-structured filter bank

    International Nuclear Information System (INIS)

    Ueda, Kazuhiro; Morimoto, Hiroaki; Morikawa, Yoshitaka; Murakami, Junichi

    2009-01-01

    An accuracy improvement of the CT reconstruction algorithm using the tree-structured filter bank (TSFB), a high-speed CT reconstruction algorithm, is proposed. The TSFB method greatly reduces the amount of computation compared with the convolution backprojection (CB) method, but artifacts appear in the reconstructed image because signals outside the reconstruction domain are ignored during the stage processing. Moreover, the whole-band filter that is a component of the two-dimensional synthesis filter is an IIR filter, which produces artifacts at the edges of the reconstructed image. To suppress these artifacts, the proposed method extends the processing range of the TSFB method into the outside domain by controlling the width of the sample lines and adding lines outside the reconstruction domain. Furthermore, to avoid an increase in the amount of computation, an algorithm is proposed that determines the required processing range from the number of TSFB processing stages and the slope of the filter, and then updates the position and width of the sample lines so that only the required range is processed. Simulations show that the proposed method achieves fast and highly accurate CT reconstruction: the quality of the reconstructed image is improved over the TSFB method and matches that of the CB method. (T. Tanaka)

  5. Fast Tomographic Reconstruction From Limited Data Using Artificial Neural Networks

    NARCIS (Netherlands)

    D.M. Pelt (Daniël); K.J. Batenburg (Joost)

    2013-01-01

    Image reconstruction from a small number of projections is a challenging problem in tomography. Advanced algorithms that incorporate prior knowledge can sometimes produce accurate reconstructions, but they typically require long computation times. Furthermore, the required prior

  6. Realizations of highly heterogeneous collagen networks via stochastic reconstruction for micromechanical analysis of tumor cell invasion

    Science.gov (United States)

    Nan, Hanqing; Liang, Long; Chen, Guo; Liu, Liyu; Liu, Ruchuan; Jiao, Yang

    2018-03-01

    Three-dimensional (3D) collective cell migration in a collagen-based extracellular matrix (ECM) is among the most significant topics in developmental biology, cancer progression, tissue regeneration, and immune response. Recent studies have suggested that collagen-fiber mediated force transmission in cellularized ECM plays an important role in stress homeostasis and regulation of collective cellular behaviors. Motivated by the recent in vitro observation that oriented collagen can significantly enhance the penetration of migrating breast cancer cells into dense Matrigel which mimics the intravasation process in vivo [Han et al. Proc. Natl. Acad. Sci. USA 113, 11208 (2016), 10.1073/pnas.1610347113], we devise a procedure for generating realizations of highly heterogeneous 3D collagen networks with prescribed microstructural statistics via stochastic optimization. Specifically, a collagen network is represented via the graph (node-bond) model and the microstructural statistics considered include the cross-link (node) density, valence distribution, fiber (bond) length distribution, as well as fiber orientation distribution. An optimization problem is formulated in which the objective function is defined as the squared difference between a set of target microstructural statistics and the corresponding statistics for the simulated network. Simulated annealing is employed to solve the optimization problem by evolving an initial network via random perturbations to generate realizations of homogeneous networks with randomly oriented fibers, homogeneous networks with aligned fibers, heterogeneous networks with a continuous variation of fiber orientation along a prescribed direction, as well as a binary system containing a collagen region with aligned fibers and a dense Matrigel region with randomly oriented fibers. 
The generation and propagation of active forces in the simulated networks due to polarized contraction of an embedded ellipsoidal cell and a small group

  7. Metasequoia glyptostroboides and its Utility in Paleoecological Reconstruction of Eocene High Latitude Forests

    Science.gov (United States)

    Williams, C. J.; LePage, B. A.; Vann, D. R.; Johnson, A. H.

    2001-05-01

    . branch diameter (r² = 0.91) for living Metasequoia and branch diameters of the Eocene trees, branch biomass of the Eocene trees was estimated to be 28 Mg ha⁻¹ dry weight and foliar biomass (and annual foliar production for this deciduous conifer) of fossil Metasequoia was estimated to be 3.5 Mg ha⁻¹ dry weight. Total standing biomass of the fossil forest was estimated to be 591 Mg ha⁻¹ dry weight. On a stand-average basis, the annual ring width of the trees we sampled equaled 1.3 mm. Based on this ring width our preliminary estimate for the aboveground net primary productivity (NPP) of these forests is 5.9 Mg ha⁻¹ yr⁻¹ (foliage production plus wood production). Thus, these were high biomass forests with moderate productivity typical of modern cool temperate forests similar in stature and total biomass to the modern old-growth forests of the Pacific Northwest (USA).

  8. Breaking the Crowther limit: Combining depth-sectioning and tilt tomography for high-resolution, wide-field 3D reconstructions

    Energy Technology Data Exchange (ETDEWEB)

    Hovden, Robert, E-mail: rmh244@cornell.edu [School of Applied and Engineering Physics and Kavli Institute at Cornell for Nanoscale Science, Cornell University, Ithaca, NY 14853 (United States); Ercius, Peter [National Center for Electron Microscopy, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Jiang, Yi [Department of Physics, Cornell University, Ithaca, NY 14853 (United States); Wang, Deli; Yu, Yingchao; Abruña, Héctor D. [Department of Chemistry and Chemical Biology, Cornell University, Ithaca, NY 14853 (United States); Elser, Veit [Department of Physics, Cornell University, Ithaca, NY 14853 (United States); Muller, David A. [School of Applied and Engineering Physics and Kavli Institute at Cornell for Nanoscale Science, Cornell University, Ithaca, NY 14853 (United States)

    2014-05-01

    To date, high-resolution (<1 nm) imaging of extended objects in three-dimensions (3D) has not been possible. A restriction known as the Crowther criterion forces a tradeoff between object size and resolution for 3D reconstructions by tomography. Further, the sub-Angstrom resolution of aberration-corrected electron microscopes is accompanied by a greatly diminished depth of field, causing regions of larger specimens (>6 nm) to appear blurred or missing. Here we demonstrate a three-dimensional imaging method that overcomes both these limits by combining through-focal depth sectioning and traditional tilt-series tomography to reconstruct extended objects, with high-resolution, in all three dimensions. The large convergence angle in aberration corrected instruments now becomes a benefit and not a hindrance to higher quality reconstructions. A through-focal reconstruction over a 390 nm 3D carbon support containing over 100 dealloyed and nanoporous PtCu catalyst particles revealed with sub-nanometer detail the extensive and connected interior pore structure that is created by the dealloying instability. - Highlights: • Develop tomography technique for high-resolution and large field of view. • We combine depth sectioning with traditional tilt tomography. • Through-focal tomography reduces tilts and improves resolution. • Through-focal tomography overcomes the fundamental Crowther limit. • Aberration-corrected becomes a benefit and not a hindrance for tomography.

  9. DEM RECONSTRUCTION USING LIGHT FIELD AND BIDIRECTIONAL REFLECTANCE FUNCTION FROM MULTI-VIEW HIGH RESOLUTION SPATIAL IMAGES

    Directory of Open Access Journals (Sweden)

    F. de Vieilleville

    2016-06-01

    Full Text Available This paper presents a method for dense DSM reconstruction from a high-resolution, mono-sensor, passive, spatial panchromatic image sequence. The interest of our approach is four-fold. Firstly, we extend the core of light field approaches using an explicit BRDF model from the image synthesis community which is more realistic than the Lambertian model. The chosen model is the Cook-Torrance BRDF, which enables us to model rough surfaces with specular effects using specific material parameters. Secondly, we extend light field approaches to non-pinhole sensors and non-rectilinear motion by using a proper geometric transformation on the image sequence. Thirdly, we produce a 3D volume cost embodying all the tested possible heights and filter it using simple methods such as volume cost filtering or variational optimization methods. We have tested our method on a Pleiades image sequence at various locations with dense urban buildings and report encouraging results with respect to classic multi-label methods such as MIC-MAC, or more recent pipelines such as S2P. Last but not least, our method also produces maps of material parameters on the estimated points, allowing us to simplify building classification or road extraction.

  10. A two-step Hilbert transform method for 2D image reconstruction

    International Nuclear Information System (INIS)

    Noo, Frederic; Clackdoyle, Rolf; Pack, Jed D

    2004-01-01

    The paper describes a new accurate two-dimensional (2D) image reconstruction method consisting of two steps. In the first step, the backprojected image is formed after taking the derivative of the parallel projection data. In the second step, a Hilbert filtering is applied along certain lines in the differentiated backprojection (DBP) image. Formulae for performing the DBP step in fan-beam geometry are also presented. The advantage of this two-step Hilbert transform approach is that in certain situations, regions of interest (ROIs) can be reconstructed from truncated projection data. Simulation results are presented that illustrate very similar reconstructed image quality using the new method compared to standard filtered backprojection, and that show the capability to correctly handle truncated projections. In particular, a simulation is presented of a wide patient whose projections are truncated laterally, yet for which highly accurate ROI reconstruction is obtained.
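The Hilbert filtering in the second step is a 1D convolution with the kernel 1/(πt), which in the frequency domain is multiplication by -i·sign(ω). A minimal FFT-based sketch of that filtering step (this is the generic full-line transform, not the authors' finite-interval inversion used for truncated data):

```python
import numpy as np

def hilbert_transform(signal):
    """1D Hilbert transform via FFT: multiply the spectrum by
    -i * sign(frequency). This is the filtering applied along
    lines of the differentiated backprojection (DBP) image."""
    freqs = np.fft.fftfreq(signal.size)
    spectrum = np.fft.fft(signal)
    return np.real(np.fft.ifft(-1j * np.sign(freqs) * spectrum))
```

As a sanity check, the Hilbert transform maps a cosine to the sine of the same frequency.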

  11. Reconstruction of phylogenetic relationships in a highly reticulate group with deep coalescence and recent speciation (Hieracium, Asteraceae).

    Science.gov (United States)

    Krak, K; Caklová, P; Chrtek, J; Fehrer, J

    2013-02-01

    Phylogeny reconstruction based on multiple unlinked markers is often hampered by incongruent gene trees, especially in closely related species complexes with high degrees of hybridization and polyploidy. To investigate the particular strengths and limitations of chloroplast DNA (cpDNA), low-copy nuclear and multicopy nuclear markers for elucidating the evolutionary history of such groups, we focus on Hieracium s.str., a predominantly apomictic genus combining the above-mentioned features. Sequences of the trnV-ndhC and trnT-trnL intergenic spacers were combined for phylogenetic analyses of cpDNA. Part of the highly variable gene for squalene synthase (sqs) was applied as a low-copy nuclear marker. Both gene trees were compared with previous results based on the multicopy external transcribed spacer (ETS) of the nuclear ribosomal DNA. The power of the different markers to detect hybridization varied, but they largely agreed on particular hybrid and allopolyploid origins. The same crown groups of species were recognizable in each dataset, but basal relationships were strongly incongruent among cpDNA, sqs and ETS trees. The ETS tree was considered as the best approximation of the species tree. Both cpDNA and sqs trees showed basal polytomies as well as merging or splitting of species groups of non-hybrid taxa. These patterns can be best explained by a rapid diversification of the genus with ancestral polymorphism and incomplete lineage sorting. A hypothetical scenario of Hieracium speciation based on all available (including non-molecular) evidence is depicted. Incorporation of seemingly contradictory information helped to better understand species origins and evolutionary patterns in this notoriously difficult agamic complex.

  12. The Effects of High-Intensity versus Low-Intensity Resistance Training on Leg Extensor Power and Recovery of Knee Function after ACL-Reconstruction

    Directory of Open Access Journals (Sweden)

    Theresa Bieler

    2014-01-01

    Objective. Persistent weakness is a common problem after anterior cruciate ligament (ACL) reconstruction. This study investigated the effects of high-intensity (HRT) versus low-intensity (LRT) resistance training on leg extensor power and recovery of knee function after ACL reconstruction. Methods. 31 males and 19 females were randomized to HRT (n=24) or LRT (n=26) from week 8 to 20 after ACL reconstruction. Leg extensor power, joint laxity, and self-reported knee function were measured before and 7, 14, and 20 weeks after surgery. Hop tests were assessed before and after 20 weeks. Results. Power in the injured leg was 90% (95% CI 86–94%) of the noninjured leg, decreasing to 64% (95% CI 60–69%) 7 weeks after surgery. During the resistance training phase there was a significant group by time interaction for power (P=0.020). Power was regained more with HRT compared to LRT at week 14 (84% versus 73% of the noninjured leg, resp.; P=0.027) and at week 20 (98% versus 83% of the noninjured leg, resp.; P=0.006), without adverse effects on joint laxity. No other between-group differences were found. Conclusion. High-intensity resistance training during rehabilitation after ACL-reconstruction can improve muscle power without adverse effects on joint laxity.

  13. Climate Reconstructions

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The NOAA Paleoclimatology Program archives reconstructions of past climatic conditions derived from paleoclimate proxies, in addition to the Program's large holdings...

  14. Accurate x-ray spectroscopy

    International Nuclear Information System (INIS)

    Deslattes, R.D.

    1987-01-01

    Heavy ion accelerators are the most flexible and readily accessible sources of highly charged ions. Ions having only one or two remaining electrons have spectra whose accurate measurement is of considerable theoretical significance. Certain features of ion production by accelerators tend to limit the accuracy which can be realized in measurement of these spectra. This report aims to provide background about spectroscopic limitations and to discuss how accelerator operations may be selected to permit attaining intrinsically limited data.

  15. A variational study on BRDF reconstruction in a structured light scanner

    DEFF Research Database (Denmark)

    Nielsen, Jannik Boll; Stets, Jonathan Dyssel; Lyngby, Rasmus Ahrenkiel

    2017-01-01

    Time-efficient acquisition of reflectance behavior together with surface geometry is a challenging problem. In this study, we investigate the impact of system parameter uncertainties when incorporating a data-driven BRDF reconstruction approach into the standard pipeline of a structured light setup. Results show that while uncertainties in vertex positions and normals have a high impact on the quality of reconstructed BRDFs, object geometry and light source properties have very little influence on the reconstructed BRDFs. With this analysis, practitioners now have insight into the tolerances required for accurate BRDF acquisition.

  16. High-performance blob-based iterative three-dimensional reconstruction in electron tomography using multi-GPUs

    Directory of Open Access Journals (Sweden)

    Wan Xiaohua

    2012-06-01

    Background. Three-dimensional (3D) reconstruction in electron tomography (ET) has emerged as a leading technique to elucidate the molecular structures of complex biological specimens. Blob-based iterative methods are advantageous reconstruction methods for 3D reconstruction in ET, but demand huge computational costs. Multiple graphics processing units (multi-GPUs) offer an affordable platform to meet these demands. However, a synchronous communication scheme between multi-GPUs leads to idle GPU time, and the weight matrix involved in iterative methods cannot be loaded into GPUs, especially for large images, due to the limited available memory of GPUs. Results. In this paper we propose a multilevel parallel strategy combined with an asynchronous communication scheme and a blob-ELLR data structure to efficiently perform blob-based iterative reconstructions on multi-GPUs. The asynchronous communication scheme minimizes idle GPU time by overlapping communications with computations. The blob-ELLR data structure needs only about 1/16 of the storage space of the ELLPACK-R (ELLR) data structure and yields significant acceleration. Conclusions. Experimental results indicate that the multilevel parallel scheme combined with the asynchronous communication scheme and the blob-ELLR data structure allows efficient implementations of 3D reconstruction in ET on multi-GPUs.
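For readers unfamiliar with the ELLPACK-R (ELLR) layout that the blob-ELLR structure builds on: the sparse matrix is stored as padded value/column arrays plus a per-row nonzero count, so each GPU thread can walk one row without branching on padding. An illustrative CPU sketch (function names are ours, not from the paper):

```python
import numpy as np

def ellr_from_dense(a):
    """Pack a dense matrix into ELLPACK-R (ELLR): padded value and
    column-index arrays plus a per-row nonzero count."""
    rows, _ = a.shape
    rl = np.count_nonzero(a, axis=1)       # nonzeros per row
    width = rl.max()                       # padded row width
    vals = np.zeros((rows, width))
    cols = np.zeros((rows, width), dtype=int)
    for i in range(rows):
        nz = np.flatnonzero(a[i])
        vals[i, :nz.size] = a[i, nz]
        cols[i, :nz.size] = nz
    return vals, cols, rl

def ellr_matvec(vals, cols, rl, x):
    """y = A @ x in the ELLR layout; each row touches only its rl[i]
    stored entries, which is what keeps GPU threads from idling on
    padding (here serialized for clarity)."""
    y = np.zeros(vals.shape[0])
    for i in range(vals.shape[0]):
        k = rl[i]
        y[i] = vals[i, :k] @ x[cols[i, :k]]
    return y
```

On a GPU the outer loop becomes one thread per row; the paper's blob-ELLR variant additionally exploits the symmetry of blob footprints to shrink storage further.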

  17. Hybrid spectral CT reconstruction.

    Directory of Open Access Journals (Sweden)

    Darin P Clark

    Current photon counting x-ray detector (PCD) technology faces limitations associated with spectral fidelity and photon starvation. One strategy for addressing these limitations is to supplement PCD data with high-resolution, low-noise data acquired with an energy-integrating detector (EID). In this work, we propose an iterative, hybrid reconstruction technique which combines the spectral properties of PCD data with the resolution and signal-to-noise characteristics of EID data. Our hybrid reconstruction technique is based on an algebraic model of data fidelity which substitutes the EID data into the data fidelity term associated with the PCD reconstruction, resulting in a joint reconstruction problem. Within the split Bregman framework, these data fidelity constraints are minimized subject to additional constraints on spectral rank and on joint intensity-gradient sparsity measured between the reconstructions of the EID and PCD data. Following a derivation of the proposed technique, we apply it to the reconstruction of a digital phantom which contains realistic concentrations of iodine, barium, and calcium encountered in small-animal micro-CT. The results of this experiment suggest reliable separation and detection of iodine at concentrations ≥ 5 mg/ml and barium at concentrations ≥ 10 mg/ml in 2-mm features for EID and PCD data reconstructed with inherent spatial resolutions of 176 μm and 254 μm, respectively (point spread function, FWHM). Furthermore, hybrid reconstruction is demonstrated to enhance spatial resolution within material decomposition results and to improve low-contrast detectability by as much as 2.6 times relative to reconstruction with PCD data only. The parameters of the simulation experiment are based on an in vivo micro-CT experiment conducted in a mouse model of soft-tissue sarcoma. Material decomposition results produced from this in vivo data demonstrate the feasibility of distinguishing two K-edge contrast agents with

  18. Hybrid spectral CT reconstruction

    Science.gov (United States)

    Clark, Darin P.

    2017-01-01

    Current photon counting x-ray detector (PCD) technology faces limitations associated with spectral fidelity and photon starvation. One strategy for addressing these limitations is to supplement PCD data with high-resolution, low-noise data acquired with an energy-integrating detector (EID). In this work, we propose an iterative, hybrid reconstruction technique which combines the spectral properties of PCD data with the resolution and signal-to-noise characteristics of EID data. Our hybrid reconstruction technique is based on an algebraic model of data fidelity which substitutes the EID data into the data fidelity term associated with the PCD reconstruction, resulting in a joint reconstruction problem. Within the split Bregman framework, these data fidelity constraints are minimized subject to additional constraints on spectral rank and on joint intensity-gradient sparsity measured between the reconstructions of the EID and PCD data. Following a derivation of the proposed technique, we apply it to the reconstruction of a digital phantom which contains realistic concentrations of iodine, barium, and calcium encountered in small-animal micro-CT. The results of this experiment suggest reliable separation and detection of iodine at concentrations ≥ 5 mg/ml and barium at concentrations ≥ 10 mg/ml in 2-mm features for EID and PCD data reconstructed with inherent spatial resolutions of 176 μm and 254 μm, respectively (point spread function, FWHM). Furthermore, hybrid reconstruction is demonstrated to enhance spatial resolution within material decomposition results and to improve low-contrast detectability by as much as 2.6 times relative to reconstruction with PCD data only. The parameters of the simulation experiment are based on an in vivo micro-CT experiment conducted in a mouse model of soft-tissue sarcoma. Material decomposition results produced from this in vivo data demonstrate the feasibility of distinguishing two K-edge contrast agents with a spectral
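Within the split Bregman framework these abstracts reference, the L1/sparsity subproblems decouple and have a closed-form solution via soft-thresholding (shrinkage). A minimal sketch of that operator:

```python
import numpy as np

def shrink(x, gamma):
    """Soft-thresholding (shrinkage): the closed-form proximal step
    for the L1 terms inside split Bregman iterations. Shrinks each
    entry toward zero by gamma, zeroing anything smaller."""
    return np.sign(x) * np.maximum(np.abs(x) - gamma, 0.0)
```

For example, `shrink(np.array([-2.0, 0.5, 3.0]), 1.0)` yields `[-1.0, 0.0, 2.0]`: values inside the threshold collapse to zero, which is how the joint intensity-gradient sparsity constraint is enforced each iteration.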

  19. Performance comparison between total variation (TV)-based compressed sensing and statistical iterative reconstruction algorithms

    International Nuclear Information System (INIS)

    Tang Jie; Nett, Brian E; Chen Guanghong

    2009-01-01

    Of all available reconstruction methods, statistical iterative reconstruction algorithms appear particularly promising since they enable accurate physical noise modeling. The newly developed compressive sampling/compressed sensing (CS) algorithm has shown the potential to accurately reconstruct images from highly undersampled data. The CS algorithm can be implemented in the statistical reconstruction framework as well. In this study, we compared the performance of two standard statistical reconstruction algorithms (penalized weighted least squares and q-GGMRF) to the CS algorithm. In assessing the image quality using these iterative reconstructions, it is critical to utilize realistic background anatomy as the reconstruction results are object dependent. A cadaver head was scanned on a Varian Trilogy system at different dose levels. Several figures of merit including the relative root mean square error and a quality factor which accounts for the noise performance and the spatial resolution were introduced to objectively evaluate reconstruction performance. A comparison is presented between the three algorithms for a constant undersampling factor comparing different algorithms at several dose levels. To facilitate this comparison, the original CS method was formulated in the framework of the statistical image reconstruction algorithms. Important conclusions of the measurements from our studies are that (1) for realistic neuro-anatomy, over 100 projections are required to avoid streak artifacts in the reconstructed images even with CS reconstruction, (2) regardless of the algorithm employed, it is beneficial to distribute the total dose to more views as long as each view remains quantum noise limited and (3) the total variation-based CS method is not appropriate for very low dose levels because while it can mitigate streaking artifacts, the images exhibit patchy behavior, which is potentially harmful for medical diagnosis.

  20. Reconstruction of the High-Osmolarity Glycerol (HOG) Signaling Pathway from the Halophilic Fungus Wallemia ichthyophaga in Saccharomyces cerevisiae.

    Science.gov (United States)

    Konte, Tilen; Terpitz, Ulrich; Plemenitaš, Ana

    2016-01-01

    The basidiomycetous fungus Wallemia ichthyophaga grows between 1.7 and 5.1 M NaCl and is the most halophilic eukaryote described to date. Like other fungi, W. ichthyophaga detects changes in environmental salinity mainly through the evolutionarily conserved high-osmolarity glycerol (HOG) signaling pathway. In Saccharomyces cerevisiae, the HOG pathway has been extensively studied in connection with osmotic regulation, and a valuable knock-out strain collection has been established. In the present study, we reconstructed the architecture of the HOG pathway of W. ichthyophaga in suitable S. cerevisiae knock-out strains through heterologous expression of the W. ichthyophaga HOG pathway proteins. Compared to S. cerevisiae, where the Pbs2 (ScPbs2) kinase of the HOG pathway is activated via the SHO1 and SLN1 branches, the interactions between the W. ichthyophaga Pbs2 (WiPbs2) kinase and the W. ichthyophaga SHO1 branch orthologs are not conserved: in addition to evidence of poor interactions between the WiSho1 Src-homology 3 (SH3) domain and the WiPbs2 proline-rich motif, the absence of a considerable part of the osmosensing apparatus from the genome of W. ichthyophaga suggests that the SHO1 branch components are not involved in HOG signaling in this halophilic fungus. In contrast, the conserved activation of WiPbs2 by the S. cerevisiae ScSsk2/ScSsk22 kinase and the sensitivity of W. ichthyophaga cells to fludioxonil emphasize the significance of two-component (SLN1-like) signaling via Group III histidine kinase. Combined with protein modeling data, our study reveals conserved and non-conserved protein interactions in the HOG signaling pathway of W. ichthyophaga and therefore significantly improves our knowledge of hyperosmotic signal processing in this halophilic fungus.