WorldWideScience

Sample records for maximal level reconstructive

  1. Weighted expectation maximization reconstruction algorithms with application to gated megavoltage tomography

    International Nuclear Information System (INIS)

    Zhang Jin; Shi Daxin; Anastasio, Mark A; Sillanpaa, Jussi; Chang Jenghwa

    2005-01-01

    We propose and investigate weighted expectation maximization (EM) algorithms for image reconstruction in x-ray tomography. The development of the algorithms is motivated by the respiratory-gated megavoltage tomography problem, in which the acquired asymmetric cone-beam projections are limited in number and unevenly sampled over view angle. In these cases, images reconstructed by use of the conventional EM algorithm can contain ring- and streak-like artefacts that are attributable to a combination of data inconsistencies and truncation of the projection data. By use of computer-simulated and clinical gated fan-beam megavoltage projection data, we demonstrate that the proposed weighted EM algorithms effectively mitigate such image artefacts. (note)
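The weighted EM idea can be sketched on a toy system: a generic weighted MLEM iteration in which per-ray weights down-weight unreliable or unevenly sampled projections. The system matrix `A`, projections `y`, and weights `w` below are illustrative, not the authors' gated-megavoltage implementation.

```python
# Toy weighted MLEM for tomographic reconstruction.
# A: system matrix (rays x pixels), y: measured projections,
# w: per-ray weights that down-weight truncated/unevenly sampled views.

def weighted_mlem(A, y, w, n_iter=50):
    n_rays, n_pix = len(A), len(A[0])
    x = [1.0] * n_pix                      # flat initial image
    # weighted sensitivity: sum_i w_i * A_ij
    sens = [sum(w[i] * A[i][j] for i in range(n_rays)) for j in range(n_pix)]
    for _ in range(n_iter):
        # forward-project the current estimate
        yhat = [sum(A[i][j] * x[j] for j in range(n_pix)) for i in range(n_rays)]
        # weighted back-projection of the measured/estimated ratio
        for j in range(n_pix):
            if sens[j] > 0:
                num = sum(w[i] * A[i][j] * (y[i] / yhat[i])
                          for i in range(n_rays) if yhat[i] > 0)
                x[j] *= num / sens[j]
    return x

# Two-pixel phantom seen by three rays; the third ray is down-weighted.
A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
true_x = [2.0, 3.0]
y = [sum(a * t for a, t in zip(row, true_x)) for row in A]
w = [1.0, 1.0, 0.5]
x_hat = weighted_mlem(A, y, w, n_iter=200)
```

With noiseless, consistent data the iteration recovers the phantom; the weights only matter once the rays disagree.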

  2. Reconstruction of phylogenetic trees of prokaryotes using maximal common intervals.

    Science.gov (United States)

    Heydari, Mahdi; Marashi, Sayed-Amir; Tusserkani, Ruzbeh; Sadeghi, Mehdi

    2014-10-01

    One of the fundamental problems in bioinformatics is phylogenetic tree reconstruction, which can be used for classifying living organisms into different taxonomic clades. The classical approach to this problem is based on a marker such as 16S ribosomal RNA. Since evolutionary events like genomic rearrangements are not included in reconstructions of phylogenetic trees based on single genes, much effort has been made to find other characteristics for phylogenetic reconstruction in recent years. With the increasing availability of completely sequenced genomes, gene order can be considered as a new solution for this problem. In the present work, we applied maximal common intervals (MCIs) in two or more genomes to infer their distance and to reconstruct their evolutionary relationship. Additionally, measures based on uncommon segments (UCS's), i.e., those genomic segments which are not detected as part of any of the MCIs, are also used for phylogenetic tree reconstruction. We applied these two types of measures for reconstructing the phylogenetic tree of 63 prokaryotes with known COG (clusters of orthologous groups) families. Similarity between the MCI-based (resp. UCS-based) reconstructed phylogenetic trees and the phylogenetic tree obtained from NCBI taxonomy browser is as high as 93.1% (resp. 94.9%). We show that in the case of this diverse dataset of prokaryotes, tree reconstruction based on MCI and UCS outperforms most of the currently available methods based on gene orders, including breakpoint distance and DCJ. We additionally tested our new measures on a dataset of 13 closely-related bacteria from the genus Prochlorococcus. In this case, distances like rearrangement distance, breakpoint distance and DCJ proved to be useful, while our new measures are still appropriate for phylogenetic reconstruction. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
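The notion of a common interval of two gene orders can be illustrated with a brute-force sketch on toy permutations; real MCI algorithms, and the paper's distance measures over many genomes, are far more sophisticated.

```python
# Brute-force common intervals of two gene orders (permutations), and the
# maximal ones among them. A sketch of the concept only, not the paper's method.

def common_intervals(p, q):
    # intervals of p (length >= 2, shorter than the whole genome) whose gene
    # content is also contiguous somewhere in q
    pos = {g: i for i, g in enumerate(q)}
    n = len(p)
    out = set()
    for i in range(n):
        for j in range(i + 1, n):
            if j - i == n - 1:
                continue                    # skip the trivial full-genome interval
            block = [pos[g] for g in p[i:j + 1]]
            if max(block) - min(block) == j - i:   # contiguous in q as well
                out.add(frozenset(p[i:j + 1]))
    return out

def maximal_common_intervals(p, q):
    ci = common_intervals(p, q)
    # maximal = not a proper subset of any other common interval
    return {s for s in ci if not any(s < t for t in ci)}

p = [1, 2, 3, 4, 5]
q = [2, 1, 3, 5, 4]
mci = maximal_common_intervals(p, q)
```

For these two orders the maximal common intervals are {1,2,3} and {3,4,5}; a distance can then be built from how much of the genomes such intervals cover.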

  3. An iterative reconstruction method of complex images using expectation maximization for radial parallel MRI

    International Nuclear Information System (INIS)

    Choi, Joonsung; Kim, Dongchan; Oh, Changhyun; Han, Yeji; Park, HyunWook

    2013-01-01

In MRI (magnetic resonance imaging), signal sampling along a radial k-space trajectory is preferred in certain applications due to its distinct advantages such as robustness to motion, and the radial sampling can be beneficial for reconstruction algorithms such as parallel MRI (pMRI) due to the incoherency. For radial MRI, the image is usually reconstructed from projection data using analytic methods such as filtered back-projection or Fourier reconstruction after gridding. However, the quality of the reconstructed image from these analytic methods can be degraded when the number of acquired projection views is insufficient. In this paper, we propose a novel reconstruction method based on the expectation maximization (EM) method, where the EM algorithm is remodeled for MRI so that complex images can be reconstructed. Then, to optimize the proposed method for radial pMRI, a reconstruction method that uses coil sensitivity information of multichannel RF coils is formulated. Experimental results from synthetic and in vivo data show that the proposed method produces better reconstructed images than the analytic methods, even from highly subsampled data, and provides monotonic convergence properties compared to the conjugate gradient based reconstruction method. (paper)

  4. Wobbling and LSF-based maximum likelihood expectation maximization reconstruction for wobbling PET

    International Nuclear Information System (INIS)

    Kim, Hang-Keun; Son, Young-Don; Kwon, Dae-Hyuk; Joo, Yohan; Cho, Zang-Hee

    2016-01-01

Positron emission tomography (PET) is a widely used imaging modality; however, the PET spatial resolution is not yet satisfactory for precise anatomical localization of molecular activities. Detector size is the most important factor because it determines the intrinsic resolution, which is approximately half of the detector size and determines the ultimate PET resolution. Detector size, however, cannot be made too small because both the decreased detection efficiency and the increased septal penetration effect degrade the image quality. A wobbling and line spread function (LSF)-based maximum likelihood expectation maximization (WL-MLEM) algorithm, which combined the MLEM iterative reconstruction algorithm with wobbled sampling and LSF-based deconvolution using the system matrix, was proposed for improving the spatial resolution of PET without reducing the scintillator or detector size. The new algorithm was evaluated using a simulation, and its performance was compared with that of the existing algorithms, such as conventional MLEM and LSF-based MLEM. Simulations demonstrated that the WL-MLEM algorithm yielded higher spatial resolution and image quality than the existing algorithms. The WL-MLEM algorithm with wobbling PET yielded substantially improved resolution compared with conventional algorithms with stationary PET. The algorithm can be easily extended to other iterative reconstruction algorithms, such as maximum a posteriori (MAP) and ordered subset expectation maximization (OSEM). The WL-MLEM algorithm with wobbling PET may offer improvements in both sensitivity and resolution, the two most sought-after features in PET design. - Highlights: • This paper proposed the WL-MLEM algorithm for PET and demonstrated its performance. • The WL-MLEM algorithm effectively combined wobbling and LSF-based MLEM. • WL-MLEM provided improvements in the spatial resolution and the PET image quality. • WL-MLEM can be easily extended to other iterative reconstruction algorithms.

  5. Arctic Sea Level Reconstruction

    DEFF Research Database (Denmark)

    Svendsen, Peter Limkilde

Reconstruction of historical Arctic sea level is very difficult due to the limited coverage and quality of tide gauge and altimetry data in the area. This thesis addresses many of these issues, and discusses strategies to help achieve a stable and plausible reconstruction of Arctic sea level from...... 1950 to today. The primary record of historical sea level, on the order of several decades to a few centuries, is tide gauges. Tide gauge records from around the world are collected in the Permanent Service for Mean Sea Level (PSMSL) database, and includes data along the Arctic coasts. A reasonable...... amount of data is available along the Norwegian and Russian coasts since 1950, and most published research on Arctic sea level extends cautiously from these areas. Very little tide gauge data is available elsewhere in the Arctic, and records of a length of several decades, as generally recommended for sea...

  6. Application of an expectation maximization method to the reconstruction of X-ray-tube spectra from transmission data

    International Nuclear Information System (INIS)

    Endrizzi, M.; Delogu, P.; Oliva, P.

    2014-01-01

An expectation maximization method is applied to the reconstruction of X-ray tube spectra from transmission measurements in the energy range 7–40 keV. A semiconductor single-photon counting detector, ionization chambers and a scintillator-based detector are used for the experimental measurement of the transmission. The number of iterations required to reach an approximate solution is estimated on the basis of the measurement error, according to the discrepancy principle. The effectiveness of the stopping rule is studied on simulated data and validated with experiments. The quality of the reconstruction depends on the information available on the source itself and the possibility to add this knowledge to the solution process is investigated. The method can produce good approximations provided that the amount of noise in the data can be estimated. - Highlights: • An expectation maximization method was used together with the discrepancy principle. • The discrepancy principle is a suitable criterion for stopping the iteration. • The method can be applied to a variety of detectors/experimental conditions. • The minimum information required is the amount of noise that affects the data. • Improved results are achieved by inserting more information when available.
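The discrepancy-principle stopping rule can be sketched with a generic EM iteration on a toy linear model: iterate until the data misfit falls to the estimated noise level. The forward model, names, and the threshold `delta` below are illustrative, not the paper's experimental setup.

```python
# EM iterations with a discrepancy-principle stopping rule: stop as soon as
# the residual norm drops to the estimated noise level delta.

def em_with_discrepancy(A, y, delta, max_iter=500):
    n_meas, n_bins = len(A), len(A[0])
    s = [1.0] * n_bins                     # flat initial spectrum estimate
    sens = [sum(A[i][j] for i in range(n_meas)) for j in range(n_bins)]
    for k in range(1, max_iter + 1):
        yhat = [sum(A[i][j] * s[j] for j in range(n_bins)) for i in range(n_meas)]
        misfit = sum((yh - yi) ** 2 for yh, yi in zip(yhat, y)) ** 0.5
        if misfit <= delta:                # discrepancy principle: stop here
            return s, k, misfit
        for j in range(n_bins):
            if sens[j] > 0:
                num = sum(A[i][j] * y[i] / yhat[i]
                          for i in range(n_meas) if yhat[i] > 0)
                s[j] *= num / sens[j]
    return s, max_iter, misfit

A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]   # toy transmission-style matrix
true_s = [4.0, 1.0]
y = [sum(a * t for a, t in zip(row, true_s)) for row in A]
s_hat, k, m = em_with_discrepancy(A, y, delta=0.05)
```

Stopping early this way regularizes the solution: with noisy data, iterating past the noise level mostly fits noise.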

  7. Confidence and sensitivity of sea-level reconstructions

    DEFF Research Database (Denmark)

    Svendsen, Peter Limkilde

For the last two decades, satellite altimetry has provided a near-global view of spatial and temporal patterns in sea surface height (SSH). When combined with records from tide gauges, a historical reconstruction of sea level can be obtained; while tide gauge records span up to 200 years back...... nature of the data fields. We examine the sensitivity of a reconstruction with respect to the length of calibration time series, and the spatial distribution of tide gauges or other proxy data. In addition, we consider the effect of isolating certain physical phenomena (e.g. ENSO) and annual signals...... and modelling these outside the reconstruction. The implementation is currently based on data from compound satellite datasets (i.e., two decades of altimetry), and the Simple Ocean Data Assimilation (SODA) model, an existing reconstruction, where a calibration period can be easily extracted and our model...

  8. Experiments in Reconstructing Twentieth-Century Sea Levels

    Science.gov (United States)

    Ray, Richard D.; Douglas, Bruce C.

    2011-01-01

One approach to reconstructing historical sea level from the relatively sparse tide-gauge network is to employ Empirical Orthogonal Functions (EOFs) as interpolatory spatial basis functions. The EOFs are determined from independent global data, generally sea-surface heights from either satellite altimetry or a numerical ocean model. The problem is revisited here for sea level since 1900. A new approach to handling the tide-gauge datum problem by direct solution offers possible advantages over the method of integrating sea-level differences, with the potential of eventually adjusting datums into the global terrestrial reference frame. The resulting time series of global mean sea levels appears fairly insensitive to the adopted set of EOFs. In contrast, charts of regional sea level anomalies and trends are very sensitive to the adopted set of EOFs, especially for the sparser network of gauges in the early 20th century. The reconstructions appear especially suspect before 1950 in the tropical Pacific. While this limits some applications of the sea-level reconstructions, the sensitivity does appear adequately captured by formal uncertainties. All our solutions show regional trends over the past five decades to be fairly uniform throughout the global ocean, in contrast to trends observed over the shorter altimeter era. Consistent with several previous estimates, the global sea-level rise since 1900 is 1.70 ± 0.26 mm/yr. The global trend since 1995 exceeds 3 mm/yr which is consistent with altimeter measurements, but this large trend was possibly also reached between 1935 and 1950.
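The EOF-interpolation idea can be sketched in miniature: fit the amplitudes of known spatial EOFs to sparse gauge samples by least squares, then evaluate the basis everywhere to fill in unobserved locations. The two EOF patterns and gauge positions below are invented for illustration.

```python
# Minimal EOF-based field reconstruction from sparse "tide gauge" samples.

def reconstruct_field(eofs, gauge_idx, gauge_vals):
    # normal equations G a = b for the EOF amplitudes a, built from gauge
    # locations only
    m = len(eofs)
    G = [[sum(eofs[p][i] * eofs[q][i] for i in gauge_idx) for q in range(m)]
         for p in range(m)]
    b = [sum(eofs[p][i] * v for i, v in zip(gauge_idx, gauge_vals))
         for p in range(m)]
    a = solve2(G, b)                       # 2x2 solve suffices for this toy
    n = len(eofs[0])
    return [sum(a[p] * eofs[p][i] for p in range(m)) for i in range(n)]

def solve2(G, b):
    # Cramer's rule for a 2x2 system
    det = G[0][0] * G[1][1] - G[0][1] * G[1][0]
    return [(b[0] * G[1][1] - b[1] * G[0][1]) / det,
            (G[0][0] * b[1] - G[1][0] * b[0]) / det]

eofs = [[1.0, 1.0, 1.0, 1.0],              # "global mean" pattern
        [1.0, 0.5, -0.5, -1.0]]            # "north-south tilt" pattern
true_field = [2.0 * e1 + 0.5 * e2 for e1, e2 in zip(eofs[0], eofs[1])]
gauge_idx = [0, 2, 3]                      # gauges at only three grid points
gauge_vals = [true_field[i] for i in gauge_idx]
field = reconstruct_field(eofs, gauge_idx, gauge_vals)
```

Because the synthetic field lies exactly in the span of the two EOFs, the unobserved grid point is recovered perfectly; with real data the fit is only approximate and its quality depends on the gauge distribution, as the abstract stresses.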

  9. Computed tomography imaging with the Adaptive Statistical Iterative Reconstruction (ASIR) algorithm: dependence of image quality on the blending level of reconstruction.

    Science.gov (United States)

    Barca, Patrizio; Giannelli, Marco; Fantacci, Maria Evelina; Caramella, Davide

    2018-06-01

    Computed tomography (CT) is a useful and widely employed imaging technique, which represents the largest source of population exposure to ionizing radiation in industrialized countries. Adaptive Statistical Iterative Reconstruction (ASIR) is an iterative reconstruction algorithm with the potential to allow reduction of radiation exposure while preserving diagnostic information. The aim of this phantom study was to assess the performance of ASIR, in terms of a number of image quality indices, when different reconstruction blending levels are employed. CT images of the Catphan-504 phantom were reconstructed using conventional filtered back-projection (FBP) and ASIR with reconstruction blending levels of 20, 40, 60, 80, and 100%. Noise, noise power spectrum (NPS), contrast-to-noise ratio (CNR) and modulation transfer function (MTF) were estimated for different scanning parameters and contrast objects. Noise decreased and CNR increased non-linearly up to 50 and 100%, respectively, with increasing blending level of reconstruction. Also, ASIR has proven to modify the NPS curve shape. The MTF of ASIR reconstructed images depended on tube load/contrast and decreased with increasing blending level of reconstruction. In particular, for low radiation exposure and low contrast acquisitions, ASIR showed lower performance than FBP, in terms of spatial resolution for all blending levels of reconstruction. CT image quality varies substantially with the blending level of reconstruction. ASIR has the potential to reduce noise whilst maintaining diagnostic information in low radiation exposure CT imaging. Given the opposite variation of CNR and spatial resolution with the blending level of reconstruction, it is recommended to use an optimal value of this parameter for each specific clinical application.
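A contrast-to-noise ratio of the kind reported in such phantom studies is conventionally computed from region-of-interest (ROI) statistics; a sketch with invented pixel values:

```python
# CNR from ROI statistics: contrast between object and background means,
# divided by the background noise (standard deviation).
import statistics

def cnr(roi_object, roi_background):
    mu_o = statistics.mean(roi_object)
    mu_b = statistics.mean(roi_background)
    sigma_b = statistics.stdev(roi_background)   # noise estimated from background
    return abs(mu_o - mu_b) / sigma_b

obj = [110, 112, 108, 111, 109]            # made-up HU values in a contrast insert
bkg = [100, 102, 98, 101, 99]              # made-up HU values in uniform background
value = cnr(obj, bkg)
```

Iterative reconstruction lowers sigma_b, which is why CNR rises with the ASIR blending level even when the contrast itself is unchanged.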

  10. Multi-level damage identification with response reconstruction

    Science.gov (United States)

    Zhang, Chao-Dong; Xu, You-Lin

    2017-10-01

Damage identification through finite element (FE) model updating usually forms an inverse problem. Solving the inverse identification problem for complex civil structures is very challenging since the dimension of potential damage parameters in a complex civil structure is often very large. Aside from the enormous computational effort needed in iterative updating, the ill-conditioning and non-global identifiability of the inverse problem probably hinder the realization of model-updating-based damage identification for large civil structures. Following a divide-and-conquer strategy, a multi-level damage identification method is proposed in this paper. The entire structure is decomposed into several manageable substructures and each substructure is further condensed as a macro element using the component mode synthesis (CMS) technique. The damage identification is performed at two levels: the first is at the macro element level to locate the potentially damaged region and the second is over the suspicious substructures to further locate as well as quantify the damage severity. At each level of identification, the damage searching space over which model updating is performed is notably narrowed down, not only reducing the computational burden but also increasing the damage identifiability. In addition, Kalman filter-based response reconstruction is performed at the second level to reconstruct the response of the suspicious substructure for exact damage quantification. Numerical studies and laboratory tests are both conducted on a simply supported overhanging steel beam for conceptual verification. The results demonstrate that the proposed multi-level damage identification via response reconstruction does improve the identification accuracy of damage localization and quantification considerably.
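The response-reconstruction step rests on standard Kalman filtering: estimating an underlying response from noisy measurements of a related quantity. A one-dimensional sketch of the filter, with an invented model and data, not the paper's substructural formulation:

```python
# Scalar Kalman filter: state transition a, observation h, process noise q,
# measurement noise r. Returns the filtered state estimates.

def kalman_1d(z, x0, P0, q, r, a=1.0, h=1.0):
    x, P, out = x0, P0, []
    for zk in z:
        # predict
        x = a * x
        P = a * P * a + q
        # update with measurement zk
        K = P * h / (h * P * h + r)
        x = x + K * (zk - h * x)
        P = (1 - K * h) * P
        out.append(x)
    return out

z = [1.1, 0.9, 1.05, 0.98, 1.02]           # noisy measurements of a constant 1.0
est = kalman_1d(z, x0=0.0, P0=1.0, q=0.0, r=0.1)
```

In the multi-level scheme, a (multivariate) filter of this kind reconstructs the responses of a suspicious substructure at locations that were not directly measured.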

  11. A comparison of maximal torque levels of the different planes of ...

    African Journals Online (AJOL)

    It is often assumed that because different sports require specific skills, the torque levels differ from sport to sport. The purpose of this study was to establish whether there were significant differences in the maximal torque levels of the different planes of movement of the shoulder-girdle complex for different types of sport.

  12. Coastal barrier stratigraphy for Holocene high-resolution sea-level reconstruction.

    Science.gov (United States)

    Costas, Susana; Ferreira, Óscar; Plomaritis, Theocharis A; Leorri, Eduardo

    2016-12-08

    The uncertainties surrounding present and future sea-level rise have revived the debate around sea-level changes through the deglaciation and mid- to late Holocene, from which arises a need for high-quality reconstructions of regional sea level. Here, we explore the stratigraphy of a sandy barrier to identify the best sea-level indicators and provide a new sea-level reconstruction for the central Portuguese coast over the past 6.5 ka. The selected indicators represent morphological features extracted from coastal barrier stratigraphy, beach berm and dune-beach contact. These features were mapped from high-resolution ground penetrating radar images of the subsurface and transformed into sea-level indicators through comparison with modern analogs and a chronology based on optically stimulated luminescence ages. Our reconstructions document a continuous but slow sea-level rise after 6.5 ka with an accumulated change in elevation of about 2 m. In the context of SW Europe, our results show good agreement with previous studies, including the Tagus isostatic model, with minor discrepancies that demand further improvement of regional models. This work reinforces the potential of barrier indicators to accurately reconstruct high-resolution mid- to late Holocene sea-level changes through simple approaches.

  13. Level-set-based reconstruction algorithm for EIT lung images: first clinical results.

    Science.gov (United States)

    Rahmati, Peyman; Soleimani, Manuchehr; Pulletz, Sven; Frerichs, Inéz; Adler, Andy

    2012-05-01

    We show the first clinical results using the level-set-based reconstruction algorithm for electrical impedance tomography (EIT) data. The level-set-based reconstruction method (LSRM) allows the reconstruction of non-smooth interfaces between image regions, which are typically smoothed by traditional voxel-based reconstruction methods (VBRMs). We develop a time difference formulation of the LSRM for 2D images. The proposed reconstruction method is applied to reconstruct clinical EIT data of a slow flow inflation pressure-volume manoeuvre in lung-healthy and adult lung-injury patients. Images from the LSRM and the VBRM are compared. The results show comparable reconstructed images, but with an improved ability to reconstruct sharp conductivity changes in the distribution of lung ventilation using the LSRM.

  14. Level-set-based reconstruction algorithm for EIT lung images: first clinical results

    International Nuclear Information System (INIS)

    Rahmati, Peyman; Adler, Andy; Soleimani, Manuchehr; Pulletz, Sven; Frerichs, Inéz

    2012-01-01

    We show the first clinical results using the level-set-based reconstruction algorithm for electrical impedance tomography (EIT) data. The level-set-based reconstruction method (LSRM) allows the reconstruction of non-smooth interfaces between image regions, which are typically smoothed by traditional voxel-based reconstruction methods (VBRMs). We develop a time difference formulation of the LSRM for 2D images. The proposed reconstruction method is applied to reconstruct clinical EIT data of a slow flow inflation pressure–volume manoeuvre in lung-healthy and adult lung-injury patients. Images from the LSRM and the VBRM are compared. The results show comparable reconstructed images, but with an improved ability to reconstruct sharp conductivity changes in the distribution of lung ventilation using the LSRM. (paper)

  15. An investigation of temporal regularization techniques for dynamic PET reconstructions using temporal splines

    International Nuclear Information System (INIS)

    Verhaeghe, Jeroen; D'Asseler, Yves; Vandenberghe, Stefaan; Staelens, Steven; Lemahieu, Ignace

    2007-01-01

The use of a temporal B-spline basis for the reconstruction of dynamic positron emission tomography data was investigated. Maximum likelihood (ML) reconstructions using an expectation maximization framework and maximum a posteriori (MAP) reconstructions using the generalized expectation maximization framework were evaluated. Different parameters of the B-spline basis, such as order, number of basis functions and knot placement, were investigated in a reconstruction task using simulated dynamic list-mode data. We found that a higher order basis reduced both the bias and variance. Using a higher number of basis functions in the modeling of the time activity curves (TACs) allowed the algorithm to model faster changes of the TACs; however, the TACs became noisier. We have compared ML, Gaussian postsmoothed ML and MAP reconstructions. The noise level in the ML reconstructions was controlled by varying the number of basis functions. The MAP algorithm penalized the integrated squared curvature of the reconstructed TAC. The postsmoothed ML was always outperformed in terms of bias and variance properties by the MAP and ML reconstructions. A simple adaptive knot placement strategy was also developed and evaluated. It is based on an arc length redistribution scheme during the reconstruction. The free knot reconstruction allowed a more accurate reconstruction while reducing the noise level, especially for fast changing TACs such as blood input functions. Limiting the number of temporal basis functions combined with the adaptive knot placement strategy is in this case advantageous for regularization purposes when compared to the other regularization techniques.
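The temporal basis underlying these reconstructions can be sketched with the Cox–de Boor recursion: a time-activity curve is a linear combination of B-spline basis functions on a knot vector. The knot vector, order, and coefficients below are illustrative only.

```python
# B-spline basis via the Cox-de Boor recursion, and a time-activity curve
# (TAC) as a linear combination of the basis functions.

def bspline_basis(i, k, t, knots):
    # B_{i,k}(t): order k (degree k-1) on the given knot vector
    if k == 1:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + k - 1] > knots[i]:
        left = ((t - knots[i]) / (knots[i + k - 1] - knots[i])
                * bspline_basis(i, k - 1, t, knots))
    if knots[i + k] > knots[i + 1]:
        right = ((knots[i + k] - t) / (knots[i + k] - knots[i + 1])
                 * bspline_basis(i + 1, k - 1, t, knots))
    return left + right

def tac(t, coeffs, k, knots):
    return sum(c * bspline_basis(i, k, t, knots) for i, c in enumerate(coeffs))

knots = [0, 0, 0, 10, 20, 30, 30, 30]      # clamped knot vector, order k = 3
coeffs = [1.0, 2.0, 4.0, 3.0, 1.0]         # 5 basis functions = len(knots) - k
value = tac(15.0, coeffs, 3, knots)
```

Clamping the end knots makes the basis a partition of unity on the scan interval; more interior knots let the TAC change faster, at the cost of noise, exactly the trade-off the abstract describes.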

  16. Trainability of muscular activity level during maximal voluntary co-contraction: comparison between bodybuilders and nonathletes.

    Directory of Open Access Journals (Sweden)

    Sumiaki Maeo

Antagonistic muscle pairs cannot be fully activated simultaneously, even with maximal effort, under conditions of voluntary co-contraction, and their muscular activity levels are always below those during agonist contraction with maximal voluntary effort (MVE). Whether the muscular activity level during the task has trainability remains unclear. The present study examined this issue by comparing the muscular activity level during maximal voluntary co-contraction for highly experienced bodybuilders, who frequently perform voluntary co-contraction in their training programs, with that for untrained individuals (nonathletes). The electromyograms (EMGs) of biceps brachii and triceps brachii muscles during maximal voluntary co-contraction of elbow flexors and extensors were recorded in 11 male bodybuilders and 10 nonathletes, and normalized to the values obtained during the MVE of agonist contraction for each of the corresponding muscles (% EMGMVE). The involuntary coactivation level in antagonist muscle during the MVE of agonist contraction was also calculated. In both muscles, % EMGMVE values during the co-contraction task for bodybuilders were significantly higher (P<0.01) than those for nonathletes (biceps brachii: 66±14% in bodybuilders vs. 46±13% in nonathletes, triceps brachii: 74±16% vs. 57±9%). There was a significant positive correlation between the length of bodybuilding experience and muscular activity level during the co-contraction task (r = 0.653, P = 0.03). Involuntary antagonist coactivation level during MVE of agonist contraction was not different between the two groups. The current result indicates that long-term participation in voluntary co-contraction training progressively enhances muscular activity during maximal voluntary co-contraction.
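The % EMGMVE normalization described above is a simple ratio: task EMG amplitude expressed as a percentage of the amplitude during the agonist MVE. A sketch with invented RMS values:

```python
# EMG amplitude during a task normalized to the maximal voluntary effort (MVE)
# reference, as a percentage (% EMG_MVE). RMS values here are made up.

def percent_emg_mve(emg_task_rms, emg_mve_rms):
    return 100.0 * emg_task_rms / emg_mve_rms

value = percent_emg_mve(0.33, 0.50)
```

Normalizing to each subject's own MVE is what allows activation levels to be compared across muscles and across the bodybuilder and nonathlete groups.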

  17. Trainability of Muscular Activity Level during Maximal Voluntary Co-Contraction: Comparison between Bodybuilders and Nonathletes

    Science.gov (United States)

    Maeo, Sumiaki; Takahashi, Takumi; Takai, Yohei; Kanehisa, Hiroaki

    2013-01-01

Antagonistic muscle pairs cannot be fully activated simultaneously, even with maximal effort, under conditions of voluntary co-contraction, and their muscular activity levels are always below those during agonist contraction with maximal voluntary effort (MVE). Whether the muscular activity level during the task has trainability remains unclear. The present study examined this issue by comparing the muscular activity level during maximal voluntary co-contraction for highly experienced bodybuilders, who frequently perform voluntary co-contraction in their training programs, with that for untrained individuals (nonathletes). The electromyograms (EMGs) of biceps brachii and triceps brachii muscles during maximal voluntary co-contraction of elbow flexors and extensors were recorded in 11 male bodybuilders and 10 nonathletes, and normalized to the values obtained during the MVE of agonist contraction for each of the corresponding muscles (% EMGMVE). The involuntary coactivation level in antagonist muscle during the MVE of agonist contraction was also calculated. In both muscles, % EMGMVE values during the co-contraction task for bodybuilders were significantly higher (P<0.01) than those for nonathletes (biceps brachii: 66±14% in bodybuilders vs. 46±13% in nonathletes, triceps brachii: 74±16% vs. 57±9%). There was a significant positive correlation between the length of bodybuilding experience and muscular activity level during the co-contraction task (r = 0.653, P = 0.03). Involuntary antagonist coactivation level during MVE of agonist contraction was not different between the two groups. The current result indicates that long-term participation in voluntary co-contraction training progressively enhances muscular activity during maximal voluntary co-contraction. PMID:24260233

  18. Avoiding Optimal Mean ℓ2,1-Norm Maximization-Based Robust PCA for Reconstruction.

    Science.gov (United States)

    Luo, Minnan; Nie, Feiping; Chang, Xiaojun; Yang, Yi; Hauptmann, Alexander G; Zheng, Qinghua

    2017-04-01

Robust principal component analysis (PCA) is one of the most important dimension-reduction techniques for handling high-dimensional data with outliers. However, most existing robust PCA methods presuppose that the mean of the data is zero and incorrectly utilize the average of the data as the optimal mean of robust PCA. In fact, this assumption holds only for the squared ℓ2-norm-based traditional PCA. In this letter, we equivalently reformulate the objective of conventional PCA and learn the optimal projection directions by maximizing the sum of projected differences between each pair of instances based on the ℓ2,1-norm. The proposed method is robust to outliers and also invariant to rotation. More importantly, the reformulated objective not only automatically avoids the calculation of the optimal mean and makes the assumption of centered data unnecessary, but also theoretically connects to the minimization of reconstruction error. To solve the proposed nonsmooth problem, we exploit an efficient optimization algorithm to soften the contributions from outliers by reweighting each data point iteratively. We theoretically analyze the convergence and computational complexity of the proposed algorithm. Extensive experimental results on several benchmark data sets illustrate the effectiveness and superiority of the proposed method.
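The ℓ2,1 norm central to this formulation is, by definition, the sum of the row-wise Euclidean norms of a matrix; outlier rows therefore contribute linearly rather than quadratically. A direct sketch with made-up values:

```python
# l2,1 norm of a matrix: sum over rows of each row's Euclidean (l2) norm.
# Unlike the squared Frobenius norm, an outlier row grows the value only
# linearly, which is the source of the robustness.

def l21_norm(M):
    return sum(sum(v * v for v in row) ** 0.5 for row in M)

M = [[3.0, 4.0],                           # row norm 5
     [0.0, 0.0],                           # row norm 0
     [5.0, 12.0]]                          # row norm 13
value = l21_norm(M)
```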

  19. Direct reconstruction of the source intensity distribution of a clinical linear accelerator using a maximum likelihood expectation maximization algorithm.

    Science.gov (United States)

    Papaconstadopoulos, P; Levesque, I R; Maglieri, R; Seuntjens, J

    2016-02-07

Direct determination of the source intensity distribution of clinical linear accelerators is still a challenging problem for small field beam modeling. Current techniques most often involve special equipment and are difficult to implement in the clinic. In this work we present a maximum-likelihood expectation-maximization (MLEM) approach to the source reconstruction problem utilizing small fields and a simple experimental set-up. The MLEM algorithm iteratively ray-traces photons from the source plane to the exit plane and extracts corrections based on photon fluence profile measurements. The photon fluence profiles were determined by dose profile film measurements in air using a high density thin foil as build-up material and an appropriate point spread function (PSF). The effect of other beam parameters and scatter sources was minimized by using the smallest field size ([Formula: see text] cm²). The source occlusion effect was reproduced by estimating the position of the collimating jaws during this process. The method was first benchmarked against simulations for a range of typical accelerator source sizes. The sources were reconstructed with an accuracy better than 0.12 mm in full width at half maximum (FWHM) relative to the respective electron sources incident on the target. The estimated jaw positions agreed within 0.2 mm with the expected values. The reconstruction technique was also tested against measurements on a Varian Novalis Tx linear accelerator and compared to a previously commissioned Monte Carlo model. The reconstructed FWHM of the source agreed within 0.03 mm and 0.11 mm with the commissioned electron source in the crossplane and inplane orientations, respectively. The impact of the jaw positioning, experimental and PSF uncertainties on the reconstructed source distribution was evaluated, with the former presenting the dominant effect.

  20. Similarity-regulation of OS-EM for accelerated SPECT reconstruction

    Science.gov (United States)

    Vaissier, P. E. B.; Beekman, F. J.; Goorden, M. C.

    2016-06-01

Ordered subsets expectation maximization (OS-EM) is widely used to accelerate image reconstruction in single photon emission computed tomography (SPECT). Speedup of OS-EM over maximum likelihood expectation maximization (ML-EM) is close to the number of subsets used. Although a high number of subsets can shorten reconstruction times significantly, it can also cause severe image artifacts such as improper erasure of reconstructed activity if projections contain few counts. We recently showed that such artifacts can be prevented by using a count-regulated OS-EM (CR-OS-EM) algorithm which automatically adapts the number of subsets for each voxel based on the estimated number of counts that the voxel contributed to the projections. While CR-OS-EM reached high speedup over ML-EM in high-activity regions of images, speed in low-activity regions could still be very slow. In this work we propose similarity-regulated OS-EM (SR-OS-EM) as a much faster alternative to CR-OS-EM. SR-OS-EM also automatically and locally adapts the number of subsets, but it uses a different criterion for subset regulation: the number of subsets that is used for updating an individual voxel depends on how similarly the reconstruction algorithm would update the estimated activity in that voxel with different subsets. Reconstructions of an image quality phantom and in vivo scans show that SR-OS-EM retains all of the favorable properties of CR-OS-EM, while reconstruction speed can be up to an order of magnitude higher in low-activity regions. Moreover, our results suggest that SR-OS-EM can be operated with identical reconstruction parameters (including the number of iterations) for a wide range of count levels, which can be an additional advantage from a user perspective since users would only have to post-filter an image to present it at an appropriate noise level.
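The subset structure that all of these variants build on can be sketched generically: plain OS-EM applies one MLEM-style update per subset of projections, so one pass over the data performs as many updates as there are subsets. This is textbook OS-EM on a toy system, not the SR-OS-EM regulation scheme itself.

```python
# Plain OS-EM: the rays are split into interleaved subsets, and the image is
# updated once per subset within each full iteration.

def os_em(A, y, n_subsets, n_iter=20):
    n_rays, n_pix = len(A), len(A[0])
    subsets = [list(range(s, n_rays, n_subsets)) for s in range(n_subsets)]
    x = [1.0] * n_pix
    for _ in range(n_iter):
        for sub in subsets:
            sens = [sum(A[i][j] for i in sub) for j in range(n_pix)]
            yhat = {i: sum(A[i][j] * x[j] for j in range(n_pix)) for i in sub}
            for j in range(n_pix):
                if sens[j] > 0:
                    num = sum(A[i][j] * y[i] / yhat[i]
                              for i in sub if yhat[i] > 0)
                    x[j] *= num / sens[j]
    return x

A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 1.0]]
true_x = [3.0, 2.0]
y = [sum(a * t for a, t in zip(row, true_x)) for row in A]
x_hat = os_em(A, y, n_subsets=2, n_iter=50)
```

With consistent data the true image is a fixed point of every subset update; the artifacts discussed in the abstract arise when low-count subsets disagree, which is what count- or similarity-regulated subset selection is designed to prevent.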

  1. Evaluation of Parallel Level Sets and Bowsher's Method as Segmentation-Free Anatomical Priors for Time-of-Flight PET Reconstruction.

    Science.gov (United States)

    Schramm, Georg; Holler, Martin; Rezaei, Ahmadreza; Vunckx, Kathleen; Knoll, Florian; Bredies, Kristian; Boada, Fernando; Nuyts, Johan

    2018-02-01

    In this article, we evaluate Parallel Level Sets (PLS) and Bowsher's method as segmentation-free anatomical priors for regularized brain positron emission tomography (PET) reconstruction. We derive the proximity operators for two PLS priors and use the EM-TV algorithm in combination with the first-order primal-dual algorithm by Chambolle and Pock to solve the non-smooth optimization problem for PET reconstruction with PLS regularization. In addition, we compare the performance of two PLS versions against the symmetric and asymmetric Bowsher priors with quadratic and relative difference penalty functions. To this end, we first evaluate reconstructions of 30 noise realizations of simulated PET data derived from a real hybrid positron emission tomography/magnetic resonance imaging (PET/MR) acquisition in terms of regional bias and noise. Second, we evaluate reconstructions of a real brain PET/MR data set acquired on a GE Signa time-of-flight PET/MR in a similar way. The reconstructions of simulated and real 3D PET/MR data show that all priors were superior to post-smoothed maximum likelihood expectation maximization with ordered subsets (OSEM) in terms of bias-noise characteristics in different regions of interest where the PET uptake follows anatomical boundaries. Our implementation of the asymmetric Bowsher prior showed slightly superior performance compared with the two versions of PLS and the symmetric Bowsher prior. At very high regularization weights, all investigated anatomical priors suffer from the transfer of non-shared gradients.

  2. Principles and reconstruction of the ancient sea levels during the Quaternary

    International Nuclear Information System (INIS)

    Martin, L.; Flexor, J.M.; Suguio, K.

    1986-01-01

    This work focuses on the multiple aspects related to the reconstruction of ancient sea levels during the Quaternary. Relative sea-level fluctuations are produced both by true variations of the sea level (eustasy) and by changes in the land level (tectonism and isostasy). Changes in relative level are reconstructed from several lines of evidence of these fluctuations, which must be situated in time and space. To define their situation in space, it is necessary to know their present altitude relative to their original altitude, that is, to determine their position relative to sea level at the time of their formation or sedimentation. Their situation in time is determined by dating the moment of their formation or sedimentation (isotopic, archeological and other methods). When numerous ancient levels can be reconstructed, spread over a considerable time interval, it is possible to delineate the sea-level fluctuation curve for that period. (C.D.G.) [pt

  3. Maximizing results for lipofilling in facial reconstruction.

    Science.gov (United States)

    Barret, Juan P; Sarobe, Neus; Grande, Nelida; Vila, Delia; Palacin, Jose M

    2009-07-01

    Lipostructure (also known as structural fat grafts, lipofilling, or fat grafting) has become a technique with a good reputation and reproducible results. The application of this technology in patients undergoing reconstruction is a novel surgical alternative. Obtaining good results in this patient population is very difficult, but the application of small fat grafts with a strict Coleman technique produces long-term cosmetic effects. Adult-derived stem cells have been pointed out as important effectors of this regenerative technology, and future research should move in this direction.

  4. Sea level reconstructions from altimetry and tide gauges using independent component analysis

    Science.gov (United States)

    Brunnabend, Sandra-Esther; Kusche, Jürgen; Forootan, Ehsan

    2017-04-01

    Many reconstructions of global and regional sea level rise derived from tide gauges and satellite altimetry used the method of empirical orthogonal functions (EOF) to reduce noise, improve the spatial resolution of the reconstructed outputs, and investigate the different signals in climate time series. However, the second-order EOF method has some limitations, e.g. in the separation of individual physical signals into different modes of sea level variations and in the capability to physically interpret the different modes, as they are assumed to be orthogonal. Therefore, we investigate the use of the more advanced statistical signal decomposition technique called independent component analysis (ICA) to reconstruct global and regional sea level change from satellite altimetry and tide gauge records. Our results indicate that the chosen method has almost no influence on the reconstruction of global mean sea level change (1.6 mm/yr from 1960-2010 and 2.9 mm/yr from 1993-2013); only the number of modes needed for the reconstruction differs. Using the ICA method is advantageous for separating independent climate variability signals from regional sea level variations, as the mixing problem of the EOF method is strongly reduced. As an example, the modes most dominated by the El Niño-Southern Oscillation (ENSO) signal are compared. Regional sea level changes near Tianjin, China, Los Angeles, USA, and Majuro, Marshall Islands are reconstructed and the contributions from ENSO are identified.
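    The EOF stage that both reconstruction variants start from is a plain singular value decomposition of the anomaly field. The sketch below shows that stage on a synthetic (time, space) field; the ICA rotation the abstract advocates (e.g. FastICA) is deliberately omitted, and all data here are invented.

```python
import numpy as np

def eof_decompose(field, n_modes):
    """EOF analysis of a (time, space) anomaly field via SVD.

    Returns temporal principal components and orthogonal spatial EOF
    patterns; a sea-level reconstruction then fits tide-gauge records
    to the leading patterns to extend the field back in time.
    """
    anomaly = field - field.mean(axis=0)       # remove time mean per location
    U, s, Vt = np.linalg.svd(anomaly, full_matrices=False)
    pcs = U[:, :n_modes] * s[:n_modes]         # PC time series, scaled
    eofs = Vt[:n_modes]                        # orthonormal spatial patterns
    return pcs, eofs

# Synthetic field: two separable space-time modes plus weak noise.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 200)
space = np.linspace(0.0, 1.0, 50)
mode1 = np.outer(np.sin(2 * np.pi * t), np.cos(np.pi * space))
mode2 = np.outer(0.3 * np.cos(np.pi * t), np.sin(2 * np.pi * space))
field = mode1 + mode2 + 0.01 * rng.standard_normal((200, 50))
pcs, eofs = eof_decompose(field, n_modes=2)
recon = pcs @ eofs + field.mean(axis=0)        # rank-2 reconstruction
```

    The "mixing problem" the abstract mentions is visible here: the two leading EOFs span the right subspace but need not coincide with the physical modes, which is what an ICA rotation of the PCs tries to repair.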

  5. Continuous sea-level reconstructions beyond the Pleistocene: improving the Mediterranean sea-level method

    Science.gov (United States)

    Grant, K.; Rohling, E. J.; Amies, J.

    2017-12-01

    Sea-level (SL) reconstructions over glacial-interglacial timeframes are critical for understanding the equilibrium response of ice sheets to sustained warming. In particular, continuous and high-resolution SL records are essential for accurately quantifying 'natural' rates of SL rise. Global SL changes are well-constrained since the last glacial maximum (∼20,000 years ago, 20 ky) by radiometrically-dated corals and paleoshoreline data, and fairly well-constrained over the last glacial cycle (∼150 ky). Prior to that, however, studies of ice-volume:SL relationships tend to rely on benthic δ¹⁸O, as geomorphological evidence is far more sparse and less reliably dated. An alternative SL reconstruction method (the 'marginal basin' approach) was developed for the Red Sea over 500 ky, and recently attempted for the Mediterranean over 5 My (Rohling et al., 2014, Nature). This method exploits the strong sensitivity of seawater δ¹⁸O in these basins to SL changes in the relatively narrow and shallow straits which connect the basins with the open ocean. However, the initial Mediterranean SL method did not resolve sea-level highstands during Northern Hemisphere insolation maxima, when African monsoon run-off - strongly depleted in δ¹⁸O - reached the Mediterranean. Here, we present improvements to the 'marginal basin' sea-level reconstruction method. These include a new 'Med-Red SL stack', which combines new probabilistic Mediterranean and Red Sea sea-level stacks spanning the last 500 ky. We also show how a box model-data comparison of water-column δ¹⁸O changes over a monsoon interval allows us to quantify the monsoon versus SL δ¹⁸O imprint on Mediterranean foraminiferal carbonate δ¹⁸O records. This paves the way for a more accurate and fully continuous SL reconstruction extending back through the Pliocene.

  6. Phenomenology of maximal and near-maximal lepton mixing

    International Nuclear Information System (INIS)

    Gonzalez-Garcia, M. C.; Pena-Garay, Carlos; Nir, Yosef; Smirnov, Alexei Yu.

    2001-01-01

    The possible existence of maximal or near-maximal lepton mixing constitutes an intriguing challenge for fundamental theories of flavor. We study the phenomenological consequences of maximal and near-maximal mixing of the electron neutrino with other (x = tau and/or muon) neutrinos. We describe the deviations from maximal mixing in terms of a parameter ε ≡ 1 - 2sin²θ_ex and quantify the present experimental status for |ε|. The main constraint on ν_e mixing comes from solar neutrino experiments. We find that the global analysis of solar neutrino data allows maximal mixing with confidence level better than 99% for 10⁻⁸ eV² ≲ Δm² ≲ 2×10⁻⁷ eV². In the mass ranges Δm² ≳ 1.5×10⁻⁵ eV² and 4×10⁻¹⁰ eV² ≲ Δm² ≲ 2×10⁻⁷ eV², the full interval of |ε| is allowed. We also discuss near-maximal ν_e mixing in atmospheric neutrinos, supernova neutrinos, and neutrinoless double beta decay

  7. An information maximization model of eye movements

    Science.gov (United States)

    Renninger, Laura Walker; Coughlan, James; Verghese, Preeti; Malik, Jitendra

    2005-01-01

    We propose a sequential information maximization model as a general strategy for programming eye movements. The model reconstructs high-resolution visual information from a sequence of fixations, taking into account the fall-off in resolution from the fovea to the periphery. From this framework we get a simple rule for predicting fixation sequences: after each fixation, fixate next at the location that minimizes uncertainty (maximizes information) about the stimulus. By comparing our model performance to human eye movement data and to predictions from a saliency and random model, we demonstrate that our model is best at predicting fixation locations. Modeling additional biological constraints will improve the prediction of fixation sequences. Our results suggest that information maximization is a useful principle for programming eye movements.

  8. Titanium template for scaphoid reconstruction.

    Science.gov (United States)

    Haefeli, M; Schaefer, D J; Schumacher, R; Müller-Gerbl, M; Honigmann, P

    2015-06-01

    Reconstruction of a non-united scaphoid with a humpback deformity involves resection of the non-union followed by bone grafting and fixation of the fragments. Intraoperative control of the reconstruction is difficult owing to the complex three-dimensional shape of the scaphoid and the other carpal bones overlying the scaphoid on lateral radiographs. We developed a titanium template that fits exactly to the surfaces of the proximal and distal scaphoid poles to define their position relative to each other after resection of the non-union. The templates were designed on three-dimensional computed tomography reconstructions and manufactured using selective laser melting technology. Ten conserved human wrists were used to simulate the reconstruction. The achieved precision measured as the deviation of the surface of the reconstructed scaphoid from its virtual counterpart was good in five cases (maximal difference 1.5 mm), moderate in one case (maximal difference 3 mm) and inadequate in four cases (difference more than 3 mm). The main problems were attributed to the template design and can be avoided by improved pre-operative planning, as shown in a clinical case. © The Author(s) 2014.

  9. Stable reconstruction of Arctic sea level for the 1950-2010 period

    OpenAIRE

    Svendsen, Peter Limkilde; Andersen, Ole Baltazar; Nielsen, Allan Aasbjerg

    2016-01-01

    Reconstruction of historical Arctic sea level is generally difficult due to the limited coverage and quality of both tide gauge and altimetry data in the area. Here a strategy to achieve a stable and plausible reconstruction of Arctic sea level from 1950 to today is presented. This work is based on the combination of tide gauge records and a new 20-year reprocessed satellite altimetry derived sea level pattern. Hence the study is limited to the area covered by satellite altimetry (68°N and 82...

  10. Statistical selection of tide gauges for Arctic sea-level reconstruction

    DEFF Research Database (Denmark)

    Svendsen, Peter Limkilde; Andersen, Ole Baltazar; Nielsen, Allan Aasbjerg

    2015-01-01

    In this paper, we seek an appropriate selection of tide gauges for Arctic Ocean sea-level reconstruction based on a combination of empirical criteria and statistical properties (leverages). Tide gauges provide the only in situ observations of sea level prior to the altimetry era. However, tide...... the "influence" of each Arctic tide gauge on the EOF-based reconstruction through the use of statistical leverage and use this as an indication in selecting appropriate tide gauges, in order to procedurally identify poor-quality data while still including as much data as possible. To accommodate sparse...

  11. Reproducibility of F18-FDG PET radiomic features for different cervical tumor segmentation methods, gray-level discretization, and reconstruction algorithms.

    Science.gov (United States)

    Altazi, Baderaldeen A; Zhang, Geoffrey G; Fernandez, Daniel C; Montejo, Michael E; Hunt, Dylan; Werner, Joan; Biagioli, Matthew C; Moros, Eduardo G

    2017-11-01

    Site-specific investigations of the role of radiomics in cancer diagnosis and therapy are emerging. We evaluated the reproducibility of radiomic features extracted from fluorine-18 fluorodeoxyglucose (¹⁸F-FDG) PET images for three parameters: manual versus computer-aided segmentation methods, gray-level discretization, and PET image reconstruction algorithms. Our cohort consisted of pretreatment PET/CT scans from 88 cervical cancer patients. Two board-certified radiation oncologists manually segmented the metabolic tumor volume (MTV₁ and MTV₂) for each patient. For comparison, we used a graphical-based method to generate semiautomated segmented volumes (GBSV). To address any perturbations in radiomic feature values, we down-sampled the tumor volumes into three gray-levels: 32, 64, and 128 from the original gray-level of 256. Finally, we analyzed the effect on radiomic features on PET images of eight patients due to four PET 3D-reconstruction algorithms: the maximum likelihood-ordered subset expectation maximization (OSEM) iterative reconstruction (IR) method, Fourier rebinning-ML-OSEM (FOREIR), FORE-filtered back projection (FOREFBP), and the 3D-Reprojection (3DRP) analytical method. We extracted 79 features from all segmentation methods, gray-levels of down-sampled volumes, and PET reconstruction algorithms. The features were extracted using gray-level co-occurrence matrices (GLCM), gray-level size zone matrices (GLSZM), gray-level run-length matrices (GLRLM), neighborhood gray-tone difference matrices (NGTDM), shape-based features (SF), and intensity histogram features (IHF). We computed the Dice coefficient between each MTV and GBSV to measure segmentation accuracy. Coefficient values close to one indicate high agreement, and values close to zero indicate low agreement. We evaluated the effect on radiomic features by calculating the mean percentage differences (d̄) between feature values measured from each pair of parameter elements (i.e. segmentation methods: MTV

  12. Entropy and transverse section reconstruction

    International Nuclear Information System (INIS)

    Gullberg, G.T.

    1976-01-01

    A new approach to the reconstruction of a transverse section using projection data from multiple views incorporates the concept of maximum entropy. The principle of maximizing information entropy embodies the assurance of minimizing bias or prejudice in the reconstruction. Maximum entropy provides a necessary condition for the reconstructed image. This entropy criterion is most appropriate for 3-D reconstruction of objects from projections where the system is underdetermined or the data are limited statistically. This is the case in nuclear medicine, where time limitations in patient studies do not yield sufficient projections
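    The entropy criterion can be illustrated with MART (multiplicative algebraic reconstruction technique), a classical scheme whose multiplicative updates keep the image positive and whose fixed point, for consistent data started from a uniform image, is commonly identified with a maximum-entropy image matching the projections. This is a generic toy sketch (invented 2x2 system), not the specific algorithm of the abstract.

```python
import numpy as np

def mart(A, y, n_iters=200, relax=0.5):
    """Cyclic MART: for each ray i, scale the pixels it intersects by
    (measured / computed) ** (relax * A[i]), preserving positivity."""
    x = np.ones(A.shape[1])
    for _ in range(n_iters):
        for i in range(A.shape[0]):
            proj = A[i] @ x
            if proj > 0 and y[i] > 0:
                x = x * (y[i] / proj) ** (relax * A[i])
    return x

# Underdetermined toy: three ray sums over a flattened 2x2 image,
# so infinitely many images fit the data; MART picks a least-biased one.
A = np.array([[1.0, 1.0, 0.0, 0.0],   # top row sum
              [0.0, 0.0, 1.0, 1.0],   # bottom row sum
              [1.0, 0.0, 1.0, 0.0]])  # left column sum
y = np.array([4.0, 2.0, 3.0])
x = mart(A, y)
```

    The point of the example is the abstract's underdetermined case: with fewer rays than pixels the data alone cannot fix the image, and the entropy principle supplies the missing selection rule.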

  13. Evaluation of reconstruction techniques in regional cerebral blood flow SPECT using trade-off plots: a Monte Carlo study.

    Science.gov (United States)

    Olsson, Anna; Arlig, Asa; Carlsson, Gudrun Alm; Gustafsson, Agnetha

    2007-09-01

    The image quality of single photon emission computed tomography (SPECT) depends on the reconstruction algorithm used. The purpose of the present study was to evaluate parameters in ordered subset expectation maximization (OSEM) and to compare systematically with filtered back-projection (FBP) for reconstruction of regional cerebral blood flow (rCBF) SPECT, incorporating attenuation and scatter correction. The evaluation was based on the trade-off between contrast recovery and statistical noise using different subset sizes, numbers of iterations and filter parameters. Monte Carlo simulated SPECT studies of a digital human brain phantom were used. The contrast recovery was calculated as measured contrast divided by true contrast. Statistical noise in the reconstructed images was calculated as the coefficient of variation in pixel values. A constant contrast level was reached above 195 equivalent maximum likelihood expectation maximization iterations. The choice of subset size was not crucial as long as there were ≥2 projections per subset. The OSEM reconstruction was found to give 5-14% higher contrast recovery than FBP for all clinically relevant noise levels in rCBF SPECT. The Butterworth filter, power 6, achieved the highest stable contrast recovery level at all clinically relevant noise levels. The cut-off frequency should be chosen according to the noise level accepted in the image. Trade-off plots are shown to be a practical way of deciding the number of iterations and subset size for the OSEM reconstruction and can be used for other examination types in nuclear medicine.
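    A trade-off plot of the kind used here is just contrast recovery plotted against noise (coefficient of variation) for a family of reconstruction or filter settings. The toy below sweeps a moving-average post-filter width on synthetic 1D data to generate such points; the data, the filter and all values are invented stand-ins for the study's Butterworth filter and OSEM settings.

```python
import numpy as np

def box_smooth(img, width):
    """Moving-average post-filter (a crude stand-in for Butterworth)."""
    return np.convolve(img, np.ones(width) / width, mode="same")

# Synthetic 1D 'slice': hot region (true contrast 1.0) on a noisy background.
rng = np.random.default_rng(3)
true = np.full(400, 10.0)
true[190:210] = 20.0
noisy = true + rng.normal(0.0, 2.0, true.size)
hot, bkg = slice(195, 205), slice(30, 170)

points = []  # (noise CV, contrast recovery) pairs for the trade-off plot
for width in (1, 5, 25):
    img = box_smooth(noisy, width)
    contrast = (img[hot].mean() - img[bkg].mean()) / img[bkg].mean()
    recovery = contrast / 1.0                     # divide by true contrast
    noise_cv = img[bkg].std() / img[bkg].mean()   # coefficient of variation
    points.append((noise_cv, recovery))
```

    Heavier smoothing moves a point left (less noise) but also down (less contrast recovery); choosing an operating point on that curve is exactly the decision the trade-off plots support.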

  14. Maximizing and customer loyalty: Are maximizers less loyal?

    Directory of Open Access Journals (Sweden)

    Linda Lai

    2011-06-01

    Despite their efforts to choose the best of all available solutions, maximizers seem to be more inclined than satisficers to regret their choices and to experience post-decisional dissonance. Maximizers may therefore be expected to change their decisions more frequently and hence exhibit lower customer loyalty to providers of products and services compared to satisficers. Findings from the study reported here (N = 1978) support this prediction. Maximizers reported significantly higher intentions to switch to another service provider (television provider) than satisficers. Maximizers' intentions to switch appear to be intensified and mediated by higher proneness to regret, increased desire to discuss relevant choices with others, higher levels of perceived knowledge of alternatives, and higher ego involvement in the end product, compared to satisficers. Opportunities for future research are suggested.

  15. Interval-based reconstruction for uncertainty quantification in PET

    Science.gov (United States)

    Kucharczak, Florentin; Loquin, Kevin; Buvat, Irène; Strauss, Olivier; Mariano-Goulart, Denis

    2018-02-01

    A new directed interval-based tomographic reconstruction algorithm, called non-additive interval-based expectation maximization (NIBEM), is presented. It uses non-additive modeling of the forward operator that provides intervals instead of single-valued projections. The detailed approach is an extension of the maximum likelihood expectation maximization algorithm based on intervals. The main motivation for this extension is that the resulting intervals have appealing properties for estimating the statistical uncertainty associated with the reconstructed activity values. After reviewing previously published theoretical concepts related to interval-based projectors, this paper describes the NIBEM algorithm and gives examples that highlight the properties and advantages of this interval-valued reconstruction.

  16. Reconstructing Northern Hemisphere upper-level fields during World War II

    Energy Technology Data Exchange (ETDEWEB)

    Broennimann, S. [Lunar and Planetary Laboratory, University of Arizona, PO Box 210092, Tucson, AZ 85721-0092 (United States); Luterbacher, J. [Institute of Geography, University of Bern, Bern (Switzerland); NCCR Climate, University of Bern, Bern (Switzerland)

    2004-05-01

    Monthly mean fields of temperature and geopotential height (GPH) from 700 to 100 hPa were statistically reconstructed for the extratropical Northern Hemisphere for the World War II period. The reconstruction was based on several hundred predictor variables, comprising temperature series from meteorological stations and gridded sea level pressure data (1939-1947) as well as a large amount of historical upper-air data (1939-1944). Statistical models were fitted in a calibration period (1948-1994) using the NCEP/NCAR Reanalysis data set as predictand. The procedure consists of a weighting scheme, principal component analyses on both the predictor variables and the predictand fields and multiple regression models relating the two sets of principal component time series to each other. According to validation experiments, the reconstruction skill in the 1939-1944 period is excellent for GPH at all levels and good for temperature up to 500 hPa, but somewhat worse for 300 hPa temperature and clearly worse for 100 hPa temperature. Regionally, high predictive skill is found over the midlatitudes of Europe and North America, but a lower quality over Asia, the subtropics, and the Arctic. Moreover, the quality is considerably better in winter than in summer. In the 1945-1947 period, reconstructions are useful up to 300 hPa for GPH and, in winter, up to 500 hPa for temperature. The reconstructed fields are presented for selected months and analysed from a dynamical perspective. It is demonstrated that the reconstructions provide a useful tool for the analysis of large-scale circulation features as well as stratosphere-troposphere coupling in the late 1930s and early 1940s. (orig.)

  17. Processing for maximizing the level of crystallinity in linear aromatic polyimides

    Science.gov (United States)

    St.clair, Terry L. (Inventor)

    1991-01-01

    The process of the present invention includes first treating a polyamide acid (such as LARC-TPI polyamide acid) in an amide-containing solvent (such as N-methyl pyrrolidone) with an aprotic organic base (such as triethylamine), followed by dehydrating with an organic dehydrating agent (such as acetic anhydride). The level of crystallinity in the linear aromatic polyimide so produced is maximized without any degradation in the molecular weight thereof.

  18. Evaluation of bias and variance in low-count OSEM list mode reconstruction

    International Nuclear Information System (INIS)

    Jian, Y; Carson, R E; Planeta, B

    2015-01-01

    Statistical algorithms have been widely used in PET image reconstruction. The maximum likelihood expectation maximization reconstruction has been shown to produce bias in applications where images are reconstructed from a relatively small number of counts. In this study, image bias and variability in low-count OSEM reconstruction are investigated on images reconstructed with the MOLAR (motion-compensation OSEM list-mode algorithm for resolution-recovery reconstruction) platform. A human brain scan ([¹¹C]AFM) and a NEMA phantom are used in the simulation and real experiments, respectively, for the HRRT and Biograph mCT. Image reconstructions were repeated with different combinations of subsets and iterations. Regions of interest were defined on low-activity and high-activity regions to evaluate the bias and noise at matched effective iteration numbers (iterations × subsets). Minimal negative biases and no positive biases were found at moderate count levels, and less than 5% negative bias was found using extremely low levels of counts (0.2 M NEC). At any given count level, other factors, such as subset numbers and frame-based scatter correction, may introduce small biases (1-5%) in the reconstructed images. The observed bias was substantially lower than that reported in the literature, perhaps due to the use of point spread function and/or other implementation methods in MOLAR. (paper)

  19. Evaluation of list-mode ordered subset expectation maximization image reconstruction for pixelated solid-state compton gamma camera with large number of channels

    Science.gov (United States)

    Kolstein, M.; De Lorenzo, G.; Chmeissani, M.

    2014-04-01

    The Voxel Imaging PET (VIP) Pathfinder project intends to show the advantages of using pixelated solid-state technology for nuclear medicine applications. It proposes designs for Positron Emission Tomography (PET), Positron Emission Mammography (PEM) and Compton gamma camera detectors with a large number of signal channels (of the order of 10⁶). For a Compton camera, especially one with a large number of readout channels, image reconstruction presents a major challenge. In this work, results are presented for the List-Mode Ordered Subset Expectation Maximization (LM-OSEM) image reconstruction algorithm on simulated data with the VIP Compton camera design. For the simulation, all realistic contributions to the spatial resolution are taken into account, including the Doppler broadening effect. The results show that even with a straightforward implementation of LM-OSEM, good images can be obtained for the proposed Compton camera design. Results are shown for various phantoms, including extended sources and with a distance between the field of view and the first detector plane equal to 100 mm, which corresponds to a realistic nuclear medicine environment.

  20. Clinical evaluation of iterative reconstruction (ordered-subset expectation maximization) in dynamic positron emission tomography: quantitative effects on kinetic modeling with N-13 ammonia in healthy subjects

    DEFF Research Database (Denmark)

    Hove, Jens D; Rasmussen, Rune; Freiberg, Jacob

    2008-01-01

    BACKGROUND: The purpose of this study was to investigate the quantitative properties of ordered-subset expectation maximization (OSEM) on kinetic modeling with nitrogen 13 ammonia compared with filtered backprojection (FBP) in healthy subjects. METHODS AND RESULTS: Cardiac N-13 ammonia positron...... emission tomography (PET) studies from 20 normal volunteers at rest and during dipyridamole stimulation were analyzed. Image data were reconstructed with either FBP or OSEM. FBP- and OSEM-derived input functions and tissue curves were compared together with the myocardial blood flow and spillover values...... and OSEM flow values were observed with a flow underestimation of 45% (rest/dipyridamole) in the septum and of 5% (rest) and 15% (dipyridamole) in the lateral myocardial wall. CONCLUSIONS: OSEM reconstruction of myocardial perfusion images with N-13 ammonia and PET produces high-quality images for visual...

  1. EM for phylogenetic topology reconstruction on nonhomogeneous data.

    Science.gov (United States)

    Ibáñez-Marcelo, Esther; Casanellas, Marta

    2014-06-17

    The reconstruction of the phylogenetic tree topology of four taxa is, still nowadays, one of the main challenges in phylogenetics. Its difficulties lie in considering not too restrictive evolutionary models, and correctly dealing with the long-branch attraction problem. The correct reconstruction of 4-taxon trees is crucial for making quartet-based methods work and being able to recover large phylogenies. We adapt the well-known expectation-maximization algorithm to evolutionary Markov models on phylogenetic 4-taxon trees. We then use this algorithm to estimate the substitution parameters, compute the corresponding likelihood, and infer the most likely quartet. In this paper we consider an expectation-maximization method for maximizing the likelihood of (time nonhomogeneous) evolutionary Markov models on trees. We study its success on reconstructing 4-taxon topologies and its performance as an input method in quartet-based phylogenetic reconstruction methods such as QFIT and QuartetSuite. Our results show that the method proposed here outperforms neighbor-joining and the usual (time-homogeneous continuous-time) maximum likelihood methods on 4-leaved trees with among-lineage instantaneous rate heterogeneity, and performs similarly to usual continuous-time maximum likelihood when the data satisfy the assumptions of both methods. The method presented in this paper is well suited for reconstructing the topology of any number of taxa via quartet-based methods and is highly accurate, especially regarding largely divergent trees and time nonhomogeneous data.
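    The E-step/M-step alternation this method relies on is easiest to see on a textbook toy: fitting a mixture of two biased coins from heads counts. This is not the authors' tree model, just the same algorithmic skeleton, with invented data; the hidden variable here is which coin produced each sequence, standing in for the unobserved states at internal tree nodes.

```python
import numpy as np

def em_two_coins(flips, n_iters=200):
    """EM for a mixture of two biased coins.

    flips: array of (heads, total) per sequence.
    E-step: posterior responsibility of each coin for each sequence.
    M-step: responsibility-weighted re-estimate of each coin's bias.
    """
    heads, n = flips[:, 0].astype(float), flips[:, 1].astype(float)
    p = np.array([0.3, 0.7])     # initial biases; must differ to break symmetry
    for _ in range(n_iters):
        # E-step: binomial kernels (the C(n,h) factor cancels when normalizing)
        lik = np.stack([pk ** heads * (1 - pk) ** (n - heads) for pk in p],
                       axis=1)
        resp = lik / lik.sum(axis=1, keepdims=True)
        # M-step: weighted heads fraction per coin
        p = (resp * heads[:, None]).sum(axis=0) / (resp * n[:, None]).sum(axis=0)
    return np.sort(p)

# Six sequences of 10 flips, drawn (by construction) from ~0.2 and ~0.8 coins.
flips = np.array([[2, 10], [1, 10], [3, 10], [8, 10], [9, 10], [7, 10]])
p = em_two_coins(flips)
```

    Each iteration provably does not decrease the likelihood, which is the same guarantee the phylogenetic EM above exploits when estimating substitution parameters per candidate quartet.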

  2. The constrained maximal expression level owing to haploidy shapes gene content on the mammalian X chromosome

    DEFF Research Database (Denmark)

    Hurst, Laurence D.; Ghanbarian, Avazeh T.; Forrest, Alistair R R

    2015-01-01

    that the trend is shaped by the maximal expression level not the breadth of expression per se. Importantly, a limit to the maximal expression level explains biased tissue of expression profiles of X-linked genes. Tissues whose tissue-specific genes are very highly expressed (e.g., secretory tissues, tissues...... abundant in structural proteins) are also tissues in which gene expression is relatively rare on the X chromosome. These trends cannot be fully accounted for in terms of alternative models of biased expression. In conclusion, the notion that it is hard for genes on the Therian X to be highly expressed...

  3. Arctic sea-level reconstruction analysis using recent satellite altimetry

    DEFF Research Database (Denmark)

    Svendsen, Peter Limkilde; Andersen, Ole Baltazar; Nielsen, Allan Aasbjerg

    2014-01-01

    We present a sea-level reconstruction for the Arctic Ocean using recent satellite altimetry data. The model, forced by historical tide gauge data, is based on empirical orthogonal functions (EOFs) from a calibration period; for this purpose, newly retracked satellite altimetry from ERS-1 and -2...... and Envisat has been used. Despite the limited coverage of these datasets, we have made a reconstruction up to 82 degrees north for the period 1950–2010. We place particular emphasis on determining appropriate preprocessing for the tide gauge data, and on validation of the model, including the ability...

  4. Quantitative SPECT reconstruction of iodine-123 data

    International Nuclear Information System (INIS)

    Gilland, D.R.; Jaszczak, R.J.; Greer, K.L.; Coleman, R.E.

    1991-01-01

    Many clinical and research studies in nuclear medicine require quantitation of iodine-123 (¹²³I) distribution for the determination of kinetics or localization. The objective of this study was to implement several reconstruction methods designed for single-photon emission computed tomography (SPECT) using ¹²³I and to evaluate their performance in terms of quantitative accuracy, image artifacts, and noise. The methods consisted of four attenuation and scatter compensation schemes incorporated into both the filtered backprojection/Chang (FBP) and maximum likelihood-expectation maximization (ML-EM) reconstruction algorithms. The methods were evaluated on data acquired of a phantom containing a hot sphere of ¹²³I activity in a lower-level background ¹²³I distribution and nonuniform density media. For both reconstruction algorithms, nonuniform attenuation compensation combined with either scatter subtraction or Metz filtering produced images that were quantitatively accurate to within 15% of the true value. The ML-EM algorithm demonstrated quantitative accuracy comparable to FBP and smaller relative noise magnitude for all compensation schemes

  5. Sub-maximal and maximal Yo-Yo intermittent endurance test level 2: heart rate response, reproducibility and application to elite soccer

    DEFF Research Database (Denmark)

    Bradley, Paul S; Mohr, Magni; Bendiksen, Mads

    2011-01-01

    to detect test-retest changes and discriminate between performance for different playing standards and positions in elite soccer. Elite (n = 148) and sub-elite male (n = 14) soccer players carried out the Yo-Yo IE2 test on several occasions over consecutive seasons. Test-retest coefficient of variation (CV......) in Yo-Yo IE2 test performance and heart rate after 6 min were 3.9% (n = 37) and 1.4% (n = 32), respectively. Elite male senior and youth U19 players' Yo-Yo IE2 performances were better (P ......The aims of this study were to (1) determine the reproducibility of sub-maximal and maximal versions of the Yo-Yo intermittent endurance test level 2 (Yo-Yo IE2 test), (2) assess the relationship between the Yo-Yo IE2 test and match performance and (3) quantify the sensitivity of the Yo-Yo IE2 test

  6. A Practical Algorithm for Reconstructing Level-1 Phylogenetic Networks

    NARCIS (Netherlands)

    K.T. Huber; L.J.J. van Iersel (Leo); S.M. Kelk (Steven); R. Suchecki

    2010-01-01

    Recently, much attention has been devoted to the construction of phylogenetic networks which generalize phylogenetic trees in order to accommodate complex evolutionary processes. Here we present an efficient, practical algorithm for reconstructing level-1 phylogenetic networks - a type of

  7. A practical algorithm for reconstructing level-1 phylogenetic networks

    NARCIS (Netherlands)

    Huber, K.T.; Iersel, van L.J.J.; Kelk, S.M.; Suchecki, R.

    2011-01-01

    Recently, much attention has been devoted to the construction of phylogenetic networks which generalize phylogenetic trees in order to accommodate complex evolutionary processes. Here, we present an efficient, practical algorithm for reconstructing level-1 phylogenetic networks - a type of network

  8. Anatomically-aided PET reconstruction using the kernel method.

    Science.gov (United States)

    Hutchcroft, Will; Wang, Guobao; Chen, Kevin T; Catana, Ciprian; Qi, Jinyi

    2016-09-21

    This paper extends the kernel method that was proposed previously for dynamic PET reconstruction to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction on a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region-of-interest quantification. Additionally, the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization algorithm.

  9. A maximal incremental effort alters tear osmolarity depending on the fitness level in military helicopter pilots.

    Science.gov (United States)

    Vera, Jesús; Jiménez, Raimundo; Madinabeitia, Iker; Masiulis, Nerijus; Cárdenas, David

    2017-10-01

    Fitness level modulates the physiological responses to exercise for a variety of indices. While intense bouts of exercise have been demonstrated to increase tear osmolarity (Tosm), it is not known if fitness level can affect the Tosm response to acute exercise. This study aims to compare the effect of a maximal incremental test on Tosm between trained and untrained military helicopter pilots. Nineteen military helicopter pilots (ten trained and nine untrained) performed a maximal incremental test on a treadmill. A tear sample was collected before and after physical effort to determine the exercise-induced changes in Tosm. The Bayesian statistical analysis demonstrated that Tosm significantly increased from 303.72 ± 6.76 to 310.56 ± 8.80 mmol/L after performance of a maximal incremental test. However, while the untrained group showed an acute Tosm rise (an increment of 12.33 mmol/L), the trained group maintained a stable Tosm across the physical effort (an increment of 1.45 mmol/L). There was a significant positive linear association between fat indices and Tosm changes (correlation coefficients [r] range: 0.77-0.89), whereas the Tosm changes displayed a negative relationship with cardiorespiratory capacity (VO2 max; r = -0.75) and performance parameters (r = -0.75 for velocity, and r = -0.67 for time to exhaustion). The findings from this study provide evidence that fitness level is a major determinant of the Tosm response to maximal incremental physical effort, showing a fairly linear association with several indices related to fitness level. A high fitness level seems to be beneficial in avoiding Tosm changes as a consequence of intense exercise. Copyright © 2017 Elsevier Inc. All rights reserved.

  10. Metal-induced streak artifact reduction using iterative reconstruction algorithms in x-ray computed tomography image of the dentoalveolar region.

    Science.gov (United States)

    Dong, Jian; Hayakawa, Yoshihiko; Kannenberg, Sven; Kober, Cornelia

    2013-02-01

    The objective of this study was to reduce metal-induced streak artifact on oral and maxillofacial x-ray computed tomography (CT) images by developing the fast statistical image reconstruction system using iterative reconstruction algorithms. Adjacent CT images often depict similar anatomical structures in thin slices. So, first, images were reconstructed using the same projection data of an artifact-free image. Second, images were processed by the successive iterative restoration method where projection data were generated from reconstructed image in sequence. Besides the maximum likelihood-expectation maximization algorithm, the ordered subset-expectation maximization algorithm (OS-EM) was examined. Also, small region of interest (ROI) setting and reverse processing were applied for improving performance. Both algorithms reduced artifacts instead of slightly decreasing gray levels. The OS-EM and small ROI reduced the processing duration without apparent detriments. Sequential and reverse processing did not show apparent effects. Two alternatives in iterative reconstruction methods were effective for artifact reduction. The OS-EM algorithm and small ROI setting improved the performance. Copyright © 2012 Elsevier Inc. All rights reserved.

  11. The Red Sea during the Last Glacial Maximum: implications for sea level reconstructions

    Science.gov (United States)

    Gildor, H.; Biton, E.; Peltier, W. R.

    2006-12-01

    The Red Sea (RS) is a semi-enclosed basin that is connected to the Indian Ocean via a narrow and shallow strait, is surrounded by arid areas, and exhibits high sensitivity to atmospheric changes and sea level reduction. We have used the MIT GCM to investigate the changes in the hydrography and circulation of the RS in response to reduced sea level, variability in the Indian monsoons, and changes in atmospheric temperature and humidity that occurred during the Last Glacial Maximum (LGM). The model results show high sensitivity to sea level reduction, especially in the salinity field (increasing with the reduction in sea level), together with a mild atmospheric impact. Sea level reduction decreases the stratification, increases subsurface temperatures, and alters the circulation pattern at the Strait of Bab el Mandab, which experiences a transition from submaximal flow to maximal flow. The reduction in sea level at the LGM alters the location of deep water formation, which shifts to an open-sea convective site in the northern part of the RS, compared to the present-day situation in which deep water is formed from the Gulf of Suez outflow. Our main result, based on both the GCM and a simple hydraulic control model that takes into account mixing processes at the Strait of Bab El Mandeb, is that sea level was reduced by only ~100 m in the Bab El Mandeb region during the LGM, i.e. the water depth at the Hanish sill (the shallowest part of the Strait of Bab el Mandab) was around 34 m. This result agrees with the recent reconstruction of the LGM low stand of the sea in this region based upon the ICE-5G (VM2) model of Peltier (2004).

  12. Boomerang flap reconstruction for the breast.

    Science.gov (United States)

    Baumholtz, Michael A; Al-Shunnar, Buthainah M; Dabb, Richard W

    2002-07-01

    The boomerang-shaped latissimus dorsi musculocutaneous flap offers a stable platform for breast reconstruction. It allows for maximal aesthetic results with minimal complications. The authors describe a skin paddle to obtain a larger volume than either the traditional elliptical skin paddle or the extended latissimus flap. There are three specific advantages to the boomerang design: large volume, conical shape (often lacking in the traditional skin paddle), and an acceptable donor scar. Thirty-eight flaps were performed. No reconstruction interfered with the patients' ongoing oncological regimens. The most common complication was seroma, which is consistent with other latissimus reconstructions.

  13. Sea level reconstruction from satellite altimetry and tide gauge data

    DEFF Research Database (Denmark)

    Svendsen, Peter Limkilde; Andersen, Ole Baltazar; Nielsen, Allan Aasbjerg

    2012-01-01

    Ocean satellite altimetry has provided global sets of sea level data for the last two decades, allowing determination of spatial patterns in global sea level. For reconstructions going back further than this period, tide gauge data can be used as a proxy. We examine different methods of combining satellite altimetry and tide gauge data using optimal weighting of tide gauge data, linear regression and EOFs, including automatic quality checks of the tide gauge time series. We attempt to augment the model using various proxies such as climate indices like the NAO and PDO, and investigate alternative ... for better sensitivity analysis with respect to spatial distribution, and tide gauge data are available around the Arctic Ocean, which may be important for a later high-latitude reconstruction.

  14. Skull defect reconstruction based on a new hybrid level set.

    Science.gov (United States)

    Zhang, Ziqun; Zhang, Ran; Song, Zhijian

    2014-01-01

    Skull defect reconstruction is an important aspect of surgical repair. Historically, a skull defect prosthesis was created by the mirroring technique, surface fitting, or formed templates. These methods are not based on the anatomy of the individual patient's skull, and therefore, the prosthesis cannot precisely correct the defect. This study presented a new hybrid level set model, taking into account both the global optimization region information and the local accuracy edge information, while avoiding re-initialization during the evolution of the level set function. Based on the new method, a skull defect was reconstructed, and the skull prosthesis was produced by rapid prototyping technology. This resulted in a prosthesis that matched the skull defect well, with excellent individual adaptation.

  15. 60-year Nordic and arctic sea level reconstruction based on a reprocessed two decade altimetric sea level record and tide gauges

    DEFF Research Database (Denmark)

    Svendsen, Peter Limkilde; Andersen, Ole Baltazar; Nielsen, Allan Aasbjerg

    Due to the sparsity and often poor quality of data, reconstructing Arctic sea level is highly challenging. We present a reconstruction of Arctic sea level covering 1950 to 2010, using the approaches from Church et al. (2004) and Ray and Douglas (2011). This involves decomposition of an altimetry...

  16. On maximal massive 3D supergravity

    OpenAIRE

    Bergshoeff , Eric A; Hohm , Olaf; Rosseel , Jan; Townsend , Paul K

    2010-01-01

    We construct, at the linearized level, the three-dimensional (3D) N = 4 supersymmetric "general massive supergravity" and the maximally supersymmetric N = 8 "new massive supergravity". We also construct the maximally supersymmetric linearized N = 7 topologically massive supergravity, although we expect N = 6 to be maximal at the non-linear level.

  17. Major inter-personal variation in the increase and maximal level of 25-hydroxy vitamin D induced by UVB

    DEFF Research Database (Denmark)

    Datta, Pameli; Philipsen, Peter A.; Olsen, Peter

    2016-01-01

    Vitamin D influences skeletal health as well as other aspects of human health. Even when the most obvious sources of variation such as solar UVB exposure, latitude, season, clothing habits, skin pigmentation and ethnicity are selected for, variation in the serum 25-hydroxy vitamin D (25(OH)D) response to UVB remains extensive and unexplained. Our study assessed the inter-personal variation in the 25(OH)D response to UVR and the maximal obtainable 25(OH)D level in 22 healthy participants (220 samples) with similar skin pigmentation during winter with negligible ambient UVB. Participants received identical UVB doses on identical body areas until a maximal level of 25(OH)D was reached. Major inter-personal variation in both the maximal obtainable UVB-induced 25(OH)D level (range 85–216 nmol l−1, mean 134 nmol l−1) and the total increase in 25(OH)D (range 3–139 nmol l−1, mean 48 nmol l−1) was found...

  18. A maximum-likelihood reconstruction algorithm for tomographic gamma-ray nondestructive assay

    International Nuclear Information System (INIS)

    Prettyman, T.H.; Estep, R.J.; Cole, R.A.; Sheppard, G.A.

    1994-01-01

    A new tomographic reconstruction algorithm for nondestructive assay with high-resolution gamma-ray spectroscopy (HRGS) is presented. The reconstruction problem is formulated using a maximum-likelihood approach in which the statistical structure of both the gross and continuum measurements used to determine the full-energy response in HRGS is precisely modeled. An accelerated expectation-maximization algorithm is used to determine the optimal solution. The algorithm is applied to safeguards and environmental assays of large samples (for example, 55-gal. drums) in which high continuum levels caused by Compton scattering are routinely encountered. Details of the implementation of the algorithm and a comparative study of the algorithm's performance are presented.
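Several of the records above build on maximum likelihood-expectation maximization (ML-EM) reconstruction. A minimal sketch of the classical ML-EM multiplicative update follows; it is not the accelerated or weighted variants described in these records, and the dense system matrix `A` and toy dimensions are illustrative assumptions.

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """Classical ML-EM update for emission tomography.

    A : (n_detectors, n_voxels) system matrix (probability of detection)
    y : (n_detectors,) measured counts
    """
    x = np.ones(A.shape[1])              # uniform, strictly positive start
    sens = A.sum(axis=0)                 # sensitivity image (back-projection of ones)
    for _ in range(n_iter):
        proj = A @ x                     # forward projection of current estimate
        ratio = y / np.maximum(proj, 1e-12)
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)  # multiplicative update
    return x
```

Because the update is multiplicative, voxels initialized positive stay nonnegative, which matches the Poisson count model these abstracts assume.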

  19. Image reconstruction of single photon emission computed tomography (SPECT) on a pebble bed reactor (PBR) using expectation maximization and exact inversion algorithms: Comparison study by means of numerical phantom

    Energy Technology Data Exchange (ETDEWEB)

    Razali, Azhani Mohd, E-mail: azhani@nuclearmalaysia.gov.my; Abdullah, Jaafar, E-mail: jaafar@nuclearmalaysia.gov.my [Plant Assessment Technology (PAT) Group, Industrial Technology Division, Malaysian Nuclear Agency, Bangi, 43000 Kajang (Malaysia)

    2015-04-29

    Single Photon Emission Computed Tomography (SPECT) is a well-known imaging technique used in medical application, and it is part of medical imaging modalities that made the diagnosis and treatment of disease possible. However, SPECT technique is not only limited to the medical sector. Many works are carried out to adapt the same concept by using high-energy photon emission to diagnose process malfunctions in critical industrial systems such as in chemical reaction engineering research laboratories, as well as in oil and gas, petrochemical and petrochemical refining industries. Motivated by vast applications of SPECT technique, this work attempts to study the application of SPECT on a Pebble Bed Reactor (PBR) using numerical phantom of pebbles inside the PBR core. From the cross-sectional images obtained from SPECT, the behavior of pebbles inside the core can be analyzed for further improvement of the PBR design. As the quality of the reconstructed image is largely dependent on the algorithm used, this work aims to compare two image reconstruction algorithms for SPECT, namely the Expectation Maximization Algorithm and the Exact Inversion Formula. The results obtained from the Exact Inversion Formula showed better image contrast and sharpness, and shorter computational time compared to the Expectation Maximization Algorithm.

  20. Image reconstruction of single photon emission computed tomography (SPECT) on a pebble bed reactor (PBR) using expectation maximization and exact inversion algorithms: Comparison study by means of numerical phantom

    International Nuclear Information System (INIS)

    Razali, Azhani Mohd; Abdullah, Jaafar

    2015-01-01

    Single Photon Emission Computed Tomography (SPECT) is a well-known imaging technique used in medical application, and it is part of medical imaging modalities that made the diagnosis and treatment of disease possible. However, SPECT technique is not only limited to the medical sector. Many works are carried out to adapt the same concept by using high-energy photon emission to diagnose process malfunctions in critical industrial systems such as in chemical reaction engineering research laboratories, as well as in oil and gas, petrochemical and petrochemical refining industries. Motivated by vast applications of SPECT technique, this work attempts to study the application of SPECT on a Pebble Bed Reactor (PBR) using numerical phantom of pebbles inside the PBR core. From the cross-sectional images obtained from SPECT, the behavior of pebbles inside the core can be analyzed for further improvement of the PBR design. As the quality of the reconstructed image is largely dependent on the algorithm used, this work aims to compare two image reconstruction algorithms for SPECT, namely the Expectation Maximization Algorithm and the Exact Inversion Formula. The results obtained from the Exact Inversion Formula showed better image contrast and sharpness, and shorter computational time compared to the Expectation Maximization Algorithm

  1. Image reconstruction of single photon emission computed tomography (SPECT) on a pebble bed reactor (PBR) using expectation maximization and exact inversion algorithms: Comparison study by means of numerical phantom

    Science.gov (United States)

    Razali, Azhani Mohd; Abdullah, Jaafar

    2015-04-01

    Single Photon Emission Computed Tomography (SPECT) is a well-known imaging technique used in medical application, and it is part of medical imaging modalities that made the diagnosis and treatment of disease possible. However, SPECT technique is not only limited to the medical sector. Many works are carried out to adapt the same concept by using high-energy photon emission to diagnose process malfunctions in critical industrial systems such as in chemical reaction engineering research laboratories, as well as in oil and gas, petrochemical and petrochemical refining industries. Motivated by vast applications of SPECT technique, this work attempts to study the application of SPECT on a Pebble Bed Reactor (PBR) using numerical phantom of pebbles inside the PBR core. From the cross-sectional images obtained from SPECT, the behavior of pebbles inside the core can be analyzed for further improvement of the PBR design. As the quality of the reconstructed image is largely dependent on the algorithm used, this work aims to compare two image reconstruction algorithms for SPECT, namely the Expectation Maximization Algorithm and the Exact Inversion Formula. The results obtained from the Exact Inversion Formula showed better image contrast and sharpness, and shorter computational time compared to the Expectation Maximization Algorithm.

  2. Variability of textural features in FDG PET images due to different acquisition modes and reconstruction parameters.

    Science.gov (United States)

    Galavis, Paulina E; Hollensen, Christian; Jallow, Ngoneh; Paliwal, Bhudatt; Jeraj, Robert

    2010-10-01

    Characterization of textural features (spatial distributions of image intensity levels) has been considered as a tool for automatic tumor segmentation. The purpose of this work is to study the variability of the textural features in PET images due to different acquisition modes and reconstruction parameters. Twenty patients with solid tumors underwent PET/CT scans on a GE Discovery VCT scanner, 45-60 minutes post-injection of 10 mCi of [(18)F]FDG. Scans were acquired in both 2D and 3D modes. For each acquisition the raw PET data was reconstructed using five different reconstruction parameters. Lesions were segmented on a default image using a threshold of 40% of maximum SUV. Fifty different texture features were calculated inside the tumors. The ranges of variation of the features were calculated with respect to the average value. The fifty textural features were classified based on the range of variation into three categories: small, intermediate and large variability. Features with small variability (range ≤ 5%) were entropy-first order, energy, maximal correlation coefficient (second-order feature) and low-gray level run emphasis (high-order feature). The features with intermediate variability (10% ≤ range ≤ 25%) were entropy-GLCM, sum entropy, high gray level run emphasis, gray level non-uniformity, small number emphasis, and entropy-NGL. The forty remaining features presented large variations (range > 30%). Textural features such as entropy-first order, energy, maximal correlation coefficient, and low-gray level run emphasis exhibited small variations due to different acquisition modes and reconstruction parameters. Features with low levels of variation are better candidates for reproducible tumor segmentation. Even though features such as contrast-NGTD, coarseness, homogeneity, and busyness have been previously used, our data indicated that these features presented large variations; therefore they could not be considered as good candidates for tumor
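Among the stable features reported above are first-order entropy and energy, both derived from the gray-level histogram inside the segmented tumor. A minimal sketch of these two first-order features is shown below; the bin count and the ROI array are illustrative assumptions, not the study's settings.

```python
import numpy as np

def first_order_features(roi, n_bins=64):
    """First-order entropy and energy of the gray-level histogram in an ROI."""
    hist, _ = np.histogram(roi, bins=n_bins)
    p = hist / hist.sum()            # normalized gray-level probabilities
    p_nz = p[p > 0]                  # skip empty bins before taking the log
    entropy = -np.sum(p_nz * np.log2(p_nz))
    energy = np.sum(p ** 2)          # a.k.a. uniformity
    return entropy, energy
```

A uniform ROI gives entropy 0 and energy 1; the more spread-out the intensities, the higher the entropy and the lower the energy.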

  3. Variability of textural features in FDG PET images due to different acquisition modes and reconstruction parameters

    International Nuclear Information System (INIS)

    Galavis, Paulina E.; Jallow, Ngoneh; Paliwal, Bhudatt; Jeraj, Robert; Hollensen, Christian

    2010-01-01

    Background. Characterization of textural features (spatial distributions of image intensity levels) has been considered as a tool for automatic tumor segmentation. The purpose of this work is to study the variability of the textural features in PET images due to different acquisition modes and reconstruction parameters. Material and methods. Twenty patients with solid tumors underwent PET/CT scans on a GE Discovery VCT scanner, 45-60 minutes post-injection of 10 mCi of [(18)F]FDG. Scans were acquired in both 2D and 3D modes. For each acquisition the raw PET data was reconstructed using five different reconstruction parameters. Lesions were segmented on a default image using the threshold of 40% of maximum SUV. Fifty different texture features were calculated inside the tumors. The range of variations of the features were calculated with respect to the average value. Results. Fifty textural features were classified based on the range of variation in three categories: small, intermediate and large variability. Features with small variability (range ≤ 5%) were entropy-first order, energy, maximal correlation coefficient (second order feature) and low-gray level run emphasis (high-order feature). The features with intermediate variability (10% ≤ range ≤ 25%) were entropy-GLCM, sum entropy, high gray level run emphasis, gray level non-uniformity, small number emphasis, and entropy-NGL. Forty remaining features presented large variations (range > 30%). Conclusion. Textural features such as entropy-first order, energy, maximal correlation coefficient, and low-gray level run emphasis exhibited small variations due to different acquisition modes and reconstruction parameters. Features with low level of variations are better candidates for reproducible tumor segmentation. Even though features such as contrast-NGTD, coarseness, homogeneity, and busyness have been previously used, our data indicated that these features presented large variations, therefore they could not be

  4. The Constrained Maximal Expression Level Owing to Haploidy Shapes Gene Content on the Mammalian X Chromosome

    KAUST Repository

    Hurst, Laurence D.

    2015-12-18

    X chromosomes are unusual in many regards, not least of which is their nonrandom gene content. The causes of this bias are commonly discussed in the context of sexual antagonism and the avoidance of activity in the male germline. Here, we examine the notion that, at least in some taxa, functionally biased gene content may more profoundly be shaped by limits imposed on gene expression owing to haploid expression of the X chromosome. Notably, if the X, as in primates, is transcribed at rates comparable to the ancestral rate (per promoter) prior to the X chromosome formation, then the X is not a tolerable environment for genes with very high maximal net levels of expression, owing to transcriptional traffic jams. We test this hypothesis using The Encyclopedia of DNA Elements (ENCODE) and data from the Functional Annotation of the Mammalian Genome (FANTOM5) project. As predicted, the maximal expression of human X-linked genes is much lower than that of genes on autosomes: on average, maximal expression is three times lower on the X chromosome than on autosomes. Similarly, autosome-to-X retroposition events are associated with lower maximal expression of retrogenes on the X than seen for X-to-autosome retrogenes on autosomes. Also as expected, X-linked genes have a lesser degree of increase in gene expression than autosomal ones (compared to the human/Chimpanzee common ancestor) if highly expressed, but not if lowly expressed. The traffic jam model also explains the known lower breadth of expression for genes on the X (and the Z of birds), as genes with broad expression are, on average, those with high maximal expression. As then further predicted, highly expressed tissue-specific genes are also rare on the X and broadly expressed genes on the X tend to be lowly expressed, both indicating that the trend is shaped by the maximal expression level not the breadth of expression per se. 
Importantly, a limit to the maximal expression level explains biased tissue of expression

  5. The Constrained Maximal Expression Level Owing to Haploidy Shapes Gene Content on the Mammalian X Chromosome.

    Directory of Open Access Journals (Sweden)

    Laurence D Hurst

    2015-12-01

    X chromosomes are unusual in many regards, not least of which is their nonrandom gene content. The causes of this bias are commonly discussed in the context of sexual antagonism and the avoidance of activity in the male germline. Here, we examine the notion that, at least in some taxa, functionally biased gene content may more profoundly be shaped by limits imposed on gene expression owing to haploid expression of the X chromosome. Notably, if the X, as in primates, is transcribed at rates comparable to the ancestral rate (per promoter) prior to the X chromosome formation, then the X is not a tolerable environment for genes with very high maximal net levels of expression, owing to transcriptional traffic jams. We test this hypothesis using The Encyclopedia of DNA Elements (ENCODE) and data from the Functional Annotation of the Mammalian Genome (FANTOM5) project. As predicted, the maximal expression of human X-linked genes is much lower than that of genes on autosomes: on average, maximal expression is three times lower on the X chromosome than on autosomes. Similarly, autosome-to-X retroposition events are associated with lower maximal expression of retrogenes on the X than seen for X-to-autosome retrogenes on autosomes. Also as expected, X-linked genes have a lesser degree of increase in gene expression than autosomal ones (compared to the human/Chimpanzee common ancestor) if highly expressed, but not if lowly expressed. The traffic jam model also explains the known lower breadth of expression for genes on the X (and the Z of birds), as genes with broad expression are, on average, those with high maximal expression. As then further predicted, highly expressed tissue-specific genes are also rare on the X and broadly expressed genes on the X tend to be lowly expressed, both indicating that the trend is shaped by the maximal expression level not the breadth of expression per se. Importantly, a limit to the maximal expression level explains biased

  6. Anterior cruciate ligament graft tensioning. Is the maximal sustained one-handed pull technique reproducible?

    Directory of Open Access Journals (Sweden)

    Hirpara Kieran M

    2011-07-01

    Abstract Background Tensioning of anterior cruciate ligament (ACL) reconstruction grafts affects the clinical outcome of the procedure. As yet, no consensus has been reached regarding the optimum initial tension in an ACL graft. Most surgeons rely on the maximal sustained one-handed pull technique for graft tension. We aim to determine if this technique is reproducible from patient to patient. Findings We created a device to simulate ACL reconstruction surgery using Ilizarov components and porcine flexor tendons. Six experienced ACL reconstruction surgeons volunteered to tension porcine grafts using the device to see if they could produce a consistent tension. None of the surgeons involved were able to accurately reproduce graft tension over a series of repeat trials. Conclusions We conclude that the maximal sustained one-handed pull technique of ACL graft tensioning is not reproducible from trial to trial. We also conclude that the initial tension placed on an ACL graft varies from surgeon to surgeon.

  7. Anterior cruciate ligament graft tensioning. Is the maximal sustained one-handed pull technique reproducible?

    LENUS (Irish Health Repository)

    O'Neill, Barry J

    2011-07-20

    Abstract Background Tensioning of anterior cruciate ligament (ACL) reconstruction grafts affects the clinical outcome of the procedure. As yet, no consensus has been reached regarding the optimum initial tension in an ACL graft. Most surgeons rely on the maximal sustained one-handed pull technique for graft tension. We aim to determine if this technique is reproducible from patient to patient. Findings We created a device to simulate ACL reconstruction surgery using Ilizarov components and porcine flexor tendons. Six experienced ACL reconstruction surgeons volunteered to tension porcine grafts using the device to see if they could produce a consistent tension. None of the surgeons involved were able to accurately reproduce graft tension over a series of repeat trials. Conclusions We conclude that the maximal sustained one-handed pull technique of ACL graft tensioning is not reproducible from trial to trial. We also conclude that the initial tension placed on an ACL graft varies from surgeon to surgeon.

  8. 60-year Nordic and arctic sea level reconstruction based on a reprocessed two decade altimetric sea level record and tide gauges

    OpenAIRE

    Svendsen, Peter Limkilde; Andersen, Ole Baltazar; Nielsen, Allan Aasbjerg

    2015-01-01

    Due to the sparsity and often poor quality of data, reconstructing Arctic sea level is highly challenging. We present a reconstruction of Arctic sea level covering 1950 to 2010, using the approaches from Church et al. (2004) and Ray and Douglas (2011). This involves decomposition of an altimetry calibration record into EOFs, and fitting these patterns to a historical tide gauge record.

  9. Divide and conquer: intermediate levels of population fragmentation maximize cultural accumulation.

    Science.gov (United States)

    Derex, Maxime; Perreault, Charles; Boyd, Robert

    2018-04-05

    Identifying the determinants of cumulative cultural evolution is a key issue in the interdisciplinary field of cultural evolution. A widely held view is that large and well-connected social networks facilitate cumulative cultural evolution because they promote the spread of useful cultural traits and prevent the loss of cultural knowledge through factors such as drift. This view stems from models that focus on the transmission of cultural information, without considering how new cultural traits actually arise. In this paper, we review the literature from various fields that suggest that, under some circumstances, increased connectedness can decrease cultural diversity and reduce innovation rates. Incorporating this idea into an agent-based model, we explore the effect of population fragmentation on cumulative culture and show that, for a given population size, there exists an intermediate level of population fragmentation that maximizes the rate of cumulative cultural evolution. This result is explained by the fact that fully connected, non-fragmented populations are able to maintain complex cultural traits but produce insufficient variation and so lack the cultural diversity required to produce highly complex cultural traits. Conversely, highly fragmented populations produce a variety of cultural traits but cannot maintain complex ones. In populations with intermediate levels of fragmentation, cultural loss and cultural diversity are balanced in a way that maximizes cultural complexity. Our results suggest that population structure needs to be taken into account when investigating the relationship between demography and cumulative culture. This article is part of the theme issue 'Bridging cultural gaps: interdisciplinary studies in human cultural evolution'. © 2018 The Author(s).

  10. Three-dimensional dictionary-learning reconstruction of (23)Na MRI data.

    Science.gov (United States)

    Behl, Nicolas G R; Gnahm, Christine; Bachert, Peter; Ladd, Mark E; Nagel, Armin M

    2016-04-01

    To reduce noise and artifacts in (23)Na MRI with a Compressed Sensing reconstruction and a learned dictionary as sparsifying transform. A three-dimensional dictionary-learning compressed sensing reconstruction algorithm (3D-DLCS) for the reconstruction of undersampled 3D radial (23)Na data is presented. The dictionary used as the sparsifying transform is learned with a K-singular-value-decomposition (K-SVD) algorithm. The reconstruction parameters are optimized on simulated data, and the quality of the reconstructions is assessed with peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). The performance of the algorithm is evaluated in phantom and in vivo (23)Na MRI data of seven volunteers and compared with nonuniform fast Fourier transform (NUFFT) and other Compressed Sensing reconstructions. The reconstructions of simulated data have maximal PSNR and SSIM for an undersampling factor (USF) of 10 with numbers of averages equal to the USF. For 10-fold undersampling, the PSNR is increased by 5.1 dB compared with the NUFFT reconstruction, and the SSIM by 24%. These results are confirmed by phantom and in vivo (23)Na measurements in the volunteers that show markedly reduced noise and undersampling artifacts in the case of 3D-DLCS reconstructions. The 3D-DLCS algorithm enables precise reconstruction of undersampled (23)Na MRI data with markedly reduced noise and artifact levels compared with NUFFT reconstruction. Small structures are well preserved. © 2015 Wiley Periodicals, Inc.
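Dictionary-learning compressed sensing alternates two steps: sparse coding of image patches against the current dictionary, and a dictionary update (K-SVD in this paper). The sparse-coding half can be sketched with orthogonal matching pursuit against a fixed random dictionary; the K-SVD update and the 3D radial MRI context are omitted, and all sizes here are arbitrary.

```python
import numpy as np

def omp(D, s, n_nonzero):
    """Orthogonal matching pursuit: approximate s as D @ c with a sparse c."""
    residual = s.copy()
    support = []
    c = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        # Pick the atom most correlated with the current residual.
        k = int(np.argmax(np.abs(D.T @ residual)))
        if k not in support:
            support.append(k)
        # Re-fit the coefficients on the chosen support by least squares.
        coef, *_ = np.linalg.lstsq(D[:, support], s, rcond=None)
        residual = s - D[:, support] @ coef
    c[support] = coef
    return c

rng = np.random.default_rng(1)
D = rng.normal(size=(64, 128))
D /= np.linalg.norm(D, axis=0)            # unit-norm atoms
c_true = np.zeros(128); c_true[[3, 40, 99]] = [1.5, -2.0, 0.7]
s = D @ c_true                            # exactly 3-sparse signal
c_hat = omp(D, s, n_nonzero=3)
```

With an incoherent random dictionary and an exactly sparse signal, OMP recovers the support and coefficients; in the reconstruction setting, this coding step supplies the sparsifying prior inside the CS iteration.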

  11. A heuristic statistical stopping rule for iterative reconstruction in emission tomography

    International Nuclear Information System (INIS)

    Ben Bouallegue, F.; Mariano-Goulart, D.; Crouzet, J.F.

    2013-01-01

We propose a statistical stopping criterion for iterative reconstruction in emission tomography based on a heuristic statistical description of the reconstruction process. The method was assessed for maximum likelihood expectation maximization (MLEM) reconstruction. Based on Monte-Carlo numerical simulations and using a perfectly modeled system matrix, our method was compared with classical iterative reconstruction followed by low-pass filtering in terms of Euclidean distance to the exact object, noise, and resolution. The stopping criterion was then evaluated with realistic PET data of a Hoffman brain phantom produced using the Geant4 application in emission tomography (GATE) platform for different count levels. The numerical experiments showed that compared with the classical method, our technique yielded significant improvement of the noise-resolution tradeoff for a wide range of counting statistics compatible with routine clinical settings. When working with realistic data, the stopping rule allowed a qualitatively and quantitatively efficient determination of the optimal image. Our method appears to give a reliable estimation of the optimal stopping point for iterative reconstruction. It should thus be of practical interest as it produces images with similar or better quality than classical post-filtered iterative reconstruction within a controlled computation time. (author)
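The MLEM iteration that the stopping rule is applied to can be sketched on a toy 1D problem with a known system matrix (the heuristic stopping criterion itself is not reproduced here; in practice one would monitor it instead of running a fixed number of iterations):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy system matrix (projection bins x pixels) and ground-truth activity.
A = rng.uniform(0.0, 1.0, size=(40, 16))
x_true = rng.uniform(1.0, 5.0, size=16)
y = rng.poisson(A @ x_true)               # noisy emission data

# MLEM: multiplicative update, preserves non-negativity and monotonically
# increases the Poisson log-likelihood.
sens = A.sum(axis=0)                      # sensitivity image A^T 1
x = np.ones(16)
for _ in range(100):
    proj = A @ x
    x *= (A.T @ (y / np.maximum(proj, 1e-12))) / sens
```

Because noise amplification grows with iteration number, a stopping rule (or post-filtering) is what turns this raw iteration into a usable reconstruction.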

  12. Low levels of maximal aerobic power impair the profile of mood state in individuals with temporal lobe epilepsy

    Directory of Open Access Journals (Sweden)

    Rodrigo Luiz Vancini

    2015-01-01

Full Text Available Objective To investigate the correlation between cardiorespiratory fitness and mood state in individuals with temporal lobe epilepsy (TLE). Method Individuals with TLE (n = 20) and healthy control subjects (C, n = 20) were evaluated. Self-rating questionnaires were used to assess mood (POMS) and habitual physical activity (BAECKE). Cardiorespiratory fitness was evaluated by a maximal incremental test. Results People with TLE presented lower cardiorespiratory fitness, higher levels of mood disorders, and lower levels of vigor when compared to healthy control subjects. A significant negative correlation was observed between the levels of tension-anxiety and maximal aerobic power. Conclusion Low levels of cardiorespiratory fitness may modify the health status of individuals with TLE and may be considered a risk factor for the development of mood disorders.

  13. A Novel Kernel-Based Regularization Technique for PET Image Reconstruction

    Directory of Open Access Journals (Sweden)

    Abdelwahhab Boudjelal

    2017-06-01

Full Text Available Positron emission tomography (PET) is an imaging technique that generates 3D detail of physiological processes at the cellular level. The technique requires a radioactive tracer, which decays and releases a positron that collides with an electron; consequently, annihilation photons are emitted, which can be measured. The purpose of PET is to use the measurement of photons to reconstruct the distribution of radioisotopes in the body. Currently, PET is undergoing a revamp, with advancements in data measurement instruments and the computing methods used to create the images. These computer methods are required to solve the inverse problem of “image reconstruction from projection”. This paper proposes a novel kernel-based regularization technique for maximum-likelihood expectation-maximization (κ-MLEM) to reconstruct the image. Compared to standard MLEM, the proposed algorithm is more robust and is more effective in removing background noise, whilst preserving the edges; this suppresses image artifacts, such as out-of-focus slice blur.
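The abstract does not spell out the κ-MLEM update, but the widely used way to kernelize EM is to model the image as x = Kα for a smoothing kernel K and run the EM update on the coefficients α, so the regularization is built into the forward model. A toy 1D sketch of that generic construction (not necessarily the paper's exact algorithm):

```python
import numpy as np

rng = np.random.default_rng(2)
n_pix, n_proj = 16, 48
A = rng.uniform(0, 1, size=(n_proj, n_pix))
x_true = rng.uniform(1, 4, size=n_pix)
y = rng.poisson(A @ x_true)

# Simple Gaussian smoothing kernel over pixel indices (a stand-in for an
# image-derived kernel); rows normalised so K maps ones to ones.
idx = np.arange(n_pix)
K = np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / 1.5) ** 2)
K /= K.sum(axis=1, keepdims=True)

# Kernelized EM: image modelled as x = K @ alpha, EM update on alpha
# with effective system matrix A @ K.
AK = A @ K
sens = AK.sum(axis=0)
alpha = np.ones(n_pix)
for _ in range(100):
    proj = AK @ alpha
    alpha *= (AK.T @ (y / np.maximum(proj, 1e-12))) / sens
x_rec = K @ alpha
```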

  14. Effects of maximal doses of atorvastatin versus rosuvastatin on small dense low-density lipoprotein cholesterol levels

    Science.gov (United States)

    Maximal doses of atorvastatin and rosuvastatin are highly effective in lowering low-density lipoprotein (LDL) cholesterol and triglyceride levels; however, rosuvastatin has been shown to be significantly more effective than atorvastatin in lowering LDL cholesterol and in increasing high-density lipo...

  15. Maximal Entanglement in High Energy Physics

    Directory of Open Access Journals (Sweden)

    Alba Cervera-Lierta, José I. Latorre, Juan Rojo, Luca Rottoli

    2017-11-01

Full Text Available We analyze how maximal entanglement is generated at the fundamental level in QED by studying correlations between helicity states in tree-level scattering processes at high energy. We demonstrate that two mechanisms for the generation of maximal entanglement are at work: (i) $s$-channel processes where the virtual photon carries equal overlaps of the helicities of the final state particles, and (ii) the indistinguishable superposition between $t$- and $u$-channels. We then study whether requiring maximal entanglement constrains the coupling structure of QED and the weak interactions. In the case of photon-electron interactions unconstrained by gauge symmetry, we show how this requirement allows reproducing QED. For $Z$-mediated weak scattering, the maximal entanglement principle leads to non-trivial predictions for the value of the weak mixing angle $\theta_W$. Our results are a first step towards understanding the connections between maximal entanglement and the fundamental symmetries of high-energy physics.

  16. Reconstructing Common Era relative sea-level change on the Gulf Coast of Florida

    Science.gov (United States)

    Gerlach, Matthew J.; Engelhart, Simon E.; Kemp, Andrew C.; Moyer, Ryan P.; Smoak, Joseph M.; Bernhardt, Christopher E.; Cahill, Niamh

    2017-01-01

    To address a paucity of Common Era data in the Gulf of Mexico, we reconstructed ~ 1.1 m of relative sea-level (RSL) rise over the past ~ 2000 years at Little Manatee River (Gulf Coast of Florida, USA). We applied a regional-scale foraminiferal transfer function to fossil assemblages preserved in a core of salt-marsh peat and organic silt that was dated using radiocarbon and recognition of pollution, 137Cs and pollen chronohorizons. Our proxy reconstruction was combined with tide-gauge data from four nearby sites spanning 1913–2014 CE. Application of an Errors-in-Variables Integrated Gaussian Process (EIV-IGP) model to the combined proxy and instrumental dataset demonstrates that RSL fell from ~ 350 to 100 BCE, before rising continuously to present. This initial RSL fall was likely the result of local-scale processes (e.g., silting up of a tidal flat or shallow sub-tidal shoal) as salt-marsh development at the site began. Since ~ 0 CE, we consider the reconstruction to be representative of regional-scale RSL trends. We removed a linear rate of 0.3 mm/yr from the RSL record using the EIV-IGP model to estimate climate-driven sea-level trends and to facilitate comparison among sites. This analysis demonstrates that since ~ 0 CE sea level did not deviate significantly from zero until accelerating continuously from ~ 1500 CE to present. Sea level was rising at 1.33 mm/yr in 1900 CE and accelerated until 2014 CE when a rate of 2.02 mm/yr was attained, which is the fastest, century-scale trend in the ~ 2000-year record. Comparison to existing reconstructions from the Gulf coast of Louisiana and the Atlantic coast of northern Florida reveal similar sea-level histories at all three sites. We explored the influence of compaction and fluvial processes on our reconstruction and concluded that compaction was likely insignificant. Fluvial processes were also likely insignificant, but further proxy evidence is needed to fully test this hypothesis. Our results

  17. The Constrained Maximal Expression Level Owing to Haploidy Shapes Gene Content on the Mammalian X Chromosome

    KAUST Repository

    Hurst, Laurence D.; Ghanbarian, Avazeh T.; Forrest, Alistair R. R.; Huminiecki, Lukasz

    2015-01-01

    to the ancestral rate (per promoter) prior to the X chromosome formation, then the X is not a tolerable environment for genes with very high maximal net levels of expression, owing to transcriptional traffic jams. We test this hypothesis using The Encyclopedia

  18. Reconstructing sea level from paleo and projected temperatures 200 to 2100 AD

    DEFF Research Database (Denmark)

    Grinsted, Aslak; Moore, John; Jevrejeva, Svetlana

    2010-01-01

-proxy reconstructions assuming that the established relationship between temperature and sea level holds from 200 to 2100 AD. Over the last 2,000 years minimum sea level (-19 to -26 cm) occurred around 1730 AD, maximum sea level (12–21 cm) around 1150 AD. Sea level 2090–2099 is projected to be 0.9 to 1.3 m for the A1B...
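Semi-empirical models of this kind typically let sea level relax toward a temperature-set equilibrium, dS/dt = (S_eq(T) - S)/τ with S_eq = aT + b; the study inverts such a relation statistically, whereas the sketch below just integrates it forward with invented parameter values and a toy temperature series:

```python
import numpy as np

# Illustrative parameters only (not the paper's fitted values).
a, b = 1.3, 0.0       # equilibrium sensitivity (m per K) and offset
tau = 300.0           # response time (years)

years = np.arange(200, 2101)
T = 0.004 * (years - 200) / 10.0     # toy temperature anomaly series (K)

# Forward Euler integration of dS/dt = (a*T + b - S) / tau, 1-year steps.
S = np.zeros_like(T)
for i in range(1, len(years)):
    s_eq = a * T[i - 1] + b
    S[i] = S[i - 1] + (s_eq - S[i - 1]) / tau
```

Because τ is long compared with the forcing, sea level lags its equilibrium, which is why projected 21st-century rise continues well after temperatures stabilise.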

  19. AUC-Maximizing Ensembles through Metalearning.

    Science.gov (United States)

    LeDell, Erin; van der Laan, Mark J; Petersen, Maya

    2016-05-01

Area Under the ROC Curve (AUC) is often used to measure the performance of an estimator in binary classification problems. An AUC-maximizing classifier can have significant advantages in cases where ranking correctness is valued or if the outcome is rare. In a Super Learner ensemble, maximization of the AUC can be achieved by the use of an AUC-maximizing metalearning algorithm. We discuss an implementation of an AUC-maximization technique that is formulated as a nonlinear optimization problem. We also evaluate the effectiveness of a large number of different nonlinear optimization algorithms to maximize the cross-validated AUC of the ensemble fit. The results provide evidence that AUC-maximizing metalearners can, and often do, outperform non-AUC-maximizing metalearning methods, with respect to ensemble AUC. The results also demonstrate that as the level of imbalance in the training data increases, the Super Learner ensemble outperforms the top base algorithm by a larger degree.
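The idea of an AUC-maximizing metalearner can be shown in miniature: compute AUC by the rank-sum formula and choose the convex combination of two base learners' scores that maximizes it. This is a simplified stand-in for the Super Learner machinery (the paper optimizes cross-validated AUC over many base learners with nonlinear optimizers; here it is an in-sample grid search over one weight, on synthetic data).

```python
import numpy as np

def auc(scores, labels):
    """AUC via the rank-sum (Mann-Whitney) formula; assumes no tied scores."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

rng = np.random.default_rng(3)
n = 400
y = (rng.uniform(size=n) < 0.2).astype(int)     # rare outcome
p1 = 0.6 * y + rng.normal(0, 0.5, n)            # base learner 1 scores
p2 = 0.3 * y + rng.normal(0, 0.3, n)            # base learner 2 scores

# Metalearner: pick the convex weight that maximizes ensemble AUC.
ws = np.linspace(0, 1, 101)
aucs = [auc(w * p1 + (1 - w) * p2, y) for w in ws]
w_best = ws[int(np.argmax(aucs))]
```

Since the endpoints w = 0 and w = 1 reproduce the base learners, the selected ensemble can never have lower (in-sample) AUC than either base learner alone.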

  20. Exploring mechanisms of compaction in salt-marsh sediments using Common Era relative sea-level reconstructions

    Science.gov (United States)

    Brain, Matthew J.; Kemp, Andrew C.; Hawkes, Andrea D.; Engelhart, Simon E.; Vane, Christopher H.; Cahill, Niamh; Hill, Troy D.; Donnelly, Jeffrey P.; Horton, Benjamin P.

    2017-07-01

    Salt-marsh sediments provide precise and near-continuous reconstructions of Common Era relative sea level (RSL). However, organic and low-density salt-marsh sediments are prone to compaction processes that cause post-depositional distortion of the stratigraphic column used to reconstruct RSL. We compared two RSL reconstructions from East River Marsh (Connecticut, USA) to assess the contribution of mechanical compression and biodegradation to compaction of salt-marsh sediments and their subsequent influence on RSL reconstructions. The first, existing reconstruction ('trench') was produced from a continuous sequence of basal salt-marsh sediment and is unaffected by compaction. The second, new reconstruction is from a compaction-susceptible core taken at the same location. We highlight that sediment compaction is the only feasible mechanism for explaining the observed differences in RSL reconstructed from the trench and core. Both reconstructions display long-term RSL rise of ∼1 mm/yr, followed by a ∼19th Century acceleration to ∼3 mm/yr. A statistically-significant difference between the records at ∼1100 to 1800 CE could not be explained by a compression-only geotechnical model. We suggest that the warmer and drier conditions of the Medieval Climate Anomaly (MCA) resulted in an increase in sediment compressibility during this time period. We adapted the geotechnical model by reducing the compressive strength of MCA sediments to simulate this softening of sediments. 'Decompaction' of the core reconstruction with this modified model accounted for the difference between the two RSL reconstructions. Our results demonstrate that compression-only geotechnical models may be inadequate for estimating compaction and post-depositional lowering of susceptible organic salt-marsh sediments in some settings. This has important implications for our understanding of the drivers of sea-level change. Further, our results suggest that future climate changes may make salt

  1. Level-set reconstruction algorithm for ultrafast limited-angle X-ray computed tomography of two-phase flows.

    Science.gov (United States)

    Bieberle, M; Hampel, U

    2015-06-13

    Tomographic image reconstruction is based on recovering an object distribution from its projections, which have been acquired from all angular views around the object. If the angular range is limited to less than 180° of parallel projections, typical reconstruction artefacts arise when using standard algorithms. To compensate for this, specialized algorithms using a priori information about the object need to be applied. The application behind this work is ultrafast limited-angle X-ray computed tomography of two-phase flows. Here, only a binary distribution of the two phases needs to be reconstructed, which reduces the complexity of the inverse problem. To solve it, a new reconstruction algorithm (LSR) based on the level-set method is proposed. It includes one force function term accounting for matching the projection data and one incorporating a curvature-dependent smoothing of the phase boundary. The algorithm has been validated using simulated as well as measured projections of known structures, and its performance has been compared to the algebraic reconstruction technique and a binary derivative of it. The validation as well as the application of the level-set reconstruction on a dynamic two-phase flow demonstrated its applicability and its advantages over other reconstruction algorithms. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
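The two force terms of the LSR algorithm (a data-matching term and a curvature-dependent smoothing term) can be imitated on a toy binary problem. The sketch below replaces the real limited-angle X-ray projector with plain horizontal and vertical ray sums, and uses a Laplacian of the level-set function as a crude curvature proxy; it illustrates the structure of the iteration, not the published implementation.

```python
import numpy as np

# Toy setup: a binary 2D phase distribution observed only through
# horizontal and vertical ray sums (stand-in for limited-angle data).
n = 32
true = np.zeros((n, n)); true[8:20, 10:24] = 1.0
proj_h, proj_v = true.sum(axis=1), true.sum(axis=0)

phi = np.full((n, n), -0.5)        # level-set function; phase = (phi > 0)
for _ in range(100):
    binary = (phi > 0).astype(float)
    # Data-fidelity force: backproject the projection residuals.
    res_h = proj_h - binary.sum(axis=1)
    res_v = proj_v - binary.sum(axis=0)
    force = res_h[:, None] / n + res_v[None, :] / n
    # Smoothing force: Laplacian of phi as a crude curvature proxy.
    lap = (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
           np.roll(phi, 1, 1) + np.roll(phi, -1, 1) - 4 * phi)
    phi = np.clip(phi + 0.5 * force + 0.02 * lap, -1.0, 1.0)

recon = (phi > 0).astype(float)
```

The binary parameterisation is what makes the severely underdetermined problem tractable: the unknown is a phase boundary, not a full grey-level image.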

  2. Tri-maximal vs. bi-maximal neutrino mixing

    International Nuclear Information System (INIS)

Scott, W.G.

    2000-01-01

It is argued that data from atmospheric and solar neutrino experiments point strongly to tri-maximal or bi-maximal lepton mixing. While ('optimised') bi-maximal mixing gives an excellent a posteriori fit to the data, tri-maximal mixing is an a priori hypothesis, which is not excluded, taking account of terrestrial matter effects.
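For reference, the two mixing patterns named in the title are conventionally written as follows (up to phase conventions; tri-maximal mixing has all |U_ai|^2 = 1/3, bi-maximal mixing has maximal solar and atmospheric angles with U_e3 = 0):

```latex
U_{\text{tri}} = \frac{1}{\sqrt{3}}
\begin{pmatrix} 1 & 1 & 1 \\ 1 & \omega & \bar{\omega} \\ 1 & \bar{\omega} & \omega \end{pmatrix},
\quad \omega = e^{2\pi i/3};
\qquad
U_{\text{bi}} =
\begin{pmatrix}
\tfrac{1}{\sqrt{2}} & \tfrac{1}{\sqrt{2}} & 0 \\
-\tfrac{1}{2} & \tfrac{1}{2} & \tfrac{1}{\sqrt{2}} \\
\tfrac{1}{2} & -\tfrac{1}{2} & \tfrac{1}{\sqrt{2}}
\end{pmatrix}.
```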

  3. Adaptive multiresolution method for MAP reconstruction in electron tomography

    Energy Technology Data Exchange (ETDEWEB)

    Acar, Erman, E-mail: erman.acar@tut.fi [Department of Signal Processing, Tampere University of Technology, P.O. Box 553, FI-33101 Tampere (Finland); BioMediTech, Tampere University of Technology, Biokatu 10, 33520 Tampere (Finland); Peltonen, Sari; Ruotsalainen, Ulla [Department of Signal Processing, Tampere University of Technology, P.O. Box 553, FI-33101 Tampere (Finland); BioMediTech, Tampere University of Technology, Biokatu 10, 33520 Tampere (Finland)

    2016-11-15

3D image reconstruction with electron tomography is challenging due to the severely limited range of projection angles and the low signal-to-noise ratio of the acquired projection images. The maximum a posteriori (MAP) reconstruction methods have been successful in compensating for the missing information and suppressing noise with their intrinsic regularization techniques. There are two major problems in MAP reconstruction methods: (1) selection of the regularization parameter that controls the balance between the data fidelity and the prior information, and (2) long computation time. One aim of this study is to provide an adaptive solution to the regularization parameter selection problem without having additional knowledge about the imaging environment and the sample. The other aim is to realize the reconstruction using sequences of resolution levels to shorten the computation time. The reconstructions were analyzed in terms of accuracy and computational efficiency using a simulated biological phantom and publicly available experimental datasets of electron tomography. The numerical and visual evaluations of the experiments show that the adaptive multiresolution method can provide more accurate results than the weighted back projection (WBP), simultaneous iterative reconstruction technique (SIRT), and sequential MAP expectation maximization (sMAPEM) method. The method is superior to sMAPEM also in terms of computation time and usability since it can reconstruct 3D images significantly faster without requiring any parameter to be set by the user. - Highlights: • An adaptive multiresolution reconstruction method is introduced for electron tomography. • The method provides more accurate results than the conventional reconstruction methods. • The missing wedge and noise problems can be compensated by the method efficiently.

  4. Development of regularized expectation maximization algorithms for fan-beam SPECT data

    International Nuclear Information System (INIS)

    Kim, Soo Mee; Lee, Jae Sung; Lee, Dong Soo; Lee, Soo Jin; Kim, Kyeong Min

    2005-01-01

    SPECT using a fan-beam collimator improves spatial resolution and sensitivity. For the reconstruction from fan-beam projections, it is necessary to implement direct fan-beam reconstruction methods without transforming the data into the parallel geometry. In this study, various fan-beam reconstruction algorithms were implemented and their performances were compared. The projector for fan-beam SPECT was implemented using a ray-tracing method. The direct reconstruction algorithms implemented for fan-beam projection data were FBP (filtered backprojection), EM (expectation maximization), OS-EM (ordered subsets EM) and MAP-EM OSL (maximum a posteriori EM using the one-step late method) with membrane and thin-plate models as priors. For comparison, the fan-beam projection data were also rebinned into the parallel data using various interpolation methods, such as the nearest neighbor, bilinear and bicubic interpolations, and reconstructed using the conventional EM algorithm for parallel data. Noiseless and noisy projection data from the digital Hoffman brain and Shepp/Logan phantoms were reconstructed using the above algorithms. The reconstructed images were compared in terms of a percent error metric. For the fan-beam data with Poisson noise, the MAP-EM OSL algorithm with the thin-plate prior showed the best result in both percent error and stability. Bilinear interpolation was the most effective method for rebinning from the fan-beam to parallel geometry when the accuracy and computation load were considered. Direct fan-beam EM reconstructions were more accurate than the standard EM reconstructions obtained from rebinned parallel data. Direct fan-beam reconstruction algorithms were implemented, which provided significantly improved reconstructions
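Among the algorithms compared, OS-EM differs from plain EM by cycling one multiplicative update over ordered subsets of the projection bins, which accelerates convergence roughly by the number of subsets. A geometry-free toy sketch (the study's ray-traced fan-beam projector is replaced by a random system matrix):

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.uniform(0, 1, size=(60, 16))      # system matrix, 60 projection bins
x_true = rng.uniform(1, 4, size=16)
y = rng.poisson(A @ x_true)

# OS-EM: one EM-style update per subset of projection bins per pass.
subsets = np.array_split(np.arange(60), 6)
x = np.ones(16)
for _ in range(10):                       # passes; each pass ~ 6 sub-iterations
    for s in subsets:
        As, ys = A[s], y[s]
        x *= (As.T @ (ys / np.maximum(As @ x, 1e-12))) / As.sum(axis=0)
```

In practice the subsets are chosen as interleaved projection angles so that each subset "sees" the whole object; MAP-EM OSL adds a prior-gradient term to the denominator of the same update.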

5. Maximizing the retention level for proportional reinsurance under α-regulation of the finite time surplus process with unit-equalized interarrival time

    Directory of Open Access Journals (Sweden)

    Sukanya Somprom

    2016-07-01

    Full Text Available The research focuses on an insurance model controlled by proportional reinsurance in the finite-time surplus process with a unit-equalized time interval. We prove the existence of the maximal retention level for independent and identically distributed claim processes under α-regulation, i.e., a model where the insurance company has to manage the probability of insolvency to be at most α. In addition, we illustrate the maximal retention level for exponential claims by applying the bisection technique.
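The bisection idea can be sketched numerically: estimate the finite-time insolvency probability by Monte Carlo for a given retention fraction b, and bisect for the largest b keeping it at most α. All model details below (premium loading, horizon, initial capital) are invented for illustration; the paper's result for exponential claims is analytical in character, not this simulation.

```python
import numpy as np

def insolvency_prob(b, n_periods=10, premium=1.2, capital=1.0,
                    n_sim=4000, seed=6):
    """Monte-Carlo finite-time ruin probability when the insurer retains
    fraction b of every claim and of the premium (exponential claims with
    mean 1, unit-equalized time steps; illustrative parameters)."""
    rng = np.random.default_rng(seed)
    claims = rng.exponential(1.0, size=(n_sim, n_periods))
    surplus = capital + b * np.cumsum(premium - claims, axis=1)
    return float(np.mean((surplus < 0).any(axis=1)))

def maximal_retention(alpha, tol=1e-3):
    """Largest retention b in [0, 1] with ruin probability <= alpha,
    by bisection (the ruin probability is increasing in b here)."""
    if insolvency_prob(1.0) <= alpha:
        return 1.0
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if insolvency_prob(mid) <= alpha:
            lo = mid
        else:
            hi = mid
    return lo

b_star = maximal_retention(alpha=0.05)
```

Fixing the random seed makes the estimated ruin probability exactly monotone in b (the same claim paths are rescaled), which is what justifies bisection on the Monte-Carlo estimate.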

  6. Reconstructing Mid- to Late Holocene Sea-Level Change from Coral Microatolls, French Polynesia

    Science.gov (United States)

    Hallmann, N.; Camoin, G.; Eisenhauer, A.; Vella, C.; Samankassou, E.; Botella, A.; Milne, G. A.; Pothin, V.; Dussouillez, P.; Fleury, J.

    2017-12-01

    Coral microatolls are sensitive low-tide recorders, as their vertical accretion is limited by the mean low water springs level, and can be considered therefore as high-precision recorders of sea-level change. They are of pivotal importance to resolving the rates and amplitudes of millennial-to-century scale changes during periods of relative climate stability such as the Mid- to Late Holocene, which serves as an important baseline of natural variability prior to the Anthropocene. It provides therefore a unique opportunity to study coastal response to sea-level rise, even if the rates of sea-level rise during the Mid- to Late Holocene were lower than the current rates and those expected in the near future. Mid- to Late Holocene relative sea-level changes in French Polynesia encompassing the last 6,000 years were reconstructed based on the coupling between absolute U/Th dating of in situ coral microatolls and their precise positioning via GPS RTK (Real Time Kinematic) measurements. The twelve studied islands represent ideal settings for accurate sea-level studies because: 1) they can be regarded as tectonically stable during the relevant period (slow subsidence), 2) they are located far from former ice sheets (far-field), 3) they are characterized by a low tidal amplitude, and 4) they cover a wide range of latitudes which produces significantly improved constraints on GIA (Glacial Isostatic Adjustment) model parameters. A sea-level rise of less than 1 m is recorded between 6 and 3-3.5 ka, and is followed by a gradual fall in sea level that started around 2.5 ka and persisted until the past few centuries. In addition, growth pattern analysis of coral microatolls allows the reconstruction of low-amplitude, high-frequency sea-level change on centennial to sub-decadal time scales. The reconstructed sea-level curve extends the Tahiti last deglacial sea-level curve [Deschamps et al., 2012, Nature, 483, 559-564], and is in good agreement with a geophysical model tuned to

  7. Leiomyosarcoma of the inferior vena cava level II involvement: curative resection and reconstruction of renal veins

    Directory of Open Access Journals (Sweden)

    Wang Quan

    2012-06-01

Full Text Available Abstract Leiomyosarcoma of the inferior vena cava (IVCL) is a rare retroperitoneal tumor. We report two patients with level II (middle level, renal veins to hepatic veins) IVCL who underwent en bloc resection with reconstruction of bilateral or left renal venous return using prosthetic grafts. In both cases the inferior vena cava was documented to be occluded preoperatively; therefore, radical resection of the tumor and/or right kidney was performed, and the distal end of the inferior vena cava was resected without caval reconstruction. Neither patient developed edema or acute renal failure postoperatively. After surgical resection, adjuvant radiation therapy was administered. The patients have been free of recurrence for 2 years and 3 months and for 9 months after surgery, respectively, indicating that complete surgical resection and radiotherapy contribute to better survival. Reconstruction of the inferior vena cava is not considered mandatory in level II IVCL if retroperitoneal venous collateral pathways have been established. In addition to the curative resection of IVCL, the renal vascular reconstruction minimized the risk of procedure-related acute renal failure and was more physiologically preferable. This concept is reflected in the treatment of the two patients reported here.

  8. An Integrative Bioinformatics Framework for Genome-scale Multiple Level Network Reconstruction of Rice

    Directory of Open Access Journals (Sweden)

    Liu Lili

    2013-06-01

Full Text Available Understanding how metabolic reactions translate the genome of an organism into its phenotype is a grand challenge in biology. Genome-wide association studies (GWAS) statistically connect genotypes to phenotypes, without any recourse to known molecular interactions, whereas a molecular mechanistic description ties gene function to phenotype through gene regulatory networks (GRNs), protein-protein interactions (PPIs) and molecular pathways. Integration of different regulatory information levels of an organism is expected to provide a good way for mapping genotypes to phenotypes. However, the lack of a curated metabolic model of rice is blocking the exploration of genome-scale multi-level network reconstruction. Here, we have merged GRNs, PPIs and genome-scale metabolic networks (GSMNs) approaches into a single framework for rice via omics regulatory information reconstruction and integration. Firstly, we reconstructed a genome-scale metabolic model, containing 4,462 function genes, 2,986 metabolites involved in 3,316 reactions, and compartmentalized into ten subcellular locations. Furthermore, 90,358 pairs of protein-protein interactions, 662,936 pairs of gene regulations and 1,763 microRNA-target interactions were integrated into the metabolic model. Finally, a database was developed for systematically storing and retrieving the genome-scale multi-level network of rice. This provides a reference for understanding the genotype-phenotype relationship of rice, and for analysis of its molecular regulatory network.

  9. Does the graft-tunnel friction influence knee joint kinematics and biomechanics after anterior cruciate ligament reconstruction? A finite element study.

    Science.gov (United States)

    Wan, Chao; Hao, Zhixiu

    2018-02-01

Graft tissues within bone tunnels remain mobile for a long time after anterior cruciate ligament (ACL) reconstruction. However, whether the graft-tunnel friction affects the finite element (FE) simulation of the ACL reconstruction is still unclear. Four friction coefficients (from 0 to 0.3) were simulated in the ACL-reconstructed joint model, as well as two loading levels of anterior tibial drawer. The graft-tunnel friction did not affect joint kinematics or the maximal principal strain of the graft. By contrast, both the relative graft-tunnel motion and the equivalent strain for the bone tunnels were altered, which corresponded to different processes of graft-tunnel integration and bone remodeling, respectively. This implies that the graft-tunnel friction should be defined properly when studying graft-tunnel integration or bone remodeling after ACL reconstruction using numerical simulation.

  10. MR-guided dynamic PET reconstruction with the kernel method and spectral temporal basis functions

    Science.gov (United States)

    Novosad, Philip; Reader, Andrew J.

    2016-06-01

    Recent advances in dynamic positron emission tomography (PET) reconstruction have demonstrated that it is possible to achieve markedly improved end-point kinetic parameter maps by incorporating a temporal model of the radiotracer directly into the reconstruction algorithm. In this work we have developed a highly constrained, fully dynamic PET reconstruction algorithm incorporating both spectral analysis temporal basis functions and spatial basis functions derived from the kernel method applied to a co-registered T1-weighted magnetic resonance (MR) image. The dynamic PET image is modelled as a linear combination of spatial and temporal basis functions, and a maximum likelihood estimate for the coefficients can be found using the expectation-maximization (EM) algorithm. Following reconstruction, kinetic fitting using any temporal model of interest can be applied. Based on a BrainWeb T1-weighted MR phantom, we performed a realistic dynamic [18F]FDG simulation study with two noise levels, and investigated the quantitative performance of the proposed reconstruction algorithm, comparing it with reconstructions incorporating either spectral analysis temporal basis functions alone or kernel spatial basis functions alone, as well as with conventional frame-independent reconstruction. Compared to the other reconstruction algorithms, the proposed algorithm achieved superior performance, offering a decrease in spatially averaged pixel-level root-mean-square-error on post-reconstruction kinetic parametric maps in the grey/white matter, as well as in the tumours when they were present on the co-registered MR image. When the tumours were not visible in the MR image, reconstruction with the proposed algorithm performed similarly to reconstruction with spectral temporal basis functions and was superior to both conventional frame-independent reconstruction and frame-independent reconstruction with kernel spatial basis functions. Furthermore, we demonstrate that a joint spectral
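The spatial half of the model — kernel basis functions derived from a co-registered MR image — is commonly built as a k-nearest-neighbour Gaussian similarity matrix over MR features. A 1D toy sketch of that construction (feature choice, k and bandwidth are illustrative; the paper's exact settings are not given in the abstract):

```python
import numpy as np

rng = np.random.default_rng(5)
mr = rng.normal(size=100)                 # toy per-voxel "MR" feature

k, sigma = 8, 0.5
n = mr.size
K = np.zeros((n, n))
for i in range(n):
    d2 = (mr - mr[i]) ** 2
    nn = np.argsort(d2)[:k]               # k most MR-similar voxels
    K[i, nn] = np.exp(-d2[nn] / (2 * sigma ** 2))
K /= K.sum(axis=1, keepdims=True)         # row-normalise

# Dynamic image model of the reconstruction: frame t is
#   x_t = K @ (C @ B[:, t]),
# with B the spectral temporal basis and C the coefficients estimated by EM.
```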

  11. Org.Lcsim: Event Reconstruction in Java

    International Nuclear Information System (INIS)

    Graf, Norman

    2011-01-01

    Maximizing the physics performance of detectors being designed for the International Linear Collider, while remaining sensitive to cost constraints, requires a powerful, efficient, and flexible simulation, reconstruction and analysis environment to study the capabilities of a large number of different detector designs. The preparation of Letters Of Intent for the International Linear Collider involved the detailed study of dozens of detector options, layouts and readout technologies; the final physics benchmarking studies required the reconstruction and analysis of hundreds of millions of events. We describe the Java-based software toolkit (org.lcsim) which was used for full event reconstruction and analysis. The components are fully modular and are available for tasks from digitization of tracking detector signals through to cluster finding, pattern recognition, track-fitting, calorimeter clustering, individual particle reconstruction, jet-finding, and analysis. The detector is defined by the same xml input files used for the detector response simulation, ensuring the simulation and reconstruction geometries are always commensurate by construction. We discuss the architecture as well as the performance.

  12. Direct parametric reconstruction in dynamic PET myocardial perfusion imaging: in vivo studies

    Science.gov (United States)

    Petibon, Yoann; Rakvongthai, Yothin; El Fakhri, Georges; Ouyang, Jinsong

    2017-05-01

side, which incorporates a quadratic penalty function. The parametric images were then calculated using voxel-wise weighted least-square fitting of the reconstructed myocardial PET TACs. For the direct method, parametric images were estimated directly from the dynamic PET sinograms using a maximum a posteriori (MAP) parametric reconstruction algorithm which optimizes an objective function comprised of the Poisson log-likelihood term, the kinetic model and a quadratic penalty function. Maximization of the objective function with respect to each set of parameters was achieved using a preconditioned conjugate gradient algorithm with a specifically developed pre-conditioner. The performance of the direct method was evaluated by comparing voxel- and segment-wise estimates of K1, the tracer transport rate (ml·min-1·ml-1), to those obtained using the indirect method applied to both OSEM and OSL-MAP dynamic reconstructions. The proposed direct reconstruction method produced K1 maps with visibly lower noise than the indirect method based on OSEM and OSL-MAP reconstructions. At normal count levels, the direct method was shown to outperform the indirect method based on OSL-MAP in the sense that at matched level of bias, reduced regional noise levels were obtained. At lower count levels, the direct method produced K1 estimates with significantly lower standard deviation across noise realizations than the indirect method based on OSL-MAP at matched bias level. In all cases, the direct method yielded lower noise and standard deviation than the indirect method based on OSEM. Overall, the proposed direct reconstruction offered a better bias-variance tradeoff than the indirect method applied to either OSEM or OSL-MAP. Direct parametric reconstruction as applied to in vivo dynamic PET MPI data is therefore a promising method for producing MBF maps with lower variance.
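The indirect step (voxel-wise weighted least-squares fitting of reconstructed TACs) can be sketched for a one-tissue compartment model, C_T(t) = K1 exp(-k2 t) convolved with the input C_p(t). K1 enters linearly, so only k2 needs to be scanned; the input function, weights and rate constants below are invented for illustration.

```python
import numpy as np

t = np.arange(0.0, 10.0, 0.1)            # minutes
cp = t * np.exp(-t)                      # toy arterial input function
dt = t[1] - t[0]

def model_tac(K1, k2):
    """One-tissue compartment TAC: K1 * exp(-k2 t) convolved with cp."""
    return K1 * np.convolve(np.exp(-k2 * t), cp)[: len(t)] * dt

# Simulate a noisy voxel TAC, then fit it back.
K1_true, k2_true = 0.8, 0.4
tac = model_tac(K1_true, k2_true)
tac_noisy = tac + np.random.default_rng(7).normal(0, 0.002, len(t))

w = np.ones_like(t)                      # frame weights (uniform here)
best = (np.inf, None, None)
for k2 in np.linspace(0.05, 1.0, 96):    # grid over the nonlinear parameter
    basis = model_tac(1.0, k2)
    K1 = np.sum(w * basis * tac_noisy) / np.sum(w * basis * basis)
    rss = np.sum(w * (tac_noisy - K1 * basis) ** 2)
    if rss < best[0]:
        best = (rss, K1, k2)
_, K1_hat, k2_hat = best
```

The direct method replaces this two-stage pipeline by embedding the same kinetic model inside the Poisson likelihood of the sinogram data, which is where its variance advantage comes from.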

  13. Direct parametric reconstruction in dynamic PET myocardial perfusion imaging: in-vivo studies

    Science.gov (United States)

    Petibon, Yoann; Rakvongthai, Yothin; Fakhri, Georges El; Ouyang, Jinsong

    2017-01-01

side, which incorporates a quadratic penalty function. The parametric images were then calculated using voxel-wise weighted least-square fitting of the reconstructed myocardial PET TACs. For the direct method, parametric images were estimated directly from the dynamic PET sinograms using a maximum a posteriori (MAP) parametric reconstruction algorithm which optimizes an objective function comprising the Poisson log-likelihood term, the kinetic model and a quadratic penalty function. Maximization of the objective function with respect to each set of parameters was achieved using a preconditioned conjugate gradient algorithm with a specifically developed pre-conditioner. The performance of the direct method was evaluated by comparing voxel- and segment-wise estimates of K1, the tracer transport rate (mL·min⁻¹·mL⁻¹), to those obtained using the indirect method applied to both OSEM and OSL-MAP dynamic reconstructions. The proposed direct reconstruction method produced K1 maps with visibly lower noise than the indirect method based on OSEM and OSL-MAP reconstructions. At normal count levels, the direct method was shown to outperform the indirect method based on OSL-MAP in the sense that, at a matched level of bias, reduced regional noise levels were obtained. At lower count levels, the direct method produced K1 estimates with significantly lower standard deviation across noise realizations than the indirect method based on OSL-MAP at a matched bias level. In all cases, the direct method yielded lower noise and standard deviation than the indirect method based on OSEM. Overall, the proposed direct reconstruction offered a better bias-variance tradeoff than the indirect method applied to either OSEM or OSL-MAP. Direct parametric reconstruction as applied to in-vivo dynamic PET MPI data is therefore a promising method for producing MBF maps with lower variance. PMID:28379843
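
    As a concrete illustration of the indirect method above, a voxel-wise weighted least-squares fit of a one-tissue-compartment model to a reconstructed TAC can be sketched as follows. This is a minimal sketch, not the authors' implementation: the kinetic model, the synthetic plasma input and the weighting scheme are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def one_tissue_tac(t, K1, k2, cp):
    """Tissue TAC C_T(t) = K1 * (cp convolved with exp(-k2*t)) on a uniform grid."""
    dt = t[1] - t[0]
    return K1 * dt * np.convolve(cp, np.exp(-k2 * t))[: len(t)]

def fit_k1(t, tac, cp, weights):
    """Weighted least-squares estimate of (K1, k2) for one voxel's TAC."""
    model = lambda t, K1, k2: one_tissue_tac(t, K1, k2, cp)
    popt, _ = curve_fit(model, t, tac, p0=[0.5, 0.5],
                        sigma=1.0 / np.sqrt(weights), absolute_sigma=False)
    return popt  # (K1 estimate, k2 estimate)
```

    Repeating `fit_k1` over every voxel's TAC yields the parametric K1 map of the indirect method.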

  14. Simultaneous reconstruction, segmentation, and edge enhancement of relatively piecewise continuous images with intensity-level information

    International Nuclear Information System (INIS)

    Liang, Z.; Jaszczak, R.; Coleman, R.; Johnson, V.

    1991-01-01

A multinomial image model is proposed which uses intensity-level information for reconstruction of contiguous image regions. The intensity-level information assumes that image intensities are relatively constant within contiguous regions over the image-pixel array and that intensity levels of these regions are determined either empirically or theoretically by information criteria. These conditions may be valid, for example, for cardiac blood-pool imaging, where the intensity levels (or radionuclide activities) of myocardium, blood-pool, and background regions are distinct and the activities within each region of muscle, blood, or background are relatively uniform. To test the model, a mathematical phantom over a 64×64 array was constructed. The phantom had three contiguous regions. Each region had a different intensity level. Measurements from the phantom were simulated using an emission-tomography geometry. Fifty projections were generated over 180 degrees, with 64 equally spaced parallel rays per projection. Projection data were randomized to contain Poisson noise. Image reconstructions were performed using an iterative maximum a posteriori probability procedure. The contiguous regions corresponding to the three intensity levels were automatically segmented. Simultaneously, the edges of the regions were sharpened. Noise in the reconstructed images was significantly suppressed. Convergence of the iterative procedure to the phantom was observed. Compared with maximum likelihood and filtered-backprojection approaches, the results obtained using the maximum a posteriori probability with the intensity-level information demonstrated qualitative and quantitative improvement in localizing the regions of varying intensities.

  15. Grain-size based sea-level reconstruction in the south Bohai Sea during the past 135 kyr

    Science.gov (United States)

    Yi, Liang; Chen, Yanping

    2013-04-01

Future anthropogenic sea-level rise and its impact on coastal regions is an important issue facing human civilizations. Due to the short nature of the instrumental record of sea-level change, development of proxies for sea-level change prior to the advent of instrumental records is essential to reconstruct long-term background sea-level changes on local, regional and global scales. Two of the most widely used approaches for reconstructing past sea-level changes are: (1) exploitation of dated geomorphologic features such as coastal sands (e.g. Mauz and Hassler, 2000), salt marsh (e.g. Madsen et al., 2007), terraces (e.g. Chappell et al., 1996), and other coastal sediments (e.g. Zong et al., 2003); and (2) sea-level transfer functions based on faunal assemblages such as testate amoebae (e.g. Charman et al., 2002), foraminifera (e.g. Chappell and Shackleton, 1986; Horton, 1997), and diatoms (e.g. Horton et al., 2006). While a variety of methods has been developed to reconstruct palaeo-changes in sea level, many regions, including the Bohai Sea, China, still lack detailed relative sea-level curves extending back to the Pleistocene (Yi et al., 2012). For example, coral terraces are absent in the Bohai Sea, and the poor preservation of faunal assemblages makes development of a transfer function for a relative sea-level reconstruction unfeasible. In contrast, frequent alternations between transgression and regression have presumably imprinted sea-level change on the grain size distribution of Bohai Sea sediments, which varies from medium silt to coarse sand during the late Quaternary (IOCAS, 1985). Advantages of grain-size-based relative sea-level transfer function approaches are that they require smaller sample sizes, allowing for replication, faster measurement and higher spatial or temporal resolution at a fraction of the cost of detailed micro-palaeontological analysis (Yi et al., 2012). Here, we employ numerical methods to partition sediment grain size using a combined database of

  16. Robust statistical reconstruction for charged particle tomography

    Science.gov (United States)

    Schultz, Larry Joe; Klimenko, Alexei Vasilievich; Fraser, Andrew Mcleod; Morris, Christopher; Orum, John Christopher; Borozdin, Konstantin N; Sossong, Michael James; Hengartner, Nicolas W

    2013-10-08

Systems and methods for charged particle detection including statistical reconstruction of object volume scattering density profiles from charged particle tomographic data to determine the probability distribution of charged particle scattering using a statistical multiple scattering model and determine a substantially maximum likelihood estimate of object volume scattering density using an expectation maximization (ML/EM) algorithm to reconstruct the object volume scattering density. The presence of and/or type of object occupying the volume of interest can be identified from the reconstructed volume scattering density profile. The charged particle tomographic data can be cosmic ray muon tomographic data from a muon tracker for scanning packages, containers, vehicles or cargo. The method can be implemented using a computer program which is executable on a computer.
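
    The ML/EM iteration mentioned in this record has a compact multiplicative form. A minimal dense-matrix sketch, assuming a generic non-negative system matrix A rather than the patent's multiple-scattering model:

```python
import numpy as np

def mlem(A, y, n_iter=500, eps=1e-12):
    """Maximum-likelihood EM for Poisson data y ~ Poisson(A @ x).

    Multiplicative update: x <- x * A^T(y / (A x)) / A^T 1.
    """
    x = np.ones(A.shape[1])
    sens = np.maximum(A.sum(axis=0), eps)   # sensitivity: back-projection of ones
    for _ in range(n_iter):
        proj = np.maximum(A @ x, eps)       # forward projection
        x *= (A.T @ (y / proj)) / sens      # multiplicative EM update
    return x
```

    The update preserves non-negativity automatically, which is why EM-type algorithms are the standard choice for count-limited tomographic data.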

  17. Online Reconstruction and Calibration with Feedback Loop in the ALICE High Level Trigger

    Directory of Open Access Journals (Sweden)

    Rohr David

    2016-01-01

at the Large Hadron Collider (LHC) at CERN. The High Level Trigger (HLT) is an online computing farm, which reconstructs events recorded by the ALICE detector in real-time. The most computing-intensive task is the reconstruction of the particle trajectories. The main tracking devices in ALICE are the Time Projection Chamber (TPC) and the Inner Tracking System (ITS). The HLT uses a fast GPU-accelerated algorithm for the TPC tracking based on the Cellular Automaton principle and the Kalman filter. ALICE employs gaseous subdetectors which are sensitive to environmental conditions such as ambient pressure and temperature, and the TPC is one of these. A precise reconstruction of particle trajectories requires the calibration of these detectors. As our first topic, we present some recent optimizations to our GPU-based TPC tracking using the new GPU models we employ for the ongoing and upcoming data taking period at LHC. We also show our new approach to fast ITS standalone tracking. As our second topic, we present improvements to the HLT for facilitating online reconstruction including a new flat data model and a new data flow chain. The calibration output is fed back to the reconstruction components of the HLT via a feedback loop. We conclude with an analysis of a first online calibration test under real conditions during the Pb-Pb run in November 2015, which was based on these new features.

  18. Reconstruction of multiple-pinhole micro-SPECT data using origin ensembles.

    Science.gov (United States)

    Lyon, Morgan C; Sitek, Arkadiusz; Metzler, Scott D; Moore, Stephen C

    2016-10-01

The authors are currently developing a dual-resolution multiple-pinhole microSPECT imaging system based on three large NaI(Tl) gamma cameras. Two multiple-pinhole tungsten collimator tubes will be used sequentially for whole-body "scout" imaging of a mouse, followed by high-resolution (hi-res) imaging of an organ of interest, such as the heart or brain. Ideally, the whole-body image will be reconstructed in real time such that data need only be acquired until the area of interest can be visualized well enough to determine positioning for the hi-res scan. The authors investigated the utility of the origin ensemble (OE) algorithm for online and offline reconstructions of the scout data. This algorithm operates directly in image space, and can provide estimates of image uncertainty, along with reconstructed images. Techniques for accelerating the OE reconstruction were also introduced and evaluated. System matrices were calculated for our 39-pinhole scout collimator design. SPECT projections were simulated for a range of count levels using the MOBY digital mouse phantom. Simulated data were used for a comparison of OE and maximum-likelihood expectation maximization (MLEM) reconstructions. The OE algorithm convergence was evaluated by calculating the total-image entropy and by measuring the counts in a volume-of-interest (VOI) containing the heart. Total-image entropy was also calculated for simulated MOBY data reconstructed using OE with various levels of parallelization. For VOI measurements in the heart, liver, bladder, and soft-tissue, MLEM and OE reconstructed images agreed within 6%. Image entropy converged after ∼2000 iterations of OE, while the counts in the heart converged earlier at ∼200 iterations of OE. An accelerated version of OE completed 1000 iterations in <9 min for a 6.8M count data set, with some loss of image entropy performance, whereas the same dataset required ∼79 min to complete 1000 iterations of conventional OE. A combination of the two

  19. Evaluation of analytical reconstruction with a new gap-filling method in comparison to iterative reconstruction in [11C]-raclopride PET studies

    International Nuclear Information System (INIS)

    Tuna, U.; Johansson, J.; Ruotsalainen, U.

    2014-01-01

The aim of the study was (1) to evaluate the reconstruction strategies with dynamic [11C]-raclopride human positron emission tomography (PET) studies acquired from ECAT high-resolution research tomograph (HRRT) scanner and (2) to justify for the selected gap-filling method for analytical reconstruction with simulated phantom data. A new transradial bicubic interpolation method has been implemented to enable faster analytical 3D-reprojection (3DRP) reconstructions for the ECAT HRRT PET scanner data. The transradial bicubic interpolation method was compared to the other gap-filling methods visually and quantitatively using the numerical Shepp-Logan phantom. The performance of the analytical 3DRP reconstruction method with this new gap-filling method was evaluated in comparison with the iterative statistical methods: ordinary Poisson ordered subsets expectation maximization (OPOSEM) and resolution modeled OPOSEM methods. The image reconstruction strategies were evaluated using human data at different count statistics and consequently at different noise levels. In the assessments, 14 [11C]-raclopride dynamic PET studies (test-retest studies of 7 healthy subjects) acquired from the HRRT PET scanner were used. Besides the visual comparisons of the methods, we performed regional quantitative evaluations over the cerebellum, caudate and putamen structures. We compared the regional time-activity curves (TACs), areas under the TACs and binding potential (BPND) values. The results showed that the new gap-filling method preserves the linearity of the 3DRP method. Results with the 3DRP after gap-filling method exhibited hardly any dependency on the count statistics (noise levels) in the sinograms while we observed changes in the quantitative results with the EM-based methods for different noise contamination in the data. With this study, we showed that 3DRP with transradial bicubic gap-filling method is feasible for the reconstruction of high-resolution PET data with

  20. Reconstruction based finger-knuckle-print verification with score level adaptive binary fusion.

    Science.gov (United States)

    Gao, Guangwei; Zhang, Lei; Yang, Jian; Zhang, Lin; Zhang, David

    2013-12-01

Recently, a new biometrics identifier, namely finger knuckle print (FKP), has been proposed for personal authentication with very interesting results. One of the advantages of FKP verification lies in its user friendliness in data collection. However, the user flexibility in positioning fingers also leads to a certain degree of pose variations in the collected query FKP images. The widely used Gabor filtering based competitive coding scheme is sensitive to such variations, resulting in many false rejections. We propose to alleviate this problem by reconstructing the query sample with a dictionary learned from the template samples in the gallery set. The reconstructed FKP image can greatly reduce the enlarged matching distance caused by finger pose variations; however, both the intra-class and inter-class distances will be reduced. We then propose a score level adaptive binary fusion rule to adaptively fuse the matching distances before and after reconstruction, aiming to reduce the false rejections without greatly increasing the false acceptances. Experimental results on the benchmark PolyU FKP database show that the proposed method significantly improves the FKP verification accuracy.
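
    The adaptive binary fusion of pre- and post-reconstruction matching distances can be illustrated with a toy rule; the threshold and the decision logic below are assumptions for illustration, not the rule proposed in the paper:

```python
def fuse_distances(d_orig, d_recon, tau):
    """Toy score-level binary fusion of two matching distances.

    If reconstruction shrinks the distance below tau, treat the pair as a
    genuine match whose pose variation was corrected and keep the smaller
    distance; otherwise keep the larger distance, so that impostor pairs,
    whose distances also shrink after reconstruction, are not accepted.
    """
    return min(d_orig, d_recon) if d_recon < tau else max(d_orig, d_recon)
```

    The binary switch is what lets the fused score recover genuine matches hurt by pose variation while leaving the impostor distance distribution largely intact.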

  1. Evaluating low pass filters on SPECT reconstructed cardiac orientation estimation

    Science.gov (United States)

    Dwivedi, Shekhar

    2009-02-01

Low pass filters can affect the quality of clinical SPECT images by smoothing. Appropriate filter and parameter selection leads to optimum smoothing, which enables better quantification followed by correct diagnosis and accurate interpretation by the physician. This study aims at evaluating the low pass filters on SPECT reconstruction algorithms. Criteria for evaluating the filters are estimating the SPECT reconstructed cardiac azimuth and elevation angle. Low pass filters studied are Butterworth, Gaussian, Hamming, Hanning and Parzen. Experiments are conducted using three reconstruction algorithms, FBP (filtered back projection), MLEM (maximum likelihood expectation maximization) and OSEM (ordered subsets expectation maximization), on four gated cardiac patient projections (two patients with stress and rest projections). Each filter is applied with varying cutoff and order for each reconstruction algorithm (only Butterworth used for MLEM and OSEM). The azimuth and elevation angles are calculated from the reconstructed volume and the variation observed in the angles with varying filter parameters is reported. Our results demonstrate that the behavior of the Hamming, Hanning and Parzen filters (used with FBP) with varying cutoff is similar for all the datasets. The Butterworth filter (cutoff > 0.4) behaves in a similar fashion for all the datasets using all the algorithms, whereas with OSEM for a cutoff < 0.4 it fails to generate cardiac orientation due to oversmoothing, and gives an unstable response with FBP and MLEM. This study on evaluating the effect of low pass filter cutoff and order on cardiac orientation using three different reconstruction algorithms provides an interesting insight into optimal selection of filter parameters.
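
    Of the filters studied, the Butterworth is the one controlled by both a cutoff and an order parameter. A sketch of its frequency response and its application to a projection row; expressing frequency in cycles per bin is an assumption of this sketch:

```python
import numpy as np

def butterworth_lowpass(n_bins, cutoff, order):
    """Frequency response |H(f)| = 1 / sqrt(1 + (f/cutoff)^(2*order))."""
    f = np.fft.rfftfreq(n_bins)               # 0 .. 0.5 cycles per bin
    return 1.0 / np.sqrt(1.0 + (f / cutoff) ** (2 * order))

def smooth_projection(row, cutoff=0.4, order=5):
    """Apply the low-pass filter to one projection row via the FFT."""
    H = butterworth_lowpass(len(row), cutoff, order)
    return np.fft.irfft(np.fft.rfft(row) * H, n=len(row))
```

    Lowering the cutoff attenuates more of the spectrum and smooths more aggressively, which is the oversmoothing regime the study observes below a cutoff of 0.4.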

  2. Reconstruction of thin electromagnetic inclusions by a level-set method

    International Nuclear Information System (INIS)

    Park, Won-Kwang; Lesselier, Dominique

    2009-01-01

In this contribution, we consider a technique of electromagnetic imaging (at a single, non-zero frequency) which uses the level-set evolution method for reconstructing a thin inclusion (possibly made of disconnected parts) with either dielectric or magnetic contrast with respect to the embedding homogeneous medium. Emphasis is on the proof of the concept, the scattering problem at hand being so far based on a two-dimensional scalar model. To do so, two level-set functions are employed; the first one describes location and shape, and the other one describes connectivity and length. Speeds of evolution of the level-set functions are calculated via the introduction of Fréchet derivatives of a least-square cost functional. Several numerical experiments on both noiseless and noisy data illustrate how the proposed method behaves.

  3. Ray tracing reconstruction investigation for C-arm tomosynthesis

    Science.gov (United States)

    Malalla, Nuhad A. Y.; Chen, Ying

    2016-04-01

C-arm tomosynthesis is a three dimensional imaging technique. Both the x-ray source and the detector are mounted on a C-arm wheeled structure to provide a wide variety of movement around the object. In this paper, C-arm tomosynthesis was introduced to provide three dimensional information over a limited view angle (less than 180°) to reduce radiation exposure and examination time. Reconstruction algorithms based on the ray tracing method, such as ray tracing back projection (BP), simultaneous algebraic reconstruction technique (SART) and maximum likelihood expectation maximization (MLEM), were developed for C-arm tomosynthesis. Projection images of a simulated spherical object were generated with a virtual geometric configuration with a total view angle of 40 degrees. This study demonstrated the sharpness of in-plane reconstructed structures and the effectiveness of removing out-of-plane blur for each reconstruction algorithm. Results showed the ability of ray tracing based reconstruction algorithms to provide three dimensional information with limited angle C-arm tomosynthesis.
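
    Among the algorithms listed, SART has the simplest additive update. A minimal dense-matrix sketch, with the system matrix standing in for the ray-tracing forward projector:

```python
import numpy as np

def sart(A, y, n_iter=200, relax=1.0, eps=1e-12):
    """Simultaneous algebraic reconstruction technique.

    Additive update with row- and column-sum normalisation:
    x <- x + relax * A^T((y - A x) / row_sums) / col_sums.
    """
    x = np.zeros(A.shape[1])
    row_sums = np.maximum(A.sum(axis=1), eps)
    col_sums = np.maximum(A.sum(axis=0), eps)
    for _ in range(n_iter):
        x += relax * (A.T @ ((y - A @ x) / row_sums)) / col_sums
    return x
```

    The relaxation factor (convergent for values between 0 and 2) trades iteration count against noise amplification, which matters for the low-dose, limited-angle data considered here.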

  4. Fisher's method of scoring in statistical image reconstruction: comparison of Jacobi and Gauss-Seidel iterative schemes.

    Science.gov (United States)

    Hudson, H M; Ma, J; Green, P

    1994-01-01

    Many algorithms for medical image reconstruction adopt versions of the expectation-maximization (EM) algorithm. In this approach, parameter estimates are obtained which maximize a complete data likelihood or penalized likelihood, in each iteration. Implicitly (and sometimes explicitly) penalized algorithms require smoothing of the current reconstruction in the image domain as part of their iteration scheme. In this paper, we discuss alternatives to EM which adapt Fisher's method of scoring (FS) and other methods for direct maximization of the incomplete data likelihood. Jacobi and Gauss-Seidel methods for non-linear optimization provide efficient algorithms applying FS in tomography. One approach uses smoothed projection data in its iterations. We investigate the convergence of Jacobi and Gauss-Seidel algorithms with clinical tomographic projection data.
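
    A Jacobi-style (simultaneous-update) Fisher scoring step for the Poisson likelihood can be sketched as follows; the Gauss-Seidel variant would instead sweep the coordinates sequentially. The small dense system matrix is an illustrative stand-in for the tomographic projector:

```python
import numpy as np

def fisher_scoring(A, y, n_iter=25, eps=1e-9):
    """Fisher scoring for x >= 0 with y ~ Poisson(A @ x).

    Score: A^T(y/ybar - 1); expected information: A^T diag(1/ybar) A.
    Each iteration takes a Newton-type step using the expected information.
    """
    x = np.ones(A.shape[1])
    for _ in range(n_iter):
        ybar = np.maximum(A @ x, eps)             # mean of the Poisson data
        score = A.T @ (y / ybar - 1.0)            # gradient of log-likelihood
        info = A.T @ (A / ybar[:, None])          # expected Fisher information
        x = np.maximum(x + np.linalg.solve(info, score), eps)
    return x
```

    Unlike the multiplicative EM update, the scoring step solves a linear system per iteration, which is where the Jacobi versus Gauss-Seidel distinction discussed in the record enters.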

  5. A High-Resolution Reconstruction of Late-Holocene Relative Sea Level in Rhode Island, USA

    Science.gov (United States)

    Stearns, R. B.; Engelhart, S. E.; Kemp, A.; Cahill, N.; Halavik, B. T.; Corbett, D. R.; Brain, M.; Hill, T. D.

    2017-12-01

Studies on the US Atlantic and Gulf coasts have utilized salt-marsh peats and the macro- and microfossils preserved within them to reconstruct high-resolution records of relative sea level (RSL). We followed this approach to investigate spatial and temporal RSL variability in southern New England, USA, by reconstructing 3,300 years of RSL change in lower Narragansett Bay, Rhode Island. After reconnaissance of lower Narragansett Bay salt marshes, we recovered a 3.4 m core at Fox Hill Marsh on Conanicut Island. We enumerated foraminiferal assemblages at 3cm intervals throughout the length of the core and we assessed trends in δ13C at 5 cm resolution. We developed a composite chronology (average resolution of ±50 years for a 1 cm slice) using 30 AMS radiocarbon dates and historical chronological markers of known age (137Cs, heavy metals, Pb isotopes, pollen). We assessed core compaction (mechanical compression) by collecting compaction-free basal-peat samples and using a published decompaction model. We employed fossil foraminifera and bulk sediment δ13C to estimate paleomarsh elevation using a Bayesian transfer function trained by a previously-published regional modern foraminiferal dataset. We combined the proxy RSL reconstruction and local tide-gauge measurements from Newport, Rhode Island (1931 CE to present) and estimated past rates of RSL change using an Errors-in-Variables Integrated Gaussian Process (EIV-IGP) model. Both basal peats and the decompaction model suggest that our RSL record is not significantly compacted. RSL rose from -3.9 m at 1250 BCE reaching -0.4 m at 1850 CE (1 mm/yr). We removed a Glacial Isostatic Adjustment (GIA) contribution of 0.9 mm/yr based on a local GPS site to facilitate comparison to regional records. The detrended sea-level reconstruction shows multiple departures from stable sea level (0 mm/yr) over the last 3,300 years and agrees with prior reconstructions from the US Atlantic coast showing evidence for sea-level changes that

  6. Optimized image acquisition for breast tomosynthesis in projection and reconstruction space

    International Nuclear Information System (INIS)

    Chawla, Amarpreet S.; Lo, Joseph Y.; Baker, Jay A.; Samei, Ehsan

    2009-01-01

    Breast tomosynthesis has been an exciting new development in the field of breast imaging. While the diagnostic improvement via tomosynthesis is notable, the full potential of tomosynthesis has not yet been realized. This may be attributed to the dependency of the diagnostic quality of tomosynthesis on multiple variables, each of which needs to be optimized. Those include dose, number of angular projections, and the total angular span of those projections. In this study, the authors investigated the effects of these acquisition parameters on the overall diagnostic image quality of breast tomosynthesis in both the projection and reconstruction space. Five mastectomy specimens were imaged using a prototype tomosynthesis system. 25 angular projections of each specimen were acquired at 6.2 times typical single-view clinical dose level. Images at lower dose levels were then simulated using a noise modification routine. Each projection image was supplemented with 84 simulated 3 mm 3D lesions embedded at the center of 84 nonoverlapping ROIs. The projection images were then reconstructed using a filtered backprojection algorithm at different combinations of acquisition parameters to investigate which of the many possible combinations maximizes the performance. Performance was evaluated in terms of a Laguerre-Gauss channelized Hotelling observer model-based measure of lesion detectability. The analysis was also performed without reconstruction by combining the model results from projection images using Bayesian decision fusion algorithm. The effect of acquisition parameters on projection images and reconstructed slices were then compared to derive an optimization rule for tomosynthesis. The results indicated that projection images yield comparable but higher performance than reconstructed images. Both modes, however, offered similar trends: Performance improved with an increase in the total acquisition dose level and the angular span. 
Using a constant dose level and angular

  7. Image reconstruction using three-dimensional compound Gauss-Markov random field in emission computed tomography

    International Nuclear Information System (INIS)

    Watanabe, Shuichi; Kudo, Hiroyuki; Saito, Tsuneo

    1993-01-01

    In this paper, we propose a new reconstruction algorithm based on MAP (maximum a posteriori probability) estimation principle for emission tomography. To improve noise suppression properties of the conventional ML-EM (maximum likelihood expectation maximization) algorithm, direct three-dimensional reconstruction that utilizes intensity correlations between adjacent transaxial slices is introduced. Moreover, to avoid oversmoothing of edges, a priori knowledge of RI (radioisotope) distribution is represented by using a doubly-stochastic image model called the compound Gauss-Markov random field. The a posteriori probability is maximized by using the iterative GEM (generalized EM) algorithm. Computer simulation results are shown to demonstrate validity of the proposed algorithm. (author)

  8. Accelerated median root prior reconstruction for pinhole single-photon emission tomography (SPET)

    Energy Technology Data Exchange (ETDEWEB)

    Sohlberg, Antti [Department of Clinical Physiology and Nuclear Medicine, Kuopio University Hospital, PO Box 1777 FIN-70211, Kuopio (Finland); Ruotsalainen, Ulla [Institute of Signal Processing, DMI, Tampere University of Technology, PO Box 553 FIN-33101, Tampere (Finland); Watabe, Hiroshi [National Cardiovascular Center Research Institute, 5-7-1 Fujisihro-dai, Suita City, Osaka 565-8565 (Japan); Iida, Hidehiro [National Cardiovascular Center Research Institute, 5-7-1 Fujisihro-dai, Suita City, Osaka 565-8565 (Japan); Kuikka, Jyrki T [Department of Clinical Physiology and Nuclear Medicine, Kuopio University Hospital, PO Box 1777 FIN-70211, Kuopio (Finland)

    2003-07-07

    Pinhole collimation can be used to improve spatial resolution in SPET. However, the resolution improvement is achieved at the cost of reduced sensitivity, which leads to projection images with poor statistics. Images reconstructed from these projections using the maximum likelihood expectation maximization (ML-EM) algorithms, which have been used to reduce the artefacts generated by the filtered backprojection (FBP) based reconstruction, suffer from noise/bias trade-off: noise contaminates the images at high iteration numbers, whereas early abortion of the algorithm produces images that are excessively smooth and biased towards the initial estimate of the algorithm. To limit the noise accumulation we propose the use of the pinhole median root prior (PH-MRP) reconstruction algorithm. MRP is a Bayesian reconstruction method that has already been used in PET imaging and shown to possess good noise reduction and edge preservation properties. In this study the PH-MRP algorithm was accelerated with the ordered subsets (OS) procedure and compared to the FBP, OS-EM and conventional Bayesian reconstruction methods in terms of noise reduction, quantitative accuracy, edge preservation and visual quality. The results showed that the accelerated PH-MRP algorithm was very robust. It provided visually pleasing images with lower noise level than the FBP or OS-EM and with smaller bias and sharper edges than the conventional Bayesian methods.
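
    The ordered subsets (OS) acceleration used here applies one EM-style update per subset of the projections. A minimal sketch with a dense system matrix; selecting subsets by interleaving projection rows is an assumption of this sketch:

```python
import numpy as np

def osem(A, y, n_subsets=2, n_iter=100, eps=1e-12):
    """Ordered-subsets EM: one ML-EM-style update per projection subset."""
    x = np.ones(A.shape[1])
    subsets = [np.arange(s, A.shape[0], n_subsets) for s in range(n_subsets)]
    for _ in range(n_iter):
        for idx in subsets:                   # interleaved projection rows
            As, ys = A[idx], y[idx]
            proj = np.maximum(As @ x, eps)    # forward-project current estimate
            x *= (As.T @ (ys / proj)) / np.maximum(As.sum(axis=0), eps)
    return x
```

    Each pass over all subsets does roughly the work of one ML-EM iteration but advances the estimate several times, which is the source of the acceleration exploited by the PH-MRP algorithm.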

  9. Statistical iterative reconstruction for streak artefact reduction when using multidetector CT to image the dento-alveolar structures.

    Science.gov (United States)

    Dong, J; Hayakawa, Y; Kober, C

    2014-01-01

When metallic prosthetic appliances and dental fillings exist in the oral cavity, the appearance of metal-induced streak artefacts is unavoidable in CT images. The aim of this study was to develop a method for artefact reduction using the statistical reconstruction on multidetector row CT images. Adjacent CT images often depict similar anatomical structures. Therefore, reconstructed images with weak artefacts were attempted using projection data of an artefact-free image in a neighbouring thin slice. Images with moderate and strong artefacts were continuously processed in sequence by successive iterative restoration where the projection data were generated from the adjacent reconstructed slice. First, the basic maximum likelihood-expectation maximization algorithm was applied. Next, the ordered subset-expectation maximization algorithm was examined. Alternatively, a small region of interest setting was designated. Finally, the general-purpose graphics processing unit machine was applied in both situations. The algorithms reduced the metal-induced streak artefacts on multidetector row CT images when the sequential processing method was applied. The ordered subset-expectation maximization and small region of interest reduced the processing duration without apparent detriments. A general-purpose graphics processing unit delivered high performance. A statistical reconstruction method was applied for the streak artefact reduction. The alternative algorithms applied were effective. Both software and hardware tools, such as ordered subset-expectation maximization, small region of interest and general-purpose graphics processing unit, achieved fast artefact correction.

  10. Developing maximal neuromuscular power: part 2 - training considerations for improving maximal power production.

    Science.gov (United States)

    Cormie, Prue; McGuigan, Michael R; Newton, Robert U

    2011-02-01

    This series of reviews focuses on the most important neuromuscular function in many sport performances: the ability to generate maximal muscular power. Part 1, published in an earlier issue of Sports Medicine, focused on the factors that affect maximal power production while part 2 explores the practical application of these findings by reviewing the scientific literature relevant to the development of training programmes that most effectively enhance maximal power production. The ability to generate maximal power during complex motor skills is of paramount importance to successful athletic performance across many sports. A crucial issue faced by scientists and coaches is the development of effective and efficient training programmes that improve maximal power production in dynamic, multi-joint movements. Such training is referred to as 'power training' for the purposes of this review. Although further research is required in order to gain a deeper understanding of the optimal training techniques for maximizing power in complex, sports-specific movements and the precise mechanisms underlying adaptation, several key conclusions can be drawn from this review. First, a fundamental relationship exists between strength and power, which dictates that an individual cannot possess a high level of power without first being relatively strong. Thus, enhancing and maintaining maximal strength is essential when considering the long-term development of power. Second, consideration of movement pattern, load and velocity specificity is essential when designing power training programmes. Ballistic, plyometric and weightlifting exercises can be used effectively as primary exercises within a power training programme that enhances maximal power. The loads applied to these exercises will depend on the specific requirements of each particular sport and the type of movement being trained. 
The use of ballistic exercises with loads ranging from 0% to 50% of one-repetition maximum (1RM) and

  11. Bayesian image reconstruction for emission tomography based on median root prior

    International Nuclear Information System (INIS)

    Alenius, S.

    1997-01-01

    The aim of the present study was to investigate a new type of Bayesian one-step late reconstruction method which utilizes a median root prior (MRP). The method favours images which have locally monotonous radioactivity concentrations. The new reconstruction algorithm was applied to ideal simulated data, phantom data and some patient examinations with PET. The same projection data were reconstructed with filtered back-projection (FBP) and maximum likelihood-expectation maximization (ML-EM) methods for comparison. The MRP method provided good-quality images with a similar resolution to the FBP method with a ramp filter, and at the same time the noise properties were as good as with Hann-filtered FBP images. The typical artefacts seen in FBP reconstructed images outside of the object were completely removed, as was the grainy noise inside the object. Quantitatively, the resulting average regional radioactivity concentrations in a large region of interest in images produced by the MRP method corresponded to the FBP and ML-EM results, but at the pixel-by-pixel level the MRP method proved to be the most accurate of the tested methods. In contrast to other iterative reconstruction methods, e.g. ML-EM, the MRP method was sensitive neither to the number of iterations nor to the adjustment of reconstruction parameters. Only the Bayesian parameter β had to be set. The proposed MRP method is much simpler to calculate than the methods described previously, both with regard to the parameter settings and in terms of general use. The new MRP reconstruction method was shown to produce high-quality quantitative emission images with only one parameter setting in addition to the number of iterations. (orig.)
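
    The one-step-late MRP update described here can be sketched in a few lines (a 1-D toy illustration assuming the standard MRP penalty derivative β(λ − med(λ))/med(λ); the identity system matrix and parameter values are illustrative assumptions, not the paper's setup):

    ```python
    import numpy as np
    from scipy.ndimage import median_filter

    def mrp_osl_em(A, y, beta=0.3, n_iters=100, size=3):
        """One-step-late EM with a median root prior (1-D toy sketch).

        A : (n_bins, n_pixels) system matrix, y : measured counts.
        The prior favours locally monotonous images: its derivative
        beta * (x - med(x)) / med(x) vanishes wherever x is monotone
        inside the median window.
        """
        x = np.ones(A.shape[1])              # flat initial image
        sens = A.sum(axis=0)                 # sensitivity: sum_i a_ij
        for _ in range(n_iters):
            med = np.maximum(median_filter(x, size=size), 1e-12)
            penalty = beta * (x - med) / med         # MRP derivative
            proj = np.maximum(A @ x, 1e-12)          # forward projection
            x *= (A.T @ (y / proj)) / np.maximum(sens + penalty, 1e-12)
        return x

    # Locally monotonous "measurements" with A = identity are a fixed point:
    y = np.arange(1.0, 6.0)
    x_hat = mrp_osl_em(np.eye(5), y)
    ```

    A monotone signal passes through the median filter unchanged, so the penalty term is zero and the ML solution is preserved; an isolated noise spike, by contrast, is pulled towards the local median.
    
    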

  12. Positron emission tomographic images and expectation maximization: A VLSI architecture for multiple iterations per second

    International Nuclear Information System (INIS)

    Jones, W.F.; Byars, L.G.; Casey, M.E.

    1988-01-01

    A digital electronic architecture for parallel processing of the expectation maximization (EM) algorithm for positron emission tomography (PET) image reconstruction is proposed. Rapid (0.2 second) EM iterations on high resolution (256 x 256) images are supported. Arrays of two very large scale integration (VLSI) chips perform forward and back projection calculations. A description of the architecture is given, including data flow and partitioning relevant to EM and parallel processing. EM images shown are produced with software simulating the proposed hardware reconstruction algorithm. Projected cost of the system is estimated to be small in comparison to the cost of current PET scanners.
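
    The ML-EM iteration that such hardware parallelizes is compact; a toy NumPy sketch with a hypothetical 3 × 2 system matrix (illustrating the forward/back projection structure, not the VLSI data flow):

    ```python
    import numpy as np

    def mlem(A, y, n_iters=200):
        """Maximum likelihood EM for emission tomography.

        A : (n_detectors, n_pixels) system matrix (a_ij >= 0)
        y : (n_detectors,) measured counts
        Update: x_j <- x_j * sum_i a_ij * y_i / (Ax)_i / sum_i a_ij
        """
        x = np.ones(A.shape[1])          # flat initial image
        sens = A.sum(axis=0)             # sensitivity image: sum_i a_ij
        for _ in range(n_iters):
            proj = A @ x                 # forward projection
            ratio = y / np.maximum(proj, 1e-12)
            x *= (A.T @ ratio) / np.maximum(sens, 1e-12)   # back projection
        return x

    # Toy example: 2 pixels seen by 3 detector bins, noise-free data
    A = np.array([[1.0, 0.0],
                  [0.0, 1.0],
                  [0.5, 0.5]])
    x_true = np.array([4.0, 2.0])
    x_hat = mlem(A, A @ x_true)
    ```

    The forward and back projections (the two matrix products) dominate the cost, which is why they are the natural target for dedicated parallel hardware.
    
    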

  13. Statistical reconstruction for cosmic ray muon tomography.

    Science.gov (United States)

    Schultz, Larry J; Blanpied, Gary S; Borozdin, Konstantin N; Fraser, Andrew M; Hengartner, Nicolas W; Klimenko, Alexei V; Morris, Christopher L; Orum, Chris; Sossong, Michael J

    2007-08-01

    Highly penetrating cosmic ray muons constantly shower the earth at a rate of about 1 muon per cm² per minute. We have developed a technique which exploits the multiple Coulomb scattering of these particles to perform nondestructive inspection without the use of artificial radiation. In prior work [1]-[3], we have described heuristic methods for processing muon data to create reconstructed images. In this paper, we present a maximum likelihood/expectation maximization tomographic reconstruction algorithm designed for the technique. This algorithm borrows much from techniques used in medical imaging, particularly emission tomography, but the statistics of muon scattering dictate differences. We describe the statistical model for multiple scattering, derive the reconstruction algorithm, and present simulated examples. We also propose methods to improve the robustness of the algorithm to experimental errors and events departing from the statistical model.

  14. A combined reconstruction-classification method for diffuse optical tomography

    Energy Technology Data Exchange (ETDEWEB)

    Hiltunen, P [Department of Biomedical Engineering and Computational Science, Helsinki University of Technology, PO Box 3310, FI-02015 TKK (Finland); Prince, S J D; Arridge, S [Department of Computer Science, University College London, Gower Street London, WC1E 6B (United Kingdom)], E-mail: petri.hiltunen@tkk.fi, E-mail: s.prince@cs.ucl.ac.uk, E-mail: s.arridge@cs.ucl.ac.uk

    2009-11-07

    We present a combined classification and reconstruction algorithm for diffuse optical tomography (DOT). DOT is a nonlinear ill-posed inverse problem. Therefore, some regularization is needed. We present a mixture of Gaussians prior, which regularizes the DOT reconstruction step. During each iteration, the parameters of a mixture model are estimated. These associate each reconstructed pixel with one of several classes based on the current estimate of the optical parameters. This classification is exploited to form a new prior distribution to regularize the reconstruction step and update the optical parameters. The algorithm can be described as an iteration between an optimization scheme with zeroth-order variable mean and variance Tikhonov regularization and an expectation-maximization scheme for estimation of the model parameters. We describe the algorithm in a general Bayesian framework. Results from simulated test cases and phantom measurements show that the algorithm enhances the contrast of the reconstructed images with good spatial accuracy. The probabilistic classifications of each image contain only a few misclassified pixels.
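
    The alternation described above can be sketched for a 1-D toy problem (a simplified sketch, not the authors' DOT implementation: `H` stands in for a generic linear forward operator, and the variable prior mean is the responsibility-weighted class mean):

    ```python
    import numpy as np

    def gauss(x, mu, var):
        return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

    def reconstruct_classify(H, y, K=2, alpha=1.0, n_outer=20, n_em=10):
        """Alternate a Tikhonov-regularized reconstruction step with EM
        for a K-class Gaussian mixture prior on the pixel values."""
        n = H.shape[1]
        x = np.linalg.lstsq(H, y, rcond=None)[0]          # initial estimate
        # initialise the mixture from quantiles of the initial image
        mu = np.quantile(x, np.linspace(0.1, 0.9, K))
        var = np.full(K, x.var() / K + 1e-6)
        pi = np.full(K, 1.0 / K)
        for _ in range(n_outer):
            # EM: fit the mixture to the current pixel values
            for _ in range(n_em):
                resp = pi * gauss(x[:, None], mu, var)            # (n, K)
                resp /= np.maximum(resp.sum(axis=1, keepdims=True), 1e-300)
                Nk = resp.sum(axis=0) + 1e-12
                mu = (resp * x[:, None]).sum(axis=0) / Nk
                var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / Nk + 1e-8
                pi = Nk / n
            # classification-driven prior mean for each pixel
            m = resp @ mu
            # Tikhonov-regularized update with variable mean m
            x = np.linalg.solve(H.T @ H + alpha * np.eye(n), H.T @ y + alpha * m)
        labels = resp.argmax(axis=1)
        return x, labels, mu
    ```

    Each outer iteration sharpens the image towards the current class means while the mixture parameters are re-estimated from the sharpened image, mirroring the reconstruction-classification loop in the abstract.
    
    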

  15. Maximal Bell's inequality violation for non-maximal entanglement

    International Nuclear Information System (INIS)

    Kobayashi, M.; Khanna, F.; Mann, A.; Revzen, M.; Santana, A.

    2004-01-01

    Bell's inequality violation (BIQV) for correlations of polarization is studied for a product state of two two-mode squeezed vacuum (TMSV) states. The violation allowed is shown to attain its maximal limit for all values of the squeezing parameter, ζ. We show via an explicit example that a state whose entanglement is not maximal allows maximal BIQV. The Wigner function of the state is non-negative and the average value of either polarization is nil

  16. Comparative polygenic analysis of maximal ethanol accumulation capacity and tolerance to high ethanol levels of cell proliferation in yeast.

    Directory of Open Access Journals (Sweden)

    Thiago M Pais

    2013-06-01

    Full Text Available The yeast Saccharomyces cerevisiae is able to accumulate ≥17% ethanol (v/v) by fermentation in the absence of cell proliferation. The genetic basis of this unique capacity is unknown. Up to now, all research has focused on tolerance of yeast cell proliferation to high ethanol levels. Comparison of maximal ethanol accumulation capacity and ethanol tolerance of cell proliferation in 68 yeast strains showed a poor correlation, but higher ethanol tolerance of cell proliferation clearly increased the likelihood of superior maximal ethanol accumulation capacity. We have applied pooled-segregant whole-genome sequence analysis to identify the polygenic basis of these two complex traits using segregants from a cross of a haploid derivative of the sake strain CBS1585 and the lab strain BY. From a total of 301 segregants, 22 superior segregants accumulating ≥17% ethanol in small-scale fermentations and 32 superior segregants growing in the presence of 18% ethanol were separately pooled and sequenced. Plotting SNP variant frequency against chromosomal position revealed eleven and eight Quantitative Trait Loci (QTLs) for the two traits, respectively, and showed that the genetic basis of the two traits is partially different. Fine-mapping and Reciprocal Hemizygosity Analysis identified ADE1, URA3, and KIN3, encoding a protein kinase involved in DNA damage repair, as specific causative genes for maximal ethanol accumulation capacity. These genes, as well as the previously identified MKT1 gene, were not linked in this genetic background to tolerance of cell proliferation to high ethanol levels. The superior KIN3 allele contained two SNPs, which are absent in all yeast strains sequenced up to now. This work provides the first insight into the genetic basis of maximal ethanol accumulation capacity in yeast and reveals for the first time the importance of DNA damage repair in yeast ethanol tolerance.
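
    The SNP-frequency plotting step can be sketched as a sliding-window smoother that flags regions where the pooled allele frequency deviates from the 0.5 expected under no linkage (a hypothetical illustration; window size, threshold, and variable names are assumptions, not the authors' pipeline):

    ```python
    import numpy as np

    def smoothed_snp_frequency(pos, alt_counts, depths, window=50_000):
        """Sliding-window average of pooled SNP variant frequencies.

        pos        : SNP positions on one chromosome (sorted, bp)
        alt_counts : reads carrying the superior parent's allele
        depths     : total read depth at each SNP
        """
        freq = alt_counts / depths
        smoothed = np.empty_like(freq)
        for i, p in enumerate(pos):
            mask = np.abs(pos - p) <= window
            smoothed[i] = freq[mask].mean()
        return freq, smoothed

    def candidate_qtl_regions(pos, smoothed, threshold=0.7):
        """Positions where the pooled frequency deviates strongly from 0.5."""
        return pos[smoothed >= threshold]
    ```

    In a real analysis the threshold would come from a statistical test on the binomial counts rather than a fixed cutoff.
    
    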

  17. Regional sea level projections with observed gauge, altimeter and reconstructed data along China coast

    Science.gov (United States)

    Du, L.; Shi, H.; Zhang, S.

    2017-12-01

    As typical shelf seas of the northwest Pacific Ocean, the seas along the China coast exhibit complicated, multiscale spatial-temporal sea-level characteristics under global change. In this paper, sea-level variability is investigated with tide gauge records, satellite altimetry data, reconstructed sea surface heights, and CMIP simulation fields. Sea level exhibits interannual variability superimposed on a remarkable rise in the China seas and coastal region, although seasonal signals are also significant, as in the global ocean. Sea level rose faster during the satellite altimetry era, at nearly twice the rate observed over the last sixty years. AVISO data and reconstructed sea surface heights correlate well, with coefficients above 0.8. Interannual sea-level variation is mainly modulated by the low-frequency variability of the wind field over the northern Pacific Ocean through local and remote processes. Sea level also varies markedly with transport fluctuations and the bimodal path of the Kuroshio; this variability is possibly linked to internal variability of the ocean-atmosphere system influenced by the ENSO oscillation. Sea level along the China coast rose during the 20th century and is projected to continue rising during this century, reaching its highest extreme levels in the latter half of the 21st century. Modeled sea level, including regional projections combined with the IPCC climate scenarios, plays a significant role in assessing coastal storm surge evolution. The vulnerable regions along the ECS coast will suffer increasing storm damage as sea level varies.

  18. PHOTOMETRIC STEREO SHAPE-AND-ALBEDO-FROM-SHADING FOR PIXEL-LEVEL RESOLUTION LUNAR SURFACE RECONSTRUCTION

    Directory of Open Access Journals (Sweden)

    W. C. Liu

    2017-07-01

    Full Text Available Shape and Albedo from Shading (SAfS) techniques recover pixel-wise surface details based on the relationship between terrain slopes, illumination and imaging geometry, and the energy response (i.e., image intensity) captured by the sensing system. Multiple images with different illumination geometries (i.e., photometric stereo) can provide better SAfS surface reconstruction due to the increase in observations. Photometric stereo SAfS is suitable for detailed surface reconstruction of the Moon and other extra-terrestrial bodies due to the availability of photometric stereo imagery and the less complex surface reflectance properties (i.e., albedo) of the target bodies as compared with the Earth. Considering only one photometric stereo pair (i.e., two images), pixel-variant albedo is still a major obstacle to satisfactory reconstruction and needs to be regulated by the SAfS algorithm. The illumination directional difference between the two images also becomes an important factor affecting the reconstruction quality. This paper presents a photometric stereo SAfS algorithm for pixel-level resolution lunar surface reconstruction. The algorithm includes a hierarchical optimization architecture for handling pixel-variant albedo and improving performance. With the use of Lunar Reconnaissance Orbiter Camera - Narrow Angle Camera (LROC NAC) photometric stereo images, the reconstructed topography (i.e., the DEM) is compared with the DEM produced independently by photogrammetric methods. This paper also addresses the effect of the illumination directional difference between the two images of a photometric stereo pair on the reconstruction quality of the proposed algorithm by both mathematical and experimental analysis. In this case, LROC NAC images under multiple illumination directions are utilized by the proposed algorithm for experimental comparison. The mathematical derivation suggests an illumination azimuthal difference of 90 degrees between two images is recommended to achieve
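
    For intuition, classic Lambertian photometric stereo with three or more images admits a closed-form per-pixel solution (a textbook least-squares formulation, not the paper's two-image hierarchical SAfS algorithm):

    ```python
    import numpy as np

    def photometric_stereo(I, L):
        """Per-pixel albedo and surface normal from >= 3 Lambertian images.

        I : (k, n_pixels) image intensities under k illumination directions
        L : (k, 3) unit illumination direction vectors
        Solves I = L @ g per pixel, where g = albedo * normal.
        """
        G, *_ = np.linalg.lstsq(L, I, rcond=None)    # (3, n_pixels)
        albedo = np.linalg.norm(G, axis=0)           # |g| = albedo
        normals = G / np.maximum(albedo, 1e-12)      # g / |g| = unit normal
        return albedo, normals
    ```

    With only two images the linear system is underdetermined at every pixel, which is exactly why the two-image SAfS case needs the regularization of pixel-variant albedo discussed in the record.
    
    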

  19. Trp64Arg polymorphism of the ADRB3 gene associated with maximal fat oxidation and LDL-C levels in non-obese adolescents.

    Science.gov (United States)

    Jesus, Íncare Correa de; Alle, Lupe Furtado; Munhoz, Eva Cantalejo; Silva, Larissa Rosa da; Lopes, Wendell Arthur; Tureck, Luciane Viater; Purim, Katia Sheylla Malta; Titski, Ana Claudia Kapp; Leite, Neiva

    2017-09-21

    To analyze the association between the Trp64Arg polymorphism of the ADRB3 gene, maximal fat oxidation rates, and lipid profile levels in non-obese adolescents. Seventy-two schoolchildren of both genders, aged between 11 and 17 years, participated in the study. The anthropometric and body composition variables, in addition to total cholesterol, HDL-c, LDL-c, triglycerides, insulin, and basal glycemia, were evaluated. The sample was divided into two groups according to the presence or absence of the polymorphism: non-carriers of the Arg64 allele, i.e., homozygous (Trp64Trp: n=54), and carriers of the Arg64 allele (Trp64Arg+Arg64Arg: n=18); the frequency of the Arg64 allele was 15.2%. The maximal oxygen uptake and peak oxygen uptake during exercise were obtained through the symptom-limited, submaximal treadmill test. Maximal fat oxidation was determined according to the ventilatory ratio proposed in Lusk's table. Adolescents carrying the less frequent allele (Trp64Arg and Arg64Arg) had higher LDL-c levels (p=0.031) and lower maximal fat oxidation rates (p=0.038) when compared with non-carriers (Trp64Trp). Although the physiological processes related to lipolysis and lipid metabolism are complex, the presence of the Arg64 allele was associated with lower rates of FATMAX during aerobic exercise, as well as with higher levels of LDL-c in adolescents. Copyright © 2017 Sociedade Brasileira de Pediatria. Published by Elsevier Editora Ltda. All rights reserved.

  20. Oral and Oropharyngeal Reconstruction with a Free Flap.

    Science.gov (United States)

    Jeong, Woo Shik; Oh, Tae Suk

    2016-06-01

    Extensive surgical resection of the aerodigestive tract can result in a large and complex defect of the oropharynx, which represents a significant reconstructive challenge for the plastic surgeon. Development of microsurgical techniques has allowed for free flap reconstruction of oropharyngeal defects, with superior outcomes as well as decreases in postoperative complications. The reconstructive goals for oral and oropharyngeal defects are to restore the anatomy, to maintain continuity of the intraoral surface and oropharynx, to protect vital structures such as the carotid arteries, to cover exposed portions of internal organs in preparation for adjuvant radiation, and to preserve the complex functions of the oral cavity and oropharynx. Oral and oropharyngeal cancers should be treated with consideration of functional recovery. Multidisciplinary treatment strategies are necessary for maximizing disease control and preserving the natural form and function of the oropharynx.

  1. Teleportation of an arbitrary two-qudit state based on the non-maximally four-qudit cluster state

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    Two different schemes are presented for quantum teleportation of an arbitrary two-qudit state using a non-maximally entangled four-qudit cluster state as the quantum channel. The first scheme is based on Bell-basis measurements, and the receiver may probabilistically reconstruct the original state by performing proper transformations on her particles and an auxiliary two-level particle; the second scheme is based on generalized Bell-basis measurements, and the probability of successfully teleporting the unknown state depends on those measurements, which are adjusted by Alice. A comparison of the two schemes shows that the latter has a smaller success probability than the former and that, in contrast to the former, the channel information and the auxiliary qubit are not necessary for the receiver.

  2. Reconstruction algorithm in compressed sensing based on maximum a posteriori estimation

    International Nuclear Information System (INIS)

    Takeda, Koujin; Kabashima, Yoshiyuki

    2013-01-01

    We propose a systematic method for constructing a sparse-data reconstruction algorithm in compressed sensing at relatively low computational cost for a general observation matrix. It is known that the cost of ℓ1-norm minimization using a standard linear programming algorithm is O(N³). We show that this cost can be reduced to O(N²) by applying the approach of posterior maximization. Furthermore, in principle, the algorithm from our approach is expected to achieve the widest successful reconstruction region, as evaluated by theoretical arguments. We also discuss the relation between the belief propagation-based reconstruction algorithm introduced in preceding works and our approach
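
    For comparison, a generic ℓ1 reconstruction by iterative soft thresholding (ISTA) also costs O(N²) per iteration through its matrix-vector products; this is a standard baseline sketch, not the posterior-maximization algorithm proposed in the record:

    ```python
    import numpy as np

    def ista(A, y, lam=0.05, n_iters=500):
        """Iterative soft thresholding for min 0.5*||Ax - y||^2 + lam*||x||_1."""
        step = 1.0 / np.linalg.norm(A, 2) ** 2       # 1 / Lipschitz constant
        x = np.zeros(A.shape[1])
        for _ in range(n_iters):
            grad = A.T @ (A @ x - y)                 # O(N^2) per iteration
            z = x - step * grad
            x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
        return x

    # Denoising sanity check: with A = I, ISTA converges to soft thresholding of y
    x_hat = ista(np.eye(4), np.array([3.0, -2.0, 0.02, 0.0]), lam=0.05, n_iters=50)
    ```

    With a random undersampling matrix and a sparse signal, the same routine performs compressed-sensing recovery, at a per-iteration cost dominated by the two matrix-vector products.
    
    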

  3. Spatial sea-level reconstruction in the Baltic Sea and in the Pacific Ocean from tide gauges observations

    Directory of Open Access Journals (Sweden)

    Marco Olivieri

    2016-07-01

    Full Text Available Exploiting Delaunay interpolation, we present a newly implemented 2-D sea-level reconstruction from coastal sea-level observations to the open sea, with the aim of characterizing the spatial variability of the rate of sea-level change. To test the strengths and weaknesses of this method and to determine its usefulness in sea-level interpolation, we consider the case studies of the Baltic Sea and of the Pacific Ocean. In the Baltic Sea, a small basin well sampled by tide gauges, our reconstructions are successfully compared with absolute sea-level observations from altimetry during 1993-2011. The regional variability of absolute sea level observed across the Pacific Ocean, however, cannot be reproduced. We interpret this result as the effect of the uneven and sparse tide gauge data set and of the composite vertical land movements in and around the region. Useful considerations arise that can serve as a basis for developing more sophisticated approaches.
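
    Delaunay-based interpolation of scattered gauge observations to a grid can be sketched with SciPy, whose `LinearNDInterpolator` interpolates linearly on the Delaunay triangulation of the input sites (the coordinates and trend values below are made up for illustration, not the paper's data):

    ```python
    import numpy as np
    from scipy.interpolate import LinearNDInterpolator

    # Hypothetical tide-gauge coordinates (lon, lat) and sea-level trends (mm/yr)
    gauges = np.array([[10.0, 54.0], [20.0, 55.0], [25.0, 60.0], [18.0, 63.0]])
    trends = np.array([1.2, 2.0, 3.1, 2.5])

    # Linear interpolation on the Delaunay triangulation of the gauge sites
    interp = LinearNDInterpolator(gauges, trends)

    # Evaluate on a regular grid covering the basin; points outside the
    # convex hull of the gauges come back as NaN
    lon, lat = np.meshgrid(np.linspace(10, 25, 30), np.linspace(54, 63, 30))
    field = interp(lon, lat)
    ```

    The NaN behaviour outside the convex hull is one reason a coastal gauge network interpolates well in an enclosed basin such as the Baltic but poorly across an open ocean.
    
    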

  4. Analysis of sea-level reconstruction techniques for the Arctic Ocean

    DEFF Research Database (Denmark)

    Svendsen, Peter Limkilde; Andersen, Ole Baltazar; Nielsen, Allan Aasbjerg

    Sea-level reconstructions spanning several decades have been examined in numerous studies for most of the world's ocean areas, where satellite missions such as TOPEX/Poseidon and Jason-1 and -2 have provided much-improved knowledge of variability and long-term changes in sea level. However, these dedicated oceanographic missions are limited in coverage to between ±66° latitude, and satellite altimeter data at higher latitudes are of substantially worse quality. Following the approach of Church et al. (2004), we apply a model based on empirical orthogonal functions (EOFs) to the Arctic Ocean, constrained by tide gauge records. A major challenge for this area is the sparsity of both satellite and tide gauge data beyond what can be covered with interpolation, necessitating a time-variable model and consideration of data preprocessing, including selection of appropriate tide gauges. In order to have
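
    The EOF-based reconstruction idea of Church et al. (2004) can be sketched in a toy form: derive spatial EOFs from a gridded training period, then fit their amplitudes to sparse gauge records at each time step (a minimal sketch; the published method adds weighting, error covariances, and time-stepping details omitted here):

    ```python
    import numpy as np

    def eof_reconstruct(training_field, gauge_idx, gauge_obs, n_modes=3):
        """Toy EOF reconstruction of a gridded field from sparse gauges.

        training_field : (n_times, n_grid) gridded sea level (e.g. altimetry era)
        gauge_idx      : indices of grid cells that host tide gauges
        gauge_obs      : (n_obs_times, n_gauges) gauge observations
        """
        mean = training_field.mean(axis=0)
        _, _, Vt = np.linalg.svd(training_field - mean, full_matrices=False)
        eofs = Vt[:n_modes]                  # (n_modes, n_grid) spatial patterns
        E = eofs[:, gauge_idx].T             # patterns sampled at the gauges
        # least-squares amplitude of each EOF at each observation time
        amps, *_ = np.linalg.lstsq(E, (gauge_obs - mean[gauge_idx]).T, rcond=None)
        return amps.T @ eofs + mean          # reconstructed full fields
    ```

    The fit is well posed only when the gauges sample the leading EOFs adequately, which is exactly the sparsity problem the record highlights for the Arctic.
    
    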

  5. Robust reconstruction of a signal from its unthresholded recurrence plot subject to disturbances

    Energy Technology Data Exchange (ETDEWEB)

    Sipers, Aloys, E-mail: aloys.sipers@zuyd.nl [Department of Bèta Sciences and Technology, Zuyd University (Netherlands); Borm, Paul, E-mail: paul.borm@zuyd.nl [Department of Bèta Sciences and Technology, Zuyd University (Netherlands); Peeters, Ralf, E-mail: ralf.peeters@maastrichtuniversity.nl [Department of Data Science and Knowledge Engineering, Maastricht University (Netherlands)

    2017-02-12

    To make valid inferences from recurrence plots for time-delay embedded signals, two underlying key questions are: (1) to what extent does an unthresholded recurrence plot (URP) carry the same information as the signal that generated it, and (2) how does the information change when the URP gets distorted. We studied the first question in our earlier work, where it was shown that the URP admits the reconstruction of the underlying signal (up to its mean and a choice of sign) if and only if an associated graph is connected. Here we refine this result and give an explicit condition in terms of the embedding parameters and the discrete Fourier spectrum of the URP. We also develop a method for the reconstruction of the underlying signal which overcomes several drawbacks of earlier approaches. To address the second question we investigate the robustness of the proposed reconstruction method under disturbances. We carry out two simulation experiments, characterized by a broad-band and a narrow-band spectrum, respectively. For each experiment we choose a noise level and two different pairs of embedding parameters. The conventional binary recurrence plot (RP) is obtained from the URP by thresholding and zero-one conversion, which can be viewed as severe distortion acting on the URP. Typically the reconstruction of the underlying signal from an RP is expected to be rather inaccurate. However, by introducing the concept of a multi-level recurrence plot (MRP) we propose to bridge the information gap between the URP and the RP, while still achieving a high data compression rate. We demonstrate the working of the proposed reconstruction procedure on MRPs, indicating that MRPs with just a few discretization levels can usually capture signal properties and morphologies more accurately than conventional RPs. - Highlights: • A recurrence plot is maximally informative if and only if the corresponding graph is connected. • The diameter of the connected graph is always
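
    The three objects discussed here are straightforward to compute; a minimal sketch of a URP, its thresholded binary RP, and a quantile-based multi-level RP (the quantile discretization is an illustrative choice; the paper's MRP scheme may differ):

    ```python
    import numpy as np

    def unthresholded_rp(signal, dim=2, delay=1):
        """Unthresholded recurrence plot of a time-delay embedded signal."""
        n = len(signal) - (dim - 1) * delay
        emb = np.column_stack([signal[i * delay: i * delay + n]
                               for i in range(dim)])
        diff = emb[:, None, :] - emb[None, :, :]
        return np.linalg.norm(diff, axis=-1)     # URP: pairwise distances

    def binary_rp(urp, eps):
        """Conventional RP: thresholding and zero-one conversion."""
        return (urp <= eps).astype(int)

    def multi_level_rp(urp, n_levels=4):
        """Quantize the URP into a few distance levels (MRP-style)."""
        edges = np.quantile(urp, np.linspace(0, 1, n_levels + 1)[1:-1])
        return np.digitize(urp, edges)
    ```

    The MRP stores only log2(n_levels) bits per entry instead of a full distance, which is the data-compression point made in the abstract.
    
    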

  6. INFLUENCE OF DIFFERENT LEVELS OF SPORTS ACTIVITIES ON THE QUALITY OF LIFE AFTER THE RECONSTRUCTION OF ANTERIOR CRUCIATE LIGAMENT.

    Science.gov (United States)

    Ninković, Srđan; Avramov, Snežana; Harhaji, Vladimir; Obradović, Mirko; Vranješ, Miodrag; Milankov, Miroslav

    2015-01-01

    The goal of this study was to examine the nature and extent of the influence of different levels of sports activity on the quality of life of patients one year after reconstruction of the anterior cruciate ligament. The study included 185 patients operated on at the Department of Orthopedic Surgery and Traumatology of the Clinical Centre of Vojvodina, who were followed for twelve months. Data were collected using the modified Knee Injury and Osteoarthritis Outcome Score questionnaire, which included the Lysholm scale. The study included 146 male and 39 female subjects. The reconstruction of the anterior cruciate ligament was equally successful in both gender groups. In relation to different types of sports activity, there were no differences in overall quality of life measured by the questionnaire and its subscales, regardless of the level (professional or recreational). However, regarding the level of sports activity, there were differences between subjects competing at the national level and those engaged in sports at the recreational level, and particularly in comparison with the physically inactive population. Examination of the aforementioned relationships between sports activities revealed no significant correlation. This study has shown that overall quality of life one year after reconstruction of the anterior cruciate ligament does not differ in relation to either the gender of the subjects or the type of sports activity, while the level of sports activity does have some influence on quality of life. Professional athletes trained significantly more intensively after the reconstruction than recreational athletes.

  7. LHCb jet reconstruction

    International Nuclear Information System (INIS)

    Francisco, Oscar; Rangel, Murilo; Barter, William; Bursche, Albert; Potterat, Cedric; Coco, Victor

    2012-01-01

    Full text: The Large Hadron Collider (LHC) is the most powerful particle accelerator in the world. It has been designed to collide proton beams at an energy up to 14 TeV in the center of mass. In 2011, data taking was done with a center-of-mass energy of 7 TeV; the instantaneous luminosity reached values greater than 4 × 10³² cm⁻² s⁻¹ and the integrated luminosity reached 1.02 fb⁻¹ in LHCb. Jet reconstruction is fundamental for observing events that can be used to test perturbative QCD (pQCD). It also provides a way to observe standard model channels and to search for new physics such as SUSY. The anti-kt algorithm is a jet reconstruction algorithm based on the distance between particles in η × φ space and on their transverse momenta. To maximize the energy resolution, all information from the trackers and the calorimeters is used in the LHCb experiment to create objects called particle-flow objects, which are used as input to the anti-kt algorithm. LHCb is especially interesting for jet studies because its η coverage is complementary to that of the other main LHC experiments. We will present the first results of jet reconstruction using 2011 LHCb data. (author)
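
    A toy version of the anti-kt recombination loop illustrates the distance measures involved (a pedagogical sketch with simple pT-weighted recombination of η and φ; real analyses use FastJet and full four-momentum recombination):

    ```python
    import math

    def anti_kt(particles, R=0.5):
        """Toy sequential-recombination anti-kt clustering.

        particles: list of (pt, eta, phi). Returns jets as (pt, eta, phi).
        anti-kt distances: d_ij = min(1/pt_i^2, 1/pt_j^2) * dR^2 / R^2,
                           d_iB = 1/pt_i^2 (beam distance)
        """
        parts = [tuple(p) for p in particles]
        jets = []
        while parts:
            best, pair = None, None
            for i, (pti, etai, phii) in enumerate(parts):
                diB = 1.0 / pti ** 2
                if best is None or diB < best:
                    best, pair = diB, (i, None)
                for j in range(i + 1, len(parts)):
                    ptj, etaj, phij = parts[j]
                    dphi = (phii - phij + math.pi) % (2 * math.pi) - math.pi
                    dR2 = (etai - etaj) ** 2 + dphi ** 2
                    dij = min(1.0 / pti ** 2, 1.0 / ptj ** 2) * dR2 / R ** 2
                    if dij < best:
                        best, pair = dij, (i, j)
            i, j = pair
            if j is None:                     # beam distance smallest: a jet
                jets.append(parts.pop(i))
            else:                             # recombine particles i and j
                pti, etai, phii = parts[i]
                ptj, etaj, phij = parts[j]
                pt = pti + ptj
                eta = (pti * etai + ptj * etaj) / pt
                phi = (pti * phii + ptj * phij) / pt
                for k in sorted((i, j), reverse=True):
                    parts.pop(k)
                parts.append((pt, eta, phi))
        return jets
    ```

    Because d_ij is driven by the harder particle's 1/pT², soft particles cluster around hard ones first, which is what gives anti-kt jets their regular, cone-like shape.
    
    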

  8. LHCb jet reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Francisco, Oscar; Rangel, Murilo [Universidade Federal do Rio de Janeiro (UFRJ), RJ (Brazil); Barter, William [University of Cambridge, Cambridge (United Kingdom); Bursche, Albert [Universitat Zurich, Zurich (Switzerland); Potterat, Cedric [Universitat de Barcelona, Barcelona (Spain); Coco, Victor [Nikhef National Institute for Subatomic Physics, Amsterdam (Netherlands)

    2012-07-01

    Full text: The Large Hadron Collider (LHC) is the most powerful particle accelerator in the world. It has been designed to collide proton beams at an energy up to 14 TeV in the center of mass. In 2011, data taking was done with a center-of-mass energy of 7 TeV; the instantaneous luminosity reached values greater than 4 X 10{sup 32} cm{sup -2}s{sup -1} and the integrated luminosity reached 1.02 fb{sup -1} in LHCb. Jet reconstruction is fundamental for observing events that can be used to test perturbative QCD (pQCD). It also provides a way to observe standard model channels and to search for new physics such as SUSY. The anti-kt algorithm is a jet reconstruction algorithm based on the distance between particles in {eta} X {phi} space and on their transverse momenta. To maximize the energy resolution, all information from the trackers and the calorimeters is used in the LHCb experiment to create objects called particle-flow objects, which are used as input to the anti-kt algorithm. LHCb is especially interesting for jet studies because its {eta} coverage is complementary to that of the other main LHC experiments. We will present the first results of jet reconstruction using 2011 LHCb data. (author)

  9. LHCb; LHCb Jet Reconstruction

    CERN Multimedia

    Augusto, O

    2012-01-01

    The Large Hadron Collider (LHC) is the most powerful particle accelerator in the world. It has been designed to collide proton beams at an energy up to 14 TeV in the center of mass. In 2011, data taking was done with a center-of-mass energy of 7 TeV; the instantaneous luminosity reached values greater than $4 \times 10^{32} cm^{-2} s^{-1}$ and the integrated luminosity reached 1.02 $fb^{-1}$ in LHCb. Jet reconstruction is fundamental for observing events that can be used to test perturbative QCD (pQCD). It also provides a way to observe standard model channels and to search for new physics such as SUSY. The anti-kt algorithm is a jet reconstruction algorithm based on the distance between particles in $\eta \times \phi$ space and on their transverse momenta. To maximize the energy resolution all information about the trackers and the calo...

  10. Fast dictionary-based reconstruction for diffusion spectrum imaging.

    Science.gov (United States)

    Bilgic, Berkin; Chatnuntawech, Itthi; Setsompop, Kawin; Cauley, Stephen F; Yendiki, Anastasia; Wald, Lawrence L; Adalsteinsson, Elfar

    2013-11-01

    Diffusion spectrum imaging reveals detailed local diffusion properties at the expense of substantially longer imaging times. It is possible to accelerate acquisition by undersampling in q-space, followed by image reconstruction that exploits prior knowledge on the diffusion probability density functions (pdfs). Previously proposed methods impose this prior in the form of sparsity under wavelet and total variation transforms, or under adaptive dictionaries that are trained on example datasets to maximize the sparsity of the representation. These compressed sensing (CS) methods require full-brain processing times on the order of hours using MATLAB running on a workstation. This work presents two dictionary-based reconstruction techniques that use analytical solutions, and are two orders of magnitude faster than the previously proposed dictionary-based CS approach. The first method generates a dictionary from the training data using principal component analysis (PCA), and performs the reconstruction in the PCA space. The second proposed method applies reconstruction using pseudoinverse with Tikhonov regularization with respect to a dictionary. This dictionary can either be obtained using the K-SVD algorithm, or it can simply be the training dataset of pdfs without any training. All of the proposed methods achieve reconstruction times on the order of seconds per imaging slice, and have reconstruction quality comparable to that of the dictionary-based CS algorithm.
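
    The pseudoinverse-with-Tikhonov approach admits a reconstruction operator that is precomputed once and then applied to every voxel, which is where the speed advantage over iterative CS comes from. A minimal sketch, assuming a generic linear undersampling operator `A` and dictionary `D` (names and sizes are illustrative, not the paper's):

    ```python
    import numpy as np

    def dictionary_tikhonov(A, D, lam):
        """Precompute the regularized dictionary-space reconstruction operator.

        Solves min_c ||A D c - y||^2 + lam * ||c||^2, then maps back x = D c.
        A : (m, n) undersampling/measurement operator
        D : (n, k) dictionary whose columns are training signals (e.g. pdfs)
        Returns op such that x_hat = op @ y.
        """
        M = A @ D
        return D @ np.linalg.solve(M.T @ M + lam * np.eye(D.shape[1]), M.T)
    ```

    Once `op` is formed, each voxel's reconstruction is a single matrix-vector product, in contrast to the per-voxel iterations of a CS solver.
    
    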

  11. Effect of Acute Maximal Exercise on Circulating Levels of Interleukin-12 during Ramadan Fasting.

    Science.gov (United States)

    Abedelmalek, Salma; Souissi, Nizar; Takayuki, Akimoto; Hadouk, Sami; Tabka, Zouhair

    2011-09-01

    The purpose of this study was to examine the effects of Ramadan fasting on circulating levels of interleukin-12 (IL-12) after a brief maximal exercise. Nine subjects performed a Wingate test on three different occasions: (i) the first week of Ramadan (1WR), (ii) the fourth week of Ramadan (4WR), and (iii) three weeks after Ramadan (AR). Blood samples were taken before, immediately after and 60 min after the exercise. Plasma concentrations of IL-12 were measured using enzyme-linked immunosorbent assay. Variance analysis revealed no significant effect of Ramadan on P(peak) and P(mean) during the three testing periods. Considering the effect of Ramadan on plasma concentrations of IL-12, analysis of variance revealed a significant Ramadan effect (F(2,16)=66.27) and a significant time-of-test effect (F(2,16)=120.66), but no significant (Ramadan × time of test) interaction (F(4,32)=2.40; P>0.05). For all measures, IL-12 levels were significantly lower during 1WR and 4WR in comparison with AR. Regarding time-of-test effects, IL-12 levels measured immediately after the exercise were significantly higher than those measured before and at 60 minutes after the exercise, both during and after Ramadan.

  12. Principles of maximally classical and maximally realistic quantum ...

    Indian Academy of Sciences (India)

    Principles of maximally classical and maximally realistic quantum mechanics. S M ROY. Tata Institute of Fundamental Research, Homi Bhabha Road, Mumbai 400 005, India. Abstract. Recently Auberson, Mahoux, Roy and Singh have proved a long-standing conjecture of Roy and Singh: In 2N-dimensional phase space, ...

  13. Optimization of hybrid iterative reconstruction level in pediatric body CT.

    Science.gov (United States)

    Karmazyn, Boaz; Liang, Yun; Ai, Huisi; Eckert, George J; Cohen, Mervyn D; Wanner, Matthew R; Jennings, S Gregory

    2014-02-01

    The objective of our study was to attempt to optimize the level of hybrid iterative reconstruction (HIR) in pediatric body CT. One hundred consecutive chest or abdominal CT examinations were selected. For each examination, six series were obtained: one filtered back projection (FBP) series and five HIR (iDose(4)) series, levels 2-6. Two pediatric radiologists, blinded to noise measurements, independently chose the optimal HIR level and then rated series quality. We measured CT number (mean in Hounsfield units) and noise (SD in Hounsfield units) changes by placing regions of interest in the liver, muscles, subcutaneous fat, and aorta. A mixed-model analysis-of-variance test was used to analyze the correlation of noise reduction at the optimal HIR level with baseline FBP noise. The 100 CT examinations were performed in 88 patients (52 females and 36 males) with a mean age of 8.5 years (range, 19 days-18 years); 12 patients had both chest and abdominal CT studies. The radiologists agreed to within one level of HIR in 92 of 100 studies. The mean quality rating was significantly higher for HIR than for FBP (3.6 vs 3.3, respectively; p < 0.05), and noise was significantly reduced when the optimal HIR level was used (p < 0.05). An intermediate HIR level was optimal for most studies. The optimal HIR level was less effective in reducing liver noise in children with lower baseline noise.

  14. Stable reconstruction of Arctic sea level for the 1950-2010 period

    DEFF Research Database (Denmark)

    Svendsen, Peter Limkilde; Andersen, Ole Baltazar; Nielsen, Allan Aasbjerg

    2016-01-01

    on the combination of tide gauge records and a new 20-year reprocessed satellite altimetry derived sea level pattern. Hence the study is limited to the area covered by satellite altimetry (68ºN and 82ºN). It is found that timestep cumulative reconstruction as suggested by Church and White (2000) may yield widely...... 1950 to 2010, between 68ºN and 82ºN. This value is in good agreement with the global mean trend of 1.8 +/- 0.3 mm/y over the same period as found by Church and White (2004)....

  15. The immediate intervention effects of robotic training in patients after anterior cruciate ligament reconstruction.

    Science.gov (United States)

    Hu, Chunying; Huang, Qiuchen; Yu, Lili; Ye, Miao

    2016-07-01

    [Purpose] The purpose of this study was to examine the immediate effects of robot-assisted therapy on functional activity level after anterior cruciate ligament reconstruction. [Subjects and Methods] Participants included 10 patients (8 males and 2 females) following anterior cruciate ligament reconstruction. The subjects participated in robot-assisted therapy and treadmill exercise on different days. The Timed Up-and-Go test, Functional Reach Test, surface electromyography of the vastus lateralis and vastus medialis, and maximal extensor strength of isokinetic movement of the knee joint were evaluated in both groups before and after the experiment. [Results] The results for the Timed Up-and-Go Test and the 10-Meter Walk Test improved in the robot-assisted rehabilitation group. Surface electromyography of the vastus medialis muscle showed significant increases in maximum and average discharge after the intervention. [Conclusion] The results suggest that walking ability and muscle strength can be improved by robotic training.

  16. Profit maximization mitigates competition

    DEFF Research Database (Denmark)

    Dierker, Egbert; Grodal, Birgit

    1996-01-01

    We consider oligopolistic markets in which the notion of shareholders' utility is well-defined and compare the Bertrand-Nash equilibria in case of utility maximization with those under the usual profit maximization hypothesis. Our main result states that profit maximization leads to less price...... competition than utility maximization. Since profit maximization tends to raise prices, it may be regarded as beneficial for the owners as a whole. Moreover, if profit maximization is a good proxy for utility maximization, then there is no need for a general equilibrium analysis that takes the distribution...... of profits among consumers fully into account and partial equilibrium analysis suffices...

  17. Implications of maximal Jarlskog invariant and maximal CP violation

    International Nuclear Information System (INIS)

    Rodriguez-Jauregui, E.; Universidad Nacional Autonoma de Mexico

    2001-04-01

    We argue here why the CP-violating phase Φ in the quark mixing matrix is maximal, that is, Φ = 90°. In the Standard Model CP violation is related to the Jarlskog invariant J, which can be obtained from non-commuting Hermitian mass matrices. In this article we derive the conditions to have Hermitian mass matrices which give maximal Jarlskog invariant J and maximal CP-violating phase Φ. We find that all squared moduli of the quark mixing elements have a singular point when the CP-violating phase Φ takes the value Φ = 90°. This special feature of the Jarlskog invariant J and the quark mixing matrix is a clear and precise indication that the CP-violating phase Φ is maximal in order to let nature treat democratically all of the quark mixing matrix moduli. (orig.)
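For reference, in the standard parametrization the Jarlskog invariant depends on the CP phase through a single sine factor (a standard textbook relation, not a result specific to this article):

```latex
J \;=\; \operatorname{Im}\bigl(V_{us}\,V_{cb}\,V_{ub}^{*}\,V_{cs}^{*}\bigr)
  \;=\; c_{12}\,c_{23}\,c_{13}^{2}\,s_{12}\,s_{23}\,s_{13}\,\sin\Phi ,
\qquad s_{ij}=\sin\theta_{ij},\quad c_{ij}=\cos\theta_{ij},
```

so for fixed mixing angles |J| peaks exactly at Φ = 90°, which is the sense in which a maximal invariant implies a maximal phase.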

  18. Dragonfly: an implementation of the expand-maximize-compress algorithm for single-particle imaging.

    Science.gov (United States)

    Ayyer, Kartik; Lan, Ti-Yen; Elser, Veit; Loh, N Duane

    2016-08-01

    Single-particle imaging (SPI) with X-ray free-electron lasers has the potential to change fundamentally how biomacromolecules are imaged. The structure would be derived from millions of diffraction patterns, each from a different copy of the macromolecule before it is torn apart by radiation damage. The challenges posed by the resultant data stream are staggering: millions of incomplete, noisy and un-oriented patterns have to be computationally assembled into a three-dimensional intensity map and then phase reconstructed. In this paper, the Dragonfly software package is described, based on a parallel implementation of the expand-maximize-compress reconstruction algorithm that is well suited for this task. Auxiliary modules to simulate SPI data streams are also included to assess the feasibility of proposed SPI experiments at the Linac Coherent Light Source, Stanford, California, USA.

  19. Acute Changes in Creatine Kinase Serum Levels in Adults Submitted a Static Stretching and Maximal Strength Test

    Directory of Open Access Journals (Sweden)

    M.G. Bara Filho

    2008-01-01

    Full Text Available Strength and flexibility are common components of a training program, and their maximal values are obtained through specific tests. However, little is known about the muscle damage caused by these procedures. Objective: To verify serum CK changes 24 h after a submaximal stretching routine, a maximal static flexibility test, and a maximal strength test. Methods: The sample comprised 14 physical education students (men and women, 28 ± 6 yr), divided into a control group (CG) and an experimental group (EG) that was submitted to a stretching routine (EG-ST), a maximal static flexibility test (EG-FLEX) and a 1-RM test (EG-1RM), with a one-week interval between tests. Anthropometric characteristics were obtained with a digital scale with stadiometer (Filizola, São Paulo, Brazil, 2002). Serum CK was measured using the IFCC method, with reference values of 26-155 U/L. The De Lorme and Watkins technique was used to assess maximal strength through the bench press and leg press. The maximal flexibility test consisted of three 20-second sets to the point of maximal discomfort. The stretching was done within normal movement amplitude for 6 seconds. Results: The baseline and 24-h post CK values in CG and EG (ST, FLEX and 1RM) were, respectively, 195.0 ± 129.5 vs. 202.1 ± 124.2; 213.3 ± 133.2 vs. 174.7 ± 115.8; 213.3 ± 133.2 vs. 226.6 ± 126.7; and 213.3 ± 133.2 vs. 275.9 ± 157.2. A significant difference (p = 0.02) between pre and post values was observed only in EG-1RM. Conclusion: Only maximal strength dynamic exercise was capable of causing skeletal muscle damage.

  20. Whole-body direct 4D parametric PET imaging employing nested generalized Patlak expectation-maximization reconstruction

    NARCIS (Netherlands)

    Karakatsanis, Nicolas A.; Casey, Michael E.; Lodge, Martin A.; Rahmim, Arman; Zaidi, Habib

    2016-01-01

    Whole-body (WB) dynamic PET has recently demonstrated its potential in translating the quantitative benefits of parametric imaging to the clinic. Post-reconstruction standard Patlak (sPatlak) WB graphical analysis utilizes multi-bed multi-pass PET acquisition to produce quantitative WB images of the

  1. Task-Driven Optimization of Fluence Field and Regularization for Model-Based Iterative Reconstruction in Computed Tomography.

    Science.gov (United States)

    Gang, Grace J; Siewerdsen, Jeffrey H; Stayman, J Webster

    2017-12-01

    This paper presents a joint optimization of dynamic fluence field modulation (FFM) and regularization in quadratic penalized-likelihood reconstruction that maximizes a task-based imaging performance metric. We adopted a task-driven imaging framework for prospective design of the imaging parameters. A maxi-min objective function was adopted to maximize the minimum detectability index (d′) throughout the image. The optimization algorithm alternates between FFM (represented by low-dimensional basis functions) and local regularization (including the regularization strength and directional penalty weights). The task-driven approach was compared with three FFM strategies commonly proposed for FBP reconstruction (as well as a task-driven TCM strategy) for a discrimination task in an abdomen phantom. The task-driven FFM assigned more fluence to less attenuating anteroposterior views and yielded approximately constant fluence behind the object. The optimal regularization was almost uniform throughout the image. Furthermore, the task-driven FFM strategy redistributed fluence across detector elements in order to prescribe more fluence to the more attenuating central region of the phantom. Compared with all other strategies, the task-driven FFM strategy not only improved the minimum d′ by at least 17.8%, but also yielded higher d′ over a large area inside the object. The optimal FFM was highly dependent on the amount of regularization, indicating the importance of a joint optimization. Sample reconstructions of simulated data generally support the performance estimates based on computed d′. The improvements in detectability show the potential of the task-driven imaging framework to improve imaging performance at a fixed dose or, equivalently, to provide a similar level of performance at reduced dose.

  2. Iterative reconstruction with attenuation compensation from cone-beam projections acquired via nonplanar orbits

    International Nuclear Information System (INIS)

    Zeng, G.L.; Weng, Y.; Gullberg, G.T.

    1997-01-01

    Single photon emission computed tomography (SPECT) imaging with cone-beam collimators provides improved sensitivity and spatial resolution for imaging small objects with large field-of-view detectors. It is known that Tuy's cone-beam data sufficiency condition must be met to obtain artifact-free reconstructions. Even though Tuy's condition was derived for an attenuation-free situation, the authors hypothesize that an artifact-free reconstruction can be obtained even if the cone-beam data are attenuated, provided the imaging orbit satisfies Tuy's condition and the exact attenuation map is known. In the authors' studies, cone-beam emission data are acquired using nonplanar circle-and-line orbits for tomographic reconstruction. An extended iterative ML-EM (maximum likelihood-expectation maximization) reconstruction algorithm is derived and used to reconstruct projection data with either a pre-acquired or assumed attenuation map. Quantitative accuracy of the attenuation-corrected emission reconstruction is significantly improved.
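The ML-EM update that the extended algorithm builds on has a compact multiplicative form. A toy NumPy sketch with a generic nonnegative system matrix; in the attenuated case the attenuation factors would be folded into the matrix elements, which is an assumption of this sketch rather than the authors' exact projector:

```python
import numpy as np

def mlem(A, y, n_iter=5000):
    """Basic ML-EM reconstruction for y ~ Poisson(A @ x)."""
    x = np.ones(A.shape[1])                     # uniform initial estimate
    sens = A.sum(axis=0)                        # per-voxel sensitivity
    for _ in range(n_iter):
        proj = A @ x                            # forward projection
        ratio = np.where(proj > 0, y / proj, 0.0)
        x = x * (A.T @ ratio) / np.maximum(sens, 1e-12)   # multiplicative update
    return x

# Noiseless toy problem: ML-EM should drive A @ x toward y
rng = np.random.default_rng(1)
A = rng.random((40, 10))
y = A @ (rng.random(10) + 0.5)
x_hat = mlem(A, y)
```

The update preserves nonnegativity automatically, which is one reason EM-type algorithms are favored for emission data.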

  3. The effect of anterior cruciate ligament reconstruction on hamstring and quadriceps muscle function outcome ratios in male athletes

    Directory of Open Access Journals (Sweden)

    Kadija Marko

    2016-01-01

    Full Text Available Introduction. Maximal strength ratios such as the limb symmetry index (LSI) and hamstring-to-quadriceps ratio (HQ) may be considered the main outcome measures in the monitoring of recovery after anterior cruciate ligament (ACL) reconstruction. Although explosive strength is much more important than maximal strength, it is generally disregarded in the follow-up of muscle function recovery. Objective. The purpose of this study was to compare ratios between maximal strength (Fmax) and explosive strength (rate of force development, RFD) in individuals with ACL reconstruction. Methods. Fifteen male athletes were enrolled and had maximal voluntary isometric quadriceps and hamstring contractions tested (4.0 ± 0.1 months post-reconstruction). In addition to Fmax, RFD was estimated (RFDmax, as well as RFD at 50, 100, and 200 ms from onset of contraction), and LSI and HQ ratios were calculated. Results. The involved leg demonstrated significant hamstring and quadriceps deficits compared to the uninvolved leg (p < 0.01). Deficits were particularly significant in the involved quadriceps, causing higher HQ ratios (average 0.63) compared to the uninvolved leg (0.44). LSI was significantly lower for RFD variables (average 55%) than for Fmax (66%). Conclusion. The assessment of RFD may be considered an objective recovery parameter for one's readiness to return to sports and should be an integral part of the standard follow-up protocol for athletes after ACL reconstruction. Moreover, the combination of indices derived from maximal and explosive strength may provide better insight into muscle strength balance, as well as a clearer picture of functional implications. [Projects of the Ministry of Science of the Republic of Serbia, nos. 175012 and 175037]
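The indices in this record are straightforward to compute from a force-time curve. A hedged sketch with synthetic exponential-rise curves; all numbers and curve shapes are illustrative, not the study's data:

```python
import numpy as np

def rfd(force, t, window_ms):
    """Average rate of force development (N/s) over the first
    `window_ms` from contraction onset; force in N, t in seconds."""
    idx = np.searchsorted(t, window_ms / 1000.0)
    return (force[idx] - force[0]) / (t[idx] - t[0])

# Synthetic force-time curves for involved and uninvolved quadriceps
t = np.linspace(0.0, 0.5, 501)
quad_involved   = 400.0 * (1.0 - np.exp(-t / 0.12))   # weaker, slower rise
quad_uninvolved = 600.0 * (1.0 - np.exp(-t / 0.08))

rfd_inv = rfd(quad_involved, t, 100)      # RFD at 100 ms, involved leg
rfd_uninv = rfd(quad_uninvolved, t, 100)  # RFD at 100 ms, uninvolved leg

# Limb symmetry index: involved / uninvolved, expressed in percent
lsi_rfd = 100.0 * rfd_inv / rfd_uninv
```

The same ratio computed from the plateau forces would give the Fmax-based LSI; the abstract's point is that the RFD-based ratio can be markedly lower.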

  4. Maximizers versus satisficers

    Directory of Open Access Journals (Sweden)

    Andrew M. Parker

    2007-12-01

    Full Text Available Our previous research suggests that people reporting a stronger desire to maximize obtain worse life outcomes (Bruine de Bruin et al., 2007). Here, we examine whether this finding may be explained by the decision-making styles of self-reported maximizers. Expanding on Schwartz et al. (2002), we find that self-reported maximizers are more likely to show problematic decision-making styles, as evidenced by self-reports of less behavioral coping, greater dependence on others when making decisions, more avoidance of decision making, and greater tendency to experience regret. Contrary to predictions, self-reported maximizers were more likely to report spontaneous decision making. However, the relationship between self-reported maximizing and worse life outcomes is largely unaffected by controls for measures of other decision-making styles, decision-making competence, and demographic variables.

  5. A Weighted Two-Level Bregman Method with Dictionary Updating for Nonconvex MR Image Reconstruction

    Directory of Open Access Journals (Sweden)

    Qiegen Liu

    2014-01-01

    Full Text Available Nonconvex optimization has been shown to need substantially fewer measurements than l1 minimization for exact recovery under a fixed transform/overcomplete dictionary. In this work, two efficient numerical algorithms, unified under the name weighted two-level Bregman method with dictionary updating (WTBMDU), are proposed for solving lp optimization under the dictionary learning model while enforcing fidelity to the partial measurements. By incorporating the iteratively reweighted norm into the two-level Bregman iteration method with dictionary updating scheme (TBMDU), the modified alternating direction method (ADM) solves the model of pursuing the approximated lp-norm penalty efficiently. Specifically, the algorithms converge after a relatively small number of iterations, under the formulation of iteratively reweighted l1 and l2 minimization. Experimental results on MR image simulations and real MR data, under a variety of sampling trajectories and acceleration factors, consistently demonstrate that the proposed method can efficiently reconstruct MR images from highly undersampled k-space data and presents advantages over current state-of-the-art reconstruction approaches, in terms of higher PSNR and lower HFEN values.
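The reweighted inner step at the core of such lp schemes can be sketched in a few lines. This is a generic iteratively reweighted least squares (IRLS) solver for min ||x||_p subject to Ax = y, shown as a stand-in for the reweighting idea only, not the paper's full WTBMDU algorithm:

```python
import numpy as np

def irls_lp(A, y, p=1.0, n_iter=100, eps=1e-9):
    """Generic IRLS for min ||x||_p s.t. A @ x = y (0 < p <= 1).

    Each pass solves a weighted least-squares problem; p < 1 gives the
    nonconvex penalty discussed in the abstract."""
    x = np.linalg.lstsq(A, y, rcond=None)[0]      # minimum-l2 start
    for _ in range(n_iter):
        w = (x * x + eps) ** (1.0 - p / 2.0)      # weights ~ |x|^(2-p)
        Aw = A * w                                # A @ diag(w)
        x = w * (A.T @ np.linalg.solve(Aw @ A.T, y))
    return x

# Sparse-recovery demo: 3 nonzeros, 30 "measurements" of an 80-vector
rng = np.random.default_rng(2)
A = rng.standard_normal((30, 80))
x_true = np.zeros(80)
x_true[[5, 20, 60]] = [1.0, -2.0, 1.5]
x_hat = irls_lp(A, A @ x_true)
```

Each weighted subproblem is a small linear solve, which is why reweighted formulations can converge in relatively few (cheap) outer iterations.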

  6. Quantitation of PET data with the EM reconstruction technique

    International Nuclear Information System (INIS)

    Rosenqvist, G.; Dahlbom, M.; Erikson, L.; Bohm, C.; Blomqvist, G.

    1989-01-01

    The expectation maximization (EM) algorithm offers high spatial resolution and excellent noise reduction with low-statistics PET data, since it incorporates the Poisson nature of the data. The main difficulties are long computation times and finding appropriate criteria for terminating the reconstruction and quantifying the resulting image data. In the present work a modified EM algorithm has been implemented on a VAX 11/780. Its capability to quantify image data has been tested in phantom studies and in two clinical cases: cerebral blood flow studies and dopamine D2-receptor studies. Data from phantom studies indicate the superiority of images reconstructed with the EM technique over images reconstructed with the conventional filtered back-projection (FB) technique in areas with low statistics. At higher statistics the noise characteristics of the two techniques coincide. Clinical data support these findings.

  7. Preliminary Study on the Enhancement of Reconstruction Speed for Emission Computed Tomography Using Parallel Processing

    International Nuclear Information System (INIS)

    Park, Min Jae; Lee, Jae Sung; Kim, Soo Mee; Kang, Ji Yeon; Lee, Dong Soo; Park, Kwang Suk

    2009-01-01

    Conventional image reconstruction uses simplified physical models of projection. Realistic physics, for example full 3D reconstruction, takes too long to process all the data in the clinic and is infeasible on a common reconstruction machine because of the large memory required by complex physical models. We suggest a realistic distributed-memory model of fast reconstruction, using parallel processing on personal computers, to enable such large-scale techniques. Preliminary feasibility tests on virtual machines and various performance tests on the commercial supercomputer Tachyon were performed. An expectation maximization algorithm with a common 2D projection model and a realistic 3D line-of-response model was tested. Since processing slowed (up to 6 times) after a certain number of iterations, compiler optimization was performed to maximize the efficiency of parallelization. Parallel processing of a program on multiple computers was achieved on Linux with MPICH and NFS. We verified that differences between the parallel-processed and single-processed images at the same iterations were below the significant digits of floating-point numbers, about 6 bits. Two processors showed good parallel-computing efficiency (1.96 times speedup). The delay phenomenon was solved by vectorization using SSE. Through this study, a realistic parallel computing system for clinical use was established, able to reconstruct with ample memory using the realistic physical models that cannot be simplified.
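The reason EM parallelizes so naturally is that both sums in its update decompose over subsets of the projection data: each node computes partial sums independently, and the partials are then reduced, which is the pattern MPICH distributes across machines in the setup above. A sequential NumPy sketch of that decomposition (names and sizes illustrative):

```python
import numpy as np

def em_update_distributed(A_parts, y_parts, x):
    """One EM iteration with the projection data split across 'nodes'.

    Each part contributes an independent partial sum; the partials are
    then combined, mimicking an all-reduce."""
    numer = np.zeros_like(x)
    denom = np.zeros_like(x)
    for A_k, y_k in zip(A_parts, y_parts):    # each body could run on its own node
        proj = A_k @ x
        numer += A_k.T @ (y_k / np.maximum(proj, 1e-12))
        denom += A_k.sum(axis=0)
    return x * numer / np.maximum(denom, 1e-12)

# 60 projections split across 4 "nodes"; the result is identical to a
# single-machine EM update, only the summation is distributed.
rng = np.random.default_rng(3)
A = rng.random((60, 12))
y = A @ (rng.random(12) + 0.5)
A_parts, y_parts = np.array_split(A, 4), np.array_split(y, 4)
x_new = em_update_distributed(A_parts, y_parts, np.ones(12))
```

Because the partition only reorders the summation, distributed and single-machine updates should agree to floating-point rounding, consistent with the sub-6-bit differences reported above.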

  8. Iterative algorithm for reconstructing rotationally asymmetric surface deviation with pixel-level spatial resolution

    Science.gov (United States)

    Quan, Haiyang; Wu, Fan; Hou, Xi

    2015-10-01

    A new method for reconstructing rotationally asymmetric surface deviation with pixel-level spatial resolution is proposed. It is based on a basic iterative scheme and accelerates the Gauss-Seidel method by introducing an acceleration parameter. This modified successive over-relaxation (SOR) method is effective for solving the rotationally asymmetric components with pixel-level spatial resolution, without the use of a fitting procedure. Compared to the Jacobi and Gauss-Seidel methods, the modified SOR method with an optimal relaxation factor converges much faster and saves computational cost and memory space without reducing accuracy. This has been confirmed by real experimental results.
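The acceleration-parameter idea is the classic SOR construction: Gauss-Seidel plus an over-relaxation factor. A generic sketch on a small linear system (this is textbook SOR, not the paper's specific surface-deviation solver):

```python
import numpy as np

def sor_solve(A, b, omega=1.2, n_iter=200):
    """Successive over-relaxation for A @ x = b.

    omega = 1 reduces to Gauss-Seidel; 1 < omega < 2 over-relaxes each
    update, which is the acceleration the abstract describes. Convergence
    needs a suitable A (e.g. symmetric positive definite)."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(n_iter):
        for i in range(n):
            sigma = A[i, :] @ x - A[i, i] * x[i]      # off-diagonal sum
            x[i] += omega * ((b[i] - sigma) / A[i, i] - x[i])
    return x

# Diagonally dominant test system
A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
b = np.array([2.0, 4.0, 10.0])
x = sor_solve(A, b)
```

Choosing the relaxation factor well is what drives the speedup over Jacobi and plain Gauss-Seidel; the abstract refers to this as the optimal relaxation factor.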

  9. Novel edge treatment method for improving the transmission reconstruction quality in Tomographic Gamma Scanning.

    Science.gov (United States)

    Han, Miaomiao; Guo, Zhirong; Liu, Haifeng; Li, Qinghua

    2018-05-01

    Tomographic Gamma Scanning (TGS) is a method used for the nondestructive assay of radioactive wastes. In TGS, the actual irregular edge voxels are treated as regular cubic voxels in the traditional approach. In this study, in order to improve the performance of TGS, a novel edge treatment method is proposed that considers the actual shapes of these voxels. The two edge-voxel treatment methods were compared by computing the pixel-level relative errors and normalized mean square errors (NMSEs) between the reconstructed transmission images and the ideal images. Both methods were coupled with two different iterative algorithms: the Algebraic Reconstruction Technique (ART) with a non-negativity constraint, and Maximum Likelihood Expectation Maximization (MLEM). The results demonstrated that the traditional edge-voxel treatment can introduce significant error, and that the real irregular edge-voxel treatment improves the performance of TGS by producing better transmission reconstruction images. With the real irregular edge-voxel treatment, the MLEM and ART algorithms are comparable when assaying homogeneous matrices, but MLEM is superior to ART when assaying heterogeneous matrices. Copyright © 2018 Elsevier Ltd. All rights reserved.

  10. Fast Dictionary-Based Reconstruction for Diffusion Spectrum Imaging

    Science.gov (United States)

    Bilgic, Berkin; Chatnuntawech, Itthi; Setsompop, Kawin; Cauley, Stephen F.; Yendiki, Anastasia; Wald, Lawrence L.; Adalsteinsson, Elfar

    2015-01-01

    Diffusion Spectrum Imaging (DSI) reveals detailed local diffusion properties at the expense of substantially long imaging times. It is possible to accelerate acquisition by undersampling in q-space, followed by image reconstruction that exploits prior knowledge on the diffusion probability density functions (pdfs). Previously proposed methods impose this prior in the form of sparsity under wavelet and total variation (TV) transforms, or under adaptive dictionaries that are trained on example datasets to maximize the sparsity of the representation. These compressed sensing (CS) methods require full-brain processing times on the order of hours using Matlab running on a workstation. This work presents two dictionary-based reconstruction techniques that use analytical solutions, and are two orders of magnitude faster than the previously proposed dictionary-based CS approach. The first method generates a dictionary from the training data using Principal Component Analysis (PCA), and performs the reconstruction in the PCA space. The second proposed method applies reconstruction using pseudoinverse with Tikhonov regularization with respect to a dictionary. This dictionary can either be obtained using the K-SVD algorithm, or it can simply be the training dataset of pdfs without any training. All of the proposed methods achieve reconstruction times on the order of seconds per imaging slice, and have reconstruction quality comparable to that of dictionary-based CS algorithm. PMID:23846466

  11. Developing maximal neuromuscular power: Part 1--biological basis of maximal power production.

    Science.gov (United States)

    Cormie, Prue; McGuigan, Michael R; Newton, Robert U

    2011-01-01

    This series of reviews focuses on the most important neuromuscular function in many sport performances, the ability to generate maximal muscular power. Part 1 focuses on the factors that affect maximal power production, while part 2, which will follow in a forthcoming edition of Sports Medicine, explores the practical application of these findings by reviewing the scientific literature relevant to the development of training programmes that most effectively enhance maximal power production. The ability of the neuromuscular system to generate maximal power is affected by a range of interrelated factors. Maximal muscular power is defined and limited by the force-velocity relationship and affected by the length-tension relationship. The ability to generate maximal power is influenced by the type of muscle action involved and, in particular, the time available to develop force, storage and utilization of elastic energy, interactions of contractile and elastic elements, potentiation of contractile and elastic filaments as well as stretch reflexes. Furthermore, maximal power production is influenced by morphological factors including fibre type contribution to whole muscle area, muscle architectural features and tendon properties as well as neural factors including motor unit recruitment, firing frequency, synchronization and inter-muscular coordination. In addition, acute changes in the muscle environment (i.e. alterations resulting from fatigue, changes in hormone milieu and muscle temperature) impact the ability to generate maximal power. Resistance training has been shown to impact each of these neuromuscular factors in quite specific ways. Therefore, an understanding of the biological basis of maximal power production is essential for developing training programmes that effectively enhance maximal power production in the human.

  12. A new iterative algorithm to reconstruct the refractive index.

    Science.gov (United States)

    Liu, Y J; Zhu, P P; Chen, B; Wang, J Y; Yuan, Q X; Huang, W X; Shu, H; Li, E R; Liu, X S; Zhang, K; Ming, H; Wu, Z Y

    2007-06-21

    The latest developments in x-ray imaging are associated with techniques based on phase contrast. However, the image reconstruction procedures demand significant improvements of the traditional methods, and/or new algorithms have to be introduced to take advantage of the high contrast and sensitivity of the new experimental techniques. In this letter, an improved iterative reconstruction algorithm based on the maximum likelihood expectation maximization technique is presented and discussed in order to reconstruct the distribution of the refractive index from data collected by an analyzer-based imaging setup. The technique considered probes the partial derivative of the refractive index with respect to an axis lying in the meridional plane and perpendicular to the propagation direction. Computer simulations confirm the reliability of the proposed algorithm. In addition, the comparison between an analytical reconstruction algorithm and the iterative method is also discussed, together with the convergence characteristics of the latter algorithm. Finally, we show how the proposed algorithm may be applied to reconstruct the distribution of the refractive index of an epoxy cylinder containing small air bubbles of about 300 μm in diameter.

  13. Use of regularized algebraic methods in tomographic reconstruction

    International Nuclear Information System (INIS)

    Koulibaly, P.M.; Darcourt, J.; Blanc-Ferraud, L.; Migneco, O.; Barlaud, M.

    1997-01-01

    The algebraic methods are used in emission tomography to facilitate the compensation of attenuation and of Compton scattering. We have tested on a phantom the use of regularization (a priori introduction of information), as well as the taking into account of spatial resolution variation with depth (SRVD). Hence, we have compared the performances of two back-projection filtering (BPF) methods and two algebraic methods (AM) in terms of FWHM (by means of a point source), of background noise reduction (σ/m on the homogeneous part of Jaszczak's phantom) and of reconstruction speed (time unit = BPF). The BPF methods make use of a ramp filter (maximal resolution, no noise treatment), alone or combined with a Hann low-pass filter (f_c = 0.4), as well as an attenuation correction. The AM, which embody attenuation and scattering corrections, are, on one side, OS EM (Ordered Subsets, partitioning and rearranging of the projection matrix; Expectation Maximization) without regularization or SRVD correction, and, on the other side, OS MAP EM (Maximum A Posteriori), regularized and embodying the SRVD correction. A table is given containing, for each method used (ramp, Hann, OS EM and OS MAP EM), the values of FWHM, σ/m and time, respectively. One can observe that the OS MAP EM algebraic method improves both resolution, by taking the SRVD into account in the reconstruction process, and noise, by regularization. In addition, thanks to the OS technique, the reconstruction times are acceptable.

  14. Comparison of myocardial 201Tl clearance after maximal and submaximal exercise: implications for diagnosis of coronary disease: concise communication

    International Nuclear Information System (INIS)

    Massie, B.M.; Wisneski, J.; Kramer, B.; Hollenberg, M.; Gertz, E.; Stern, D.

    1982-01-01

    Recently the quantitation of regional 201 Tl clearance has been shown to increase the sensitivity of the scintigraphic detection of coronary disease. Although 201 Tl clearance rates might be expected to vary with the degree of exercise, this relationship has not been explored. We therefore evaluated the rate of decrease in myocardial 201 Tl activity following maximal and submaximal stress in seven normal subjects and 21 patients with chest pain, using the seven-pinhole tomographic reconstruction technique. In normals, the mean 201 Tl clearance rate declined from 41% +/- 7 over a 3-hr period with maximal exercise to 25% +/- 5 after 3 hr at a submaximal level (p less than 0.001). Similar differences in clearance rates were found in the normally perfused regions of the left ventricle in patients with chest pain, depending on whether or not a maximal end point (defined as either the appearance of ischemia or reaching 85% of age-predicted heart rate) was achieved. In five patients who did not reach these end points, 3-hr clearance rates in uninvolved regions averaged 25% +/- 2, in contrast to a mean of 38% +/- 5 for such regions in 15 patients who exercised to ischemia or an adequate heart rate. These findings indicate that clearance criteria derived from normals can be applied to patients who are stressed maximally, even if the duration of exercise is limited, but that caution must be used in interpreting clearance rates in those who do not exercise to an accepted end point
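The clearance rates quoted above follow from a simple washout formula. A minimal sketch (the function name is illustrative; the percentages are the abstract's mean values, used here only as a sanity check):

```python
def clearance_percent(counts_initial, counts_delayed):
    """Percent washout of 201-Tl activity between the initial
    post-stress image and the delayed image."""
    return 100.0 * (counts_initial - counts_delayed) / counts_initial

# A region retaining 59% of its initial counts at 3 h has cleared 41%,
# matching the mean 3-hr clearance after maximal exercise reported above.
clearance_percent(100.0, 59.0)  # → 41.0
```

The clinical point of the record is that this percentage is exercise-level dependent (about 41% after maximal versus 25% after submaximal stress), so normal ranges derived from maximal tests should not be applied to submaximal studies.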

  15. Quadriceps muscle function after rehabilitation with cryotherapy in patients with anterior cruciate ligament reconstruction.

    Science.gov (United States)

    Hart, Joseph M; Kuenze, Christopher M; Diduch, David R; Ingersoll, Christopher D

    2014-01-01

    Persistent muscle weakness after anterior cruciate ligament (ACL) reconstruction may be due to underlying activation failure and arthrogenic muscle inhibition (AMI). Knee-joint cryotherapy has been shown to improve quadriceps function transiently in those with AMI, thereby providing an opportunity to improve quadriceps muscle activation and strength in patients with a reconstructed ACL. To compare quadriceps muscle function in patients with a reconstructed ACL who completed a 2-week intervention including daily cryotherapy (ice bag), daily exercises, or both. Cross-sectional study. Laboratory. A total of 30 patients with reconstructed ACLs who were at least 6 months post-index surgery and had measurable quadriceps AMI. The patients attended 4 supervised visits over a 2-week period. They were randomly assigned to receive 20 minutes of knee-joint cryotherapy, 1 hour of therapeutic rehabilitation exercises, or cryotherapy followed by exercises. We measured quadriceps Hoffmann reflex, normalized maximal voluntary isometric contraction torque, central activation ratio using the superimposed-burst technique, and patient-reported outcomes before and after the intervention period. After the 2-week intervention period, patients who performed rehabilitation exercises immediately after cryotherapy had higher normalized maximal voluntary isometric contraction torques (P = .002, Cohen d effect size = 1.4) compared with those who received cryotherapy alone (P = .16, d = 0.58) or performed exercise alone (P = .16, d = 0.30). After ACL reconstruction, patients with AMI who performed rehabilitation exercises immediately after cryotherapy experienced greater strength gains than those who performed cryotherapy or exercises alone.

  16. A new method of morphological comparison for bony reconstructive surgery: maxillary reconstruction using scapular tip bone

    Science.gov (United States)

    Chan, Harley; Gilbert, Ralph W.; Pagedar, Nitin A.; Daly, Michael J.; Irish, Jonathan C.; Siewerdsen, Jeffrey H.

    2010-02-01

    Esthetic appearance is one of the most important factors in reconstructive surgery. Current practice in maxillary reconstruction uses radial forearm, fibula or iliac crest osteocutaneous flaps to recreate the three-dimensional complex structures of the palate and maxilla. However, these bone flaps lack shape similarity to the palate and produce a less satisfactory esthetic result. Considering similarity factors and vasculature advantages, reconstructive surgeons have recently explored the use of scapular tip myo-osseous free flaps to restore the excised site. We have developed a new method that quantitatively evaluates the morphological similarity of the scapular tip bone and palate based on a diagnostic volumetric computed tomography (CT) image. This quantitative result was further interpreted as a color map rendered on the surface of a three-dimensional computer model. For surgical planning, this color interpretation could potentially assist the surgeon in orienting the bone flap for the best fit to the reconstruction site. With approval from the Research Ethics Board (REB) of the University Health Network, we conducted a retrospective analysis of CT images obtained from 10 patients. Each patient had CT scans of the maxilla and chest acquired on the same day. Based on this image set, we simulated total, subtotal and hemi-palate reconstruction. The simulation procedure included volume segmentation, conversion of the segmented volume to a stereolithography (STL) model, manual registration, and computation of minimum geometric distances and curvature between STL models. Across the 10 patients' data, we found the overall root-mean-square (RMS) conformance was 3.71 ± 0.16 mm

  17. Beyond filtered backprojection: A reconstruction software package for ion beam microtomography data

    Science.gov (United States)

    Habchi, C.; Gordillo, N.; Bourret, S.; Barberet, Ph.; Jovet, C.; Moretto, Ph.; Seznec, H.

    2013-01-01

    A new version of the TomoRebuild data reduction software package is presented, for the reconstruction of scanning transmission ion microscopy tomography (STIMT) and particle induced X-ray emission tomography (PIXET) images. First, we review the state of the art of the reconstruction codes available for ion beam microtomography. The algorithm proposed here brings several advantages. It is a portable, multi-platform code, designed in C++ with well-separated classes for easier use and evolution. Data reduction is separated into distinct steps and the intermediate results may be checked if necessary. Although no additional graphic library or numerical tool is required to run the program as a command line, a user-friendly interface was designed in Java, as an ImageJ plugin. All experimental and reconstruction parameters may be entered either through this plugin or directly in text format files. A simple standard format is proposed for the input of experimental data. Optional graphic applications using the ROOT interface may be used separately to display and fit energy spectra. Regarding the reconstruction process, the filtered backprojection (FBP) algorithm, already present in the previous version of the code, was optimized so that it is about 10 times as fast. In addition, Maximum Likelihood Expectation Maximization (MLEM) and its accelerated version Ordered Subsets Expectation Maximization (OSEM) algorithms were implemented. A detailed user guide in English is available. A reconstruction example of experimental data from a biological sample is given. It shows the capability of the code to reduce noise in the sinograms and to deal with incomplete data, which opens a new perspective on tomography with a low number of projections or a limited angular range.
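    The MLEM algorithm mentioned above uses the standard multiplicative update: forward-project the current estimate, divide the measured counts by the projection, backproject the ratio, and normalize by the detector sensitivity. A minimal sketch with a toy system matrix (illustrative only, not the TomoRebuild C++ implementation):

```python
import numpy as np

def mlem(A, y, n_iter=200):
    """Maximum Likelihood Expectation Maximization for y ~ Poisson(A @ x).

    A: (m, n) nonnegative system matrix (rows = projection bins);
    y: (m,) measured counts. Returns a nonnegative image estimate.
    """
    x = np.ones(A.shape[1])            # flat initial image
    sens = A.sum(axis=0)               # sensitivity: backprojection of ones
    for _ in range(n_iter):
        proj = A @ x                   # forward projection
        ratio = np.where(proj > 0, y / proj, 0.0)
        x *= (A.T @ ratio) / sens      # multiplicative EM update
    return x

# Toy example: two pixels seen through three projection bins
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
x_true = np.array([2.0, 3.0])
y = A @ x_true                         # noiseless projection data
x_hat = mlem(A, y)
```

    For noiseless, consistent data the iterates converge to the true image; OSEM accelerates convergence by applying the same update to ordered subsets of the projection bins in turn.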

  18. Direct 4D reconstruction of parametric images incorporating anato-functional joint entropy.

    Science.gov (United States)

    Tang, Jing; Kuwabara, Hiroto; Wong, Dean F; Rahmim, Arman

    2010-08-07

    We developed an anatomy-guided 4D closed-form algorithm to directly reconstruct parametric images from projection data for (nearly) irreversible tracers. Conventional methods consist of individually reconstructing 2D/3D PET data, followed by graphical analysis on the sequence of reconstructed image frames. The proposed direct reconstruction approach maintains the simplicity and accuracy of the expectation-maximization (EM) algorithm by extending the system matrix to include the relation between the parametric images and the measured data. A closed-form solution was achieved using a different hidden complete-data formulation within the EM framework. Furthermore, the proposed method was extended to maximum a posteriori reconstruction via incorporation of MR image information, taking the joint entropy between MR and parametric PET features as the prior. Using realistic simulated noisy [(11)C]-naltrindole PET and MR brain images/data, the quantitative performance of the proposed methods was investigated. Significant improvements in terms of noise versus bias performance were demonstrated when performing direct parametric reconstruction, and additionally upon extending the algorithm to its Bayesian counterpart using the MR-PET joint entropy measure.

  19. PR-CALC: A Program for the Reconstruction of NMR Spectra from Projections

    International Nuclear Information System (INIS)

    Coggins, Brian E.; Zhou Pei

    2006-01-01

    Projection-reconstruction NMR (PR-NMR) has attracted growing attention as a method for collecting multidimensional NMR data rapidly. The PR-NMR procedure involves measuring lower-dimensional projections of a higher-dimensional spectrum, which are then used for the mathematical reconstruction of the full spectrum. We describe here the program PR-CALC, for the reconstruction of NMR spectra from projection data. This program implements a number of reconstruction algorithms, highly optimized to achieve maximal performance, and manages the reconstruction process automatically, producing either full spectra or subsets, such as regions or slices, as requested. The ability to obtain subsets allows large spectra to be analyzed by reconstructing and examining only those subsets containing peaks, offering considerable savings in processing time and storage space. PR-CALC is straightforward to use, and integrates directly into the conventional pipeline for data processing and analysis. It was written in standard C++ and should run on any platform. The organization is flexible, and permits easy extension of capabilities, as well as reuse in new software. PR-CALC should facilitate the widespread utilization of PR-NMR in biomedical research.
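    One of the simplest reconstruction rules used in PR-NMR is the lowest-value (minimum) rule: each spectral point takes the minimum of the backprojected intensities over all measured projections, which suppresses ridge artefacts where projections disagree. A sketch on a square grid with nearest-bin lookup (an illustrative toy, not PR-CALC's code):

```python
import numpy as np

def lowest_value_reconstruction(projections, angles, size):
    """Reconstruct a 2D spectrum from 1D projections by the
    lowest-value rule: each grid point takes the minimum of the
    projected intensities over all projection directions.

    projections: 1D arrays of length `size`; angles: tilt of each
    projection axis in radians. Square grid, nearest-bin lookup.
    """
    c = (size - 1) / 2.0
    xs, ys = np.meshgrid(np.arange(size) - c, np.arange(size) - c,
                         indexing="ij")
    recon = np.full((size, size), np.inf)
    for p, a in zip(projections, angles):
        # position of each grid point along this projection axis
        t = xs * np.cos(a) + ys * np.sin(a)
        idx = np.clip(np.round(t + c).astype(int), 0, size - 1)
        recon = np.minimum(recon, p[idx])
    return recon

# A single peak: two orthogonal projections, each a spike at the center
spike = np.zeros(5)
spike[2] = 1.0
recon = lowest_value_reconstruction([spike, spike], [0.0, np.pi / 2], 5)
```

    With only two orthogonal projections the minimum rule already localizes the peak to a single point, since each projection constrains one coordinate.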

  20. Acceleration of the direct reconstruction of linear parametric images using nested algorithms

    International Nuclear Information System (INIS)

    Wang Guobao; Qi Jinyi

    2010-01-01

    Parametric imaging using dynamic positron emission tomography (PET) provides important information for biological research and clinical diagnosis. Indirect and direct methods have been developed for reconstructing linear parametric images from dynamic PET data. Indirect methods are relatively simple and easy to implement because the image reconstruction and kinetic modeling are performed in two separate steps. Direct methods estimate parametric images directly from raw PET data and are statistically more efficient. However, the convergence rate of direct algorithms can be slow due to the coupling between the reconstruction and kinetic modeling. Here we present two fast gradient-type algorithms for direct reconstruction of linear parametric images. The new algorithms decouple the reconstruction and linear parametric modeling at each iteration by employing the principle of optimization transfer. Convergence speed is accelerated by running more sub-iterations of linear parametric estimation because the computation cost of the linear parametric modeling is much less than that of the image reconstruction. Computer simulation studies demonstrated that the new algorithms converge much faster than the traditional expectation maximization (EM) and the preconditioned conjugate gradient algorithms for dynamic PET.

  1. A three-step reconstruction method for fluorescence molecular tomography based on compressive sensing

    DEFF Research Database (Denmark)

    Zhu, Yansong; Jha, Abhinav K.; Dreyer, Jakob K.

    2017-01-01

    Fluorescence molecular tomography (FMT) is a promising tool for real time in vivo quantification of neurotransmission (NT) as we pursue in our BRAIN initiative effort. However, the acquired image data are noisy and the reconstruction problem is ill-posed. Further, while spatial sparsity of the NT...... matrix coherence. The resultant image data are input to a homotopy-based reconstruction strategy that exploits sparsity via ℓ1 regularization. The reconstructed image is then input to a maximum-likelihood expectation maximization (MLEM) algorithm that retains the sparseness of the input estimate...... and improves upon the quantitation by accurate Poisson noise modeling. The proposed reconstruction method was evaluated in a three-dimensional simulated setup with fluorescent sources in a cuboidal scattering medium with optical properties simulating human brain cortex (reduced scattering coefficient: 9.2 cm-1...
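    The sparsity-exploiting ℓ1 step of such a pipeline can be illustrated with ISTA (iterative shrinkage-thresholding), a simpler proximal method than the homotopy solver used in the paper; the toy system matrix and sparse source below are assumptions for illustration only:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of the l1 norm (soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, lam, n_iter=500):
    """Iterative shrinkage-thresholding for
    min_x 0.5 * ||A @ x - y||^2 + lam * ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Toy sparse source: one active unknown out of four
rng = np.random.default_rng(0)
A = rng.normal(size=(8, 4))
x_true = np.array([0.0, 2.0, 0.0, 0.0])
y = A @ x_true
x_hat = ista(A, y, lam=0.01)
```

    The soft-thresholding step is what drives most coefficients exactly to zero; in the paper's three-step method the resulting sparse estimate then seeds an MLEM refinement with Poisson noise modeling.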

  2. Coronal reconstruction of unenhanced abdominal CT for correct ureteral stone size classification

    Energy Technology Data Exchange (ETDEWEB)

    Berkovitz, Nadav; Simanovsky, Natalia; Hiller, Nurith [Hadassah Mount Scopus - Hebrew University Medical Center, Department of Radiology, Jerusalem (Israel); Katz, Ran [Hadassah Mount Scopus - Hebrew University Medical Center, Department of Urology, Jerusalem (Israel); Salama, Shaden [Hadassah Mount Scopus - Hebrew University Medical Center, Department of Emergency Medicine, Jerusalem (Israel)

    2010-05-15

    To determine whether size measurement of a urinary calculus in coronal reconstruction of computed tomography (CT) differs from stone size measured in the axial plane, and whether the difference alters clinical decision making. We retrospectively reviewed unenhanced CT examinations of 150 patients admitted to the emergency room (ER) with acute renal colic. Maximal ureteral calculus size was measured on axial slices and coronal reconstructions. Clinical significance was defined as an upgrading or downgrading of stone size according to accepted thresholds of treatment: ≤5 mm, 6-9 mm and ≥10 mm. There were 151 stones in 150 patients (male:female 115:34, mean age 41 years). Transverse stone diameters ranged from 1 to 11 mm (mean 4 mm). On coronal images, 56 (37%) stones were upgraded in severity; 46 (30%) from below 5 mm to 6 mm or more, and ten (7%) from 6-9 mm to 10 mm or more. Transverse measurement on the axial slices enabled correct categorization of 95 stones (63%). Transverse calculus measurement on axial slices often underestimates stone size and provides incorrect clinical classification of the true maximal stone diameter. Coronal reconstruction provides additional information in patients with renal colic that may alter treatment strategy. (orig.)
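    The treatment thresholds in the study map directly to a three-way classification; a trivial sketch (category labels only, mirroring the abstract's bins):

```python
def stone_category(diameter_mm):
    """Bin a maximal ureteral stone diameter by the study's
    treatment thresholds: <=5 mm, 6-9 mm, >=10 mm."""
    if diameter_mm <= 5:
        return "<=5 mm"
    elif diameter_mm < 10:
        return "6-9 mm"
    return ">=10 mm"

# A coronal measurement can move a stone across a threshold,
# e.g. 5 mm on the axial slices vs 6 mm on the coronal reconstruction:
reclassified = stone_category(5) != stone_category(6)
```

    In the series above this happened for 37% of stones, which is why the coronal measurement can change the treatment decision.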

  3. Coronal reconstruction of unenhanced abdominal CT for correct ureteral stone size classification

    International Nuclear Information System (INIS)

    Berkovitz, Nadav; Simanovsky, Natalia; Hiller, Nurith; Katz, Ran; Salama, Shaden

    2010-01-01

    To determine whether size measurement of a urinary calculus in coronal reconstruction of computed tomography (CT) differs from stone size measured in the axial plane, and whether the difference alters clinical decision making. We retrospectively reviewed unenhanced CT examinations of 150 patients admitted to the emergency room (ER) with acute renal colic. Maximal ureteral calculus size was measured on axial slices and coronal reconstructions. Clinical significance was defined as an upgrading or downgrading of stone size according to accepted thresholds of treatment: ≤5 mm, 6-9 mm and ≥10 mm. There were 151 stones in 150 patients (male:female 115:34, mean age 41 years). Transverse stone diameters ranged from 1 to 11 mm (mean 4 mm). On coronal images, 56 (37%) stones were upgraded in severity; 46 (30%) from below 5 mm to 6 mm or more, and ten (7%) from 6-9 mm to 10 mm or more. Transverse measurement on the axial slices enabled correct categorization of 95 stones (63%). Transverse calculus measurement on axial slices often underestimates stone size and provides incorrect clinical classification of the true maximal stone diameter. Coronal reconstruction provides additional information in patients with renal colic that may alter treatment strategy. (orig.)

  4. Homotopic non-local regularized reconstruction from sparse positron emission tomography measurements

    International Nuclear Information System (INIS)

    Wong, Alexander; Liu, Chenyi; Wang, Xiao Yu; Fieguth, Paul; Bie, Hongxia

    2015-01-01

    Positron emission tomography scanners collect measurements of a patient’s in vivo radiotracer distribution. The system detects pairs of gamma rays emitted indirectly by a positron-emitting radionuclide (tracer), which is introduced into the body on a biologically active molecule, and the tomograms must be reconstructed from projections. The reconstruction of tomograms from the acquired PET data is an inverse problem that requires regularization. Tightly packed discrete detector rings improve the signal-to-noise ratio but are a major contributor to the high cost of positron emission tomography systems. Thus a sparse reconstruction, capable of overcoming the noise effect while allowing for a reduced number of detectors, would have a great deal to offer. In this study, we introduce and investigate the potential of a homotopic non-local regularization reconstruction framework for effectively reconstructing positron emission tomograms from such sparse measurements. Results obtained using the proposed approach are compared with traditional filtered back-projection as well as expectation maximization reconstruction with total variation regularization. A new reconstruction method was developed for the purpose of improving the quality of positron emission tomography reconstruction from sparse measurements. We illustrate that promising reconstruction performance can be achieved with the proposed approach even at low sampling fractions, which allows for the use of significantly fewer detectors and has the potential to reduce scanner costs.

  5. Time-of-flight PET image reconstruction using origin ensembles

    Science.gov (United States)

    Wülker, Christian; Sitek, Arkadiusz; Prevrhal, Sven

    2015-03-01

    The origin ensemble (OE) algorithm is a novel statistical method for minimum-mean-square-error (MMSE) reconstruction of emission tomography data. This method allows one to perform reconstruction entirely in the image domain, i.e. without the use of forward and backprojection operations. We have investigated the OE algorithm in the context of list-mode (LM) time-of-flight (TOF) PET reconstruction. In this paper, we provide a general introduction to MMSE reconstruction, and a statistically rigorous derivation of the OE algorithm. We show how to efficiently incorporate TOF information into the reconstruction process, and how to correct for random coincidences and scattered events. To examine the feasibility of LM-TOF MMSE reconstruction with the OE algorithm, we applied MMSE-OE and standard maximum-likelihood expectation-maximization (ML-EM) reconstruction to LM-TOF phantom data with a count number typically registered in clinical PET examinations. We analyzed the convergence behavior of the OE algorithm, and compared reconstruction time and image quality to that of the EM algorithm. In summary, during the reconstruction process, MMSE-OE contrast recovery (CRV) remained approximately the same, while background variability (BV) gradually decreased with an increasing number of OE iterations. The final MMSE-OE images exhibited lower BV and a slightly lower CRV than the corresponding ML-EM images. The reconstruction time of the OE algorithm was approximately 1.3 times longer. At the same time, the OE algorithm can inherently provide a comprehensive statistical characterization of the acquired data. This characterization can be utilized for further data processing, e.g. in kinetic analysis and image registration, making the OE algorithm a promising approach in a variety of applications.
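    The contrast-recovery and background-variability figures of merit used in such comparisons are simple ROI statistics of the reconstructed image; NEMA-style definitions are sketched below (the exact formulas used in the paper may differ):

```python
import numpy as np

def contrast_recovery(hot_mean, bkg_mean, true_ratio):
    """Hot-lesion contrast recovery in percent:
    (measured hot/background ratio - 1) / (true ratio - 1)."""
    return 100.0 * (hot_mean / bkg_mean - 1.0) / (true_ratio - 1.0)

def background_variability(bkg_roi_means):
    """Background variability in percent: standard deviation of the
    background-ROI means divided by their average."""
    m = np.asarray(bkg_roi_means, dtype=float)
    return 100.0 * m.std(ddof=1) / m.mean()

# A 4:1 hot sphere reconstructed perfectly recovers 100% contrast
crv = contrast_recovery(hot_mean=4.0, bkg_mean=1.0, true_ratio=4.0)
bv = background_variability([1.0, 1.1, 0.9, 1.0])
```

    Tracking CRV against BV over iterations, as the abstract describes, shows whether an algorithm trades background noise for lesion contrast.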

  6. The SRT reconstruction algorithm for semiquantification in PET imaging

    Energy Technology Data Exchange (ETDEWEB)

    Kastis, George A., E-mail: gkastis@academyofathens.gr [Research Center of Mathematics, Academy of Athens, Athens 11527 (Greece); Gaitanis, Anastasios [Biomedical Research Foundation of the Academy of Athens (BRFAA), Athens 11527 (Greece); Samartzis, Alexandros P. [Nuclear Medicine Department, Evangelismos General Hospital, Athens 10676 (Greece); Fokas, Athanasios S. [Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Cambridge CB30WA, United Kingdom and Research Center of Mathematics, Academy of Athens, Athens 11527 (Greece)

    2015-10-15

    Purpose: The spline reconstruction technique (SRT) is a new, fast algorithm based on a novel numerical implementation of an analytic representation of the inverse Radon transform. The mathematical details of this algorithm and comparisons with filtered backprojection were presented earlier in the literature. In this study, the authors present a comparison between SRT and the ordered-subsets expectation–maximization (OSEM) algorithm for determining contrast and semiquantitative indices of 18F-FDG uptake. Methods: The authors implemented SRT in the software for tomographic image reconstruction (STIR) open-source platform and evaluated this technique using simulated and real sinograms obtained from the GE Discovery ST positron emission tomography/computer tomography scanner. All simulations and reconstructions were performed in STIR. For OSEM, the authors used the clinical protocol of their scanner, namely, 21 subsets and two iterations. The authors also examined images at one, four, six, and ten iterations. For the simulation studies, the authors analyzed an image-quality phantom with cold and hot lesions. Two different versions of the phantom were employed at two different hot-sphere lesion-to-background ratios (LBRs), namely, 2:1 and 4:1. For each noiseless sinogram, 20 Poisson realizations were created at five different noise levels. In addition to making visual comparisons of the reconstructed images, the authors determined contrast and bias as a function of the background image roughness (IR). For the real-data studies, sinograms of an image-quality phantom simulating the human torso were employed. The authors determined contrast and LBR as a function of the background IR. Finally, the authors present plots of contrast as a function of IR after smoothing each reconstructed image with Gaussian filters of six different sizes. Statistical significance was determined by employing the Wilcoxon rank-sum test. Results: In both simulated and real studies, SRT

  7. The SRT reconstruction algorithm for semiquantification in PET imaging

    International Nuclear Information System (INIS)

    Kastis, George A.; Gaitanis, Anastasios; Samartzis, Alexandros P.; Fokas, Athanasios S.

    2015-01-01

    Purpose: The spline reconstruction technique (SRT) is a new, fast algorithm based on a novel numerical implementation of an analytic representation of the inverse Radon transform. The mathematical details of this algorithm and comparisons with filtered backprojection were presented earlier in the literature. In this study, the authors present a comparison between SRT and the ordered-subsets expectation–maximization (OSEM) algorithm for determining contrast and semiquantitative indices of 18 F-FDG uptake. Methods: The authors implemented SRT in the software for tomographic image reconstruction (STIR) open-source platform and evaluated this technique using simulated and real sinograms obtained from the GE Discovery ST positron emission tomography/computer tomography scanner. All simulations and reconstructions were performed in STIR. For OSEM, the authors used the clinical protocol of their scanner, namely, 21 subsets and two iterations. The authors also examined images at one, four, six, and ten iterations. For the simulation studies, the authors analyzed an image-quality phantom with cold and hot lesions. Two different versions of the phantom were employed at two different hot-sphere lesion-to-background ratios (LBRs), namely, 2:1 and 4:1. For each noiseless sinogram, 20 Poisson realizations were created at five different noise levels. In addition to making visual comparisons of the reconstructed images, the authors determined contrast and bias as a function of the background image roughness (IR). For the real-data studies, sinograms of an image-quality phantom simulating the human torso were employed. The authors determined contrast and LBR as a function of the background IR. Finally, the authors present plots of contrast as a function of IR after smoothing each reconstructed image with Gaussian filters of six different sizes. Statistical significance was determined by employing the Wilcoxon rank-sum test. Results: In both simulated and real studies, SRT

  8. Maximizers versus satisficers

    OpenAIRE

    Andrew M. Parker; Wandi Bruine de Bruin; Baruch Fischhoff

    2007-01-01

    Our previous research suggests that people reporting a stronger desire to maximize obtain worse life outcomes (Bruine de Bruin et al., 2007). Here, we examine whether this finding may be explained by the decision-making styles of self-reported maximizers. Expanding on Schwartz et al. (2002), we find that self-reported maximizers are more likely to show problematic decision-making styles, as evidenced by self-reports of less behavioral coping, greater dependence on others when making decisions...

  9. Evaluation of penalized likelihood estimation reconstruction on a digital time-of-flight PET/CT scanner for 18F-FDG whole-body examinations.

    Science.gov (United States)

    Lindström, Elin; Sundin, Anders; Trampal, Carlos; Lindsjö, Lars; Ilan, Ezgi; Danfors, Torsten; Antoni, Gunnar; Sörensen, Jens; Lubberink, Mark

    2018-02-15

    Resolution and quantitative accuracy of positron emission tomography (PET) are highly influenced by the reconstruction method. Penalized likelihood estimation algorithms allow for fully convergent iterative reconstruction, generating a higher image contrast while limiting noise compared to ordered subsets expectation maximization (OSEM). In this study, block-sequential regularized expectation maximization (BSREM) was compared to time-of-flight OSEM (TOF-OSEM). Various strengths of noise penalization factor β were tested along with scan durations and transaxial field of views (FOVs) with the aim to evaluate the performance and clinical use of BSREM for 18F-FDG PET-computed tomography (CT), both in quantitative terms and in a qualitative visual evaluation. Methods: Eleven clinical whole-body 18F-FDG-PET/CT examinations acquired on a digital TOF PET/CT scanner were included. The data were reconstructed using BSREM with point spread function (PSF) recovery and β 133, 267, 400 and 533, and TOF-OSEM with PSF, for various acquisition times/bed position (bp) and FOVs. Noise, signal-to-noise ratio (SNR), signal-to-background ratio (SBR), and standardized uptake values (SUVs) were analysed. A blinded visual image quality evaluation, rating several aspects, performed by two nuclear medicine physicians complemented the analysis. Results: The lowest levels of noise were reached with the highest β resulting in the highest SNR, which in turn resulted in the lowest SBR. Noise equivalence to TOF-OSEM was found with β 400 but produced a significant increase of SUVmax (11%), SNR (22%) and SBR (12%) compared to TOF-OSEM. BSREM with β 533 at decreased acquisition (2 min/bp) was comparable to TOF-OSEM at full acquisition duration (3 min/bp). Reconstructed FOV had an impact on BSREM outcome measures, SNR increased while SBR decreased when shifting FOV from 70 to 50 cm. The visual image quality evaluation resulted in similar scores for reconstructions although β 400 obtained the
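    Noise, SNR and SBR in such evaluations are ROI statistics of the reconstructed image; a sketch with illustrative definitions (the paper's exact definitions may differ):

```python
import numpy as np

def roi_metrics(lesion, background):
    """Noise, SNR and SBR from ROI voxel values (illustrative
    definitions: noise = background sd, SNR = lesion mean / noise,
    SBR = lesion mean / background mean)."""
    lesion = np.asarray(lesion, dtype=float)
    background = np.asarray(background, dtype=float)
    noise = background.std(ddof=1)
    return {"noise": noise,
            "snr": lesion.mean() / noise,
            "sbr": lesion.mean() / background.mean()}

m = roi_metrics(lesion=[8.0, 9.0, 10.0], background=[1.0, 2.0, 3.0])
```

    The trade-off the abstract reports follows directly from these definitions: stronger penalization (higher β) lowers the background sd, raising SNR, but smoothing can also depress the lesion mean and hence the SBR.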

  10. Differences between the last two glacial maxima and implications for ice-sheet, δ18O, and sea-level reconstructions

    NARCIS (Netherlands)

    Rohling, Eelco J; Hibbert, Fiona D.; Williams, Felicity H.; Grant, Katharine M; Marino, Gianluca; Foster, Gavin L; Hennekam, Rick|info:eu-repo/dai/nl/357286081; de Lange, Gert J.|info:eu-repo/dai/nl/073930962; Roberts, Andrew P.; Yu, Jimin; Webster, Jody M.; Yokoyama, Yusuke

    2017-01-01

    Studies of past glacial cycles yield critical information about climate and sea-level (ice-volume) variability, including the sensitivity of climate to radiative change, and impacts of crustal rebound on sea-level reconstructions for past interglacials. Here we identify significant differences

  11. Storm Surge Reconstruction and Return Water Level Estimation in Southeast Asia for the 20th Century

    NARCIS (Netherlands)

    Cid, Alba; Wahl, Thomas; Chambers, Don P.; Muis, Sanne

    2018-01-01

    We present a methodology to reconstruct the daily maximum storm surge levels, obtained from tide gauges, based on the surrounding atmospheric conditions from an atmospheric reanalysis (20th Century Reanalysis-20CR). Tide gauge records in Southeast Asia are relatively short, so this area is often
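    The core of such a reconstruction is a statistical model linking daily maximum surge at a tide gauge to reanalysis predictors over a calibration period, then applied over the full reanalysis era. A minimal linear-regression sketch (the study's actual model is more sophisticated; the predictor choice below is an assumption for illustration):

```python
import numpy as np

def fit_surge_model(predictors, surge):
    """Least-squares fit of surge ~ intercept + predictors over the
    period where tide-gauge and reanalysis records overlap.

    predictors: (n_days, n_vars), e.g. sea-level pressure and wind
    components from the reanalysis; surge: (n_days,) daily maxima.
    """
    X = np.column_stack([np.ones(len(predictors)), predictors])
    beta, *_ = np.linalg.lstsq(X, surge, rcond=None)
    return beta

def reconstruct_surge(predictors, beta):
    """Apply the fitted coefficients over the full reanalysis era."""
    X = np.column_stack([np.ones(len(predictors)), predictors])
    return X @ beta

# Synthetic calibration: surge driven by a pressure anomaly and wind
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2))
y = 0.3 - 0.5 * X[:, 0] + 0.8 * X[:, 1]
beta = fit_surge_model(X, y)
y_hat = reconstruct_surge(X, beta)
```

    Because the reanalysis spans the 20th century while the gauge records are short, the fitted model extends the surge series far beyond the observational period, which is the point of the methodology.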

  12. Evaluation of a 3D point cloud tetrahedral tomographic reconstruction method

    Science.gov (United States)

    Pereira, N. F.; Sitek, A.

    2010-09-01

    Tomographic reconstruction on an irregular grid may be superior to reconstruction on a regular grid. This is achieved through an appropriate choice of the image space model, the selection of an optimal set of points and the use of any available prior information during the reconstruction process. Accordingly, a number of reconstruction-related parameters must be optimized for best performance. In this work, a 3D point cloud tetrahedral mesh reconstruction method is evaluated for quantitative tasks. A linear image model is employed to obtain the reconstruction system matrix and five point generation strategies are studied. The evaluation is performed using the recovery coefficient, as well as voxel- and template-based estimates of bias and variance measures, computed over specific regions in the reconstructed image. A similar analysis is performed for regular grid reconstructions that use voxel basis functions. The maximum likelihood expectation maximization reconstruction algorithm is used. For the tetrahedral reconstructions, of the five point generation methods that are evaluated, three use image priors. For evaluation purposes, an object consisting of overlapping spheres with varying activity is simulated. The exact parallel projection data of this object are obtained analytically using a parallel projector, and multiple Poisson noise realizations of these exact data are generated and reconstructed using the different point generation strategies. The unconstrained nature of point placement in some of the irregular mesh-based reconstruction strategies has superior activity recovery for small, low-contrast image regions. The results show that, with an appropriately generated set of mesh points, the irregular grid reconstruction methods can out-perform reconstructions on a regular grid for mathematical phantoms, in terms of the performance measures evaluated.

  13. Evaluation of a 3D point cloud tetrahedral tomographic reconstruction method

    International Nuclear Information System (INIS)

    Pereira, N F; Sitek, A

    2010-01-01

    Tomographic reconstruction on an irregular grid may be superior to reconstruction on a regular grid. This is achieved through an appropriate choice of the image space model, the selection of an optimal set of points and the use of any available prior information during the reconstruction process. Accordingly, a number of reconstruction-related parameters must be optimized for best performance. In this work, a 3D point cloud tetrahedral mesh reconstruction method is evaluated for quantitative tasks. A linear image model is employed to obtain the reconstruction system matrix and five point generation strategies are studied. The evaluation is performed using the recovery coefficient, as well as voxel- and template-based estimates of bias and variance measures, computed over specific regions in the reconstructed image. A similar analysis is performed for regular grid reconstructions that use voxel basis functions. The maximum likelihood expectation maximization reconstruction algorithm is used. For the tetrahedral reconstructions, of the five point generation methods that are evaluated, three use image priors. For evaluation purposes, an object consisting of overlapping spheres with varying activity is simulated. The exact parallel projection data of this object are obtained analytically using a parallel projector, and multiple Poisson noise realizations of these exact data are generated and reconstructed using the different point generation strategies. The unconstrained nature of point placement in some of the irregular mesh-based reconstruction strategies has superior activity recovery for small, low-contrast image regions. The results show that, with an appropriately generated set of mesh points, the irregular grid reconstruction methods can out-perform reconstructions on a regular grid for mathematical phantoms, in terms of the performance measures evaluated.

  14. Evaluation of a 3D point cloud tetrahedral tomographic reconstruction method

    Energy Technology Data Exchange (ETDEWEB)

    Pereira, N F; Sitek, A, E-mail: nfp4@bwh.harvard.ed, E-mail: asitek@bwh.harvard.ed [Department of Radiology, Brigham and Women's Hospital-Harvard Medical School Boston, MA (United States)

    2010-09-21

    Tomographic reconstruction on an irregular grid may be superior to reconstruction on a regular grid. This is achieved through an appropriate choice of the image space model, the selection of an optimal set of points and the use of any available prior information during the reconstruction process. Accordingly, a number of reconstruction-related parameters must be optimized for best performance. In this work, a 3D point cloud tetrahedral mesh reconstruction method is evaluated for quantitative tasks. A linear image model is employed to obtain the reconstruction system matrix and five point generation strategies are studied. The evaluation is performed using the recovery coefficient, as well as voxel- and template-based estimates of bias and variance measures, computed over specific regions in the reconstructed image. A similar analysis is performed for regular grid reconstructions that use voxel basis functions. The maximum likelihood expectation maximization reconstruction algorithm is used. For the tetrahedral reconstructions, of the five point generation methods that are evaluated, three use image priors. For evaluation purposes, an object consisting of overlapping spheres with varying activity is simulated. The exact parallel projection data of this object are obtained analytically using a parallel projector, and multiple Poisson noise realizations of these exact data are generated and reconstructed using the different point generation strategies. The unconstrained nature of point placement in some of the irregular mesh-based reconstruction strategies has superior activity recovery for small, low-contrast image regions. The results show that, with an appropriately generated set of mesh points, the irregular grid reconstruction methods can out-perform reconstructions on a regular grid for mathematical phantoms, in terms of the performance measures evaluated.

  15. Parametric image reconstruction using spectral analysis of PET projection data

    International Nuclear Information System (INIS)

    Meikle, Steven R.; Matthews, Julian C.; Cunningham, Vincent J.; Bailey, Dale L.; Livieratos, Lefteris; Jones, Terry; Price, Pat

    1998-01-01

    Spectral analysis is a general modelling approach that enables calculation of parametric images from reconstructed tracer kinetic data independent of an assumed compartmental structure. We investigated the validity of applying spectral analysis directly to projection data, motivated by the advantages that: (i) the number of reconstructions is reduced by an order of magnitude and (ii) iterative reconstruction becomes practical, which may improve signal-to-noise ratio (SNR). A dynamic software phantom with typical 2-[11C]thymidine kinetics was used to compare projection-based and image-based methods and to assess bias-variance trade-offs using iterative expectation maximization (EM) reconstruction. We found that the two approaches are not exactly equivalent due to properties of the non-negative least-squares algorithm. However, the differences are small (for K1 and, to a lesser extent, VD). The optimal number of EM iterations was 15-30, with up to a two-fold improvement in SNR over filtered back projection. We conclude that projection-based spectral analysis with EM reconstruction yields accurate parametric images with high SNR and has potential application to a wide range of positron emission tomography ligands. (author)
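
    Spectral analysis represents a tissue curve as a non-negative combination of basis functions drawn from a predefined grid of decay rates, with the coefficients found by non-negative least squares. A minimal sketch, assuming a delta input function and an illustrative rate grid (all values hypothetical):

```python
import numpy as np
from scipy.optimize import nnls

t = np.linspace(0.0, 60.0, 121)            # minutes
betas = np.logspace(-3, 0, 20)             # predefined grid of decay rates
basis = np.exp(-np.outer(t, betas))        # exponential basis functions

# Build a noiseless time-activity curve from a sparse "true" spectrum
true_coeffs = np.zeros(20)
true_coeffs[[3, 12]] = [0.5, 1.5]
tac = basis @ true_coeffs

coeffs, resid = nnls(basis, tac)           # non-negative spectrum estimate
```

    The non-negativity constraint is what makes the projection-based and image-based variants discussed in the abstract not exactly equivalent.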

  16. Advanced virtual monochromatic reconstruction of dual-energy unenhanced brain computed tomography in children: comparison of image quality against standard mono-energetic images and conventional polychromatic computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Park, Juil [Seoul National University Children's Hospital, Department of Radiology, Seoul (Korea, Republic of); Choi, Young Hun [Seoul National University Children's Hospital, Department of Radiology, Seoul (Korea, Republic of); Seoul National University College of Medicine, Department of Radiology, Seoul (Korea, Republic of); Cheon, Jung-Eun; Kim, Woo Sun; Kim, In-One [Seoul National University Children's Hospital, Department of Radiology, Seoul (Korea, Republic of); Seoul National University College of Medicine, Department of Radiology, Seoul (Korea, Republic of); Seoul National University Medical Research Center, Institute of Radiation Medicine, Seoul (Korea, Republic of); Pak, Seong Yong [Siemens Healthineers, Seoul (Korea, Republic of); Krauss, Bernhard [Siemens Healthineers, Forchheim (Germany)

    2017-11-15

    Advanced virtual monochromatic reconstruction from dual-energy brain CT has not been evaluated in children. To determine the most effective advanced virtual monochromatic imaging energy level for maximizing pediatric brain parenchymal image quality in dual-energy unenhanced brain CT and to compare this technique with conventional monochromatic reconstruction and polychromatic scanning. Using both conventional (Mono) and advanced monochromatic reconstruction (Mono+) techniques, we retrospectively reconstructed 13 virtual monochromatic imaging energy levels from 40 keV to 100 keV in 5-keV increments from dual-source, dual-energy unenhanced brain CT scans obtained in 23 children. We analyzed gray and white matter noise ratios, signal-to-noise ratios and contrast-to-noise ratio, and posterior fossa artifact. We chose the optimal mono-energetic levels and compared them with conventional CT. For Mono+, maximum optima were observed at 60 keV, with minimum posterior fossa artifact at 70 keV. For Mono, optima were at 65-70 keV, with minimum posterior fossa artifact at 75 keV. Mono+ was superior to Mono and to polychromatic CT for image-quality measures. Subjective analysis rated Mono+ superior to other image sets. Optimal virtual monochromatic imaging using the Mono+ algorithm demonstrated better image quality for gray-white matter differentiation and reduction of the artifact in the posterior fossa. (orig.)

  17. Direct reconstruction of parametric images for brain PET with event-by-event motion correction: evaluation in two tracers across count levels

    Science.gov (United States)

    Germino, Mary; Gallezot, Jean-Dominique; Yan, Jianhua; Carson, Richard E.

    2017-07-01

    Parametric images for dynamic positron emission tomography (PET) are typically generated by an indirect method, i.e. reconstructing a time series of emission images, then fitting a kinetic model to each voxel time activity curve. Alternatively, 'direct reconstruction' incorporates the kinetic model into the reconstruction algorithm itself, directly producing parametric images from projection data. Direct reconstruction has been shown to achieve parametric images with lower standard error than the indirect method. Here, we present direct reconstruction for brain PET using event-by-event motion correction of list-mode data, applied to two tracers. Event-by-event motion correction was implemented for direct reconstruction in the Parametric Motion-compensation OSEM List-mode Algorithm for Resolution-recovery reconstruction. The direct implementation was tested on simulated and human datasets with tracers [11C]AFM (serotonin transporter) and [11C]UCB-J (synaptic density), which follow the 1-tissue compartment model. Rigid head motion was tracked with the Vicra system. Parametric images of K1 and distribution volume (VT = K1/k2) were compared to those generated by the indirect method by regional coefficient of variation (CoV). Performance across count levels was assessed using sub-sampled datasets. For simulated and real datasets at high counts, the two methods estimated K1 and VT with comparable accuracy. At lower count levels, the direct method was substantially more robust to outliers than the indirect method. Compared to the indirect method, direct reconstruction reduced regional K1 CoV by 35-48% (simulated dataset), 39-43% ([11C]AFM dataset) and 30-36% ([11C]UCB-J dataset) across count levels (averaged over regions at matched iteration); VT CoV was reduced by 51-58%, 54-60% and 30-46%, respectively. Motion correction played an important role in the dataset with larger motion: correction increased regional VT by 51% on average in the [11C
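
    For tracers following the 1-tissue compartment model, VT = K1/k2 can be estimated by fitting a convolution model to a time-activity curve. A toy indirect-style fit with a hypothetical plasma input function (illustrative only; not the direct list-mode algorithm described in this record):

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0.0, 60.0, 241)        # minutes
cp = np.exp(-0.1 * t) * t              # hypothetical plasma input function

def one_tc(t, K1, k2):
    """1-tissue compartment model: C_T = K1 * exp(-k2 t) convolved with Cp."""
    dt = t[1] - t[0]
    irf = K1 * np.exp(-k2 * t)
    return np.convolve(irf, cp)[: len(t)] * dt

ct = one_tc(t, 0.3, 0.1)               # simulated tissue curve, K1=0.3, k2=0.1
(K1_hat, k2_hat), _ = curve_fit(one_tc, t, ct, p0=[0.1, 0.05])
VT = K1_hat / k2_hat                   # distribution volume
```

    With noiseless data the fit recovers K1 and VT essentially exactly; the CoV gains reported above come from how the direct method handles noise, not from the model itself.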

  18. Bioengineered human IAS reconstructs with functional and molecular properties similar to intact IAS

    Science.gov (United States)

    Singh, Jagmohan

    2012-01-01

    Because of its critical importance in rectoanal incontinence, we determined the feasibility of reconstructing the internal anal sphincter (IAS) from human IAS smooth muscle cells (SMCs) with functional and molecular attributes similar to the intact sphincter. The reconstructs were developed using SMCs from the circular smooth muscle layer of the human IAS, grown in smooth muscle differentiation media under sterile conditions in Sylgard-coated tissue culture plates with central Sylgard posts. The basal tone in the reconstructs and its changes were recorded following 0 Ca2+, KCl, bethanechol, isoproterenol, protein kinase C (PKC) activator phorbol 12,13-dibutyrate, and Rho kinase (ROCK) and PKC inhibitors Y-27632 and Gö-6850, respectively. Western blot (WB), immunofluorescence (IF), and immunocytochemical (IC) analyses were also performed. The reconstructs developed spontaneous tone (0.68 ± 0.26 mN). Bethanechol (a muscarinic agonist) and K+ depolarization produced contraction, whereas isoproterenol (β-adrenoceptor agonist) and Y-27632 produced a concentration-dependent decrease in the tone. Maximal decrease in basal tone with Y-27632 and Gö-6850 (each 10^-5 M) was 80.45 ± 3.29 and 17.76 ± 3.50%, respectively. WB data with the IAS constructs' SMCs revealed higher levels of RhoA/ROCK, protein kinase C-potentiated inhibitor or inhibitory phosphoprotein for myosin phosphatase (CPI-17), phospho-CPI-17, MYPT1, and 20-kDa myosin light chain vs. rectal smooth muscle. WB, IF, and IC studies of the original SMCs and of SMCs redispersed from the reconstructs for the relative distribution of different signal transduction proteins confirmed the feasibility of reconstruction of IAS with functional properties similar to intact IAS and demonstrated the development of myogenic tone with critical dependence on RhoA/ROCK. We conclude that it is feasible to bioengineer IAS constructs using human IAS SMCs that behave like intact IAS. PMID:22790596

  19. Entropy maximization

    Indian Academy of Sciences (India)

    Abstract. It is shown that (i) every probability density is the unique maximizer of relative entropy in an appropriate class and (ii) in the class of all pdf f that satisfy ∫ f h_i dμ = λ_i for i = 1, 2, ..., k, the maximizer of entropy is an f_0 that is proportional to exp(∑ c_i h_i) for some choice of c_i. An extension of this to a continuum of ...
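
    The exponential-family form of the maximizer follows from a standard Lagrange-multiplier argument; a sketch in the abstract's notation (the constants c_0, c_i are the multipliers):

```latex
\mathcal{L}(f) = -\int f\log f\,d\mu
  + \sum_{i=1}^{k} c_i\left(\int f h_i\,d\mu - \lambda_i\right)
  + c_0\left(\int f\,d\mu - 1\right),
\qquad
\frac{\delta\mathcal{L}}{\delta f} = -\log f - 1 + \sum_{i} c_i h_i + c_0 = 0
\;\Longrightarrow\;
f_0 \propto \exp\Big(\sum_{i} c_i h_i\Big).
```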

  20. Determination of the optimal dose reduction level via iterative reconstruction using 640-slice volume chest CT in a pig model.

    Directory of Open Access Journals (Sweden)

    Xingli Liu

    To determine the optimal dose reduction level of the iterative reconstruction technique for paediatric chest CT in pig models. 27 infant pigs underwent 640-slice volume chest CT with 80 kVp and different mAs. Automatic exposure control was used, and the noise index was set to SD10 (Group A, routine dose) and SD12.5, SD15, SD17.5, and SD20 (Groups B to E) to reduce dose progressively. Group A was reconstructed with filtered back projection (FBP), and Groups B to E were reconstructed using iterative reconstruction (IR). Objective and subjective image quality (IQ) were compared among groups to determine an optimal radiation reduction level. The noise and signal-to-noise ratio (SNR) in Group D had no significant statistical difference from those in Group A (P = 1.0). The scores of subjective IQ in Group A were not significantly different from those in Group D (P>0.05). There were no obvious statistical differences in the objective and subjective index values among the subgroups (small, medium and large) of Group D. The effective dose (ED) of Group D was 58.9% lower than that of Group A (0.20±0.05 mSv vs 0.48±0.10 mSv, p < 0.001). In infant pig chest CT, iterative reconstruction can provide diagnostic image quality while reducing the dose by 58.9%.

  1. MRI isotropic resolution reconstruction from two orthogonal scans

    Science.gov (United States)

    Tamez-Pena, Jose G.; Totterman, Saara; Parker, Kevin J.

    2001-07-01

    An algorithm for the reconstruction of iso-resolution volumetric MR data sets from two standard orthogonal MR scans having anisotropic resolution has been developed. The reconstruction algorithm starts by registering a pair of orthogonal volumetric MR data sets. The registration is done by maximizing the correlation between the gradient magnitudes using a simple translation-rotation model in a multi-resolution approach. The algorithm then assumes that the individual voxels in the MR data are an average of the magnetic resonance properties of an elongated imaging volume, and the process is modeled as the projection of MR properties onto a single sensor. This model allows the derivation of a set of linear equations that can be used to recover the MR properties of every single voxel in the iso-resolution volume given only two orthogonal MR scans. Projections onto convex sets (POCS) was used to solve the set of linear equations. Experimental results show the advantage of having an iso-resolution reconstruction for the visualization and analysis of small and thin muscular structures.
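
    POCS for a consistent linear system reduces, in its simplest form, to cyclically projecting the estimate onto the hyperplane defined by each equation (the Kaczmarz sweep). A toy sketch of that idea, not the authors' MR-specific formulation:

```python
import numpy as np

def pocs_kaczmarz(A, b, n_sweeps=100):
    """Cyclic projection onto each hyperplane {x : a_i . x = b_i}."""
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for a_i, b_i in zip(A, b):
            # Orthogonal projection of x onto the i-th constraint set
            x += (b_i - a_i @ x) / (a_i @ a_i) * a_i
    return x

# Toy consistent system: 2x + y = 5, x + 3y = 10  (solution x=1, y=3)
A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([5.0, 10.0])
x = pocs_kaczmarz(A, b)
```

    For a consistent system the sweeps converge to a solution; with inconsistent data the iterates settle into a limit cycle near the least-squares solution.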

  2. Effect of filters and reconstruction algorithms on I-124 PET in Siemens Inveon PET scanner

    Science.gov (United States)

    Ram Yu, A.; Kim, Jin Su

    2015-10-01

    Purpose: To assess the effects of filtering and reconstruction on Siemens I-124 PET data. Methods: A Siemens Inveon PET was used. Spatial resolution of I-124 was measured to a transverse offset of 50 mm from the center. FBP, 2D ordered subset expectation maximization (OSEM2D), 3D re-projection (3DRP), and maximum a posteriori (MAP) reconstruction methods were tested. Non-uniformity (NU), recovery coefficient (RC), and spillover ratio (SOR) parameterized image quality. Mini deluxe phantom data of I-124 were also assessed. Results: Volumetric resolution was 7.3 mm3 from the transverse FOV center when the FBP reconstruction algorithm with a ramp filter was used. MAP yielded minimal NU with β = 1.5. OSEM2D yielded maximal RC. SOR was below 4% for FBP with ramp, Hamming, Hanning, or Shepp-Logan filters. Based on the mini deluxe phantom results, an FBP with Hanning or Parzen filters, or a 3DRP with Hanning filter yielded feasible I-124 PET data. Conclusions: Reconstruction algorithms and filters were compared. FBP with Hanning or Parzen filters, or 3DRP with Hanning filter yielded feasible data for quantifying I-124 PET.

  3. Performance measurement of PSF modeling reconstruction (True X) on Siemens Biograph TruePoint TrueV PET/CT.

    Science.gov (United States)

    Lee, Young Sub; Kim, Jin Su; Kim, Kyeong Min; Kang, Joo Hyun; Lim, Sang Moo; Kim, Hee-Joung

    2014-05-01

    The Siemens Biograph TruePoint TrueV (B-TPTV) positron emission tomography (PET) scanner performs 3D PET reconstruction using a system matrix with point spread function (PSF) modeling (called the True X reconstruction). PET resolution was dramatically improved with the True X method. In this study, we assessed the spatial resolution and image quality on a B-TPTV PET scanner. In addition, we assessed the feasibility of animal imaging with a B-TPTV PET and compared it with a microPET R4 scanner. Spatial resolution was measured at the center and at 8 cm offset from the center in the transverse plane with warm background activity. True X, ordered subset expectation maximization (OSEM) without PSF modeling, and filtered back-projection (FBP) reconstruction methods were used. Percent contrast (% contrast) and percent background variability (% BV) were assessed according to NEMA NU2-2007. The recovery coefficient (RC), non-uniformity, spill-over ratio (SOR), and PET imaging of the Micro Deluxe Phantom were assessed to compare the image quality of B-TPTV PET with that of the microPET R4. When True X reconstruction was used, spatial resolution was improved, and RC was higher than that with the FBP method and the OSEM without PSF modeling method on the microPET R4. The non-uniformity with True X reconstruction was higher than that with FBP and OSEM without PSF modeling on the microPET R4. SOR with True X reconstruction was better than that with FBP or OSEM without PSF modeling on the microPET R4. This study assessed the performance of the True X reconstruction. Spatial resolution with True X reconstruction was improved by 45 % and its % contrast was significantly improved compared to those with the conventional OSEM reconstruction without PSF modeling. The noise level was higher than that with the other reconstruction algorithms. Therefore, True X reconstruction should be used with caution when quantifying PET data.

  4. Entropy Maximization

    Indian Academy of Sciences (India)

    It is shown that (i) every probability density is the unique maximizer of relative entropy in an appropriate class and (ii) in the class of all pdf f that satisfy ∫ f h_i dμ = λ_i for i = 1, 2, ..., k, the maximizer of entropy is an f_0 that is proportional to exp(∑ c_i h_i) for some choice of c_i. An extension of this to a continuum of ...

  5. Cartilage grafting in nasal reconstruction.

    Science.gov (United States)

    Immerman, Sara; White, W Matthew; Constantinides, Minas

    2011-02-01

    Nasal reconstruction after resection for cutaneous malignancies poses a unique challenge to facial plastic surgeons. The nose, a unique 3-D structure, not only must remain functional but also be aesthetically pleasing to patients. A complete understanding of all the layers of the nose and knowledge of available cartilage grafting material is necessary. Autogenous material, namely septal, auricular, and costal cartilage, is the most favored material in a free cartilage graft or a composite cartilage graft. All types of material have advantages and disadvantages that should guide the most appropriate selection to maximize the functional and cosmetic outcomes for patients. Copyright © 2011 Elsevier Inc. All rights reserved.

  6. Iterative reconstruction using a Monte Carlo based system transfer matrix for dedicated breast positron emission tomography

    Energy Technology Data Exchange (ETDEWEB)

    Saha, Krishnendu [Ohio Medical Physics Consulting, Dublin, Ohio 43017 (United States); Straus, Kenneth J.; Glick, Stephen J. [Department of Radiology, University of Massachusetts Medical School, Worcester, Massachusetts 01655 (United States); Chen, Yu [Department of Radiation Oncology, Columbia University, New York, New York 10032 (United States)

    2014-08-28

    To maximize sensitivity, it is desirable that ring Positron Emission Tomography (PET) systems dedicated to imaging the breast have a small bore. Unfortunately, due to parallax error this causes substantial degradation in spatial resolution for objects near the periphery of the breast. In this work, a framework for computing and incorporating an accurate system matrix into iterative reconstruction is presented in an effort to reduce spatial resolution degradation towards the periphery of the breast. The GATE Monte Carlo simulation software was utilized to accurately model the system matrix for a breast PET system. A strategy for increasing the count statistics in the system matrix computation and for reducing the system element storage space was used by calculating only a subset of matrix elements and then estimating the rest of the elements by using the geometric symmetry of the cylindrical scanner. To implement this strategy, polar voxel basis functions were used to represent the object, resulting in a block-circulant system matrix. Simulation studies using a breast PET scanner model with ring geometry demonstrated improved contrast at a 45% reduced noise level and a 1.5- to 3-fold improvement in resolution performance when compared to MLEM reconstruction using a simple line-integral model. The GATE-based system matrix reconstruction technique promises to improve resolution and noise performance and reduce image distortion at the FOV periphery compared to line-integral-based system matrix reconstruction.

  7. Quantum state engineering and reconstruction in cavity QED. An analytical approach

    International Nuclear Information System (INIS)

    Lougovski, P.

    2004-01-01

    The models of a strongly-driven micromaser and a one-atom laser are developed. Their analytical solutions are obtained by means of phase space techniques. It is shown how to exploit the model of a one-atom laser for simultaneous generation and monitoring of the decoherence of the atom-field ''Schroedinger cat'' states. Similar machinery applied to the problem of the generation of the maximally-entangled states of two atoms placed inside an optical cavity permits its analytical solution. The steady-state solution of the problem exhibits a structure in which the two-atom maximally-entangled state correlates with the vacuum state of the cavity. As a consequence, it is demonstrated that the atomic maximally-entangled state, depending on the coupling regime, can be produced via a single no-photon measurement or a sequence of them. The question of the implementation of a quantum memory device using a dispersive interaction between the collective internal ground state of an atomic ensemble and two orthogonal modes of a cavity is addressed. The problem of quantum state reconstruction in the context of cavity quantum electrodynamics is considered. The optimal operational definition of the Wigner function of a cavity field is worked out. It is based on the Fresnel transform of the atomic inversion of a probe atom. The general integral transformation for the Wigner function reconstruction of a particle in an arbitrary symmetric potential is derived

  8. Biomechanical Comparison of Five Posterior Cruciate Ligament Reconstruction Techniques.

    Science.gov (United States)

    Nuelle, Clayton W; Milles, Jeffrey L; Pfeiffer, Ferris M; Stannard, James P; Smith, Patrick A; Kfuri, Mauricio; Cook, James L

    2017-07-01

    No surgical technique recreates native posterior cruciate ligament (PCL) biomechanics. We compared the biomechanics of five different PCL reconstruction techniques versus the native PCL. Cadaveric knees (n = 20) were randomly assigned to one of five reconstruction techniques: single bundle all-inside arthroscopic inlay, single bundle all-inside suspensory fixation, single bundle arthroscopic-assisted open onlay (SB-ONL), double bundle arthroscopic-assisted open inlay (DB-INL), and double bundle all-inside suspensory fixation (DB-SUSP). Each specimen was potted and connected to a servo-hydraulic load frame for testing in three conditions: PCL intact, PCL deficient, and PCL reconstructed. Testing consisted of a posterior force up to 100 N at a rate of 1 N/s at four knee flexion angles: 10, 30, 60, and 90 degrees. Three material properties were measured under each condition: load to 5 mm displacement, maximal displacement, and stiffness. Data were normalized to the native PCL, compared across techniques, compared with all PCL-intact knees and to all PCL-deficient knees using one-way analysis of variance. For load to 5 mm displacement, intact knees required significantly (p < 0.03) more load at 30 degrees of flexion than all reconstructions except the DB-SUSP. At 60 degrees of flexion, intact required significantly (p < 0.01) more load than all others except the SB-ONL. At 90 degrees, intact, SB-ONL, DB-INL, and DB-SUSP required significantly more load (p < 0.05). Maximal displacement testing showed the intact to have significantly (p < 0.02) less laxity than all others except the DB-INL and DB-SUSP at 60 degrees. At 90 degrees the intact showed significantly (p < 0.01) less laxity than all others except the DB-SUSP. The intact was significantly stiffer than all others at 30 degrees (p < 0.03) and 60 degrees (p < 0.01). Finally, the intact was significantly (p < 0.05) stiffer than all others except the DB

  9. Intensity-based Bayesian framework for image reconstruction from sparse projection data

    International Nuclear Information System (INIS)

    Rashed, E.A.; Kudo, Hiroyuki

    2009-01-01

    This paper presents a Bayesian framework for iterative image reconstruction from projection data measured over a limited number of views. The classical Nyquist sampling rule yields the minimum number of projection views required for accurate reconstruction. However, challenges exist in many medical and industrial imaging applications in which the projection data is undersampled. Classical analytical reconstruction methods such as filtered backprojection (FBP) are not a good choice for use in such cases because the data undersampling in the angular range introduces aliasing and streak artifacts that degrade lesion detectability. In this paper, we propose a Bayesian framework for maximum likelihood-expectation maximization (ML-EM)-based iterative reconstruction methods that incorporates a priori knowledge obtained from expected intensity information. The proposed framework is based on the fact that, in tomographic imaging, it is often possible to expect a set of intensity values of the reconstructed object with relatively high accuracy. The image reconstruction cost function is modified to include the l1-norm distance to the a priori known information. The proposed method has the potential to regularize the solution to reduce artifacts without missing lesions that cannot be expected from the a priori information. Numerical studies showed a significant improvement in image quality and lesion detectability under the condition of highly undersampled projection data. (author)
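
    One simple way to realize such an intensity prior is to alternate the standard MLEM update with a relaxation step pulling each voxel toward its nearest expected intensity value. This is a simplified surrogate for the l1-penalized cost described in the abstract, not the authors' exact algorithm; all names and values below are illustrative:

```python
import numpy as np

def em_with_intensity_prior(A, y, levels, beta=0.1, n_iter=100):
    """MLEM update followed by a soft pull toward the nearest expected
    intensity level (toy stand-in for an l1 intensity prior)."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])
    levels = np.asarray(levels, dtype=float)
    for _ in range(n_iter):
        proj = A @ x
        x *= (A.T @ np.where(proj > 0, y / proj, 0.0)) / np.maximum(sens, 1e-12)
        # Relaxation toward the a priori known intensity set
        nearest = levels[np.abs(x[:, None] - levels[None, :]).argmin(axis=1)]
        x = (1 - beta) * x + beta * nearest
    return x

# Toy example: true intensities lie in the known set {0, 2, 3}
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = A @ np.array([2.0, 3.0])
x_hat = em_with_intensity_prior(A, y, levels=[0.0, 2.0, 3.0])
```

    The relaxation weight beta plays the role of the penalty strength: too large and the prior can suppress lesions not represented in the expected intensity set, which is the failure mode the authors' formulation is designed to avoid.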

  10. Full dose reduction potential of statistical iterative reconstruction for head CT protocols in a predominantly pediatric population

    Science.gov (United States)

    Mirro, Amy E.; Brady, Samuel L.; Kaufman, Robert A.

    2016-01-01

    Purpose To implement the maximum level of statistical iterative reconstruction that can be used to establish dose-reduced head CT protocols in a primarily pediatric population. Methods Select head examinations (brain, orbits, sinus, maxilla and temporal bones) were investigated. Dose-reduced head protocols using an adaptive statistical iterative reconstruction (ASiR) were compared for image quality with the original filtered back projection (FBP) reconstructed protocols in phantom using the following metrics: image noise frequency (change in perceived appearance of noise texture), image noise magnitude, contrast-to-noise ratio (CNR), and spatial resolution. Dose reduction estimates were based on computed tomography dose index (CTDIvol) values. Patient CTDIvol and image noise magnitude were assessed in 737 pre- and post-dose-reduction examinations. Results Image noise texture was acceptable up to 60% ASiR for the Soft reconstruction kernel (at both 100 and 120 kVp), and up to 40% ASiR for the Standard reconstruction kernel. Implementation of 40% and 60% ASiR led to an average reduction in CTDIvol of 43% for brain, 41% for orbits, 30% for maxilla, 43% for sinus, and 42% for temporal bone protocols for patients between 1 month and 26 years, while maintaining an average noise magnitude difference of 0.1% (range: −3% to 5%), improving CNR of low contrast soft tissue targets, and improving spatial resolution of high contrast bony anatomy, as compared to FBP. Conclusion This study demonstrates a methodology for maximizing patient dose reduction while maintaining image quality using statistical iterative reconstruction for a primarily pediatric population undergoing head CT examination. PMID:27056425

  11. Maximally incompatible quantum observables

    Energy Technology Data Exchange (ETDEWEB)

    Heinosaari, Teiko, E-mail: teiko.heinosaari@utu.fi [Turku Centre for Quantum Physics, Department of Physics and Astronomy, University of Turku, FI-20014 Turku (Finland); Schultz, Jussi, E-mail: jussi.schultz@gmail.com [Dipartimento di Matematica, Politecnico di Milano, Piazza Leonardo da Vinci 32, I-20133 Milano (Italy); Toigo, Alessandro, E-mail: alessandro.toigo@polimi.it [Dipartimento di Matematica, Politecnico di Milano, Piazza Leonardo da Vinci 32, I-20133 Milano (Italy); Istituto Nazionale di Fisica Nucleare, Sezione di Milano, Via Celoria 16, I-20133 Milano (Italy); Ziman, Mario, E-mail: ziman@savba.sk [RCQI, Institute of Physics, Slovak Academy of Sciences, Dúbravská cesta 9, 84511 Bratislava (Slovakia); Faculty of Informatics, Masaryk University, Botanická 68a, 60200 Brno (Czech Republic)

    2014-05-01

    The existence of maximally incompatible quantum observables in the sense of a minimal joint measurability region is investigated. Employing the universal quantum cloning device it is argued that only infinite dimensional quantum systems can accommodate maximal incompatibility. It is then shown that two of the most common pairs of complementary observables (position and momentum; number and phase) are maximally incompatible.

  12. Maximally incompatible quantum observables

    International Nuclear Information System (INIS)

    Heinosaari, Teiko; Schultz, Jussi; Toigo, Alessandro; Ziman, Mario

    2014-01-01

    The existence of maximally incompatible quantum observables in the sense of a minimal joint measurability region is investigated. Employing the universal quantum cloning device it is argued that only infinite dimensional quantum systems can accommodate maximal incompatibility. It is then shown that two of the most common pairs of complementary observables (position and momentum; number and phase) are maximally incompatible.

  13. Improved proton computed tomography by dual modality image reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, David C., E-mail: dch@ki.au.dk; Bassler, Niels [Experimental Clinical Oncology, Aarhus University, 8000 Aarhus C (Denmark); Petersen, Jørgen Breede Baltzer [Medical Physics, Aarhus University Hospital, 8000 Aarhus C (Denmark); Sørensen, Thomas Sangild [Computer Science, Aarhus University, 8000 Aarhus C, Denmark and Clinical Medicine, Aarhus University, 8200 Aarhus N (Denmark)

    2014-03-15

    Purpose: Proton computed tomography (CT) is a promising image modality for improving the stopping power estimates and dose calculations for particle therapy. However, the finite range of about 33 cm of water of most commercial proton therapy systems limits the sites that can be scanned from a full 360° rotation. In this paper the authors propose a method to overcome the problem using a dual modality reconstruction (DMR) combining the proton data with a cone-beam x-ray prior. Methods: A Catphan 600 phantom was scanned using a cone beam x-ray CT scanner. A digital replica of the phantom was created in the Monte Carlo code Geant4 and a 360° proton CT scan was simulated, storing the entrance and exit position and momentum vector of every proton. Proton CT images were reconstructed using a varying number of angles from the scan. The proton CT images were reconstructed using a constrained nonlinear conjugate gradient algorithm, minimizing total variation and the x-ray CT prior while remaining consistent with the proton projection data. The proton histories were reconstructed along curved cubic-spline paths. Results: The spatial resolution of the cone beam CT prior was retained for the fully sampled case and the 90° interval case, with the MTF = 0.5 (modulation transfer function) ranging from 5.22 to 5.65 linepairs/cm. In the 45° interval case, the MTF = 0.5 dropped to 3.91 linepairs/cm. For the fully sampled DMR, the maximal root mean square (RMS) error was 0.006 in units of relative stopping power. For the limited angle cases the maximal RMS error was 0.18, an almost five-fold improvement over the cone beam CT estimate. Conclusions: Dual modality reconstruction yields the high spatial resolution of cone beam x-ray CT while maintaining the improved stopping power estimation of proton CT. In the case of limited angles, the use of prior image proton CT greatly improves the resolution and stopping power estimate, but does not fully achieve the quality of a 360
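
    The reconstruction in this record minimizes a data-fidelity term plus total variation (TV). A 1D toy stand-in using plain gradient descent on a smoothed TV penalty (illustrative only; the paper uses a constrained nonlinear conjugate gradient with an x-ray CT prior, which is not reproduced here):

```python
import numpy as np

def tv_reconstruct(A, b, lam=0.05, step=0.01, n_iter=2000, eps=1e-3):
    """Gradient descent on ||Ax - b||^2 + lam * sum sqrt(diff^2 + eps),
    a smoothed 1D total-variation penalty."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        diff = np.diff(x)
        w = diff / np.sqrt(diff**2 + eps)   # derivative of smoothed |diff|
        tv_grad = np.zeros_like(x)
        tv_grad[:-1] -= w
        tv_grad[1:] += w
        grad = 2 * A.T @ (A @ x - b) + lam * tv_grad
        x -= step * grad
    return x

# Toy piecewise-constant signal observed through the identity operator
A = np.eye(4)
b = np.array([1.0, 1.0, 3.0, 3.0])
x = tv_reconstruct(A, b)
```

    The TV term leaves flat regions flat and only slightly shrinks the jump, which is why it suits piecewise-homogeneous objects such as the stopping-power maps discussed above.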

  14. Improved proton computed tomography by dual modality image reconstruction

    International Nuclear Information System (INIS)

    Hansen, David C.; Bassler, Niels; Petersen, Jørgen Breede Baltzer; Sørensen, Thomas Sangild

    2014-01-01

    Purpose: Proton computed tomography (CT) is a promising image modality for improving the stopping power estimates and dose calculations for particle therapy. However, the finite range of about 33 cm of water of most commercial proton therapy systems limits the sites that can be scanned from a full 360° rotation. In this paper the authors propose a method to overcome the problem using a dual modality reconstruction (DMR) combining the proton data with a cone-beam x-ray prior. Methods: A Catphan 600 phantom was scanned using a cone beam x-ray CT scanner. A digital replica of the phantom was created in the Monte Carlo code Geant4 and a 360° proton CT scan was simulated, storing the entrance and exit position and momentum vector of every proton. Proton CT images were reconstructed using a varying number of angles from the scan. The proton CT images were reconstructed using a constrained nonlinear conjugate gradient algorithm, minimizing total variation and the x-ray CT prior while remaining consistent with the proton projection data. The proton histories were reconstructed along curved cubic-spline paths. Results: The spatial resolution of the cone beam CT prior was retained for the fully sampled case and the 90° interval case, with the MTF = 0.5 (modulation transfer function) ranging from 5.22 to 5.65 linepairs/cm. In the 45° interval case, the MTF = 0.5 dropped to 3.91 linepairs/cm. For the fully sampled DMR, the maximal root mean square (RMS) error was 0.006 in units of relative stopping power. For the limited angle cases the maximal RMS error was 0.18, an almost five-fold improvement over the cone beam CT estimate. Conclusions: Dual modality reconstruction yields the high spatial resolution of cone beam x-ray CT while maintaining the improved stopping power estimation of proton CT. In the case of limited angles, the use of prior image proton CT greatly improves the resolution and stopping power estimate, but does not fully achieve the quality of a 360

  15. Reconstruction of Local Sea Levels at South West Pacific Islands—A Multiple Linear Regression Approach (1988-2014)

    Science.gov (United States)

    Kumar, V.; Melet, A.; Meyssignac, B.; Ganachaud, A.; Kessler, W. S.; Singh, A.; Aucan, J.

    2018-02-01

    Rising sea levels are a critical concern in small island nations. The problem is especially serious in the western south Pacific, where the total sea level rise over the last 60 years has been up to 3 times the global average. In this study, we aim to reconstruct sea levels at selected sites in the region (Suva, Lautoka—Fiji, and Nouméa—New Caledonia) as a multilinear regression (MLR) of atmospheric and oceanic variables. We focus on sea level variability at interannual-to-interdecadal time scales, and on the trend over the 1988-2014 period. Local sea levels are first expressed as a sum of steric and mass changes. Then a dynamical approach is used based on wind stress curl as a proxy for the thermosteric component, as wind stress curl anomalies can modulate the thermocline depth and resultant sea levels via Rossby wave propagation. Statistically significant predictors among wind stress curl, halosteric sea level, zonal/meridional wind stress components, and sea surface temperature are used to construct a MLR model simulating local sea levels. Although we focus on the local scale, the global mean sea level still needs to be accounted for. Our reconstructions provide insight into key drivers of sea level variability at the selected sites, showing that while local dynamics and the global signal modulate sea level to some extent, most of the variance is driven by regional factors. On average, the MLR model is able to reproduce 82% of the variance in island sea level, and could be used to derive local sea level projections via downscaling of climate models.
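
    As a sketch of the MLR step only (not the authors' model: the predictors below are random stand-ins for wind stress curl, halosteric sea level, wind stress components and SST, with assumed coefficients), ordinary least squares on synthetic monthly data recovers the coefficients and a variance-explained figure analogous to the one reported:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 324  # monthly samples over 1988-2014 (27 years), illustrative

# Synthetic standardized predictors (hypothetical stand-ins for the
# study's wind stress curl, halosteric sea level, wind stress, SST).
X = rng.standard_normal((n, 4))
true_beta = np.array([6.0, 3.0, -2.0, 1.0])          # mm per unit predictor
sea_level = X @ true_beta + rng.normal(0, 2.0, n)    # mm, with noise

# Fit the MLR by ordinary least squares (design matrix with intercept).
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, sea_level, rcond=None)

# Variance explained (R^2), analogous to the ~82% reported in the study.
pred = A @ coef
r2 = 1 - np.sum((sea_level - pred) ** 2) / np.sum((sea_level - sea_level.mean()) ** 2)
```
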

  16. Shape Reconstruction of Thin Electromagnetic Inclusions via Boundary Measurements: Level-Set Method Combined with the Topological Derivative

    Directory of Open Access Journals (Sweden)

    Won-Kwang Park

    2013-01-01

    Full Text Available An inverse problem for reconstructing arbitrary-shaped thin penetrable electromagnetic inclusions concealed in a homogeneous material is considered in this paper. For this purpose, the level-set evolution method is adopted. The topological derivative concept is incorporated in order to evaluate the evolution speed of the level-set functions. The results of the corresponding numerical simulations with and without noise are presented in this paper.

  17. Sea level reconstruction: Exploration of methods for combining altimetry with other data to beyond the 20-year altimetric record

    DEFF Research Database (Denmark)

    Svendsen, Peter Limkilde; Andersen, Ole Baltazar; Nielsen, Allan Aasbjerg

    2012-01-01

    Ocean satellite altimetry has provided global sets of sea level data for the last two decades, allowing determination of spatial patterns in global sea level. For reconstructions going back further than this period, tide gauge data can be used as a proxy for the model. We examine different methods of combining satellite altimetry and tide gauge data using optimal weighting of tide gauge data, linear regression and EOFs, including automatic quality checks of the tide gauge time series. We attempt to augment the model using various proxies such as climate indices like the NAO and PDO, and investigate ... to spatial distribution, and tide gauge data are available around the Arctic Ocean, which may be important for a later high-latitude reconstruction.

  18. Phylogeographic reconstruction of a bacterial species with high levels of lateral gene transfer

    Science.gov (United States)

    Pearson, T.; Giffard, P.; Beckstrom-Sternberg, S.; Auerbach, R.; Hornstra, H.; Tuanyok, A.; Price, E.P.; Glass, M.B.; Leadem, B.; Beckstrom-Sternberg, J. S.; Allan, G.J.; Foster, J.T.; Wagner, D.M.; Okinaka, R.T.; Sim, S.H.; Pearson, O.; Wu, Z.; Chang, J.; Kaul, R.; Hoffmaster, A.R.; Brettin, T.S.; Robison, R.A.; Mayo, M.; Gee, J.E.; Tan, P.; Currie, B.J.; Keim, P.

    2009-01-01

    Background: Phylogeographic reconstruction of some bacterial populations is hindered by low diversity coupled with high levels of lateral gene transfer. A comparison of recombination levels and diversity at seven housekeeping genes for eleven bacterial species, most of which are commonly cited as having high levels of lateral gene transfer, shows that the relative contribution of homologous recombination versus mutation for Burkholderia pseudomallei is over two times higher than for Streptococcus pneumoniae and is thus the highest value yet reported in bacteria. Despite the potential for homologous recombination to increase diversity, B. pseudomallei exhibits a relative lack of diversity at these loci. In these situations, whole genome genotyping of orthologous shared single nucleotide polymorphism loci, discovered using next generation sequencing technologies, can provide very large data sets capable of estimating core phylogenetic relationships. We compared and searched 43 whole genome sequences of B. pseudomallei and its closest relatives for single nucleotide polymorphisms in orthologous shared regions to use in phylogenetic reconstruction. Results: Bayesian phylogenetic analyses of >14,000 single nucleotide polymorphisms yielded completely resolved trees for these 43 strains with high levels of statistical support. These results enable a better understanding of a separate analysis of population differentiation among >1,700 B. pseudomallei isolates as defined by sequence data from seven housekeeping genes. We analyzed this larger data set for population structure and allele sharing that can be attributed to lateral gene transfer. Our results suggest that despite an almost panmictic population, we can detect two distinct populations of B. pseudomallei that conform to biogeographic patterns found in many plant and animal species: that is, separation along Wallace's Line, a biogeographic boundary between Southeast Asia and Australia. Conclusion: We describe an

  19. Phylogeographic reconstruction of a bacterial species with high levels of lateral gene transfer

    Directory of Open Access Journals (Sweden)

    Kaul Rajinder

    2009-11-01

    Full Text Available Abstract Background Phylogeographic reconstruction of some bacterial populations is hindered by low diversity coupled with high levels of lateral gene transfer. A comparison of recombination levels and diversity at seven housekeeping genes for eleven bacterial species, most of which are commonly cited as having high levels of lateral gene transfer, shows that the relative contribution of homologous recombination versus mutation for Burkholderia pseudomallei is over two times higher than for Streptococcus pneumoniae and is thus the highest value yet reported in bacteria. Despite the potential for homologous recombination to increase diversity, B. pseudomallei exhibits a relative lack of diversity at these loci. In these situations, whole genome genotyping of orthologous shared single nucleotide polymorphism loci, discovered using next generation sequencing technologies, can provide very large data sets capable of estimating core phylogenetic relationships. We compared and searched 43 whole genome sequences of B. pseudomallei and its closest relatives for single nucleotide polymorphisms in orthologous shared regions to use in phylogenetic reconstruction. Results Bayesian phylogenetic analyses of >14,000 single nucleotide polymorphisms yielded completely resolved trees for these 43 strains with high levels of statistical support. These results enable a better understanding of a separate analysis of population differentiation among >1,700 B. pseudomallei isolates as defined by sequence data from seven housekeeping genes. We analyzed this larger data set for population structure and allele sharing that can be attributed to lateral gene transfer. Our results suggest that despite an almost panmictic population, we can detect two distinct populations of B. pseudomallei that conform to biogeographic patterns found in many plant and animal species: that is, separation along Wallace's Line, a biogeographic boundary between Southeast Asia and Australia.

  20. Evaluation of 3D reconstruction algorithms for a small animal PET camera

    International Nuclear Information System (INIS)

    Johnson, C.A.; Gandler, W.R.; Seidel, J.

    1996-01-01

    The use of paired, opposing position-sensitive phototube scintillation cameras (SCs) operating in coincidence for small animal imaging with positron emitters is currently under study. Because of the low sensitivity of the system even in 3D mode and the need to produce images with high resolution, it was postulated that a 3D expectation maximization (EM) reconstruction algorithm might be well suited for this application. We investigated four reconstruction algorithms for the 3D SC PET camera: 2D filtered back-projection (FBP), 2D ordered subset EM (OSEM), 3D reprojection (3DRP), and 3D OSEM. Noise was assessed for all slices by the coefficient of variation in a simulated uniform cylinder. Resolution was assessed from a simulation of 15 point sources in the warm background of the uniform cylinder. At comparable noise levels, the resolution achieved with OSEM (0.9 mm to 1.2 mm) is significantly better than that obtained with FBP or 3DRP (1.5 mm to 2.0 mm). Images of a rat skull labeled with 18F-fluoride suggest that 3D OSEM can improve the image quality of a small animal PET camera

  1. Atlas of the Underworld : Paleo-subduction, -geography, -atmosphere and -sea level reconstructed from present-day mantle structure

    NARCIS (Netherlands)

    van der Meer, Douwe G.

    2017-01-01

    In this thesis, I aimed at searching for new ways of constraining paleo-geographic, -atmosphere and -sea level reconstructions, through an extensive investigation of mantle structure in seismic tomographic models. To this end, I explored evidence for paleo-subduction in these models and how this may

  2. REGEN: Ancestral Genome Reconstruction for Bacteria

    OpenAIRE

    Yang, Kuan; Heath, Lenwood S.; Setubal, João C.

    2012-01-01

    Ancestral genome reconstruction can be understood as a phylogenetic study with more details than a traditional phylogenetic tree reconstruction. We present a new computational system called REGEN for ancestral bacterial genome reconstruction at both the gene and replicon levels. REGEN reconstructs gene content, contiguous gene runs, and replicon structure for each ancestral genome. Along each branch of the phylogenetic tree, REGEN infers evolutionary events, including gene creation and deleti...

  3. Altered lower extremity joint mechanics occur during the star excursion balance test and single leg hop after ACL-reconstruction in a collegiate athlete.

    Science.gov (United States)

    Samaan, Michael A; Ringleb, Stacie I; Bawab, Sebastian Y; Greska, Eric K; Weinhandl, Joshua T

    2018-03-01

    Data on the effects of ACL-reconstruction on lower extremity joint mechanics during performance of the Star Excursion Balance Test (SEBT) and Single Leg Hop (SLH) are limited. The purpose of this study was to determine if altered lower extremity mechanics occur during the SEBT and SLH after ACL-reconstruction. One female Division I collegiate athlete performed the SEBT and SLH tasks, bilaterally, both before ACL injury and 27 months after ACL-reconstruction. Maximal reach, hop distances, lower extremity joint kinematics and moments were compared between both time points. Musculoskeletal simulations were used to assess muscle force production during the SEBT and SLH at both time points. Compared to the pre-injury time point, SEBT reach distances were similar in both limbs after ACL-reconstruction except for the maximal anterior reach distance in the ipsilateral limb. The athlete demonstrated similar hop distances, bilaterally, after ACL-reconstruction compared to the pre-injury time point. Despite normal functional performance during the SEBT and SLH, the athlete exhibited altered lower extremity joint mechanics during both of these tasks. These results suggest that measuring the maximal reach and hop distances for these tasks, in combination with an analysis of the lower extremity joint mechanics that occur after ACL-reconstruction, may help clinicians and researchers to better understand the effects of ACL-reconstruction on the neuromuscular system during the SEBT and SLH.

  4. Maximal voluntary contraction force, SR function and glycogen resynthesis during the first 72 h after a high-level competitive soccer game

    DEFF Research Database (Denmark)

    Krustrup, Peter; Ørtenblad, Niels; Nielsen, Joachim

    2011-01-01

    The aim of this study was to examine maximal voluntary knee-extensor contraction force (MVC force), sarcoplasmic reticulum (SR) function and muscle glycogen levels in the days after a high-level soccer game when players ingested an optimised diet. Seven high-level male soccer players had a vastus lateralis muscle biopsy and a blood sample collected in a control situation and at 0, 24, 48 and 72 h after a competitive soccer game. MVC force, SR function, muscle glycogen, muscle soreness and plasma myoglobin were measured. MVC force sustained over 1 s was 11 and 10% lower (P ...

  5. Model-based respiratory motion compensation for emission tomography image reconstruction

    International Nuclear Information System (INIS)

    Reyes, M; Malandain, G; Koulibaly, P M; Gonzalez-Ballester, M A; Darcourt, J

    2007-01-01

    In emission tomography imaging, respiratory motion causes artifacts in lung and cardiac reconstructed images, which lead to misinterpretations, imprecise diagnosis, and impaired fusion with other modalities. Solutions like respiratory gating, correlated dynamic PET techniques, list-mode data based techniques and others have been tested; they improve the spatial activity distribution in lung lesions, but have the disadvantages of requiring additional instrumentation or discarding part of the projection data used for reconstruction. The objective of this study is to incorporate respiratory motion compensation directly into the image reconstruction process, without any additional acquisition protocol consideration. To this end, we propose an extension to the maximum likelihood expectation maximization (MLEM) algorithm that includes a respiratory motion model, which takes into account the displacements and volume deformations produced by the respiratory motion during the data acquisition process. We present results from synthetic simulations incorporating real respiratory motion as well as from phantom and patient data

  6. Bayesian image reconstruction for improving detection performance of muon tomography.

    Science.gov (United States)

    Wang, Guobao; Schultz, Larry J; Qi, Jinyi

    2009-05-01

    Muon tomography is a novel technology that is being developed for detecting high-Z materials in vehicles or cargo containers. Maximum likelihood methods have been developed for reconstructing the scattering density image from muon measurements. However, the instability of maximum likelihood estimation often results in noisy images and low detectability of high-Z targets. In this paper, we propose using regularization to improve the image quality of muon tomography. We formulate the muon reconstruction problem in a Bayesian framework by introducing a prior distribution on scattering density images. An iterative shrinkage algorithm is derived to maximize the log posterior distribution. At each iteration, the algorithm obtains the maximum a posteriori update by shrinking an unregularized maximum likelihood update. Inverse quadratic shrinkage functions are derived for generalized Laplacian priors and inverse cubic shrinkage functions are derived for generalized Gaussian priors. Receiver operating characteristic studies using simulated data demonstrate that the Bayesian reconstruction can greatly improve the detection performance of muon tomography.
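
    A minimal sketch of the iterative-shrinkage idea on a toy linear problem, using the familiar soft-threshold shrinkage associated with a Laplacian prior; note that the paper derives different (inverse quadratic and inverse cubic) shrinkage functions for its generalized priors, and all dimensions and values below are illustrative, not the muon model:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy linear model y = A x with a sparse x, standing in for the
# scattering density image (dimensions and values are illustrative).
m, n = 30, 20
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[3], x_true[11] = 2.0, -1.5
y = A @ x_true

# Iterative shrinkage: take an unregularized (ML-like) gradient step,
# then shrink it toward zero; soft-thresholding is the shrinkage
# function for a Laplacian prior.
lam = 0.01
L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(2000):
    g = x - (A.T @ (A @ x - y)) / L    # unregularized update
    x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrink

support = np.flatnonzero(np.abs(x) > 0.5)
```

    The regularization suppresses noise-driven coefficients, which is the mechanism by which the Bayesian reconstruction stabilizes the scattering density estimate.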

  7. Improving LiDAR Biomass Model Uncertainty through Non-Destructive Allometry and Plot-level 3D Reconstruction with Terrestrial Laser Scanning

    Science.gov (United States)

    Stovall, A. E.; Shugart, H. H., Jr.

    2017-12-01

    Future NASA and ESA satellite missions plan to better quantify global carbon through detailed observations of forest structure, but ultimately rely on uncertain ground measurement approaches for calibration and validation. A significant amount of the uncertainty in estimating plot-level biomass can be attributed to inadequate and unrepresentative allometric relationships used to convert plot-level tree measurements to estimates of aboveground biomass. These allometric equations are known to have high errors and biases, particularly in carbon-rich forests, because they were calibrated with small and often biased samples of destructively harvested trees. To overcome this issue, a non-destructive methodology for estimating tree and plot-level biomass has been proposed through the use of Terrestrial Laser Scanning (TLS). We investigated the potential for using TLS as a ground validation approach in LiDAR-based biomass mapping through virtual plot-level tree volume reconstruction and biomass estimation. Plot-level biomass estimates were compared at the Virginia-based Smithsonian Conservation Biology Institute's SIGEO forest with full 3D reconstruction, TLS allometry, and Jenkins et al. (2003) allometry. On average, full 3D reconstruction ultimately provided the lowest uncertainty estimate of plot-level biomass (9.6%), followed by TLS allometry (16.9%) and the national equations (20.2%). TLS offered modest improvements to the airborne LiDAR empirical models, reducing RMSE from 16.2% to 14%. Our findings suggest TLS plot acquisitions and non-destructive allometry can play a vital role in reducing uncertainty in calibration and validation data for biomass mapping in the upcoming NASA and ESA missions.

  8. Maximal combustion temperature estimation

    International Nuclear Information System (INIS)

    Golodova, E; Shchepakina, E

    2006-01-01

    This work is concerned with the phenomenon of delayed loss of stability and the estimation of the maximal temperature of safe combustion. Using the qualitative theory of singular perturbations and canard techniques we determine the maximal temperature on the trajectories located in the transition region between the slow combustion regime and the explosive one. This approach is used to estimate the maximal temperature of safe combustion in multi-phase combustion models

  9. Connected Filtering by Reconstruction : Basis and New Advances

    NARCIS (Netherlands)

    Wilkinson, Michael H.F.

    2008-01-01

    Openings-by-reconstruction are the oldest connected filters, and indeed, reconstruction methodology lies at the heart of many connected operators such as levelings. Starting out from the basic reconstruction principle of iterated geodesic dilations, extensions such as the use of reconstruction


  11. The Impact of Expander Inflation/Deflation Status During Adjuvant Radiotherapy on the Complications of Immediate Two-Stage Breast Reconstruction.

    Science.gov (United States)

    Woo, Kyong-Je; Paik, Joo-Myeong; Bang, Sa Ik; Mun, Goo-Hyun; Pyon, Jai-Kyong

    2017-06-01

    The question of whether expander inflation/deflation status has any bearing on surgical complications in the setting of adjuvant radiation (XRT) has not been addressed. The objective of this study is to investigate whether the inflation/deflation status of the expander at the time of XRT is associated with complications in immediate two-stage expander-implant breast reconstruction. A retrospective review of 49 consecutive patients who underwent immediate two-stage expander-implant breast reconstruction and received post-mastectomy XRT was conducted. Full deflation of the expanders was performed in the deflation group (20 patients), while the expanders remained inflated in the inflation group at the time of XRT (29 patients). XRT-related complications of each stage of reconstruction were compared between the two groups, and multivariable regression analysis was performed to identify risk factors for XRT-related complications. Overall XRT-related complications were significantly more frequent in the deflation group (65.0 vs. 6.9%). The most common cause of reconstruction failure in the deflation group was failure to re-expand due to skin fibrosis and contracture. In multivariable analysis, deflation of expanders was a significant risk factor for overall complications (odds = 94.4, p = 0.001) and reconstruction failures (odds = 9.09, p = 0.022) of the first-stage reconstructions. Maximal inflation without deflation before XRT can be an option to minimize XRT-related complications and reconstruction failure of the first-stage reconstructions.

  12. Variability of textural features in FDG PET images due to different acquisition modes and reconstruction parameters

    DEFF Research Database (Denmark)

    Galavis, P.E.; Hollensen, Christian; Jallow, N.

    2010-01-01

    Background. Characterization of textural features (spatial distributions of image intensity levels) has been considered as a tool for automatic tumor segmentation. The purpose of this work is to study the variability of the textural features in PET images due to different acquisition modes and reconstruction parameters. Lesions were segmented on a default image using the threshold of 40% of maximum SUV. Fifty different texture features were calculated inside the tumors. The range of variation of the features was calculated with respect to the average value. Results. The fifty textural features were classified based on the range of variation in three categories: small, intermediate and large variability. Features with small variability (range ... 30%). Conclusion. Textural features such as entropy-first order, energy, maximal correlation coefficient, and low-gray level run emphasis exhibited small variability.
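
    Two of the features named in the conclusion (first-order entropy and energy) can be computed directly from the normalized gray-level histogram of a lesion ROI. A sketch on synthetic intensities (the ROI values, gray-level count and distribution below are assumptions, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical tumor ROI intensities (e.g. SUVs), quantized to 32 gray levels.
roi = rng.gamma(shape=4.0, scale=1.5, size=500)
levels = 32
edges = np.linspace(roi.min(), roi.max(), levels + 1)[1:-1]
binned = np.digitize(roi, edges)  # gray level index 0..31 per voxel

# First-order textural features from the normalized gray-level histogram.
p = np.bincount(binned, minlength=levels).astype(float)
p /= p.sum()
nz = p[p > 0]
entropy = -np.sum(nz * np.log2(nz))   # first-order entropy
energy = np.sum(p ** 2)               # energy (a.k.a. uniformity)
```

    Because both features depend only on the histogram of quantized intensities, any reconstruction change that reshapes the intensity distribution changes them, which is the variability the study quantifies.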

  13. Image-based point spread function implementation in a fully 3D OSEM reconstruction algorithm for PET.

    Science.gov (United States)

    Rapisarda, E; Bettinardi, V; Thielemans, K; Gilardi, M C

    2010-07-21

    The interest in positron emission tomography (PET) and particularly in hybrid integrated PET/CT systems has significantly increased in the last few years due to the improved quality of the obtained images. Nevertheless, one of the most important limits of the PET imaging technique is still its poor spatial resolution, due to several physical factors originating both at the emission level (e.g. positron range, photon non-collinearity) and at the detection level (e.g. scatter inside the scintillating crystals, finite dimensions of the crystals and depth of interaction). To improve the spatial resolution of the images, one possible approach is to measure the point spread function (PSF) of the system and then account for it inside the reconstruction algorithm. In this work, the system response of the GE Discovery STE operating in 3D mode has been characterized by acquiring (22)Na point sources in different positions of the scanner field of view. An image-based model of the PSF was then obtained by fitting asymmetric two-dimensional Gaussians on the (22)Na images reconstructed with small pixel sizes. The PSF was then incorporated, at the image level, in a three-dimensional ordered subset maximum likelihood expectation maximization (OS-MLEM) reconstruction algorithm. A qualitative and quantitative validation of the algorithm accounting for the PSF has been performed on phantom and clinical data, showing improved spatial resolution, higher contrast and lower noise compared with the corresponding images obtained using the standard OS-MLEM algorithm.
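
    The study fits asymmetric 2D Gaussians to reconstructed point-source images; a reduced 1D, moment-based version of the same idea (synthetic noise-free profile, illustrative width) estimates the PSF width from a point-source profile and converts it to FWHM:

```python
import numpy as np

# Hypothetical 1D profile through a reconstructed point-source image (mm grid).
x = np.arange(-20.0, 21.0)
sigma_true = 2.4                       # mm, illustrative
profile = np.exp(-0.5 * (x / sigma_true) ** 2)

# Image-based PSF model: estimate the Gaussian width from the profile's
# second central moment, then convert width to FWHM.
mu = np.sum(profile * x) / np.sum(profile)
sigma = np.sqrt(np.sum(profile * (x - mu) ** 2) / np.sum(profile))
fwhm = 2.0 * np.sqrt(2.0 * np.log(2.0)) * sigma    # = 2.355 * sigma
```

    In the actual method the fit is a full asymmetric 2D Gaussian per field-of-view position, and the resulting kernel is applied as an image-space blur inside each OS-MLEM forward/backward projection.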

  14. On the importance of considering heterogeneity in witnesses' competence levels when reconstructing crimes from multiple witness testimonies.

    Science.gov (United States)

    Waubert de Puiseau, Berenike; Greving, Sven; Aßfalg, André; Musch, Jochen

    2017-09-01

    Aggregating information across multiple testimonies may improve crime reconstructions. However, different aggregation methods are available, and research on which method is best suited for aggregating multiple observations is lacking. Furthermore, little is known about how variance in the accuracy of individual testimonies impacts the performance of competing aggregation procedures. We investigated the superiority of aggregation-based crime reconstructions involving multiple individual testimonies and whether this superiority varied as a function of the number of witnesses and the degree of heterogeneity in witnesses' ability to accurately report their observations. Moreover, we examined whether heterogeneity in competence levels differentially affected the relative accuracy of two aggregation procedures: a simple majority rule, which ignores individual differences, and the more complex general Condorcet model (Romney et al., Am Anthropol 88(2):313-338, 1986; Batchelder and Romney, Psychometrika 53(1):71-92, 1988), which takes into account differences in competence between individuals. 121 participants viewed a simulated crime and subsequently answered 128 true/false questions about the crime. We experimentally generated groups of witnesses with homogeneous or heterogeneous competences. Both the majority rule and the general Condorcet model provided more accurate reconstructions of the observed crime than individual testimonies. The superiority of aggregated crime reconstructions involving multiple individual testimonies increased with an increasing number of witnesses. Crime reconstructions were most accurate when competences were heterogeneous and aggregation was based on the general Condorcet model. We argue that a formal aggregation should be considered more often when eyewitness testimonies have to be assessed and that the general Condorcet model provides a good framework for such aggregations.
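
    The full general Condorcet model estimates witness competences from the data themselves; as a simplified sketch, the comparison below contrasts a simple majority rule with a competence-weighted vote (log-odds weights) under known, hypothetical competences and a simulated set of true/false answers:

```python
import numpy as np

rng = np.random.default_rng(7)
n_items, n_witnesses = 128, 9
truth = rng.integers(0, 2, n_items).astype(bool)

# Heterogeneous competences: probability of answering an item correctly.
competence = np.array([0.55, 0.55, 0.6, 0.6, 0.65, 0.7, 0.8, 0.9, 0.95])

# Simulate true/false answers for each witness and item.
correct = rng.random((n_witnesses, n_items)) < competence[:, None]
answers = np.where(correct, truth[None, :], ~truth[None, :])

# Simple majority rule: ignores competence differences.
majority = answers.sum(axis=0) > n_witnesses / 2

# Competence-weighted vote (log-odds weights): a simplified stand-in for
# the general Condorcet model, which infers competences rather than
# assuming they are known.
w = np.log(competence / (1 - competence))
weighted = (w[:, None] * np.where(answers, 1.0, -1.0)).sum(axis=0) > 0

acc_majority = (majority == truth).mean()
acc_weighted = (weighted == truth).mean()
```

    Both aggregates typically beat the average individual witness, mirroring the paper's finding that aggregation improves crime reconstructions.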

  15. Research of the system response of neutron double scatter imaging for MLEM reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, M., E-mail: wyj2013@163.com [Northwest Institute of Nuclear Technology, Xi’an 710024 (China); State Key Laboratory of Intense Pulsed Radiation-Simulation and Effect, Xi’an 710024 (China); Peng, B.D.; Sheng, L.; Li, K.N.; Zhang, X.P.; Li, Y.; Li, B.K.; Yuan, Y.; Wang, P.W.; Zhang, X.D.; Li, C.H. [Northwest Institute of Nuclear Technology, Xi’an 710024 (China); State Key Laboratory of Intense Pulsed Radiation-Simulation and Effect, Xi’an 710024 (China)

    2015-03-01

    A maximum likelihood image reconstruction technique has been applied to neutron scatter imaging. The response function of the imaging system can be obtained by Monte Carlo simulation, which is very time-consuming if the number of image pixels and particles is large. In this work, to improve time efficiency, an analytical approach based on the probability of neutron interaction and transport in the detector is developed to calculate the system response function. The response function was applied to calculate the relative efficiency of the neutron scatter imaging system as a function of the incident neutron energy, and the calculated results agreed with simulations by the MCNP5 software. Then the maximum likelihood expectation maximization (MLEM) reconstruction method with the system response function was used to reconstruct data simulated by the Monte Carlo method. The results showed good consistency between the reconstructed positions and the true positions. Compared with back-projection reconstruction, the improvement in image quality was obvious, and the locations of multiple radiation point sources could be discerned easily.
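
    The MLEM update used with a precomputed system response can be sketched in a few lines; the banded toy response matrix below is an illustrative stand-in for the analytically derived response function, not the detector model of the paper:

```python
import numpy as np

# Toy system response matrix: each image pixel j contributes to a few
# neighbouring detector bins (illustrative dimensions and weights).
n = 8
A = 0.6 * np.eye(n) + 0.2 * np.eye(n, k=1) + 0.2 * np.eye(n, k=-1)
A /= A.sum(axis=0)                      # detection probability per pixel

x_true = np.zeros(n)
x_true[2] = 100.0                       # a single point source
y = A @ x_true                          # noiseless measured data

# MLEM update: x <- x * A^T( y / (A x) ) / A^T 1
x = np.ones(n)
sens = A.sum(axis=0)                    # sensitivity image (all ones here)
for _ in range(200):
    proj = np.maximum(A @ x, 1e-12)     # forward projection, guarded
    x *= (A.T @ (y / proj)) / sens

peak = int(np.argmax(x))
```

    The multiplicative update preserves non-negativity and total counts, which is one reason MLEM localizes point sources better than plain back-projection.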

  16. Reconstruction of epidemic curves for pandemic influenza A (H1N1) 2009 at city and sub-city levels

    Directory of Open Access Journals (Sweden)

    Wong Ngai Sze

    2010-11-01

    Full Text Available Abstract To better describe the epidemiology of influenza at the local level, the time course of pandemic influenza A (H1N1) 2009 in the city of Hong Kong was reconstructed from notification data after a decomposition procedure and time series analysis. GIS (geographic information system) methodology was incorporated for assessing spatial variation. Between May and September 2009, a total of 24415 cases were successfully geocoded, out of 25473 (95.8%) reports in the original dataset. The reconstructed epidemic curve was characterized by a small initial peak, a nadir followed by a rapid rise to the ultimate plateau. The full course of the epidemic lasted for about 6 months. Despite the small geographic area of only 1000 km2, distinctive spatial variation was observed in the configuration of the curves across 6 geographic regions. With the relatively uniform physical and climatic environment within Hong Kong, the temporo-spatial variability of influenza spread could only be explained by the heterogeneous population structure and mobility patterns. Our study illustrates how an epidemic curve can be reconstructed using regularly collected surveillance data, which would be useful in informing intervention at local levels.
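
    A minimal stand-in for the decomposition step (the study applied a decomposition procedure and time series analysis to notification data): a 7-day centered moving average over a synthetic notification series shaped like the curve described (small initial peak, nadir, rise to plateau):

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic daily notification counts over ~6 months, loosely mimicking
# the described shape: small initial peak, nadir, rise to a plateau.
t = np.arange(180)
trend = 20 * np.exp(-0.5 * ((t - 20) / 8) ** 2) + 120 / (1 + np.exp(-(t - 90) / 10))
counts = rng.poisson(trend)

# Decomposition step: a 7-day centered moving average removes
# day-of-week reporting noise from the raw counts.
kernel = np.ones(7) / 7
smooth = np.convolve(counts, kernel, mode="same")

nadir = int(np.argmin(smooth[25:60])) + 25  # between initial peak and rise
```

    On real notification data the same smoothing separates the underlying epidemic curve from reporting artifacts before any spatial comparison across regions.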

  17. Task-based statistical image reconstruction for high-quality cone-beam CT

    Science.gov (United States)

    Dang, Hao; Webster Stayman, J.; Xu, Jennifer; Zbijewski, Wojciech; Sisniega, Alejandro; Mow, Michael; Wang, Xiaohui; Foos, David H.; Aygun, Nafi; Koliatsos, Vassilis E.; Siewerdsen, Jeffrey H.

    2017-11-01

    Task-based analysis of medical imaging performance underlies many ongoing efforts in the development of new imaging systems. In statistical image reconstruction, regularization is often formulated in terms that encourage smoothness and/or sharpness (e.g. a linear, quadratic, or Huber penalty) but without explicit formulation of the task. We propose an alternative regularization approach in which a spatially varying penalty is determined that maximizes task-based imaging performance at every location in a 3D image. We apply the method to model-based image reconstruction (MBIR; viz., penalized weighted least-squares, PWLS) in cone-beam CT (CBCT) of the head, focusing on the task of detecting a small, low-contrast intracranial hemorrhage (ICH), and we test the performance of the algorithm in the context of a recently developed CBCT prototype for point-of-care imaging of brain injury. Theoretical predictions of local spatial resolution and noise are computed via an optimization by which regularization (specifically, the quadratic penalty strength) is allowed to vary throughout the image to maximize the local task-based detectability index (d′). Simulation studies and test-bench experiments were performed using an anthropomorphic head phantom. Three PWLS implementations were tested: a conventional (constant) penalty; a certainty-based penalty derived to enforce a constant point-spread function (PSF); and the task-based penalty derived to maximize local detectability at each location. Conventional (constant) regularization exhibited a fairly strong degree of spatial variation in d′, and the certainty-based method achieved a uniform PSF, but each exhibited a reduction in detectability compared to the task-based method, which improved detectability up to ~15%. The improvement was strongest in areas of high attenuation (skull base), where the conventional and certainty-based methods tended to over-smooth the data. The task-driven reconstruction method presents a

  18. Accelerating image reconstruction in dual-head PET system by GPU and symmetry properties.

    Directory of Open Access Journals (Sweden)

    Cheng-Ying Chou

    Full Text Available Positron emission tomography (PET) is an important imaging modality in both clinical usage and research studies. We have developed a compact high-sensitivity PET system that consists of two large-area panel PET detector heads, which produce more than 224 million lines of response and thus impose dramatic computational demands. In this work, we employed a state-of-the-art graphics processing unit (GPU), the NVIDIA Tesla C2070, to yield an efficient reconstruction process. Our approach ingeniously integrates the distinctive features of the symmetry properties of the imaging system and the GPU architecture, including block/warp/thread assignments and effective memory usage, to accelerate the computations for ordered-subsets expectation maximization (OSEM) image reconstruction. The OSEM reconstruction algorithms were implemented in both CPU-based and GPU-based code, and their computational performance was quantitatively analyzed and compared. The results showed that the GPU-accelerated scheme can drastically reduce the reconstruction time and thus largely expand the applicability of the dual-head PET system.
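The OSEM iteration that the GPU code accelerates can be sketched in a few lines for a small dense system matrix. This is a minimal CPU illustration of the algorithm's structure (subset loop, forward projection, multiplicative update), not the dual-head system's actual ray-driven GPU kernels; all names are illustrative.

```python
import numpy as np

def osem(A, y, n_subsets=2, n_iter=5):
    """Ordered-subsets EM for a Poisson model y ~ Poisson(A @ x).
    A: (n_bins, n_pixels) nonnegative system matrix; y: measured counts."""
    n_bins, n_pix = A.shape
    x = np.ones(n_pix)                               # uniform initial estimate
    subsets = [np.arange(s, n_bins, n_subsets) for s in range(n_subsets)]
    for _ in range(n_iter):
        for idx in subsets:
            As = A[idx]
            proj = As @ x                            # forward project estimate
            ratio = np.zeros_like(proj)
            nz = proj > 0
            ratio[nz] = y[idx][nz] / proj[nz]        # measured / estimated
            norm = As.sum(axis=0)                    # subset sensitivity
            upd = np.where(norm > 0,
                           (As.T @ ratio) / np.where(norm > 0, norm, 1.0),
                           1.0)                      # leave unseen pixels alone
            x *= upd                                 # multiplicative EM update
    return x
```

On a GPU, each backprojection/forward-projection pair is what gets mapped onto block/warp/thread hierarchies; the subset structure above is what makes OSEM converge faster than plain MLEM.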

  19. Edge Artifacts in Point Spread Function-based PET Reconstruction in Relation to Object Size and Reconstruction Parameters

    Directory of Open Access Journals (Sweden)

    Yuji Tsutsui

    2017-06-01

    Full Text Available Objective(s): We evaluated edge artifacts in relation to phantom diameter and reconstruction parameters in point spread function (PSF)-based positron emission tomography (PET) image reconstruction. Methods: PET data were acquired from an original cone-shaped phantom filled with 18F solution (21.9 kBq/mL) for 10 min using a Biograph mCT scanner. The images were reconstructed using the baseline ordered-subsets expectation maximization (OSEM) algorithm and OSEM with the PSF correction model. The reconstruction parameters included a pixel size of 1.0, 2.0, or 3.0 mm; 1-12 iterations; 24 subsets; and a full width at half maximum (FWHM) of the post-filter Gaussian of 1.0, 2.0, or 3.0 mm. We compared both the maximum recovery coefficient (RCmax) and the mean recovery coefficient (RCmean) in the phantom at different diameters. Results: The OSEM images had no edge artifacts, but the OSEM with PSF images had a dense edge delineating the hot phantom at diameters of 10 mm or more and a dense spot at the center at diameters of 8 mm or less. The dense edge was clearly observed on images with a small pixel size, a Gaussian filter with a small FWHM, and a high number of iterations. At a phantom diameter of 6-7 mm, the RCmax for the OSEM and OSEM with PSF images was 60% and 140%, respectively (pixel size: 1.0 mm; FWHM of the Gaussian filter: 2.0 mm; iterations: 2). The RCmean of the OSEM with PSF images did not exceed 100%. Conclusion: PSF-based image reconstruction resulted in edge artifacts, the degree of which depended on the pixel size, number of iterations, FWHM of the Gaussian filter, and object size.
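The recovery coefficients compared in this study are simple ratios of measured to true activity concentration in a region of interest: RCmax uses the hottest voxel, RCmean the ROI average. A minimal sketch (hypothetical function, assuming the true concentration is known from the fill):

```python
import numpy as np

def recovery_coefficients(img, roi_mask, true_conc):
    """Max and mean recovery coefficients (%) in an ROI, relative to the
    known true activity concentration of the phantom fill."""
    vals = img[roi_mask]                    # voxel values inside the ROI
    rc_max = 100.0 * vals.max() / true_conc
    rc_mean = 100.0 * vals.mean() / true_conc
    return rc_max, rc_mean
```

An RCmax above 100%, as reported for the PSF reconstructions at small diameters, indicates overshoot at the object edge (the Gibbs-like artifact the study characterizes).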

  20. Posterior column reconstruction improves fusion rates at the level of osteotomy in three-column posterior-based osteotomies.

    Science.gov (United States)

    Lewis, Stephen J; Mohanty, Chandan; Gazendam, Aaron M; Kato, So; Keshen, Sam G; Lewis, Noah D; Magana, Sofia P; Perlmutter, David; Cape, Jennifer

    2018-03-01

    To determine the incidence of pseudarthrosis at the osteotomy site after three-column spinal osteotomies (3-COs) with posterior column reconstruction, 82 consecutive adult 3-COs (66 patients) with a minimum 2-year follow-up were retrospectively reviewed. All cases underwent posterior 3-COs with two-rod constructs. The inferior facets of the proximal level were reduced to the superior facets of the distal level. If that was not possible, a structural piece of bone graft, either from the local resection or a local rib, was slotted into the posterior column defect to re-establish continuous structural posterior bone across the lateral margins of the resection. No interbody cages were used at the level of the osteotomy. There were 34 thoracic osteotomies, 47 lumbar osteotomies and one sacral osteotomy, with a mean follow-up of 52 (24-126) months. All cases underwent the posterior column reconstructions described above, and neither interbody support nor additional posterior rods were added for fusion at the osteotomy level. Among them, 29 patients underwent one or more revision surgeries. There were three definite cases of pseudarthrosis at the osteotomy site (4%). Six revisions were also performed for pseudarthrosis at other levels. Restoration of the structural integrity of the posterior column in three-column posterior-based osteotomies was associated with a > 95% fusion rate at the level of the osteotomy. Pseudarthrosis at other levels was the second most common reason for revision, after adjacent segment disease, in the long-term follow-up.

  1. Rapid Hamstrings/Quadriceps strength in ACL-reconstructed elite alpine ski racers

    DEFF Research Database (Denmark)

    Jordan, Matthew J; Aagaard, Per; Herzog, Walter

    2015-01-01

    PURPOSE: Due to the importance of hamstrings (HAM) and quadriceps (QUAD) strength for anterior cruciate ligament (ACL) injury prevention, and the high incidence of ACL injury in ski racing, HAM and QUAD maximal and explosive strength was assessed in ski racers with and without ACL reconstruction...... (ACL-R). METHODS: Uninjured (n=13 males; n=8 females) and ACL-R (n=3 males; n=5 females; 25.0±11.3 months post-op) elite ski racers performed maximal voluntary isometric HAM and QUAD contractions to obtain maximal torque (MVC) and rate of torque development (RTD) at 0-50, 0-100, 0-150 and 0-200 ms. MVC...... and RTD (per kg body mass) were calculated for the uninjured group to compare between sexes, and to compare the control group with the ACL-R limb and unaffected limb of the ACL-R skiers. H/Q MVC and RTD strength ratios were also compared. RESULTS: The ACL-R limb demonstrated significant HAM and QUAD...
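Rate of torque development over the reported windows is simply the torque change from contraction onset divided by the window length. A minimal sketch of that computation, assuming a torque-time signal sampled with t = 0 ms at contraction onset (function and variable names are illustrative, not from the study):

```python
import numpy as np

def mvc_and_rtd(t_ms, torque, windows=(50, 100, 150, 200)):
    """Maximal voluntary torque (MVC) and rate of torque development
    (RTD, Nm/s) over time windows starting at contraction onset (t = 0 ms)."""
    t_ms = np.asarray(t_ms)
    torque = np.asarray(torque, float)
    mvc = float(torque.max())
    rtd = {}
    for w in windows:
        i = int(np.searchsorted(t_ms, w))   # sample index at the window end
        rtd[w] = (torque[i] - torque[0]) / (w / 1000.0)
    return mvc, rtd
```

Normalizing MVC and RTD per kg body mass, as in the study, is then a single division by the athlete's mass.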

  2. Patients' Aesthetic Concerns After Horizontally Placed Abdominal Free Flap Breast Reconstruction.

    Science.gov (United States)

    Kim, Eun Key; Suh, Young Chul; Maldonado, Andrés A; Yun, Jiyoung; Lee, Taik Jong

    2015-10-01

    The present study aimed to analyze patients' aesthetic concerns after breast reconstruction with abdominal free flap by reporting secondary cosmetic procedures performed at the patients' request, and analyzed the effect of adjuvant therapies and other variables on such outcomes. All patients who underwent unilateral immediate reconstruction were enrolled prospectively. Free abdominal flaps were placed horizontally with little manipulation. Secondary procedures were actively recommended during the follow-up period to meet the patients' aesthetic concerns. The numbers and types of the secondary procedures and the effects of various factors were analyzed. 150 patients met the eligibility criteria. The average number of overall secondary surgeries per patient was 1.25. Patients with skin-sparing mastectomy required a significantly higher number of secondary surgeries compared with those who underwent nipple-areolar skin-sparing mastectomy. When confined to the cosmetic procedures, 58 (38.7%) patients underwent 75 operations. The most common procedures were flank dog-ear revision, fat injection of the reconstructed breast, and breast liposuction. None of the irradiated patients underwent liposuction of the flap. The most commonly liposuctioned regions were the central-lateral and lower-lateral areas, while fat was most commonly injected into the upper-medial and upper-central part of the breast. The present study delineated the numbers and types of the secondary operations after horizontally placed abdominal free flap transfer, with analysis of the influence of various factors. Addressing such issues during the primary reconstruction would help to reduce the need for and extent of the secondary operations and to maximize aesthetic outcome. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266.

  3. SU-D-201-05: Phantom Study to Determine Optimal PET Reconstruction Parameters for PET/MR Imaging of Y-90 Microspheres Following Radioembolization

    Energy Technology Data Exchange (ETDEWEB)

    Maughan, N [Washington University in Saint Louis, Saint Louis, MO (United States); Conti, M [Siemens Healthcare Molecular Imaging, Knoxville, TN (United States); Parikh, P [Washington Univ. School of Medicine, Saint Louis, MO (United States); Faul, D [Siemens Healthcare, New York, NY (United States); Laforest, R [Washington University School of Medicine, Saint Louis, MO (United States)

    2015-06-15

    Purpose: Imaging Y-90 microspheres with PET/MRI following hepatic radioembolization has the potential for predicting treatment outcome and, in turn, improving patient care. The positron decay branching ratio, however, is very small (32 ppm), yielding images with poor statistics even when therapy doses are used. Our purpose is to find PET reconstruction parameters that maximize the PET recovery coefficients and minimize noise. Methods: An initial 7.5 GBq of Y-90 chloride solution was used to fill an ACR phantom for measurements with a PET/MRI scanner (Siemens Biograph mMR). Four hot cylinders and a warm background activity volume of the phantom were filled with a 10:1 ratio. Phantom attenuation maps were derived from scaled CT images of the phantom and included the MR phased-array coil. The phantom was imaged at six time points between 7.5 and 1.0 GBq total activity over a period of eight days. PET images were reconstructed via OP-OSEM with 21 subsets and varying iteration number (1-5), post-reconstruction filter size (5-10 mm), and either absolute or relative scatter correction. Recovery coefficients, SNR, and noise were measured, as well as total activity in the phantom. Results: For the 120 different reconstructions, recovery coefficients ranged from 0.1-0.6 and improved with increasing iteration number and reduced post-reconstruction filter size. SNR, however, improved substantially with lower iteration numbers and larger post-reconstruction filters. From the phantom data, we found that performing 2 iterations with 21 subsets and applying a 5 mm Gaussian post-reconstruction filter provided optimal recovery coefficients at a moderate noise level for a wide range of activity levels. Conclusion: The choice of reconstruction parameters for Y-90 PET images greatly influences both the accuracy of measurements and image quality. We have found reconstruction parameters that provide optimal recovery coefficients with minimized noise. Future work will include the effects

  4. Analysis of limb function after various reconstruction methods according to tumor location following resection of pediatric malignant bone tumors

    Directory of Open Access Journals (Sweden)

    Tokuhashi Yasuaki

    2010-05-01

    Full Text Available Abstract Background In the reconstruction of the affected limb in pediatric malignant bone tumors, since loss of joint function affects the limb-length discrepancy expected in the future, reconstruction methods are needed that not only maximally preserve joint function but also maintain good limb function. We analyzed the limb function achieved by different reconstruction methods according to tumor location following resection of pediatric malignant bone tumors. Patients and methods We classified the tumors according to their location into 3 types by preoperative MRI, and evaluated reconstruction methods after wide resection, paying attention to whether the joint function could be preserved. The mean age of the patients was 10.6 years. Osteosarcoma was observed in 26 patients, Ewing's sarcoma in 3, and PNET (primitive neuroectodermal tumor) and chondrosarcoma (grade 1) in 1 each. Results Type I tumors were those located in the diaphysis, and reconstruction was performed using a vascularized fibular graft (VFG). Type II tumors were those located in contact with the epiphyseal line or within 1 cm of this line; VFG was performed in 1 and distraction osteogenesis in 1. Type III tumors were those extending from the diaphysis to the epiphysis beyond the epiphyseal line, and a growing Kotz prosthesis was mainly used, in 10 patients. The mean functional assessment score was highest for Type I (96%, n = 4) according to the type, and for VFG (99%) according to the reconstruction method. Conclusion The final functional results were the most satisfactory for Types I and II according to tumor location. Biological reconstruction methods such as VFG and distraction osteogenesis without a prosthesis achieved high scores in the MSTS rating system. Therefore, considering the function of the affected limb, a limb reconstruction method allowing maximal preservation of joint function should be selected after careful evaluation of the effects of chemotherapy and the location of the tumor.

  5. Scattering amplitudes over finite fields and multivariate functional reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Peraro, Tiziano [Higgs Centre for Theoretical Physics,School of Physics and Astronomy, The University of Edinburgh,James Clerk Maxwell Building, Peter Guthrie Tait Road, Edinburgh EH9 3FD (United Kingdom)

    2016-12-07

    Several problems in computer algebra can be efficiently solved by reducing them to calculations over finite fields. In this paper, we describe an algorithm for the reconstruction of multivariate polynomials and rational functions from their evaluation over finite fields. Calculations over finite fields can in turn be efficiently performed using machine-size integers in statically-typed languages. We then discuss the application of the algorithm to several techniques related to the computation of scattering amplitudes, such as the four- and six-dimensional spinor-helicity formalism, tree-level recursion relations, and multi-loop integrand reduction via generalized unitarity. The method has good efficiency and scales well with the number of variables and the complexity of the problem. As an example combining these techniques, we present the calculation of full analytic expressions for the two-loop five-point on-shell integrands of the maximal cuts of the planar penta-box and the non-planar double-pentagon topologies in Yang-Mills theory, for a complete set of independent helicity configurations.

  6. Scattering amplitudes over finite fields and multivariate functional reconstruction

    International Nuclear Information System (INIS)

    Peraro, Tiziano

    2016-01-01

    Several problems in computer algebra can be efficiently solved by reducing them to calculations over finite fields. In this paper, we describe an algorithm for the reconstruction of multivariate polynomials and rational functions from their evaluation over finite fields. Calculations over finite fields can in turn be efficiently performed using machine-size integers in statically-typed languages. We then discuss the application of the algorithm to several techniques related to the computation of scattering amplitudes, such as the four- and six-dimensional spinor-helicity formalism, tree-level recursion relations, and multi-loop integrand reduction via generalized unitarity. The method has good efficiency and scales well with the number of variables and the complexity of the problem. As an example combining these techniques, we present the calculation of full analytic expressions for the two-loop five-point on-shell integrands of the maximal cuts of the planar penta-box and the non-planar double-pentagon topologies in Yang-Mills theory, for a complete set of independent helicity configurations.
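As a toy instance of functional reconstruction from evaluations over a finite field, a univariate polynomial can be recovered from sample points by Lagrange interpolation modulo a prime, with modular inverses computed via Fermat's little theorem. This is far simpler than the paper's multivariate rational-function algorithm, but it shows the core idea of rebuilding an analytic expression from numerical probes over GF(p):

```python
def poly_mul(a, b, p):
    """Multiply two coefficient lists (lowest order first) modulo p."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % p
    return out

def lagrange_interp_mod(xs, ys, p):
    """Reconstruct the unique polynomial of degree < len(xs) through the
    samples (xs[i], ys[i]) over GF(p); returns coefficients, lowest order first."""
    n = len(xs)
    coeffs = [0] * n
    for i in range(n):
        num = [1]                                    # running basis polynomial
        denom = 1
        for j in range(n):
            if j != i:
                num = poly_mul(num, [(-xs[j]) % p, 1], p)   # factor (t - xs[j])
                denom = denom * (xs[i] - xs[j]) % p
        scale = ys[i] * pow(denom, p - 2, p) % p     # Fermat inverse of denom
        for k, c in enumerate(num):
            coeffs[k] = (coeffs[k] + scale * c) % p
    return coeffs
```

In practice one samples at enough points to fix the degree, reconstructs modulo several machine-size primes, and lifts the result to the rationals; the multivariate and rational-function cases in the paper generalize this univariate building block.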

  7. Pediatric chest HRCT using the iDose4 Hybrid Iterative Reconstruction Algorithm: Which iDose level to choose?

    International Nuclear Information System (INIS)

    Smarda, M; Alexopoulou, E; Mazioti, A; Kordolaimi, S; Ploussi, A; Efstathopoulos, E; Priftis, K

    2015-01-01

    The purpose of this study is to determine the appropriate iterative reconstruction (IR) algorithm level that combines image quality and diagnostic confidence for pediatric patients undergoing high-resolution computed tomography (HRCT). During the last 2 years, a total of 20 children up to 10 years old with a clinical presentation of chronic bronchitis underwent HRCT in our department's 64-detector row CT scanner using the iDose IR algorithm, with almost identical image settings (80 kVp, 40-50 mAs). CT images were reconstructed with all iDose levels (levels 1 to 7) as well as with the filtered back projection (FBP) algorithm. Subjective image quality was evaluated by 2 experienced radiologists in terms of image noise, sharpness, contrast and diagnostic acceptability using a 5-point scale (1 = excellent image, 5 = non-acceptable image). The presence of artifacts was also noted. All mean scores from both radiologists corresponded to satisfactory image quality (score ≤3), even with use of the FBP algorithm. Almost excellent (score <2) overall image quality was achieved with iDose levels 5 to 7, but oversmoothing artifacts appearing with iDose levels 6 and 7 affected diagnostic confidence. In conclusion, the use of iDose level 5 enables almost excellent image quality without considerable artifacts affecting the diagnosis. Further evaluation is needed in order to draw more precise conclusions. (paper)

  8. Oral function after maxillectomy and reconstruction with an obturator.

    Science.gov (United States)

    Kreeft, A M; Krap, M; Wismeijer, D; Speksnijder, C M; Smeele, L E; Bosch, S D; Muijen, M S A; Balm, A J M

    2012-11-01

    Maxillectomy defects can be reconstructed with a prosthetic obturator or (free) flap transfer, but there is no consensus about the optimal method. This study evaluated 32 maxillectomy patients with prosthetic obturation regarding function (mastication, subjective oral and swallowing complaints, and maximal mouth opening). Outcomes were related to the extent of the resection (Brown maxillectomy classification), dentition and history of adjuvant radiotherapy. Maxillectomy defects ranged from class 2a to 4b on the Brown classification, and most had a defect graded as 2a or 2b. The mean mixing ability test score was 24.2 after 10 chewing strokes and 19.7 after 20 chewing strokes, which is comparable to edentulous healthy individuals. None of the outcomes was influenced by Brown classification. Radiotherapy negatively influenced mean maximal mouth opening (29.1 mm versus 40.9 mm, p=0.017) and subjective outcomes. Edentate obturated patients had worse outcomes than dentate patients, as measured by the mixing ability test and questionnaire. In conclusion, mastication after obturator reconstruction of a maxillectomy defect is comparable to mastication with full dentures. Size of the maxillectomy defect did not significantly influence functional outcome, but adjuvant radiotherapy resulted in worse mouth opening and self-reported oral and swallowing problems. Residual dentition had a positive influence on mastication and subjective outcomes. Copyright © 2012 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.

  9. The Functional Impact of Breast Reconstruction: An Overview and Update

    Directory of Open Access Journals (Sweden)

    Jonas A. Nelson, MD

    2018-03-01

    Full Text Available As rates of bilateral mastectomy and immediate reconstruction rise, the aesthetic and psychosocial benefits of breast reconstruction are increasingly well understood. However, an understanding of functional outcome and its optimization is still lacking. This endpoint is critical to maximizing postoperative quality of life. All reconstructive modalities have possible functional consequences. Studies demonstrate that implant-based reconstruction impacts subjective movement, but patients’ day-to-day function may not be objectively hindered despite self-reported disability. For latissimus dorsi flap reconstruction, patients also report some dysfunction at the donor site, but this does not seem to result in significant, long-lasting limitation of daily activity. Athletic and other vigorous activities are most affected. For abdominal free flaps, patient perception of postoperative disability is generally not significant, despite the varying degrees of objective disadvantage that have been identified depending on the extent of rectus muscle sacrifice. With these functional repercussions in mind, a broader perspective on the attempt to ensure minimal functional decline after breast surgery should focus not only on surgical technique but also on postoperative rehabilitation. Early directed physical therapy may be an instrumental element in facilitating return to baseline function. With the patient’s optimal quality of life as an overarching objective, a multifaceted approach to functional preservation may be the answer to this continued challenge. This review will examine these issues in depth in an effort to better understand postoperative functional outcomes with a focus on the younger, active breast reconstruction patient.

  10. Sudomotor Function as a Tool for Cardiorespiratory Fitness Level Evaluation: Comparison with Maximal Exercise Capacity

    Directory of Open Access Journals (Sweden)

    Anu Raisanen

    2014-05-01

    Full Text Available Physical inactivity is a modifiable risk factor for cardiovascular (CV) and metabolic disorders. VO2max is the best method to assess cardio-respiratory fitness level, but it is poorly adopted in clinical practice. Sudomotor dysfunction may develop early in metabolic diseases. This study aimed at comparing established CV risk evaluation techniques with SUDOSCAN, a quick and non-invasive method to assess sudomotor function. A questionnaire was filled in; physical examination and VO2max estimation using a maximal test on a bicycle ergometer were performed on active Finnish workers. Hand and foot electrochemical skin conductance (ESC) were measured to assess sudomotor function. Subjects with the lowest fitness level were involved in a 12-month training program with recording of their weekly physical activity and a final fitness level evaluation. Significant differences in BMI, waist and body fat were seen according to SUDOSCAN risk score classification. Correlation between the risk score and estimated VO2max was r = −0.57, p < 0.0001 for women and −0.48, p < 0.0001 for men. A significant increase in estimated VO2max, in hand and foot ESC and in risk score was observed after the lifestyle intervention and was more pronounced in subjects with the highest weekly activity. SUDOSCAN could be used to assess cardio-metabolic disease risk status in a working population and to follow individual lifestyle interventions.

  11. A modified discrete algebraic reconstruction technique for multiple grey image reconstruction for limited angle range tomography.

    Science.gov (United States)

    Liang, Zhiting; Guan, Yong; Liu, Gang; Chen, Xiangyu; Li, Fahu; Guo, Pengfei; Tian, Yangchao

    2016-03-01

    The 'missing wedge', which is due to a restricted rotation range, is a major challenge for quantitative analysis of an object using tomography. With prior knowledge of the grey levels, the discrete algebraic reconstruction technique (DART) is able to reconstruct objects accurately from projections in a limited angle range. However, the quality of the reconstructions declines as the number of grey levels increases. In this paper, a modified DART (MDART) is proposed, in which each independent region of homogeneous material, rather than the set of grey values, is taken as the object of study. The grey value of each discrete region is estimated from the solution of the linear projection equations. The iterative steps of updating boundary pixels and correcting the grey value of each region are executed alternately. Simulation experiments on binary phantoms as well as multiple grey phantoms show that MDART is capable of achieving high-quality reconstructions from projections in a limited angle range. A notable advantage of MDART is that neither prior knowledge of the grey values nor the number of grey levels is necessary.
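The grey-value estimation step described above, solving the linear projection equations with one unknown per homogeneous region, can be sketched as an ordinary least-squares problem: an indicator matrix maps pixels to regions, so the unknowns collapse from per-pixel values to per-region values. This is a simplified illustration under that assumption, not the authors' code:

```python
import numpy as np

def estimate_region_greys(A, labels, b, n_regions):
    """Least-squares estimate of one grey value per homogeneous region:
    solve (A @ S) g ~= b, where A is the (rays x pixels) system matrix,
    b the projection data, and S the pixel-to-region indicator matrix."""
    S = np.zeros((A.shape[1], n_regions))
    S[np.arange(A.shape[1]), labels] = 1.0      # pixel i belongs to labels[i]
    g, *_ = np.linalg.lstsq(A @ S, b, rcond=None)
    return g
```

Because the number of regions is far smaller than the number of pixels, this system is heavily overdetermined even with a limited angle range, which is what makes the per-region estimate stable where a per-pixel solve would not be.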

  12. [PRESSURE ULCER TREATMENT EXPERIENCE AT CLINICAL DEPARTMENT OF PLASTIC, RECONSTRUCTIVE AND AESTHETIC SURGERY].

    Science.gov (United States)

    Budi, S; Žic, R; Martić, K; Rudman, F; Vlajčić, Z; Milanović, R; Roje, Z; Munjiza, A; Rajković, I; Gorjanc, B; Held, R; Maletić, A; Tucaković, H; Stanec, Z

    2016-01-01

    Results of this clinical study on surgical treatment of pressure ulcers at the Department of Plastic, Reconstructive and Aesthetic Surgery, Dubrava University Hospital, showed that there was no difference between the 2011-2016 and 2003-2008 periods, indicating continued good surgical treatment planning and appropriate postoperative care. Despite the smaller number of hospitalized patients in the 2011-2016 period (31 patients and 42 reconstructive procedures), the number of reconstructive procedures was similar to the earlier 2003-2008 period (47 patients and 57 reconstructive procedures). The best results of reconstruction of sacral region pressure ulcers were achieved with fasciocutaneous and musculocutaneous flaps. Whenever possible, depending on the extent of the defect, musculocutaneous flaps should be preferred for reconstruction; they are especially suitable for pressure ulcer recurrence. For ischial region reconstruction, good results can be obtained by mobilizing the semimembranosus and/or semitendinosus into the defect gap. For the trochanteric region, the tensor fascia lata flap is a good choice. For maximal functional and reconstructive results, a multidisciplinary approach to pressure ulcer treatment has the leading role in the modern concept of wound healing. Surgical treatment should always include radical debridement, ostectomy and well-planned defect reconstruction. Conservative treatment should serve as support to surgical treatment, with a focus on patient health care and high hygiene measures. In recent years (2011-2016), the use of better conservative treatment has led to a reduction in hospital stay and in surgical treatment of pressure ulcers. Further training of 'wound care' nurses in Croatia can lead the trend towards advanced practice nursing in pressure ulcer prevention and conservative treatment.

  13. Theoretical Analysis of Penalized Maximum-Likelihood Patlak Parametric Image Reconstruction in Dynamic PET for Lesion Detection.

    Science.gov (United States)

    Yang, Li; Wang, Guobao; Qi, Jinyi

    2016-04-01

    Detecting cancerous lesions is a major clinical application of emission tomography. In a previous work, we studied penalized maximum-likelihood (PML) image reconstruction for lesion detection in static PET. Here we extend our theoretical analysis of static PET reconstruction to dynamic PET. We study both the conventional indirect reconstruction and direct reconstruction for Patlak parametric image estimation. In indirect reconstruction, Patlak parametric images are generated by first reconstructing a sequence of dynamic PET images, and then performing Patlak analysis on the time activity curves (TACs) pixel-by-pixel. In direct reconstruction, Patlak parametric images are estimated directly from raw sinogram data by incorporating the Patlak model into the image reconstruction procedure. PML reconstruction is used in both the indirect and direct reconstruction methods. We use a channelized Hotelling observer (CHO) to assess lesion detectability in Patlak parametric images. Simplified expressions for evaluating the lesion detectability have been derived and applied to the selection of the regularization parameter value to maximize detection performance. The proposed method is validated using computer-based Monte Carlo simulations. Good agreements between the theoretical predictions and the Monte Carlo results are observed. Both theoretical predictions and Monte Carlo simulation results show the benefit of the indirect and direct methods under optimized regularization parameters in dynamic PET reconstruction for lesion detection, when compared with the conventional static PET reconstruction.
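The Patlak analysis used in the indirect method fits a line to transformed time-activity data: C(t)/Cp(t) is regressed against (∫₀ᵗ Cp dτ)/Cp(t), with the slope giving the net influx rate Ki and the intercept the distribution volume. A minimal per-pixel sketch, assuming sampled tissue and plasma curves already restricted to the linear (post-equilibration) portion; names are illustrative:

```python
import numpy as np

def patlak_fit(t, cp, ct):
    """Patlak graphical analysis: regress C(t)/Cp(t) against
    (integral_0^t Cp) / Cp(t); slope = net influx rate Ki,
    intercept = distribution volume V."""
    t, cp, ct = (np.asarray(a, float) for a in (t, cp, ct))
    # cumulative trapezoidal integral of the plasma input function
    int_cp = np.concatenate([[0.0],
                             np.cumsum(0.5 * (cp[1:] + cp[:-1]) * np.diff(t))])
    x = int_cp / cp
    y = ct / cp
    ki, v = np.polyfit(x, y, 1)
    return ki, v
```

The indirect method applies this fit pixel-by-pixel to reconstructed frames; the direct method instead folds the same linear model into the reconstruction itself, which is what the theoretical analysis above compares.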

  14. Live event reconstruction in an optically read out GEM-based TPC

    Science.gov (United States)

    Brunbauer, F. M.; Galgóczi, G.; Gonzalez Diaz, D.; Oliveri, E.; Resnati, F.; Ropelewski, L.; Streli, C.; Thuiner, P.; van Stenis, M.

    2018-04-01

    By combining the strong signal amplification made possible by Gaseous Electron Multipliers (GEMs) with the high spatial resolution provided by optical readout, highly performing radiation detectors can be realized. An optically read out GEM-based Time Projection Chamber (TPC) is presented. The device permits 3D track reconstruction by combining the 2D projections obtained with a CCD camera with timing information from a photomultiplier tube. Owing to the intuitive 2D representation of the tracks in the images and to automated control, data acquisition and event reconstruction algorithms, the optically read out TPC permits live display of reconstructed tracks in three dimensions. An Ar/CF4 (80/20%) gas mixture was used to maximize the scintillation yield in the visible wavelength region matching the quantum efficiency of the camera. The device is integrated in a UHV-grade vessel allowing for precise control of the gas composition and purity. Long-term studies in sealed-mode operation revealed a minor decrease in the scintillation light intensity.
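The fusion of the two readouts reduces, per track point, to taking x and y from the camera image and deriving the drift coordinate from the PMT timing as z = v_drift·(t − t0). A schematic sketch with hypothetical names and units (the actual calibration of drift velocity and trigger offset is detector-specific):

```python
def reconstruct_track_3d(xy_pixels, t_arrival, v_drift, t0=0.0, mm_per_px=1.0):
    """Fuse the camera's 2D projection with PMT timing: x, y come from the
    image (scaled by the optical magnification), z from the drift time."""
    return [(x * mm_per_px, y * mm_per_px, v_drift * (t - t0))
            for (x, y), t in zip(xy_pixels, t_arrival)]
```

With the per-pixel scale and drift velocity calibrated once, this mapping is cheap enough to run inside the acquisition loop, which is what enables the live 3D display described above.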

  15. Maximal violation of Clauser-Horne-Shimony-Holt inequality for four-level systems

    International Nuclear Information System (INIS)

    Fu Libin; Chen Jingling; Chen Shigang

    2004-01-01

    The Clauser-Horne-Shimony-Holt inequality for bipartite systems of four dimensions is studied in detail by employing unbiased eight-port beam-splitter measurements. Uniform formulas for the maximum and minimum values of this inequality under such measurements are obtained. Based on these formulas, we show that an optimal nonmaximally entangled state is about 6% more resistant to noise than the maximally entangled one. We also give the optimal state and the optimal angles, which are important for experimental realization

  16. MAXIM: The Blackhole Imager

    Science.gov (United States)

    Gendreau, Keith; Cash, Webster; Gorenstein, Paul; Windt, David; Kaaret, Phil; Reynolds, Chris

    2004-01-01

    The Beyond Einstein Program in NASA's Office of Space Science Structure and Evolution of the Universe theme spells out the top-level scientific requirements for a Black Hole Imager in its strategic plan. The MAXIM mission will provide imaging at better than one-tenth of a microarcsecond in the X-ray band in order to satisfy these requirements. We will review the driving requirements to achieve these goals and ultimately resolve the event horizon of a supermassive black hole. We will present the current status of this effort, which includes a study of a baseline design as well as two alternative approaches.

  17. Is CP violation maximal

    International Nuclear Information System (INIS)

    Gronau, M.

    1984-01-01

    Two ambiguities are noted in the definition of the concept of maximal CP violation. The phase-convention ambiguity is overcome by introducing a CP-violating phase in the quark mixing matrix U which is invariant under rephasing transformations. The second ambiguity, related to the parametrization of U, is resolved by finding a single empirically viable definition of maximal CP violation under the assumption that U does not single out one generation. Considerable improvement in the calculation of nonleptonic weak amplitudes is required to test the conjecture of maximal CP violation. 21 references

  18. A Cross-Layer Approach for Maximizing Visual Entropy Using Closed-Loop Downlink MIMO

    Directory of Open Access Journals (Sweden)

    Hyungkeuk Lee

    2008-07-01

    We propose an adaptive video transmission scheme to achieve unequal error protection in a closed-loop multiple-input multiple-output (MIMO) system for wavelet-based video coding. In this scheme, visual entropy is employed as a video quality metric in agreement with the human visual system (HVS), and the associated visual weight is used to obtain a set of optimal powers in the MIMO system for maximizing the visual quality of the reconstructed video. For ease of cross-layer optimization, the video sequence is divided into several streams, and the visual importance of each stream is quantified using the visual weight. Moreover, an adaptive load-balance control, named equal termination scheduling (ETS), is proposed to improve the throughput of visually important data with higher priority. An optimal solution for power allocation is derived in closed form using a Lagrangian relaxation method. The simulation results demonstrate a highly improved visual quality in the reconstructed video via the cross-layer approach by means of visual entropy.
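
    The closed-form power allocation obtained from a Lagrangian relaxation of a weighted sum-rate objective is a water-filling-type solution. The sketch below is a generic illustration of weighted water-filling, not the paper's actual algorithm; the function name, weights, gains, and power budget are all hypothetical.

```python
import numpy as np

def weighted_waterfilling(weights, gains, total_power, iters=200):
    """Maximize sum_i w_i * log(1 + g_i * p_i) subject to sum(p_i) = P, p_i >= 0.
    The KKT conditions give p_i = max(0, w_i/lam - 1/g_i); the Lagrange
    multiplier lam is found by bisection on the total-power constraint."""
    w = np.asarray(weights, dtype=float)
    g = np.asarray(gains, dtype=float)
    lo, hi = 1e-12, (w * g).max()   # at lam = max(w*g) every p_i is 0
    for _ in range(iters):
        lam = 0.5 * (lo + hi)
        p = np.maximum(0.0, w / lam - 1.0 / g)
        if p.sum() > total_power:
            lo = lam                # allocation too large -> raise the water level
        else:
            hi = lam
    return p

# three streams with visual weights 3:2:1, equal channel gains, total power 6:
# the heavier-weighted stream receives the larger share of power
p = weighted_waterfilling([3.0, 2.0, 1.0], [1.0, 1.0, 1.0], 6.0)
```

    With equal gains the split follows the weights exactly, which matches the intuition of protecting visually important streams first.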

  19. Shareholder, stakeholder-owner or broad stakeholder maximization

    OpenAIRE

    Mygind, Niels

    2004-01-01

    With reference to the discussion about shareholder versus stakeholder maximization it is argued that the normal type of maximization is in fact stakeholder-owner maximization. This means maximization of the sum of the value of the shares and stakeholder benefits belonging to the dominating stakeholder-owner. Maximization of shareholder value is a special case of owner-maximization, and only under quite restrictive assumptions shareholder maximization is larger or equal to stakeholder-owner...

  20. Task-oriented maximally entangled states

    International Nuclear Information System (INIS)

    Agrawal, Pankaj; Pradhan, B

    2010-01-01

    We introduce the notion of a task-oriented maximally entangled state (TMES). This notion depends on the task for which a quantum state is used as the resource. TMESs are the states that can be used to carry out the task maximally. This concept may be more useful than that of a general maximally entangled state in the case of a multipartite system. We illustrate this idea by giving an operational definition of maximally entangled states on the basis of communication tasks of teleportation and superdense coding. We also give examples and a procedure to obtain such TMESs for n-qubit systems.

  1. Reconstructing cone-beam CT with spatially varying qualities for adaptive radiotherapy: a proof-of-principle study.

    Science.gov (United States)

    Lu, Wenting; Yan, Hao; Gu, Xuejun; Tian, Zhen; Luo, Ouyang; Yang, Liu; Zhou, Linghong; Cervino, Laura; Wang, Jing; Jiang, Steve; Jia, Xun

    2014-10-21

    With the aim of maximally reducing imaging dose while meeting the requirements of adaptive radiation therapy (ART), we propose in this paper a new cone beam CT (CBCT) acquisition and reconstruction method that delivers images with a low noise level inside a region of interest (ROI) and a relatively high noise level outside the ROI. The acquired projection images comprise two groups: densely sampled projections at low exposure with a large field of view (FOV), and sparsely sampled projections at high exposure with a small FOV corresponding to the ROI. A new algorithm combining the conventional filtered back-projection algorithm and a tight-frame iterative reconstruction algorithm is also designed to reconstruct the CBCT from these projection data. We have validated our method on a simulated head-and-neck (HN) patient case, a semi-real experiment conducted on an HN cancer patient under a full-fan scan mode, and a Catphan phantom under a half-fan scan mode. Relative root-mean-square errors (RRMSEs) of less than 3% for the entire image and ~1% within the ROI compared to the ground truth have been observed. These numbers demonstrate the ability of the proposed method to reconstruct high-quality images inside the ROI. Outside the ROI, although the images are relatively noisy, they still provide sufficient information for radiation dose calculations in ART. Dose distributions calculated on our CBCT image and on a standard CBCT image are in agreement, with a mean relative difference of 0.082% inside the ROI and 0.038% outside the ROI. Compared with the standard clinical CBCT scheme, an imaging dose reduction of approximately 3-6 times inside the ROI and approximately 8 times outside the ROI was achieved. Regarding computational efficiency, it takes 1-3 min to reconstruct a CBCT image, depending on the number of projections used. These results indicate that the proposed method has the potential for application in ART.

  2. FLOUTING MAXIMS IN INDONESIA LAWAK KLUB CONVERSATION

    Directory of Open Access Journals (Sweden)

    Rahmawati Sukmaningrum

    2017-04-01

    This study aims to identify the types of maxims flouted in the conversation in the famous comedy show Indonesia Lawak Club. Likewise, it also tries to reveal the speakers' intentions in flouting the maxims in the conversation during the show. The writers use a descriptive qualitative method in conducting this research. The data are taken from the dialogue of Indonesia Lawak Club and then analyzed based on Grice's cooperative principles. The researchers read the dialogue transcripts, identify the maxims, and interpret the data to find the speakers' intentions for flouting the maxims in the communication. The results show that there are four types of maxims flouted in the dialogue: maxim of quality (23%), maxim of quantity (11%), maxim of manner (31%), and maxim of relevance (35%). Flouting the maxims in the conversations is intended to make the speakers feel uncomfortable with the conversation, show arrogance, show disagreement or agreement, and ridicule other speakers.

  3. The fusion of craniofacial reconstruction and microsurgery: a functional and aesthetic approach.

    Science.gov (United States)

    Broyles, Justin M; Abt, Nicholas B; Shridharani, Sachin M; Bojovic, Branko; Rodriguez, Eduardo D; Dorafshar, Amir H

    2014-10-01

    Reconstruction of large, composite defects in the craniofacial region has evolved significantly over the past half century. During this time, there have been significant advances in craniofacial and microsurgical surgery. These contributions have often been in parallel; however, over the past 10 years, these two disciplines have begun to overlap more frequently, and the techniques of one have been used to advance the other. In the current review, the authors aim to describe the available options for free tissue reconstruction in craniofacial surgery. A review of microsurgical reconstructive options of aesthetic units within the craniofacial region was undertaken with attention directed toward surgeon flap preference. Anatomical areas analyzed included scalp, calvaria, forehead, frontal sinus, nose, maxilla and midface, periorbita, mandible, lip, and tongue. Although certain flaps such as the ulnar forearm flap and lateral circumflex femoral artery-based flaps were used in multiple reconstructive sites, each anatomical location possesses a unique array of flaps to maximize outcomes. Craniofacial surgery, like plastic surgery, has made tremendous advancements in the past 40 years. With innovations in technology, flap design, and training, microsurgery has become safer, faster, and more commonplace than at any time in history. Reconstructive microsurgery allows the surgeon to be creative in this approach, and free tissue transfer has become a mainstay of modern craniofacial reconstruction.

  4. Extract of Zanthoxylum bungeanum maxim seed oil reduces ...

    African Journals Online (AJOL)

    Purpose: To investigate the anti-hyperlipidaemic effect of extract of Zanthoxylum bungeanum Maxim. seed oil (EZSO) on high-fat diet (HFD)-induced hyperlipidemic hamsters. Methods: Following feeding with HFD for 30 days, hyperlipidemic hamsters were intragastrically treated with EZSO for 60 days. Serum levels of ...

  5. Design of optimal linear antennas with maximally flat radiation patterns

    Science.gov (United States)

    Minkovich, B. M.; Mints, M. Ia.

    1990-02-01

    The paper presents an explicit solution to the problem of maximizing the aperture area utilization coefficient and obtaining the best mean-square approximation of a sectorial U-shaped radiation pattern of a linear antenna when Butterworth flattening constraints are imposed on the approximating pattern. Constraints are established on the choice of the smallest and largest antenna dimensions that make it possible to obtain maximally flat patterns having a low sidelobe level and free of ripple within the main lobe.

  6. VIOLATION OF CONVERSATION MAXIM ON TV ADVERTISEMENTS

    Directory of Open Access Journals (Sweden)

    Desak Putu Eka Pratiwi

    2015-07-01

    Maxim is a principle that must be obeyed by all participants textually and interpersonally in order to have a smooth communication process. Conversational maxims are divided into four types, namely the maxim of quality, maxim of quantity, maxim of relevance, and maxim of manner of speaking. Violation of a maxim may occur in a conversation in which the information the speaker has is not delivered well to his speaking partner. Violation of a maxim in a conversation will result in an awkward impression; examples of violation are given information that is redundant, untrue, irrelevant, or convoluted. Advertisers often deliberately violate the maxims to create unique and controversial advertisements. This study aims to examine the violation of maxims in conversations in TV ads. The source of data in this research is food advertisements aired on TV media. Documentation and observation methods are applied to obtain qualitative data. The theory used in this study is the maxim theory proposed by Grice (1975). The results of the data analysis are presented with the informal method. The results of this study show an interesting fact: the violations of maxims found in the advertisements' conversations are exactly what makes the advertisements very attractive and gives them a high value.

  7. Finding Maximal Quasiperiodicities in Strings

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Pedersen, Christian N. S.

    2000-01-01

    Apostolico and Ehrenfeucht defined the notion of a maximal quasiperiodic substring and gave an algorithm that finds all maximal quasiperiodic substrings in a string of length n in time O(n log^2 n). In this paper we give an algorithm that finds all maximal quasiperiodic substrings in a string of length n in time O(n log n) and space O(n). Our algorithm uses the suffix tree as the fundamental data structure combined with efficient methods for merging and performing multiple searches in search trees. Besides finding all maximal quasiperiodic substrings, our algorithm also marks the nodes in the suffix tree that have a superprimitive path-label.
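
    A quasiperiodic string is one covered by repeated, possibly overlapping occurrences of a shorter string (its cover). The naive quadratic-time check below only fixes the definition; it is nothing like the paper's O(n log n) suffix-tree algorithm, and the function name is invented for the sketch.

```python
def is_cover(q, s):
    """Return True if occurrences of q cover every position of s,
    i.e. s is quasiperiodic with cover q."""
    if not q or len(q) > len(s):
        return False
    covered_upto = 0          # first position not yet covered
    i = s.find(q)
    while i != -1:
        if i > covered_upto:  # gap between consecutive occurrences -> not a cover
            return False
        covered_upto = i + len(q)
        i = s.find(q, i + 1)
    return covered_upto == len(s)

# "aba" occurs in "abaababaaba" at positions 0, 3, 5 and 8; the occurrences
# overlap so that every position is covered, making the string quasiperiodic.
```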

  8. ITEM-QM solutions for EM problems in image reconstruction exemplary for the Compton Camera

    CERN Document Server

    Pauli, Josef; Anton, G

    2002-01-01

    Imaginary time expectation maximization (ITEM), a new algorithm for expectation-maximization problems based on quantum-mechanical energy minimization via imaginary (Euclidean) time evolution, is presented. Both the algorithm and the implementation (http://www.johannes-pauli.de/item/index.html) are published under the terms of the GNU General Public License (http://www.gnu.org/copyleft/gpl.html). Due to its generality, ITEM is applicable to various image reconstruction problems like CT, PET, SPECT, NMR, Compton Camera and tomosynthesis, as well as any other energy minimization problem. The choice of the optimal ITEM Hamiltonian is discussed and numerical results are presented for the Compton Camera.
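
    The core idea, evolving a state in imaginary time so that excited components decay and the energy-minimizing ground state survives, can be sketched on a toy Hamiltonian. This is a generic illustration of imaginary-time evolution, not the ITEM implementation; the matrix and step size are invented for the example.

```python
import numpy as np

# Small symmetric toy "Hamiltonian"; its ground state is the eigenvector
# with the lowest eigenvalue (here 2 - sqrt(2) for this tridiagonal matrix).
H = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])

def imaginary_time_ground_state(H, tau=0.1, steps=2000):
    """Evolve a random state by repeated application of (I - tau*H), a
    first-order approximation of exp(-tau*H). Renormalizing after each
    step damps the excited components, leaving the ground state."""
    rng = np.random.default_rng(0)
    v = rng.standard_normal(H.shape[0])
    step = np.eye(H.shape[0]) - tau * H
    for _ in range(steps):
        v = step @ v
        v /= np.linalg.norm(v)
    energy = v @ H @ v    # Rayleigh quotient = ground-state energy at convergence
    return energy, v

energy, v = imaginary_time_ground_state(H)
```

    The step size must keep all eigenvalues of (I - tau*H) positive so the power iteration converges to the smallest eigenvalue of H rather than the largest in magnitude.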

  9. Pain level after ACL reconstruction: A comparative study between free quadriceps tendon and hamstring tendons autografts.

    Science.gov (United States)

    Buescu, Cristian Tudor; Onutu, Adela Hilda; Lucaciu, Dan Osvald; Todor, Adrian

    2017-03-01

    The objective of this study was to compare the pain levels and analgesic consumption after single-bundle ACL reconstruction with free quadriceps tendon autograft versus hamstring tendon autograft. A total of 48 patients scheduled for anatomic single-bundle ACL reconstruction were randomized into two groups: the free quadriceps tendon autograft group (24 patients) and the hamstring tendons autograft group (24 patients). A basic multimodal analgesic postoperative program was used for all patients, and rescue analgesia was provided with tramadol at pain scores over 30 on the Visual Analog Scale. The time to the first rescue analgesic, the number of doses of tramadol and pain scores were recorded. The results within the same group were compared with the Wilcoxon signed-rank test. Supplementary analgesic drug administration proved significantly higher in the group of subjects with hamstring grafts, with a median (interquartile range) of 1 (1-3) dose, compared to the group of subjects treated with a quadriceps graft, median = 0.5 (0-1.25) (p = 0.009). A significantly higher number of subjects with a quadriceps graft did not require any supplementary analgesic drug (50%) as compared with subjects with a hamstring graft (13%; Z-statistic = 3.01, p = 0.002). The percentage of subjects who required a supplementary analgesic drug was 38% higher in the HT group compared with the FQT group. The use of the free quadriceps tendon autograft for ACL reconstruction leads to less pain and analgesic consumption in the immediate postoperative period compared with the use of hamstrings autograft. Level I Therapeutic study. Copyright © 2017 Turkish Association of Orthopaedics and Traumatology. Production and hosting by Elsevier B.V. All rights reserved.

  10. PET image reconstruction: mean, variance, and optimal minimax criterion

    International Nuclear Information System (INIS)

    Liu, Huafeng; Guo, Min; Gao, Fei; Shi, Pengcheng; Xue, Liying; Nie, Jing

    2015-01-01

    Given the noisy nature of positron emission tomography (PET) measurements, it is critical to know the image quality and reliability as well as the expected radioactivity map (mean image) for both qualitative interpretation and quantitative analysis. While existing efforts have often been devoted to providing only the reconstructed mean image, we present a unified framework for joint estimation of the mean and corresponding variance of the radioactivity map based on an efficient optimal minimax criterion. The proposed framework formulates the PET image reconstruction problem as a transformation from system uncertainties to estimation errors, where the minimax criterion is adopted to minimize the estimation errors under possibly maximized system uncertainties. The estimation errors, in the form of a covariance matrix, express the measurement uncertainties in a complete way. The framework is then optimized by H∞-norm optimization and solved with the corresponding H∞ filter. Unlike conventional statistical reconstruction algorithms that rely on statistical modeling of the measurement data or noise, the proposed joint estimation starts from the point of view of signal energies and can handle anything from imperfect statistical assumptions to no a priori statistical assumptions at all. The performance and accuracy of the reconstructed mean and variance images are validated using Monte Carlo simulations. Experiments on phantom scans with a small animal PET scanner and real patient scans are also conducted for assessment of clinical potential. (paper)

  11. Formation Control for the MAXIM Mission

    Science.gov (United States)

    Luquette, Richard J.; Leitner, Jesse; Gendreau, Keith; Sanner, Robert M.

    2004-01-01

    Over the next twenty years, a wave of change is occurring in the space-based scientific remote sensing community. While the fundamental limits in the spatial and angular resolution achievable by monolithic spacecraft have been reached based on today's technology, an expansive new technology base has appeared over the past decade in the area of Distributed Space Systems (DSS). A key subset of the DSS technology area is that which covers precision formation flying of space vehicles. Through precision formation flying, the baselines, previously defined by the largest monolithic structure that could fit in the largest launch vehicle fairing, are now virtually unlimited. Several missions, including the Micro-Arcsecond X-ray Imaging Mission (MAXIM) and the Stellar Imager, will drive the formation flying challenges to achieve unprecedented baselines for high-resolution, extended-scene interferometry in the ultraviolet and X-ray regimes. This paper focuses on establishing the feasibility of formation control for the MAXIM mission. MAXIM formation flying requirements are on the order of microns, while Stellar Imager mission requirements are on the order of nanometers. This paper specifically addresses: (1) the high-level science requirements for these missions and how they evolve into engineering requirements; and (2) the development of linearized equations of relative motion for a formation operating in an n-body gravitational field. The linearized equations of motion provide the groundwork for linear formation control designs.

  12. A New Look at the Impact of Maximizing on Unhappiness: Two Competing Mediating Effects

    Directory of Open Access Journals (Sweden)

    Jiaxi Peng

    2018-02-01

    The current study aims to explore how the decision-making style of maximizing affects subjective well-being (SWB), focusing mainly on confirming the mediating role of regret and the suppressing role of achievement motivation. A total of 402 Chinese undergraduate students participated in this study, in which they responded to the maximization, regret, and achievement motivation scales and SWB measures. Results suggested that maximizing significantly predicted SWB. Moreover, regret and achievement motivation (the hope-for-success dimension) could completely mediate and suppress this effect. That is, two competing indirect pathways exist between maximizing and SWB. One pathway is through regret: maximizing typically leads one to regret, which negatively predicts SWB. Alternatively, maximizing could lead to high levels of hope for success, which is positively correlated with SWB. The findings offer a more complex way of thinking about the relationship between maximizing and SWB.
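
    The two competing indirect pathways can be illustrated with a toy regression-based mediation analysis. The data and path coefficients below are synthetic, generated to mimic the pattern described above; they are not the study's data, and the variable names are invented for the sketch.

```python
import numpy as np

def ols(X, y):
    """Least-squares coefficients for y = X @ beta (X includes an intercept column)."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

rng = np.random.default_rng(2)
n = 1000
maximizing = rng.standard_normal(n)
# Synthetic pathways: maximizing raises regret (which lowers SWB) and
# raises hope for success (which raises SWB).
regret = 0.6 * maximizing + rng.standard_normal(n)
hope = 0.5 * maximizing + rng.standard_normal(n)
swb = -0.7 * regret + 0.8 * hope + rng.standard_normal(n)

ones = np.ones(n)
a_regret = ols(np.column_stack([ones, maximizing]), regret)[1]   # path a1
a_hope = ols(np.column_stack([ones, maximizing]), hope)[1]       # path a2
b = ols(np.column_stack([ones, maximizing, regret, hope]), swb)  # paths b1, b2 + direct effect
indirect_regret = a_regret * b[2]  # negative: mediation toward lower SWB
indirect_hope = a_hope * b[3]      # positive: the suppression pathway
```

    The two indirect effects have opposite signs, which is exactly the mediation-plus-suppression structure the abstract describes.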

  13. Influence of image reconstruction methods on statistical parametric mapping of brain PET images

    International Nuclear Information System (INIS)

    Yin Dayi; Chen Yingmao; Yao Shulin; Shao Mingzhe; Yin Ling; Tian Jiahe; Cui Hongyan

    2007-01-01

    Objective: Statistical parametric mapping (SPM) is widely recognized as a useful tool in brain function studies. The aim of this study was to investigate whether the image reconstruction algorithm used for PET images could influence SPM of the brain. Methods: PET imaging of the whole brain was performed in six normal volunteers. Each volunteer had two scans, with true and false acupuncture. The PET scans were reconstructed using ordered-subsets expectation maximization (OSEM) and filtered back projection (FBP), each with 3 varied parameters. The images were realigned, normalized and smoothed using the SPM program. The difference between true and false acupuncture scans was tested using a matched-pair t test at every voxel. Results: with SPM uncorrected multiple comparison (P_uncorrected < 0.001), the SPMs derived from images with different reconstruction methods were different; the largest difference, in number and position of the activated voxels, was noticed between the FBP and OSEM reconstruction algorithms. Conclusions: The method of PET image reconstruction could influence the results of SPM uncorrected multiple comparison. Attention should be paid when conclusions are drawn using SPM uncorrected multiple comparison. (authors)

  14. Hierarchical Bayesian sparse image reconstruction with application to MRFM.

    Science.gov (United States)

    Dobigeon, Nicolas; Hero, Alfred O; Tourneret, Jean-Yves

    2009-09-01

    This paper presents a hierarchical Bayesian model to reconstruct sparse images when the observations are obtained from linear transformations and corrupted by an additive white Gaussian noise. Our hierarchical Bayes model is well suited to such naturally sparse image applications as it seamlessly accounts for properties such as sparsity and positivity of the image via appropriate Bayes priors. We propose a prior that is based on a weighted mixture of a positive exponential distribution and a mass at zero. The prior has hyperparameters that are tuned automatically by marginalization over the hierarchical Bayesian model. To overcome the complexity of the posterior distribution, a Gibbs sampling strategy is proposed. The Gibbs samples can be used to estimate the image to be recovered, e.g., by maximizing the estimated posterior distribution. In our fully Bayesian approach, the posteriors of all the parameters are available. Thus, our algorithm provides more information than other previously proposed sparse reconstruction methods that only give a point estimate. The performance of the proposed hierarchical Bayesian sparse reconstruction method is illustrated on synthetic data and real data collected from a tobacco virus sample using a prototype MRFM instrument.
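
    Under a sparsity-promoting positive prior of this kind, the MAP estimate amounts to nonnegative L1-regularized least squares. The paper estimates the full posterior with a Gibbs sampler; as a simpler deterministic stand-in, the sketch below uses projected ISTA on synthetic data. All sizes, seeds and parameter values are hypothetical.

```python
import numpy as np

def nonneg_ista(H, y, lam, iters=500):
    """MAP-style sparse recovery: minimize 0.5*||y - H x||^2 + lam*sum(x)
    subject to x >= 0, by projected ISTA. (A deterministic stand-in for
    the paper's Gibbs sampler over the sparse-positive prior.)"""
    L = np.linalg.norm(H, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(H.shape[1])
    for _ in range(iters):
        grad = H.T @ (H @ x - y)
        # gradient step, soft shift by lam/L, then projection onto x >= 0
        x = np.maximum(0.0, x - (grad + lam) / L)
    return x

rng = np.random.default_rng(1)
H = rng.standard_normal((40, 20))        # linear observation operator
x_true = np.zeros(20)
x_true[[3, 7, 12]] = [1.0, 2.0, 1.5]     # sparse, positive ground truth
y = H @ x_true + 0.01 * rng.standard_normal(40)   # additive white Gaussian noise
x_hat = nonneg_ista(H, y, lam=0.1)
```

    The recovered image is sparse and positive by construction, mirroring the two properties the hierarchical prior encodes.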

  15. Shareholder, stakeholder-owner or broad stakeholder maximization

    DEFF Research Database (Denmark)

    Mygind, Niels

    2004-01-01

    With reference to the discussion about shareholder versus stakeholder maximization it is argued that the normal type of maximization is in fact stakeholder-owner maximization. This means maximization of the sum of the value of the shares and stakeholder benefits belonging to the dominating... including the shareholders of a company. Although it may be the ultimate goal for Corporate Social Responsibility to achieve this kind of maximization, broad stakeholder maximization is quite difficult to give a precise definition. There is no one-dimensional measure to add different stakeholder benefits... not traded on the market, and therefore there is no possibility for practical application. Broad stakeholder maximization instead in practical applications becomes satisfying certain stakeholder demands, so that the practical application will be stakeholder-owner maximization under constraints defined...

  16. On the maximal superalgebras of supersymmetric backgrounds

    International Nuclear Information System (INIS)

    Figueroa-O'Farrill, Jose; Hackett-Jones, Emily; Moutsopoulos, George; Simon, Joan

    2009-01-01

    In this paper we give a precise definition of the notion of a maximal superalgebra of certain types of supersymmetric supergravity backgrounds, including the Freund-Rubin backgrounds, and propose a geometric construction extending the well-known construction of its Killing superalgebra. We determine the structure of maximal Lie superalgebras and show that there is a finite number of isomorphism classes, all related via contractions from an orthosymplectic Lie superalgebra. We use the structure theory to show that maximally supersymmetric waves do not possess such a maximal superalgebra, but that the maximally supersymmetric Freund-Rubin backgrounds do. We perform the explicit geometric construction of the maximal superalgebra of AdS_4 × S^7 and find that it is isomorphic to osp(1|32). We propose an algebraic construction of the maximal superalgebra of any background asymptotic to AdS_4 × S^7, and we test this proposal by computing the maximal superalgebra of the M2-brane in its two maximally supersymmetric limits, finding agreement.

  17. Track reconstruction in CMS high luminosity environment

    CERN Document Server

    AUTHOR|(CDS)2067159

    2016-01-01

    The CMS tracker is the largest silicon detector ever built, covering 200 square meters and providing an average of 14 high-precision measurements per track. Tracking is essential for the reconstruction of objects like jets, muons, electrons and tau leptons starting from the raw data from the silicon pixel and strip detectors. Track reconstruction is widely used also at trigger level, as it improves object tagging and resolution. The CMS tracking code is organized in several levels, known as iterative steps, each optimized to reconstruct a class of particle trajectories, such as those of particles originating from the primary vertex or displaced tracks from particles resulting from secondary vertices. Each iterative step consists of seeding, pattern recognition and fitting by a Kalman filter, and a final filtering and cleaning. Each subsequent step works on hits not yet associated to a reconstructed particle trajectory. The CMS tracking code is continuously evolving to make the reconstruction computing load compat...

  18. Track reconstruction in CMS high luminosity environment

    CERN Document Server

    Goetzmann, Christophe

    2014-01-01

    The CMS tracker is the largest silicon detector ever built, covering 200 square meters and providing an average of 14 high-precision measurements per track. Tracking is essential for the reconstruction of objects like jets, muons, electrons and tau leptons starting from the raw data from the silicon pixel and strip detectors. Track reconstruction is widely used also at trigger level, as it improves object tagging and resolution. The CMS tracking code is organized in several levels, known as iterative steps, each optimized to reconstruct a class of particle trajectories, such as those of particles originating from the primary vertex or displaced tracks from particles resulting from secondary vertices. Each iterative step consists of seeding, pattern recognition and fitting by a Kalman filter, and a final filtering and cleaning. Each subsequent step works on hits not yet associated to a reconstructed particle trajectory. The CMS tracking code is continuously evolving to make the reconstruction computing load compat...

  19. 3D road marking reconstruction from street-level calibrated stereo pairs

    Science.gov (United States)

    Soheilian, Bahman; Paparoditis, Nicolas; Boldo, Didier

    This paper presents an automatic approach to road marking reconstruction using stereo pairs acquired by a mobile mapping system in a dense urban area. Two types of road markings were studied: zebra crossings (crosswalks) and dashed lines. These two types of road markings consist of strips having known shape and size. These geometric specifications are used to constrain the recognition of strips. In both cases (i.e. zebra crossings and dashed lines), the reconstruction method consists of three main steps. The first step extracts edge points from the left and right images of a stereo pair and computes 3D linked edges using a matching process. The second step comprises a filtering process that uses the known geometric specifications of road marking objects. The goal is to preserve linked edges that can plausibly belong to road markings and to filter others out. The final step uses the remaining linked edges to fit a theoretical model to the data. The method developed has been used for processing a large number of images. Road markings are successfully and precisely reconstructed in dense urban areas under real traffic conditions.

  20. On-line implant reconstruction in HDR brachytherapy

    International Nuclear Information System (INIS)

    Kolkman-Deurloo, Inger-Karine K.; Kruijf, Wilhelmus J.M. de; Levendag, Peter C.

    2006-01-01

    Background and purpose: To evaluate the accuracy of on-line planning in an Integrated Brachytherapy Unit (IBU) using dedicated image distortion correction algorithms, correcting the geometric distortion and magnetic distortion separately, and to determine the effect of the reconstruction accuracy on clinical treatment plans in terms of deviations in treatment time and dose. Patients and methods: The reconstruction accuracy was measured using 20 markers positioned at well-known locations in a QA phantom. Treatment plans of two phantoms representing clinical implant geometries were compared with reference plans to determine the effect of the reconstruction accuracy on the treatment plan. Before clinical introduction, treatment plans of three representative patients, based on on-line reconstruction, were compared with reference plans. Results: The average reconstruction error for 10 in. images reduces from -0.6 mm (range -2.6 to +1.0 mm) to -0.2 mm (range -1.2 to +0.6 mm) after image distortion correction, and for 15 in. images from 0.8 mm (range -0.5 to +3.0 mm) to 0.0 mm (range -0.8 to +0.8 mm). The error in case of eccentric positioning of the phantom, i.e. 0.8 mm (range -1.0 to +3.3 mm), reduces to 0.1 mm (range -0.5 to +0.9 mm). Correction of the image distortions reduces the deviation in the calculated treatment time from at most 2.7% to less than 0.8% in the case of eccentrically positioned clinical phantoms. The deviation in the treatment time or reference dose in the plans based on on-line reconstruction with image distortion correction of the three patient examples is smaller than 0.3%. Conclusions: Accurate on-line implant reconstruction using the IBU localiser and dedicated correction algorithms separating the geometric distortion and the magnetic distortion is possible. The results fulfill the minimum requirements imposed by the Netherlands Commission on Radiation Dosimetry (NCS) without limitations regarding the usable range of the field.

  1. REGEN: Ancestral Genome Reconstruction for Bacteria.

    Science.gov (United States)

    Yang, Kuan; Heath, Lenwood S; Setubal, João C

    2012-07-18

    Ancestral genome reconstruction can be understood as a phylogenetic study with more details than a traditional phylogenetic tree reconstruction. We present a new computational system called REGEN for ancestral bacterial genome reconstruction at both the gene and replicon levels. REGEN reconstructs gene content, contiguous gene runs, and replicon structure for each ancestral genome. Along each branch of the phylogenetic tree, REGEN infers evolutionary events, including gene creation and deletion and replicon fission and fusion. The reconstruction can be performed by either a maximum parsimony or a maximum likelihood method. Gene content reconstruction is based on the concept of neighboring gene pairs. REGEN was designed to be used with any set of genomes that are sufficiently related, which will usually be the case for bacteria within the same taxonomic order. We evaluated REGEN using simulated genomes and genomes in the Rhizobiales order.
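
    The maximum-parsimony flavor of gene-content reconstruction can be illustrated with Fitch's small-parsimony algorithm applied to a single gene's presence/absence character on a rooted binary tree. This generic sketch is not REGEN's actual code; the tree, node names and states are invented for the example.

```python
def fitch(tree, leaf_states):
    """Fitch small parsimony for one binary character (gene present=1 / absent=0).
    `tree` maps each internal node to its (left, right) children; leaves
    appear only in `leaf_states`. Returns the candidate state set for every
    node and the minimum number of gain/loss events on the tree."""
    sets, cost = {}, 0

    def up(node):
        nonlocal cost
        if node in leaf_states:                  # leaf: state is observed
            sets[node] = {leaf_states[node]}
            return sets[node]
        left, right = tree[node]
        a, b = up(left), up(right)
        inter = a & b
        if inter:
            sets[node] = inter                   # children agree -> no event
        else:
            sets[node] = a | b
            cost += 1                            # a union forces one state change
        return sets[node]

    up("root")
    return sets, cost

# gene present in genomes A and B, absent in D; ((A,B),D) topology
tree = {"root": ("n1", "D"), "n1": ("A", "B")}
sets, cost = fitch(tree, {"A": 1, "B": 1, "D": 0})
```

    Here a single event (one gene gain or loss on the branch to D or to n1) explains the data, and the ancestor n1 is unambiguously reconstructed as carrying the gene.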

  2. Growth-Maximizing Public Debt under Changing Demographics

    DEFF Research Database (Denmark)

    Bokan, Nikola; Hougaard Jensen, Svend E.; Hallett, Andrew Hughes

    2016-01-01

    This paper develops an overlapping-generations model to study the growth-maximizing level of public debt under conditions of demographic change. It is shown that the optimal debt level depends on a positive marginal productivity of public capital. In general, it also depends on the demographic par... will have to adjust its fiscal plans to accommodate those changes, most likely downward, if growth is to be preserved. An advantage of this model is that it allows us to determine in advance the way in which fiscal policies need to adjust as demographic parameters change.

  3. Determining the amount of rumen-protected methionine supplement that corresponds to the optimal levels of methionine in metabolizable protein for maximizing milk protein production and profit on dairy farms.

    Science.gov (United States)

    Cho, J; Overton, T R; Schwab, C G; Tauer, L W

    2007-10-01

    The profitability of feeding rumen-protected Met (RPMet) sources to produce milk protein was estimated using a 2-step procedure: First, the effect of Met in metabolizable protein (MP) on milk protein production was estimated by using a quadratic Box-Cox functional form. Then, using these estimation results, the amounts of RPMet supplement that corresponded to the optimal levels of Met in MP for maximizing milk protein production and profit on dairy farms were determined. The data used in this study were modified from data used to determine the optimal level of Met in MP for lactating cows in the Nutrient Requirements of Dairy Cattle (NRC, 2001). The data used in this study differ from the NRC (2001) data in 2 ways. First, because dairy feed generally contains 1.80 to 1.90% Met in MP, this study adjusts the reference production value (RPV) from 2.06 to 1.80 or 1.90%. Consequently, the milk protein production response is also modified to an RPV of 1.80 or 1.90% Met in MP. Second, because this study is especially interested in how much additional Met, beyond the 1.80 or 1.90% already contained in the basal diet, is required to maximize farm profits, the data used are limited to concentrations of Met in MP above 1.80 or 1.90%. This allowed us to calculate any additional cost to farmers based solely on the price of an RPMet supplement and eliminated the need to estimate the dollar value of each gram of Met already contained in the basal diet. Results indicated that the optimal level of Met in MP for maximizing milk protein production was 2.40 and 2.42%, where the RPV was 1.80 and 1.90%, respectively. These optimal levels were almost identical to the recommended level of Met in MP of 2.40% in the NRC (2001). The amounts of RPMet required to increase the percentage of Met in MP from each RPV to 2.40 and 2.42% were 21.6 and 18.5 g/d, respectively. On the other hand, the optimal levels of Met in MP for maximizing profit were 2.32 and 2.34%, respectively.
The amounts
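The two-step logic of this record reduces to elementary calculus on the fitted quadratic response. The sketch below uses made-up coefficients and prices purely for illustration; the function names and numbers are not from the paper:

```python
def production_max_met(b, c):
    """Met level maximizing a quadratic response y = a + b*m + c*m**2 (c < 0):
    dy/dm = b + 2*c*m = 0  ->  m* = -b / (2*c)."""
    return -b / (2 * c)

def profit_max_met(b, c, protein_price, met_cost):
    """Profit-maximizing level: marginal value product equals marginal
    supplement cost, protein_price*(b + 2*c*m) = met_cost
    ->  m* = (met_cost / protein_price - b) / (2*c)."""
    return (met_cost / protein_price - b) / (2 * c)
```

With b = 4, c = -1, a protein price of 1 and a supplement cost of 2 (all hypothetical), the production maximum sits at m = 2.0 while the profit maximum sits lower, at m = 1.0. Any positive supplement cost pushes the profit-maximizing level below the production-maximizing one, mirroring the 2.32/2.34% vs. 2.40/2.42% gap reported above.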

  4. Lake Level Reconstructions

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Records of past lake levels, mostly related to changes in moisture balance (evaporation-precipitation). Parameter keywords describe what was measured in this data...

  5. DD4Hep based event reconstruction

    CERN Document Server

    AUTHOR|(SzGeCERN)683529; Frank, Markus; Gaede, Frank-Dieter; Hynds, Daniel; Lu, Shaojun; Nikiforou, Nikiforos; Petric, Marko; Simoniello, Rosa; Voutsinas, Georgios Gerasimos

    The DD4HEP detector description toolkit offers a flexible and easy-to-use solution for the consistent and complete description of particle physics detectors in a single system. The sub-component DDREC provides a dedicated interface to the detector geometry as needed for event reconstruction. With DDREC there is no need to define an additional, separate reconstruction geometry as is often done in HEP, but one can transparently extend the existing detailed simulation model to be also used for the reconstruction. Based on the extension mechanism of DD4HEP, DDREC allows one to attach user defined data structures to detector elements at all levels of the geometry hierarchy. These data structures define a high level view onto the detectors describing their physical properties, such as measurement layers, point resolutions, and cell sizes. For the purpose of charged particle track reconstruction, dedicated surface objects can be attached to every volume in the detector geometry. These surfaces provide the measuremen...

  6. Maximally multipartite entangled states

    Science.gov (United States)

    Facchi, Paolo; Florio, Giuseppe; Parisi, Giorgio; Pascazio, Saverio

    2008-06-01

    We introduce the notion of maximally multipartite entangled states of n qubits as a generalization of the bipartite case. These pure states have a bipartite entanglement that does not depend on the bipartition and is maximal for all possible bipartitions. They are solutions of a minimization problem. Examples for small n are investigated, both analytically and numerically.
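The defining property of these states, bipartite entanglement that is maximal for every bipartition, can be checked numerically for small n via the purity of reduced density matrices (purity 2^-|A| means a maximally mixed, i.e. maximally entangled, reduction; purity 1 means no entanglement). This toy NumPy sketch is our own, not the authors' code, and uses the GHZ state as the example:

```python
import numpy as np

def bipartite_purity(psi, n_qubits, subset):
    """Tr(rho_A^2) for the reduced state of the qubits in `subset`."""
    t = np.asarray(psi, dtype=complex).reshape([2] * n_qubits)
    A = list(subset)
    B = [q for q in range(n_qubits) if q not in A]
    M = np.transpose(t, A + B).reshape(2 ** len(A), -1)
    rho = M @ M.conj().T          # reduced density matrix of subsystem A
    return float(np.real(np.trace(rho @ rho)))

# GHZ state (|000> + |111>)/sqrt(2): every reduction is maximally mixed,
# so the purity is 1/2 regardless of the bipartition chosen.
ghz = np.zeros(8)
ghz[0] = ghz[7] = 1 / np.sqrt(2)
```

Minimizing the average of such purities over all bipartitions is precisely the kind of minimization problem the record describes.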

  7. Dynamical generation of maximally entangled states in two identical cavities

    International Nuclear Information System (INIS)

    Alexanian, Moorad

    2011-01-01

    The generation of entanglement between two identical coupled cavities, each containing a single three-level atom, is studied when the cavities exchange two coherent photons and are in the N=2,4 manifolds, where N represents the maximum number of photons possible in either cavity. The atom-photon state of each cavity is described by a qutrit for N=2 and a five-dimensional qudit for N=4. However, the conservation of the total value of N for the interacting two-cavity system limits the total number of states to only 4 states for N=2 and 8 states for N=4, rather than the usual 9 for two qutrits and 25 for two five-dimensional qudits. In the N=2 manifold, two-qutrit states dynamically generate four maximally entangled Bell states from initially unentangled states. In the N=4 manifold, two-qudit states dynamically generate maximally entangled states involving three or four states. The generation of these maximally entangled states occurs rather rapidly for large hopping strengths. The cavities function as a storage of periodically generated maximally entangled states.

  8. Maximally Symmetric Composite Higgs Models.

    Science.gov (United States)

    Csáki, Csaba; Ma, Teng; Shu, Jing

    2017-09-29

    Maximal symmetry is a novel tool for composite pseudo Goldstone boson Higgs models: it is a remnant of an enhanced global symmetry of the composite fermion sector involving a twisting with the Higgs field. Maximal symmetry has far-reaching consequences: it ensures that the Higgs potential is finite and fully calculable, and also minimizes the tuning. We present a detailed analysis of the maximally symmetric SO(5)/SO(4) model and comment on its observational consequences.

  9. Oblique reconstructions in tomosynthesis. II. Super-resolution

    International Nuclear Information System (INIS)

    Acciavatti, Raymond J.; Maidment, Andrew D. A.

    2013-01-01

    Purpose: In tomosynthesis, super-resolution has been demonstrated using reconstruction planes parallel to the detector. Super-resolution allows for subpixel resolution relative to the detector. The purpose of this work is to develop an analytical model that generalizes super-resolution to oblique reconstruction planes.Methods: In a digital tomosynthesis system, a sinusoidal test object is modeled along oblique angles (i.e., “pitches”) relative to the plane of the detector in a 3D divergent-beam acquisition geometry. To investigate the potential for super-resolution, the input frequency is specified to be greater than the alias frequency of the detector. Reconstructions are evaluated in an oblique plane along the extent of the object using simple backprojection (SBP) and filtered backprojection (FBP). By comparing the amplitude of the reconstruction against the attenuation coefficient of the object at various frequencies, the modulation transfer function (MTF) is calculated to determine whether modulation is within detectable limits for super-resolution. For experimental validation of super-resolution, a goniometry stand was used to orient a bar pattern phantom along various pitches relative to the breast support in a commercial digital breast tomosynthesis system.Results: Using theoretical modeling, it is shown that a single projection image cannot resolve a sine input whose frequency exceeds the detector alias frequency. The high frequency input is correctly visualized in SBP or FBP reconstruction using a slice along the pitch of the object. The Fourier transform of this reconstructed slice is maximized at the input frequency as proof that the object is resolved. Consistent with the theoretical results, experimental images of a bar pattern phantom showed super-resolution in oblique reconstructions. At various pitches, the highest frequency with detectable modulation was determined by visual inspection of the bar patterns. The dependency of the highest
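The core sampling argument, that a frequency above the detector alias (Nyquist) frequency cannot be resolved from a single projection but becomes distinguishable once subpixel-shifted samples from different views are combined, can be illustrated with a one-line aliasing identity. This is illustrative NumPy only, not the authors' divergent-beam model:

```python
import numpy as np

# Sampled once per pixel, a 0.7 cycles/pixel sine is indistinguishable
# (up to sign) from its 0.3 cycles/pixel alias: both exceed/respect the
# 0.5 cycles/pixel Nyquist limit of the detector grid.
f_hi, f_lo = 0.7, 0.3
n = np.arange(64, dtype=float)                 # integer pixel positions
aliased_equal = np.allclose(np.sin(2 * np.pi * f_hi * n),
                            -np.sin(2 * np.pi * f_lo * n))

# Quarter-pixel-shifted samples (as contributed by different projection
# views in tomosynthesis) break the degeneracy -- the mechanism behind
# super-resolution.
m = n + 0.25
shifted_equal = np.allclose(np.sin(2 * np.pi * f_hi * m),
                            -np.sin(2 * np.pi * f_lo * m))
```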

  11. Optimization of the reconstruction parameters in [123I]FP-CIT SPECT

    Science.gov (United States)

    Niñerola-Baizán, Aida; Gallego, Judith; Cot, Albert; Aguiar, Pablo; Lomeña, Francisco; Pavía, Javier; Ros, Domènec

    2018-04-01

    The aim of this work was to obtain a set of parameters to be applied in [123I]FP-CIT SPECT reconstruction in order to minimize the error between standardized and true values of the specific uptake ratio (SUR) in dopaminergic neurotransmission SPECT studies. To this end, Monte Carlo simulation was used to generate a database of 1380 projection data-sets from 23 subjects, including normal cases and a variety of pathologies. Studies were reconstructed using filtered back projection (FBP) with attenuation correction and ordered subset expectation maximization (OSEM) with correction for different degradations (attenuation, scatter and PSF). Reconstruction parameters to be optimized were the cut-off frequency of a 2D Butterworth pre-filter in FBP, and the number of iterations and the full width at half maximum (FWHM) of a 3D Gaussian post-filter in OSEM. Reconstructed images were quantified using regions of interest (ROIs) derived from Magnetic Resonance scans and from the Automated Anatomical Labeling map. Results were standardized by applying a simple linear regression line obtained from the entire patient dataset. Our findings show that we can obtain a set of optimal parameters for each reconstruction strategy. The accuracy of the standardized SUR increases when the reconstruction method includes more corrections. The use of generic ROIs instead of subject-specific ROIs adds significant inaccuracies. Thus, after reconstruction with OSEM and correction for all degradations, subject-specific ROIs led to errors between standardized and true SUR values in the range [-0.5, +0.5] in 87% and 92% of the cases for caudate and putamen, respectively. These percentages dropped to 75% and 88% when the generic ROIs were used.
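The SUR quantification and the regression-based standardization step described above can be sketched as follows; the function names and calibration numbers are hypothetical:

```python
import numpy as np

def specific_uptake_ratio(striatal_mean, reference_mean):
    # SUR = (specific - nonspecific) / nonspecific binding
    return (striatal_mean - reference_mean) / reference_mean

def fit_standardization(measured_surs, true_surs):
    """Simple linear regression mapping measured (reconstruction-biased)
    SURs onto the true scale, fitted once on a calibration dataset."""
    slope, intercept = np.polyfit(measured_surs, true_surs, 1)
    return lambda s: slope * np.asarray(s) + intercept
```

For example, a striatal mean of 3.0 against a reference mean of 1.0 gives SUR = 2.0, and a calibration set lying on true = 2 x measured yields a standardizer that maps 1.5 to 3.0.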

  12. Improvements of the ALICE high level trigger for LHC Run 2 to facilitate online reconstruction, QA, and calibration

    Energy Technology Data Exchange (ETDEWEB)

    Rohr, David [Frankfurt Institute for Advanced Studies, Frankfurt (Germany); Collaboration: ALICE-Collaboration

    2016-07-01

    ALICE is one of the four major experiments at the Large Hadron Collider (LHC) at CERN. Its main goal is the study of matter under extreme pressure and temperature as produced in heavy ion collisions at LHC. The ALICE High Level Trigger (HLT) is an online compute farm of around 200 nodes that performs a real time event reconstruction of the data delivered by the ALICE detectors. The HLT employs a fast FPGA based cluster finder algorithm as well as a GPU based track reconstruction algorithm and it is designed to process the maximum data rate expected from the ALICE detectors in real time. We present new features of the HLT for LHC Run 2 that started in 2015. A new fast standalone track reconstruction algorithm for the Inner Tracking System (ITS) enables the HLT to compute and report to LHC the luminous region of the interactions in real time. We employ a new dynamically reconfigurable histogram component that allows the visualization of characteristics of the online reconstruction using the full set of events measured by the detectors. This improves our monitoring and QA capabilities. During Run 2, we plan to deploy online calibration, starting with the calibration of the TPC (Time Projection Chamber) detector's drift time. First proof of concept tests were successfully performed using data-replay on our development cluster and during the heavy ion period at the end of 2015.

  13. An approach for maximizing the smallest eigenfrequency of structure vibration based on piecewise constant level set method

    Science.gov (United States)

    Zhang, Zhengfang; Chen, Weifeng

    2018-05-01

    Maximization of the smallest eigenfrequency of the linearized elasticity system with area constraint is investigated. The elasticity system is extended into a large background domain, but the void is vacuum and not filled with ersatz material. The piecewise constant level set (PCLS) method is applied to present two regions, the original material region and the void region. A quadratic PCLS function is proposed to represent the characteristic function. Consequently, the functional derivative of the smallest eigenfrequency with respect to PCLS function takes nonzero value in the original material region and zero in the void region. A penalty gradient algorithm is proposed, which initializes the whole background domain with the original material and decreases the area of original material region till the area constraint is satisfied. 2D and 3D numerical examples are presented, illustrating the validity of the proposed algorithm.

  14. Maximal quantum Fisher information matrix

    International Nuclear Information System (INIS)

    Chen, Yu; Yuan, Haidong

    2017-01-01

    We study the existence of the maximal quantum Fisher information matrix in the multi-parameter quantum estimation, which bounds the ultimate precision limit. We show that when the maximal quantum Fisher information matrix exists, it can be directly obtained from the underlying dynamics. Examples are then provided to demonstrate the usefulness of the maximal quantum Fisher information matrix by deriving various trade-off relations in multi-parameter quantum estimation and obtaining the bounds for the scalings of the precision limit. (paper)

  15. REGEN: Ancestral Genome Reconstruction for Bacteria

    Directory of Open Access Journals (Sweden)

    João C. Setubal

    2012-07-01

    Ancestral genome reconstruction can be understood as a phylogenetic study with more details than a traditional phylogenetic tree reconstruction. We present a new computational system called REGEN for ancestral bacterial genome reconstruction at both the gene and replicon levels. REGEN reconstructs gene content, contiguous gene runs, and replicon structure for each ancestral genome. Along each branch of the phylogenetic tree, REGEN infers evolutionary events, including gene creation and deletion and replicon fission and fusion. The reconstruction can be performed by either a maximum parsimony or a maximum likelihood method. Gene content reconstruction is based on the concept of neighboring gene pairs. REGEN was designed to be used with any set of genomes that are sufficiently related, which will usually be the case for bacteria within the same taxonomic order. We evaluated REGEN using simulated genomes and genomes in the Rhizobiales order.
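The neighboring-gene-pair concept underlying REGEN's gene content reconstruction can be illustrated with a minimal helper; the list-of-gene-family-IDs representation here is an assumption for illustration, not REGEN's actual data structure:

```python
def neighboring_gene_pairs(replicon, circular=True):
    """Adjacent gene pairs of a replicon given as an ordered list of gene
    family IDs; bacterial replicons are typically circular, so the last
    gene is also adjacent to the first."""
    pairs = [(replicon[i], replicon[i + 1]) for i in range(len(replicon) - 1)]
    if circular and len(replicon) > 1:
        pairs.append((replicon[-1], replicon[0]))
    return pairs
```

For a circular replicon ["a", "b", "c"] this yields the pairs (a,b), (b,c) and (c,a); comparing such pair sets across related genomes is what drives contiguous-gene-run inference.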

  16. Understanding Violations of Gricean Maxims in Preschoolers and Adults

    Directory of Open Access Journals (Sweden)

    Mako Okanda

    2015-07-01

    This study used a revised Conversational Violations Test to examine Gricean maxim violations in 4- to 6-year-old Japanese children and adults. Participants’ understanding of the following maxims was assessed: be informative (first maxim of quantity), avoid redundancy (second maxim of quantity), be truthful (maxim of quality), be relevant (maxim of relation), avoid ambiguity (second maxim of manner), and be polite (maxim of politeness). Sensitivity to violations of Gricean maxims increased with age: 4-year-olds’ understanding of maxims was near chance, 5-year-olds understood some maxims (first maxim of quantity and maxims of quality, relation, and manner), and 6-year-olds and adults understood all maxims. Preschoolers acquired the maxim of relation first and had the greatest difficulty understanding the second maxim of quantity. Children and adults differed in their comprehension of the maxim of politeness. The development of the pragmatic understanding of Gricean maxims and implications for the construction of developmental tasks from early childhood to adulthood are discussed.

  17. LOR-OSEM: statistical PET reconstruction from raw line-of-response histograms

    International Nuclear Information System (INIS)

    Kadrmas, Dan J

    2004-01-01

    Iterative statistical reconstruction methods are becoming the standard in positron emission tomography (PET). Conventional maximum-likelihood expectation-maximization (MLEM) and ordered-subsets (OSEM) algorithms act on data which have been pre-processed into corrected, evenly-spaced histograms; however, such pre-processing corrupts the Poisson statistics. Recent advances have incorporated attenuation, scatter and randoms compensation into the iterative reconstruction. The objective of this work was to incorporate the remaining pre-processing steps, including arc correction, to reconstruct directly from raw unevenly-spaced line-of-response (LOR) histograms. This exactly preserves Poisson statistics and full spatial information in a manner closely related to listmode ML, making full use of the ML statistical model. The LOR-OSEM algorithm was implemented using a rotation-based projector which maps directly to the unevenly-spaced LOR grid. Simulation and phantom experiments were performed to characterize resolution, contrast and noise properties for 2D PET. LOR-OSEM provided a beneficial noise-resolution tradeoff, outperforming AW-OSEM by about the same margin that AW-OSEM outperformed pre-corrected OSEM. The relationship between LOR-ML and listmode ML algorithms was explored, and implementation differences are discussed. LOR-OSEM is a viable alternative to AW-OSEM for histogram-based reconstruction with improved spatial resolution and noise properties
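The conventional MLEM update that LOR-OSEM and AW-OSEM build on is compact enough to sketch. This dense-matrix toy is illustrative only; production PET code replaces the explicit system matrix with on-the-fly projectors over the (possibly unevenly spaced) LOR grid:

```python
import numpy as np

def mlem(A, y, n_iter=200):
    """Maximum-likelihood EM for Poisson data y ~ Poisson(A x):
    x <- x * A^T(y / (A x)) / (A^T 1)."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])      # sensitivity image, A^T 1
    for _ in range(n_iter):
        proj = A @ x                      # forward projection
        ratio = np.divide(y, proj, out=np.zeros_like(y, dtype=float),
                          where=proj > 0)
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x
```

On noise-free data the measured counts are a fixed point of the update (the projection ratio becomes all ones), and the iterates converge toward the true activity; OSEM accelerates this by cycling the same multiplicative update over subsets of the data.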

  18. Accelerated time-of-flight (TOF) PET image reconstruction using TOF bin subsetization and TOF weighting matrix pre-computation

    International Nuclear Information System (INIS)

    Mehranian, Abolfazl; Kotasidis, Fotis; Zaidi, Habib

    2016-01-01

    Time-of-flight (TOF) positron emission tomography (PET) technology has recently regained popularity in clinical PET studies for improving image quality and lesion detectability. Using TOF information, the spatial location of annihilation events is confined to a number of image voxels along each line of response, thereby the cross-dependencies of image voxels are reduced, which in turn results in improved signal-to-noise ratio and convergence rate. In this work, we propose a novel approach to further improve the convergence of the expectation maximization (EM)-based TOF PET image reconstruction algorithm through subsetization of emission data over TOF bins as well as azimuthal bins. Given the prevalence of TOF PET, we elaborated the practical and efficient implementation of TOF PET image reconstruction through the pre-computation of TOF weighting coefficients while exploiting the same in-plane and axial symmetries used in pre-computation of geometric system matrix. In the proposed subsetization approach, TOF PET data were partitioned into a number of interleaved TOF subsets, with the aim of reducing the spatial coupling of TOF bins and therefore to improve the convergence of the standard maximum likelihood expectation maximization (MLEM) and ordered subsets EM (OSEM) algorithms. The comparison of on-the-fly and pre-computed TOF projections showed that the pre-computation of the TOF weighting coefficients can considerably reduce the computation time of TOF PET image reconstruction. The convergence rate and bias-variance performance of the proposed TOF subsetization scheme were evaluated using simulated, experimental phantom and clinical studies. Simulations demonstrated that as the number of TOF subsets is increased, the convergence rate of MLEM and OSEM algorithms is improved. It was also found that for the same computation time, the proposed subsetization gives rise to further convergence.
The bias-variance analysis of the experimental NEMA phantom and a clinical
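The interleaved TOF-bin subsetization idea can be sketched in one function; this is our reading of the scheme, not the authors' implementation:

```python
def interleaved_subsets(n_bins, n_subsets):
    """Assign TOF bin i to subset i % n_subsets, so that each subset
    samples the full TOF range with maximal spacing between its bins,
    reducing the spatial coupling of the bins processed together."""
    return [list(range(s, n_bins, n_subsets)) for s in range(n_subsets)]
```

For 9 TOF bins and 3 subsets this yields [0, 3, 6], [1, 4, 7] and [2, 5, 8]; an OSEM-style loop would then apply one multiplicative update per subset per pass.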

  19. Fast gradient-based methods for Bayesian reconstruction of transmission and emission PET images

    International Nuclear Information System (INIS)

    Mumcuglu, E.U.; Leahy, R.; Zhou, Z.; Cherry, S.R.

    1994-01-01

    The authors describe conjugate gradient algorithms for reconstruction of transmission and emission PET images. The reconstructions are based on a Bayesian formulation, where the data are modeled as a collection of independent Poisson random variables and the image is modeled using a Markov random field. A conjugate gradient algorithm is used to compute a maximum a posteriori (MAP) estimate of the image by maximizing over the posterior density. To ensure nonnegativity of the solution, a penalty function is used to convert the problem to one of unconstrained optimization. Preconditioners are used to enhance convergence rates. These methods generally achieve effective convergence in 15-25 iterations. Reconstructions are presented of an 18F-FDG whole body scan from data collected using a Siemens/CTI ECAT931 whole body system. These results indicate significant improvements in emission image quality using the Bayesian approach, in comparison to filtered backprojection, particularly when reprojections of the MAP transmission image are used in place of the standard attenuation correction factors.

  20. How Managerial Ownership Affects Profit Maximization in Newspaper Firms.

    Science.gov (United States)

    Busterna, John C.

    1989-01-01

    Explores whether different levels of a manager's ownership of a newspaper affects the manager's profit maximizing attitudes and behavior. Finds that owner-managers tend to place less emphasis on profits than non-owner-controlled newspapers, contrary to economic theory and empirical evidence from other industries. (RS)

  1. Optimization of hybrid iterative reconstruction level and evaluation of image quality and radiation dose for pediatric cardiac computed tomography angiography

    International Nuclear Information System (INIS)

    Yang, Lin; Liang, Changhong; Zhuang, Jian; Huang, Meiping; Liu, Hui

    2017-01-01

    Hybrid iterative reconstruction can reduce image noise and produce better image quality compared with filtered back-projection (FBP), but few reports describe optimization of the iteration level. We optimized the iteration level of iDose4 and evaluated image quality for pediatric cardiac CT angiography. Children (n = 160) with congenital heart disease were enrolled and divided into full-dose (n = 84) and half-dose (n = 76) groups. Four series were reconstructed using FBP, and iDose4 levels 2, 4 and 6; we evaluated subjective quality of the series using a 5-grade scale and compared the series using a Kruskal-Wallis H test. For FBP and iDose4-optimal images, we compared contrast-to-noise ratios (CNR) and size-specific dose estimates (SSDE) using a Student's t-test. We also compared diagnostic accuracy of each group using a Kruskal-Wallis H test. Mean scores for iDose4 level 4 were the best in both dose groups (all P < 0.05). CNR was improved in both groups with iDose4 level 4 as compared with FBP. Mean decrease in SSDE was 53% in the half-dose group. Diagnostic accuracy for the four datasets was in the range 92.6-96.2% (no statistical difference). iDose4 level 4 was optimal for both the full- and half-dose groups. Protocols with iDose4 level 4 allowed 53% reduction in SSDE without significantly affecting image quality and diagnostic accuracy. (orig.)

  2. Degree Associated Edge Reconstruction Number of Graphs with Regular Pruned Graph

    Directory of Open Access Journals (Sweden)

    P. Anusha Devi

    2015-10-01

    An ecard of a graph $G$ is a subgraph formed by deleting an edge. A da-ecard specifies the degree of the deleted edge along with the ecard. The degree associated edge reconstruction number of a graph $G$, $dern(G)$, is the minimum number of da-ecards that uniquely determines $G.$ The adversary degree associated edge reconstruction number of a graph $G$, $adern(G)$, is the minimum number $k$ such that every collection of $k$ da-ecards of $G$ uniquely determines $G.$ The maximal subgraph without end vertices of a graph $G$ which is not a tree is the pruned graph of $G.$ It is shown that $dern$ of complete multipartite graphs and some connected graphs with regular pruned graph is $1$ or $2.$ We also determine $dern$ and $adern$ of corona product of standard graphs.

  3. Simultaneous maximum a posteriori longitudinal PET image reconstruction

    Science.gov (United States)

    Ellis, Sam; Reader, Andrew J.

    2017-09-01

    Positron emission tomography (PET) is frequently used to monitor functional changes that occur over extended time scales, for example in longitudinal oncology PET protocols that include routine clinical follow-up scans to assess the efficacy of a course of treatment. In these contexts PET datasets are currently reconstructed into images using single-dataset reconstruction methods. Inspired by recently proposed joint PET-MR reconstruction methods, we propose to reconstruct longitudinal datasets simultaneously by using a joint penalty term in order to exploit the high degree of similarity between longitudinal images. We achieved this by penalising voxel-wise differences between pairs of longitudinal PET images in a one-step-late maximum a posteriori (MAP) fashion, resulting in the MAP simultaneous longitudinal reconstruction (SLR) method. The proposed method reduced reconstruction errors and visually improved images relative to standard maximum likelihood expectation-maximisation (ML-EM) in simulated 2D longitudinal brain tumour scans. In reconstructions of split real 3D data with inserted simulated tumours, noise across images reconstructed with MAP-SLR was reduced to levels equivalent to doubling the number of detected counts when using ML-EM. Furthermore, quantification of tumour activities was largely preserved over a variety of longitudinal tumour changes, including changes in size and activity, with larger changes inducing larger biases relative to standard ML-EM reconstructions. Similar improvements were observed for a range of counts levels, demonstrating the robustness of the method when used with a single penalty strength. The results suggest that longitudinal regularisation is a simple but effective method of improving reconstructed PET images without using resolution degrading priors.

  4. Increasing the maximally random jammed density with electric field to reduce the fat level in chocolate

    Science.gov (United States)

    Tao, R.; Tang, H.

    Chocolate is one of the most popular food types and flavors in the world. Unfortunately, at present, chocolate products contain too much fat, leading to obesity. For example, a typical molding chocolate has various fats up to 40% in total, and chocolate for covering ice cream has fat 50-60%. Especially, as children are the leading chocolate consumers, reducing the fat level in chocolate products to make them healthier is important and urgent. While this issue was brought to attention and elaborated in articles and books decades ago and led to some patent applications, no actual solution was unfortunately found. Why is reducing fat in chocolate so difficult? What is the underlying physical mechanism? We have found that this issue is deeply related to the basic science of soft matter, especially to viscosity and the maximally random jammed (MRJ) density φx. All chocolate production handles liquid chocolate, a suspension of cocoa solid particles in melted fat, mainly cocoa butter. The fat level cannot be lower than 1-φx in order for liquid chocolate to flow. Here we show that with application of an electric field to liquid chocolate, we can aggregate the suspended particles into prolate spheroids. This microstructure change reduces liquid chocolate's viscosity along the flow direction and increases its MRJ density significantly. Hence the fat level in chocolate can be effectively reduced. We look forward to a new class of healthier and tasteful chocolate coming to the market soon.
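The flow constraint stated above, that the fat fraction cannot drop below 1 - φx, makes the benefit of raising the MRJ density explicit. The φx values below are illustrative only (≈0.64 is the commonly cited MRJ density for monodisperse hard spheres; the shifted value is a made-up example):

```python
def min_fat_fraction(phi_mrj):
    """Lower bound on the liquid (fat) volume fraction: the solids fraction
    must stay below the maximally random jammed density phi_mrj for the
    suspension to remain flowable."""
    return 1.0 - phi_mrj

# Raising phi_mrj (e.g. by field-induced aggregation of the particles into
# prolate spheroids) lowers the attainable fat level:
# min_fat_fraction(0.64) is about 0.36, min_fat_fraction(0.74) about 0.26.
```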

  5. The mandibular symphysis as a starting point for the occlusal-level reconstruction of panfacial fractures with bicondylar fractures and interruption of the maxillary and mandibular arches: report of two cases.

    Science.gov (United States)

    Pau, Mauro; Reinbacher, Knut Ernst; Feichtinger, Matthias; Navysany, Kawe; Kärcher, Hans

    2014-06-01

    Panfacial fractures represent a challenge, even for experienced maxillofacial surgeons, because all references for reconstructing the facial skeleton are missing. Logical reconstructive sequencing based on a clear understanding of the correlation between projection and the widths and lengths of facial subunits should enable the surgeon to achieve correct realignment of the bony framework of the face and to prevent late deformity and functional impairment. Reconstruction is particularly challenging in patients presenting with concomitant fractures at the Le Fort I level and affecting the palate, condyles, and mandibular symphysis. In cases without bony loss and sufficient dentition, we believe that accurate fixation of the mandibular symphysis can represent the starting point of a reconstructive sequence that allows successful reconstruction at the Le Fort I level. Two patients were treated in our department by reconstruction starting in the occlusal area through repair of the mandibular symphysis. Both patients considered the postoperative facial shape and profile to be satisfactory and comparable to the pre-injury situation. Copyright © 2013 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.

  6. Impact of Reconstruction Algorithms on CT Radiomic Features of Pulmonary Tumors: Analysis of Intra- and Inter-Reader Variability and Inter-Reconstruction Algorithm Variability.

    Science.gov (United States)

    Kim, Hyungjin; Park, Chang Min; Lee, Myunghee; Park, Sang Joon; Song, Yong Sub; Lee, Jong Hyuk; Hwang, Eui Jin; Goo, Jin Mo

    2016-01-01

    To identify the impact of reconstruction algorithms on CT radiomic features of pulmonary tumors and to reveal and compare the intra- and inter-reader and inter-reconstruction algorithm variability of each feature. Forty-two patients (M:F = 19:23; mean age, 60.43 ± 10.56 years) with 42 pulmonary tumors (22.56 ± 8.51 mm) underwent contrast-enhanced CT scans, which were reconstructed with filtered back projection and a commercial iterative reconstruction algorithm (levels 3 and 5). Two readers independently segmented the whole tumor volume. Fifteen radiomic features were extracted and compared among reconstruction algorithms. Intra- and inter-reader variability and inter-reconstruction algorithm variability were calculated using coefficients of variation (CVs) and then compared. Among the 15 features, 5 first-order tumor intensity features and 4 gray level co-occurrence matrix (GLCM)-based features showed significant differences among the reconstruction algorithms. As for the variability, effective diameter, sphericity, entropy, and GLCM entropy were the most robust features (CV ≤ 5%). Inter-reader variability was larger than intra-reader or inter-reconstruction algorithm variability for 9 features. However, for entropy, homogeneity, and 4 GLCM-based features, inter-reconstruction algorithm variability was significantly greater than inter-reader variability. Inter-reconstruction algorithm variability was greater than inter-reader variability for entropy, homogeneity, and GLCM-based features.
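The robustness grading used in the study above (a feature counts as robust when its coefficient of variation stays at or below 5%) can be sketched as follows. The feature values below are invented for illustration and are not data from the study.

```python
import numpy as np

# Sketch: coefficient of variation (CV) as used to grade feature robustness;
# a feature is "robust" here if CV <= 5% across repeated measurements.
# The measurement lists are made-up illustrative values.

def coefficient_of_variation(x) -> float:
    """CV in percent: sample standard deviation over the mean."""
    x = np.asarray(x, dtype=float)
    return 100.0 * x.std(ddof=1) / x.mean()

entropy_reads = [4.10, 4.15, 4.08]   # e.g. two readers plus a repeat read
glcm_contrast = [12.0, 15.5, 10.2]   # a hypothetically less stable feature

for name, vals in [("entropy", entropy_reads), ("GLCM contrast", glcm_contrast)]:
    cv = coefficient_of_variation(vals)
    print(f"{name}: CV = {cv:.1f}% -> {'robust' if cv <= 5 else 'variable'}")
```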

  7. Dexamethasone up-regulates skeletal muscle maximal Na+,K+ pump activity by muscle group specific mechanisms in humans

    DEFF Research Database (Denmark)

    Nordsborg, Nikolai; Goodmann, Craig; McKenna, Michael J.

    2005-01-01

    Dexamethasone, a widely clinically used glucocorticoid, increases human skeletal muscle Na+,K+ pump content, but its effects on maximal Na+,K+ pump activity and subunit-specific mRNA are unknown. Ten healthy male subjects ingested dexamethasone for 5 days, and the effects on Na+,K+ pump content, maximal activity and subunit-specific mRNA levels (α1, α2, β1, β2, β3) in deltoid and vastus lateralis muscle were investigated. Before treatment, maximal Na+,K+ pump activity, as well as α1, α2, β1 and β2 mRNA levels, differed significantly between the two muscles. Dexamethasone treatment increased maximal Na+,K+ pump activity in vastus lateralis and deltoid by 14 ± 7% and Na+,K+ pump content by 18 ± 9%.

  8. Evaluating climate field reconstruction techniques using improved emulations of real-world conditions

    Science.gov (United States)

    Wang, J.; Emile-Geay, J.; Guillot, D.; Smerdon, J. E.; Rajaratnam, B.

    2014-01-01

    Pseudoproxy experiments (PPEs) have become an important framework for evaluating paleoclimate reconstruction methods. Most existing PPE studies assume constant proxy availability through time and uniform proxy quality across the pseudoproxy network. Real multiproxy networks are, however, marked by pronounced disparities in proxy quality, and a steep decline in proxy availability back in time, either of which may have large effects on reconstruction skill. A suite of PPEs constructed from a millennium-length general circulation model (GCM) simulation is thus designed to mimic these various real-world characteristics. The new pseudoproxy network is used to evaluate four climate field reconstruction (CFR) techniques: truncated total least squares embedded within the regularized EM (expectation-maximization) algorithm (RegEM-TTLS), the Mann et al. (2009) implementation of RegEM-TTLS (M09), canonical correlation analysis (CCA), and Gaussian graphical models embedded within RegEM (GraphEM). Each method's risk properties are also assessed via a 100-member noise ensemble. Contrary to expectation, it is found that reconstruction skill does not vary monotonically with proxy availability, but also is a function of the type and amplitude of climate variability (forced events vs. internal variability). The use of realistic spatiotemporal pseudoproxy characteristics also exposes large inter-method differences. Despite the comparable fidelity in reconstructing the global mean temperature, spatial skill varies considerably between CFR techniques. Both GraphEM and CCA efficiently exploit teleconnections, and produce consistent reconstructions across the ensemble. RegEM-TTLS and M09 appear advantageous for reconstructions on highly noisy data, but are subject to larger stochastic variations across different realizations of pseudoproxy noise. 
Results collectively highlight the importance of designing realistic pseudoproxy networks and implementing multiple noise realizations of PPEs.

  9. Inclusive fitness maximization: An axiomatic approach.

    Science.gov (United States)

    Okasha, Samir; Weymark, John A; Bossert, Walter

    2014-06-07

    Kin selection theorists argue that evolution in social contexts will lead organisms to behave as if maximizing their inclusive, as opposed to personal, fitness. The inclusive fitness concept allows biologists to treat organisms as akin to rational agents seeking to maximize a utility function. Here we develop this idea and place it on a firm footing by employing a standard decision-theoretic methodology. We show how the principle of inclusive fitness maximization and a related principle of quasi-inclusive fitness maximization can be derived from axioms on an individual's 'as if preferences' (binary choices) for the case in which phenotypic effects are additive. Our results help integrate evolutionary theory and rational choice theory, help draw out the behavioural implications of inclusive fitness maximization, and point to a possible way in which evolution could lead organisms to implement it. Copyright © 2014 Elsevier Ltd. All rights reserved.

  10. Pulmonary Nodule Volumetry at Different Low Computed Tomography Radiation Dose Levels With Hybrid and Model-Based Iterative Reconstruction: A Within Patient Analysis.

    Science.gov (United States)

    den Harder, Annemarie M; Willemink, Martin J; van Hamersvelt, Robbert W; Vonken, Evertjan P A; Schilham, Arnold M R; Lammers, Jan-Willem J; Luijk, Bart; Budde, Ricardo P J; Leiner, Tim; de Jong, Pim A

    2016-01-01

    The aim of the study was to determine the effects of dose reduction and iterative reconstruction (IR) on pulmonary nodule volumetry. In this prospective study, 25 patients scheduled for follow-up of pulmonary nodules were included. Computed tomography acquisitions were acquired at 4 dose levels with a median of 2.1, 1.2, 0.8, and 0.6 mSv. Data were reconstructed with filtered back projection (FBP), hybrid IR, and model-based IR. Volumetry was performed using semiautomatic software. At the highest dose level, more than 91% (34/37) of the nodules could be segmented, and at the lowest dose level, this was more than 83%. Thirty-three nodules were included for further analysis. Filtered back projection and hybrid IR did not lead to significant differences, whereas model-based IR resulted in lower volume measurements with a maximum difference of -11% compared with FBP at routine dose. Pulmonary nodule volumetry can be accurately performed at a submillisievert dose with both FBP and hybrid IR.

  11. Pragmatic fully 3D image reconstruction for the MiCES mouse imaging PET scanner

    International Nuclear Information System (INIS)

    Lee, Kisung; Kinahan, Paul E; Fessler, Jeffrey A; Miyaoka, Robert S; Janes, Marie; Lewellen, Tom K

    2004-01-01

    We present a pragmatic approach to image reconstruction for data from the micro crystal elements system (MiCES) fully 3D mouse imaging positron emission tomography (PET) scanner under construction at the University of Washington. Our approach is modelled on fully 3D image reconstruction used in clinical PET scanners, which is based on Fourier rebinning (FORE) followed by 2D iterative image reconstruction using ordered-subsets expectation-maximization (OSEM). The use of iterative methods allows modelling of physical effects (e.g., statistical noise, detector blurring, attenuation, etc), while FORE accelerates the reconstruction process by reducing the fully 3D data to a stacked set of independent 2D sinograms. Previous investigations have indicated that non-stationary detector point-spread response effects, which are typically ignored for clinical imaging, significantly impact image quality for the MiCES scanner geometry. To model the effect of non-stationary detector blurring (DB) in the FORE+OSEM(DB) algorithm, we have added a factorized system matrix to the ASPIRE reconstruction library. Initial results indicate that the proposed approach produces an improvement in resolution without an undue increase in noise and without a significant increase in the computational burden. The impact on task performance, however, remains to be evaluated.
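Once FORE has reduced the data to independent 2D sinograms, the OSEM step amounts to repeated subset-wise multiplicative updates. The following is a minimal sketch of a generic OSEM iteration on a toy system matrix; it is not the MiCES/ASPIRE implementation, and detector-blur modeling and FORE itself are omitted. All data are simulated.

```python
import numpy as np

# Minimal OSEM sketch: the system matrix A maps voxel activities to
# expected detector counts; each subset update multiplies the image by a
# backprojected ratio of measured to expected counts, normalized by the
# subset sensitivity. A, y, and the subset structure are toy assumptions.

rng = np.random.default_rng(0)
n_vox, n_det = 16, 32
A = rng.random((n_det, n_vox))             # toy system matrix
x_true = rng.random(n_vox) * 10
y = rng.poisson(A @ x_true).astype(float)  # noisy projection data

def osem(A, y, n_subsets=4, n_iter=10):
    subsets = np.array_split(np.arange(A.shape[0]), n_subsets)
    x = np.ones(A.shape[1])                # flat initial image
    for _ in range(n_iter):
        for s in subsets:
            As = A[s]
            expected = As @ x              # forward projection (subset)
            ratio = y[s] / np.maximum(expected, 1e-12)
            x *= (As.T @ ratio) / np.maximum(As.sum(axis=0), 1e-12)
    return x

x_hat = osem(A, y)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

The multiplicative form keeps the image non-negative automatically, which is one reason EM-family methods are preferred for count data.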

  12. Ipsilateral free semitendinosus tendon graft transfer for reconstruction of chronic tears of the Achilles tendon

    Directory of Open Access Journals (Sweden)

    Gougoulias Nikolaos

    2008-07-01

    Background: Many techniques have been developed for reconstruction of the Achilles tendon after chronic tears. In the presence of a large gap (greater than 6 centimetres), tendon augmentation is required. Methods: We present our method of minimally invasive semitendinosus reconstruction of the Achilles tendon using one para-midline and one midline incision. Results: The first incision is a 5 cm longitudinal incision made 2 cm proximal and just medial to the palpable end of the residual tendon. The second incision, also longitudinal, is 3 cm long and lies in the midline, 2 cm distal to the distal end of the tendon rupture. The distal and proximal Achilles tendon stumps are mobilised. If, after attempting to reduce the gap of the ruptured Achilles tendon, the gap remains greater than 6 cm despite maximal plantar flexion of the ankle and traction on the Achilles tendon stumps, the ipsilateral semitendinosus tendon is harvested. The semitendinosus tendon is passed through small incisions in the substance of the proximal stump of the Achilles tendon and sutured to it. It is then passed beneath the intact skin bridge into the distal incision, and passed from medial to lateral through a transverse tenotomy in the distal stump. With the ankle in maximal plantar flexion, the semitendinosus tendon is sutured to the Achilles tendon at each entry and exit point. Conclusion: This minimally invasive technique allows reconstruction of the Achilles tendon using the semitendinosus tendon while preserving skin integrity over the site most prone to wound breakdown, and can be used to reconstruct the Achilles tendon in the presence of a large gap (greater than 6 centimetres).

  13. Compton scatter and randoms corrections for origin ensembles 3D PET reconstructions

    Energy Technology Data Exchange (ETDEWEB)

    Sitek, Arkadiusz [Harvard Medical School, Boston, MA (United States). Dept. of Radiology; Brigham and Women' s Hospital, Boston, MA (United States); Kadrmas, Dan J. [Utah Univ., Salt Lake City, UT (United States). Utah Center for Advanced Imaging Research (UCAIR)

    2011-07-01

    In this work we develop a novel approach to correction for scatter and randoms in reconstruction of data acquired by 3D positron emission tomography (PET), applicable to tomographic reconstruction done by the origin ensemble (OE) approach. Statistical image reconstruction using OE is based on calculation of the expectations of the numbers of emitted events per voxel over the complete-data space. Since OE estimation is fundamentally different from regular statistical estimators, such as those based on maximum likelihood, the standard methods of implementing scatter and randoms corrections cannot be used. Based on prompt, scatter, and random rates, each detected event is graded in terms of its probability of being a true event. These grades are utilized by the Markov chain Monte Carlo (MCMC) algorithm used in the OE approach for calculation of the expectation, over the complete-data space, of the number of emitted events per voxel (the OE estimator). We show that the results obtained with OE are almost identical to results obtained by the maximum likelihood-expectation maximization (ML-EM) algorithm for reconstruction of experimental phantom data acquired using a Siemens Biograph mCT 3D PET/CT scanner. The developed correction removes artifacts due to scatter and randoms in the investigated 3D PET datasets. (orig.)

  14. Reconstruction of electrical impedance tomography (EIT) images based on the expectation maximum (EM) method.

    Science.gov (United States)

    Wang, Qi; Wang, Huaxiang; Cui, Ziqiang; Yang, Chengyi

    2012-11-01

    Electrical impedance tomography (EIT) calculates the internal conductivity distribution within a body from electrical contact measurements. Image reconstruction for EIT is an inverse problem that is both non-linear and ill-posed. Traditional regularization methods cannot avoid introducing negative values into the solution, and this negativity produces artifacts in reconstructed images in the presence of noise. A statistical method, namely the expectation maximization (EM) method, is used to solve the inverse problem for EIT in this paper. The mathematical model of EIT is transformed into a non-negatively constrained likelihood minimization problem, and the solution is obtained by the gradient projection-reduced Newton (GPRN) iteration method. This paper also discusses strategies for choosing parameters. Simulation and experimental results indicate that reconstructed images of higher quality can be obtained by the EM method, compared with the traditional Tikhonov and conjugate gradient (CG) methods, even with non-negative processing. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.

  15. Maximal Inequalities for Dependent Random Variables

    DEFF Research Database (Denmark)

    Hoffmann-Jorgensen, Jorgen

    2016-01-01

    Maximal inequalities play a crucial role in many probabilistic limit theorems; for instance, the law of large numbers, the law of the iterated logarithm, the martingale limit theorem and the central limit theorem. Let X_1, X_2, ... be random variables with partial sums S_k = X_1 + ... + X_k. Then a maximal inequality gives conditions ensuring that the maximal partial sum M_n = max(1 ≤ k ≤ n) S_k is controlled in terms of the final partial sum S_n.
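A classical example of such a bound (standard material, not quoted from this record) is Kolmogorov's maximal inequality: for independent summands X_i with mean zero and finite variances,

```latex
% Kolmogorov's maximal inequality: the tail of the maximal partial sum
% is controlled by the variance of the last partial sum S_n.
\[
  \Pr\!\Big(\max_{1 \le k \le n} |S_k| \ge \lambda\Big)
  \;\le\; \frac{\operatorname{Var}(S_n)}{\lambda^2},
  \qquad \lambda > 0 .
\]
```

Bounds of this shape are exactly what drives almost-sure convergence arguments such as the strong law of large numbers.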

  16. An ethical justification of profit maximization

    DEFF Research Database (Denmark)

    Koch, Carsten Allan

    2010-01-01

    In much of the literature on business ethics and corporate social responsibility, it is more or less taken for granted that attempts to maximize profits are inherently unethical. The purpose of this paper is to investigate whether an ethical argument can be given in support of profit maximizing behaviour. It is argued that some form of consequential ethics must be applied, and that both profit seeking and profit maximization can be defended from a rule-consequential point of view. It is noted, however, that the result does not apply unconditionally, but requires that certain forms of profit (and utility) maximizing actions are ruled out, e.g., by behavioural norms or formal institutions.

  17. Dose to level I and II axillary lymph nodes and lung by tangential field radiation in patients undergoing postmastectomy radiation with tissue expander reconstruction

    International Nuclear Information System (INIS)

    Russo, James K; Armeson, Kent E; Rhome, Ryan; Spanos, Michele; Harper, Jennifer L

    2011-01-01

    To define the dosimetric coverage of level I/II axillary volumes and the lung volume irradiated in postmastectomy radiotherapy (PMRT) following tissue expander placement. Twenty-three patients were identified who had undergone postmastectomy radiotherapy with tangent-only fields. All patients had pre-radiation tissue expander placement and expansion. Thirteen patients had bilateral expander reconstruction. The level I/II axillary volumes were contoured using the RTOG contouring atlas. The patient-specific variables of expander volume, superior-to-inferior location of expander, distance between expanders, expander angle and axillary volume were analyzed to determine their relationship to the axillary volume and lung volume dose. The mean coverage of the level I/II axillary volume by the 95% isodose line (VD95%) was 23.9% (range 0.3-65.4%). The mean ipsilateral lung VD50% was 8.8% (2.2-20.9). Ipsilateral and contralateral expander volume correlated with axillary VD95% in patients with bilateral reconstruction (p = 0.01 and 0.006, respectively) but not in those with ipsilateral-only reconstruction (p = 0.60). Ipsilateral lung VD50% correlated with the angle of the expander from midline (p = 0.05). In patients undergoing PMRT with tissue expanders, incidental doses delivered by tangents to the axilla, as defined by the RTOG contouring atlas, do not provide adequate coverage. The posterior-superior region of levels I and II is the region most commonly underdosed. Axillary volume coverage increased with increasing expander volumes in patients with bilateral reconstruction. Lung dose increased with increasing expander angle from midline. This information should be considered both when placing expanders and when designing PMRT tangent-only treatment plans, by contouring and targeting the axilla volume when axillary treatment is indicated.

  18. A Linear Dynamical Systems Approach to Streamflow Reconstruction Reveals History of Regime Shifts in Northern Thailand

    Science.gov (United States)

    Nguyen, Hung T. T.; Galelli, Stefano

    2018-03-01

    Catchment dynamics is not often modeled in streamflow reconstruction studies; yet, the streamflow generation process depends on both catchment state and climatic inputs. To explicitly account for this interaction, we contribute a linear dynamic model, in which streamflow is a function of both catchment state (i.e., wet/dry) and paleoclimatic proxies. The model is learned using a novel variant of the Expectation-Maximization algorithm, and it is used with a paleo drought record—the Monsoon Asia Drought Atlas—to reconstruct 406 years of streamflow for the Ping River (northern Thailand). Results for the instrumental period show that the dynamic model has higher accuracy than conventional linear regression; all performance scores improve by 45-497%. Furthermore, the reconstructed trajectory of the state variable provides valuable insights about the catchment history—e.g., regime-like behavior—thereby complementing the information contained in the reconstructed streamflow time series. The proposed technique can replace linear regression, since it only requires information on streamflow and climatic proxies (e.g., tree-rings, drought indices); furthermore, it is capable of readily generating stochastic streamflow replicates. With a marginal increase in computational requirements, the dynamic model brings more desirable features and value to streamflow reconstructions.
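As an illustration of the state-plus-input structure described above, the sketch below simulates a scalar linear-Gaussian catchment model (hidden wetness state driven by a climate-proxy input, streamflow as a noisy readout) and recovers the hidden state with a Kalman filter, the filtering step that sits inside EM for such models. All parameters and data are toy assumptions, not the Ping River model.

```python
import numpy as np

# Toy linear dynamical model: latent state x_t evolves with a proxy input,
# observed streamflow y_t is a noisy linear readout. The Kalman filter
# estimates the hidden state; every parameter below is a made-up value.

rng = np.random.default_rng(1)
T = 200
a, b, c = 0.9, 0.5, 1.2        # state transition, input, and output gains
q, r = 0.1, 0.2                # process and observation noise variances

proxy = rng.normal(size=T)     # stand-in for a drought-index input
x = np.zeros(T); y = np.zeros(T)
for t in range(1, T):
    x[t] = a * x[t-1] + b * proxy[t] + rng.normal(scale=np.sqrt(q))
    y[t] = c * x[t] + rng.normal(scale=np.sqrt(r))

def kalman_filter(y, proxy, a, b, c, q, r):
    m, p = 0.0, 1.0            # state mean and variance
    means = np.zeros(len(y))
    for t in range(1, len(y)):
        m_pred = a * m + b * proxy[t]          # predict
        p_pred = a * a * p + q
        k = p_pred * c / (c * c * p_pred + r)  # Kalman gain
        m = m_pred + k * (y[t] - c * m_pred)   # update with observation
        p = (1 - k * c) * p_pred
        means[t] = m
    return means

x_hat = kalman_filter(y, proxy, a, b, c, q, r)
print("state RMSE:", np.sqrt(np.mean((x_hat - x) ** 2)))
```

In the full method, an Expectation-Maximization loop would alternate this filtering/smoothing step with re-estimation of the model parameters from the smoothed states.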

  19. Enhanced 3D PET OSEM reconstruction using inter-update Metz filtering

    International Nuclear Information System (INIS)

    Jacobson, M.; Levkovitz, R.; Ben-Tal, A.; Thielemans, K.; Spinks, T.; Belluzzo, D.; Pagani, E.; Bettinardi, V.; Gilardi, M.C.; Zverovich, A.; Mitra, G.

    2000-01-01

    We present an enhancement of the OSEM (ordered set expectation maximization) algorithm for 3D PET reconstruction, which we call the inter-update Metz filtered OSEM (IMF-OSEM). The IMF-OSEM algorithm incorporates filtering action into the image updating process in order to improve the quality of the reconstruction. With this technique, the multiplicative correction image - ordinarily used to update image estimates in plain OSEM - is applied to a Metz-filtered version of the image estimate at certain intervals. In addition, we present a software implementation that employs several high-speed features to accelerate reconstruction. These features include, firstly, forward and back projection functions which make full use of symmetry as well as a fast incremental computation technique. Secondly, the software has the capability of running in parallel mode on several processors. The parallelization approach employed yields a significant speed-up, which is nearly independent of the amount of data. Together, these features lead to reasonable reconstruction times even when using large image arrays and non-axially compressed projection data. The performance of IMF-OSEM was tested on phantom data acquired on the GE Advance scanner. Our results demonstrate that an appropriate choice of Metz filter parameters can improve the contrast-noise balance of certain regions of interest relative to both plain and post-filtered OSEM, and to the GE commercial reprojection algorithm software. (author)

  20. Transformation of bipartite non-maximally entangled states into a ...

    Indian Academy of Sciences (India)

    We present two schemes for transforming bipartite non-maximally entangled states into a W state in cavity QED system, by using highly detuned interactions and the resonant interactions between two-level atoms and a single-mode cavity field. A tri-atom W state can be generated by adjusting the interaction times between ...

  1. Maximizing the model for Discounted Stream of Utility from ...

    African Journals Online (AJOL)

    Osagiede et al. (2009) considered an analytic model for maximizing a discounted stream of utility from consumption when the rate of production is linear. A solution was provided up to the level where methods of solving ordinary differential equations would be applied, but they left off there as a result of the mathematical complexity ...

  2. Optimization of hybrid iterative reconstruction level and evaluation of image quality and radiation dose for pediatric cardiac computed tomography angiography

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Lin; Liang, Changhong [Southern Medical University, Guangzhou (China); Guangdong Academy of Medical Sciences, Dept. of Radiology, Guangdong General Hospital, Guangzhou (China); Zhuang, Jian [Guangdong Academy of Medical Sciences, Dept. of Cardiac Surgery, Guangdong Cardiovascular Inst., Guangdong Provincial Key Lab. of South China Structural Heart Disease, Guangdong General Hospital, Guangzhou (China); Huang, Meiping [Guangdong Academy of Medical Sciences, Dept. of Radiology, Guangdong General Hospital, Guangzhou (China); Guangdong Academy of Medical Sciences, Dept. of Catheterization Lab, Guangdong Cardiovascular Inst., Guangdong Provincial Key Lab. of South China Structural Heart Disease, Guangdong General Hospital, Guangzhou (China); Liu, Hui [Guangdong Academy of Medical Sciences, Dept. of Radiology, Guangdong General Hospital, Guangzhou (China)

    2017-01-15

    Hybrid iterative reconstruction can reduce image noise and produce better image quality compared with filtered back-projection (FBP), but few reports describe optimization of the iteration level. We optimized the iteration level of iDose4 and evaluated image quality for pediatric cardiac CT angiography. Children (n = 160) with congenital heart disease were enrolled and divided into full-dose (n = 84) and half-dose (n = 76) groups. Four series were reconstructed using FBP and iDose4 levels 2, 4 and 6; we evaluated subjective quality of the series using a 5-grade scale and compared the series using a Kruskal-Wallis H test. For FBP and iDose4-optimal images, we compared contrast-to-noise ratios (CNR) and size-specific dose estimates (SSDE) using a Student's t-test. We also compared diagnostic accuracy of each group using a Kruskal-Wallis H test. Mean scores for iDose4 level 4 were the best in both dose groups (all P < 0.05). CNR was improved in both groups with iDose4 level 4 as compared with FBP. Mean decrease in SSDE was 53% in the half-dose group. Diagnostic accuracy for the four datasets was in the range 92.6-96.2% (no statistical difference). iDose4 level 4 was optimal for both the full- and half-dose groups. Protocols with iDose4 level 4 allowed a 53% reduction in SSDE without significantly affecting image quality and diagnostic accuracy. (orig.)

  3. Impact of PET/CT image reconstruction methods and liver uptake normalization strategies on quantitative image analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kuhnert, Georg; Sterzer, Sergej; Kahraman, Deniz; Dietlein, Markus; Drzezga, Alexander; Kobe, Carsten [University Hospital of Cologne, Department of Nuclear Medicine, Cologne (Germany); Boellaard, Ronald [VU University Medical Centre, Department of Radiology and Nuclear Medicine, Amsterdam (Netherlands); Scheffler, Matthias; Wolf, Juergen [University Hospital of Cologne, Lung Cancer Group Cologne, Department I of Internal Medicine, Center for Integrated Oncology Cologne Bonn, Cologne (Germany)

    2016-02-15

    In oncological imaging using PET/CT, the standardized uptake value has become the most common parameter used to measure tracer accumulation. The aim of this analysis was to evaluate ultra high definition (UHD) and ordered subset expectation maximization (OSEM) PET/CT reconstructions for their potential impact on quantification. We analyzed 40 PET/CT scans of lung cancer patients who had undergone PET/CT. Standardized uptake values corrected for body weight (SUV) and lean body mass (SUL) were determined in the single hottest lesion in the lung and normalized to the liver for UHD and OSEM reconstruction. Quantitative uptake values and their normalized ratios for the two reconstruction settings were compared using the Wilcoxon test. The distribution of quantitative uptake values and their ratios in relation to the reconstruction method used were demonstrated in the form of frequency distribution curves, box plots and scatter plots. The agreement between OSEM and UHD reconstructions was assessed through Bland-Altman analysis. A significant difference between OSEM and UHD reconstruction was observed for the SUV and SUL data tested (p < 0.0005 in all cases). The mean values of the ratios after OSEM and UHD reconstruction showed equally significant differences (p < 0.0005 in all cases). Bland-Altman analysis showed that SUV and SUL and their normalized values were, on average, up to 60 % higher after UHD reconstruction as compared to OSEM reconstruction. OSEM and UHD reconstruction produced significantly different SUV and SUL values, and the differences remained consistently high after normalization to the liver, indicating that standardization of reconstruction and the use of comparable SUV measurements are crucial when using PET/CT. (orig.)

  4. The Influence of Reconstruction Kernel on Bone Mineral and Strength Estimates Using Quantitative Computed Tomography and Finite Element Analysis.

    Science.gov (United States)

    Michalski, Andrew S; Edwards, W Brent; Boyd, Steven K

    2017-10-17

    Quantitative computed tomography has been posed as an alternative imaging modality to investigate osteoporosis. We examined the influence of computed tomography convolution back-projection reconstruction kernels on the analysis of bone quantity and estimated mechanical properties in the proximal femur. Eighteen computed tomography scans of the proximal femur were reconstructed using both a standard smoothing reconstruction kernel and a bone-sharpening reconstruction kernel. Following phantom-based density calibration, we calculated typical bone quantity outcomes of integral volumetric bone mineral density, bone volume, and bone mineral content. Additionally, we performed finite element analysis in a standard sideways fall on the hip loading configuration. Significant differences for all outcome measures, except integral bone volume, were observed between the 2 reconstruction kernels. Volumetric bone mineral density measured using images reconstructed by the standard kernel was significantly lower (by 6.7%) than that measured using the bone-sharpening kernel. Furthermore, the whole-bone stiffness and the failure load measured in images reconstructed by the standard kernel were significantly lower (by 16.5%) than those measured using the bone-sharpening kernel. These data suggest that for future quantitative computed tomography studies, a standardized reconstruction kernel will maximize reproducibility, independent of the use of a quantitative calibration phantom. Copyright © 2017 The International Society for Clinical Densitometry. Published by Elsevier Inc. All rights reserved.

  5. Oxidative stress and nitrite dynamics under maximal load in elite athletes: relation to sport type.

    Science.gov (United States)

    Cubrilo, Dejan; Djordjevic, Dusica; Zivkovic, Vladimir; Djuric, Dragan; Blagojevic, Dusko; Spasic, Mihajlo; Jakovljevic, Vladimir

    2011-09-01

    Maximal workload in elite athletes induces increased generation of reactive oxygen/nitrogen species (RONS) and oxidative stress, but the dynamics of RONS production are not fully explored. The aim of our study was to examine the effects of long-term engagement in sports with different energy requirements (aerobic, anaerobic, and aerobic/anaerobic) on oxidative stress parameters during a progressive exercise test. Concentrations of lactate, nitric oxide (NO) measured through its stable end product nitrite (NO2-), superoxide anion radical (O2•-), and thiobarbituric acid reactive substances (TBARS) as an index of lipid peroxidation were determined at rest, after maximal workload, and at the 4th and 10th min of recovery in blood plasma of top-level competitors in rowing, cycling, and taekwondo. Results showed that the sportsmen had similar concentrations of lactate and O2•- at rest. Nitrite concentrations at rest were lowest in the taekwondo fighters, while the rowers had the highest levels among the examined groups. The order of TBARS levels at rest was cycling > taekwondo > rowing. During exercise at maximal intensity, lactate concentrations rose significantly to similar levels in all tested sportsmen and remained elevated at 4 and 10 min of recovery. There were no significant changes in O2•-, nitrite, or TBARS levels, either at maximal exercise intensity or during the recovery period, compared with rest. Our results show that different long-term training strategies establish different basal nitrite and lipid peroxidation levels in sportsmen. However, progressive exercise influenced neither basal nitrite nor oxidative stress parameter levels, at maximal load or during the first 10 min of recovery, in the sportsmen studied.

  6. Fast image reconstruction for Compton camera using stochastic origin ensemble approach.

    Science.gov (United States)

    Andreyev, Andriy; Sitek, Arkadiusz; Celler, Anna

    2011-01-01

    The Compton camera has been proposed as a potential imaging tool in astronomy, industry, homeland security, and medical diagnostics. Due to the inherent geometrical complexity of Compton camera data, image reconstruction of distributed sources can be ineffective and/or time-consuming when using standard techniques such as filtered backprojection or maximum likelihood-expectation maximization (ML-EM). In this article, the authors demonstrate a fast reconstruction of Compton camera data using a novel stochastic origin ensembles (SOE) approach based on Markov chains. During image reconstruction, the origins of the measured events are randomly assigned to locations on conical surfaces, which are the Compton camera analogs of the lines of response in PET. The image is therefore defined as an ensemble of origin locations of all possible event origins. During the course of reconstruction, the origins of events are stochastically moved, and the acceptance of a new event origin is determined by a predefined acceptance probability, which is proportional to the change in event density. For example, if the event density at the new location is higher than at the previous location, the new position is always accepted. After several iterations, the reconstructed distribution of origins converges to a quasi-stationary state which can be voxelized and displayed. Comparison with list-mode ML-EM reveals that the postfiltered SOE algorithm has similar performance in terms of image quality while clearly outperforming ML-EM in reconstruction time. In this study, the authors have implemented and tested a new image reconstruction algorithm for the Compton camera based on stochastic origin ensembles with Markov chains. The algorithm uses list-mode data, is parallelizable, and can be used for any Compton camera geometry. The SOE algorithm clearly outperforms list-mode ML-EM for a simple Compton camera geometry in terms of reconstruction time. The difference in computational time
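The density-driven acceptance rule described above can be sketched as a Metropolis-style update. The toy below is our own simplification, not the authors' implementation: each event's conical surface is collapsed to a discrete list of candidate voxels, and event origins are moved with the stated density-proportional acceptance probability.

```python
import random

def soe_reconstruct(event_candidates, n_voxels, n_sweeps=200, seed=0):
    """Toy stochastic origin ensembles (SOE) sketch.

    Each event may have originated in any voxel of its candidate list
    (a stand-in for the conical surface of a Compton event). Origins
    are perturbed at random; a move is accepted with probability
    proportional to the change in event density, as in a Metropolis
    sampler, so moves toward denser regions are always accepted.
    """
    rng = random.Random(seed)
    # initial random assignment of each event origin
    origins = [rng.choice(c) for c in event_candidates]
    counts = [0] * n_voxels
    for v in origins:
        counts[v] += 1
    for _ in range(n_sweeps):
        for i, cands in enumerate(event_candidates):
            old = origins[i]
            new = rng.choice(cands)
            if new == old:
                continue
            # acceptance ratio: event density at proposed vs current voxel
            ratio = (counts[new] + 1) / counts[old]
            if ratio >= 1 or rng.random() < ratio:
                counts[old] -= 1
                counts[new] += 1
                origins[i] = new
    return counts  # voxelized ensemble of event origins
```

After the sweeps, `counts` is the quasi-stationary histogram of origins, i.e. the reconstructed image in this toy setting.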

  7. 3.5D dynamic PET image reconstruction incorporating kinetics-based clusters

    International Nuclear Information System (INIS)

    Lu Lijun; Chen Wufan; Karakatsanis, Nicolas A; Rahmim, Arman; Tang Jing

    2012-01-01

    Standard 3D dynamic positron emission tomography (PET) imaging consists of independent image reconstructions of individual frames followed by application of an appropriate kinetic model to the time-activity curves at the voxel or region-of-interest (ROI) level. The emerging field of 4D PET reconstruction, by contrast, seeks to move beyond this scheme and incorporate information from multiple frames within the image reconstruction task. Here we propose a novel reconstruction framework aiming to enhance the quantitative accuracy of parametric images via the introduction of priors based on voxel kinetics, generated by clustering preliminary reconstructed dynamic images to define clustered neighborhoods of voxels with similar kinetics. This is then followed by straightforward maximum a posteriori (MAP) 3D PET reconstruction applied to individual frames; as such, the method is labeled ‘3.5D’ image reconstruction. The use of cluster-based priors has the advantage of further enhancing quantitative performance in dynamic PET imaging, because: (a) there are typically more voxels in clusters than in conventional local neighborhoods, and (b) neighboring voxels with distinct kinetics are less likely to be clustered together. Using realistic simulated 11C-raclopride dynamic PET data, the quantitative performance of the proposed method was investigated. Parametric distribution-volume (DV) and DV ratio (DVR) images were estimated from dynamic image reconstructions using (a) maximum-likelihood expectation maximization (MLEM), and MAP reconstructions using (b) the quadratic prior (QP-MAP), (c) the Green prior (GP-MAP) and (d, e) two proposed cluster-based priors (CP-U-MAP and CP-W-MAP), followed by graphical modeling, and were qualitatively and quantitatively compared for 11 ROIs. Overall, the proposed dynamic PET reconstruction methodology resulted in substantial visual as well as quantitative accuracy improvements (in terms of noise versus bias performance) for parametric DV
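The clustering step that defines the kinetics-based neighborhoods can be illustrated with a plain k-means over voxel time-activity curves. This is a generic sketch; the paper does not prescribe this exact clustering routine, and the deterministic initialization is our assumption.

```python
def kmeans_tacs(tacs, k=2, n_iter=20):
    """Group voxel time-activity curves (TACs) into k kinetic classes.
    Voxels sharing a label would then share a cluster-prior neighborhood."""
    # deterministic spread of initial centers across the voxel list
    step = max(1, len(tacs) // k)
    centers = [list(tacs[i * step]) for i in range(k)]
    labels = [0] * len(tacs)
    for _ in range(n_iter):
        # assignment: nearest center in squared Euclidean distance
        for i, t in enumerate(tacs):
            labels[i] = min(range(k),
                            key=lambda c: sum((a - b) ** 2
                                              for a, b in zip(t, centers[c])))
        # update: each center becomes the mean TAC of its members
        for c in range(k):
            members = [tacs[i] for i in range(len(tacs)) if labels[i] == c]
            if members:
                centers[c] = [sum(v) / len(members) for v in zip(*members)]
    return labels
```

Voxels with the same label, wherever they sit in the image, would then contribute to each other's MAP prior, which is what gives the cluster prior its larger effective neighborhoods.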

  8. Preconditioned alternating projection algorithms for maximum a posteriori ECT reconstruction

    International Nuclear Information System (INIS)

    Krol, Andrzej; Li, Si; Shen, Lixin; Xu, Yuesheng

    2012-01-01

    We propose a preconditioned alternating projection algorithm (PAPA) for solving the maximum a posteriori (MAP) emission computed tomography (ECT) reconstruction problem. Specifically, we formulate the reconstruction problem as a constrained convex optimization problem with total variation (TV) regularization. We then characterize the solution of the constrained convex optimization problem and show that it satisfies a system of fixed-point equations defined in terms of two proximity operators arising from the convex functions that define the TV-norm and the constraint involved in the problem. This characterization of the solution via proximity operators that define two projection operators naturally leads to an alternating projection algorithm for finding the solution. For efficient numerical computation, we introduce into the alternating projection algorithm a preconditioning matrix (the EM-preconditioner) for the dense system matrix involved in the optimization problem. We theoretically prove the convergence of PAPA. In numerical experiments, the performance of our algorithm, with an appropriately selected preconditioning matrix, is compared with that of the conventional MAP expectation-maximization (MAP-EM) algorithm with TV regularizer (EM-TV) and that of the recently developed nested EM-TV algorithm for ECT reconstruction. Based on the numerical experiments performed in this work, we observe that the alternating projection algorithm with the EM-preconditioner significantly outperforms EM-TV in all aspects, including convergence speed, noise in the reconstructed images and image quality. It also outperforms the nested EM-TV in convergence speed while providing comparable image quality. (paper)

  9. Iterative reconstruction of transcriptional regulatory networks: an algorithmic approach.

    Directory of Open Access Journals (Sweden)

    Christian L Barrett

    2006-05-01

    The number of complete, publicly available genome sequences is now greater than 200, and this number is expected to grow rapidly in the near future as metagenomic and environmental sequencing efforts escalate and the cost of sequencing drops. In order to make use of these data for understanding particular organisms and for discerning general principles about how organisms function, it will be necessary to reconstruct their various biochemical reaction networks. Principal among these will be transcriptional regulatory networks. Given the physical and logical complexity of these networks, the various sources of (often noisy) data that can be utilized for their elucidation, the monetary costs involved, and the huge number of potential experiments (approximately 10^12) that can be performed, experiment design algorithms will be necessary for synthesizing the various computational and experimental data to maximize the efficiency of regulatory network reconstruction. This paper presents an algorithm for experimental design to systematically and efficiently reconstruct transcriptional regulatory networks. It is meant to be applied iteratively in conjunction with an experimental laboratory component. The algorithm is presented here in the context of reconstructing transcriptional regulation for metabolism in Escherichia coli, and, through a retrospective analysis with previously performed experiments, we show that the produced experiment designs conform to how a human would design experiments. The algorithm is able to utilize probability estimates based on a wide range of computational and experimental sources to suggest experiments with the highest potential of discovering the greatest amount of new regulatory knowledge.

  10. Design and Application of the Reconstruction Software for the BaBar Calorimeter

    International Nuclear Information System (INIS)

    Strother, Philip David; Imperial Coll., London

    2006-01-01

    The BaBar high energy physics experiment will be in operation at the PEP-II asymmetric e⁺e⁻ collider in Spring 1999. The primary purpose of the experiment is the investigation of CP violation in the neutral B meson system. The electromagnetic calorimeter forms a central part of the experiment, and new techniques are employed in the data acquisition and reconstruction software to maximize the capability of this device. The use of a matched digital filter for feature extraction in the front-end electronics is presented. The performance of the filter in the presence of the expected high levels of soft-photon background from the machine is evaluated. The high luminosity of the PEP-II machine and the demands on the precision of the calorimeter require reliable software that allows for increased physics capability. BaBar has selected C++ as its primary programming language and object-oriented analysis and design as its coding paradigm. The application of this technology to the reconstruction software for the calorimeter is presented. The design of the systems for clustering, cluster division, track matching, particle identification and global calibration is discussed, with emphasis on the provisions in the design for increased physics capability as levels of understanding of the detector increase. The CP-violating channel B⁰ → J/ψ K⁰_S has been studied in the two-lepton, two-π⁰ final state. The contribution of this channel to the evaluation of sin 2β, an angle of the unitarity triangle, is compared to that from the charged-pion final state. An error of 0.34 on this quantity is expected after 1 year of running at design luminosity
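The matched-filter idea, sliding a known pulse template across a noisy waveform and correlating, can be sketched generically. The actual BaBar filter coefficients and front-end electronics are, of course, not reproduced here; this is only the textbook operation.

```python
def matched_filter(signal, template):
    """Sliding correlation of a sampled waveform with a known pulse
    template. The output peaks where the waveform best matches the
    template, which is how a matched filter extracts pulse features
    from a noisy stream."""
    n, m = len(signal), len(template)
    return [sum(signal[i + j] * template[j] for j in range(m))
            for i in range(n - m + 1)]
```

For a pulse embedded at a known offset, the index of the maximum of the filter output recovers that offset.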

  11. Inclusive Fitness Maximization: An Axiomatic Approach

    OpenAIRE

    Okasha, Samir; Weymark, John; Bossert, Walter

    2014-01-01

    Kin selection theorists argue that evolution in social contexts will lead organisms to behave as if maximizing their inclusive, as opposed to personal, fitness. The inclusive fitness concept allows biologists to treat organisms as akin to rational agents seeking to maximize a utility function. Here we develop this idea and place it on a firm footing by employing a standard decision-theoretic methodology. We show how the principle of inclusive fitness maximization and a related principle of qu...

  12. Does mental exertion alter maximal muscle activation?

    Directory of Open Access Journals (Sweden)

    Vianney eRozand

    2014-09-01

    Mental exertion is known to impair endurance performance, but its effects on neuromuscular function remain unclear. The purpose of this study was to test the hypothesis that mental exertion reduces torque and muscle activation during intermittent maximal voluntary contractions of the knee extensors. Ten subjects performed, in a randomized order, three separate mental exertion conditions lasting 27 minutes each: (i) high mental exertion (incongruent Stroop task), (ii) moderate mental exertion (congruent Stroop task), and (iii) low mental exertion (watching a movie). In each condition, mental exertion was combined with ten intermittent maximal voluntary contractions of the knee extensor muscles (one maximal voluntary contraction every 3 minutes). Neuromuscular function was assessed using electrical nerve stimulation. Maximal voluntary torque, maximal muscle activation and other neuromuscular parameters were similar across mental exertion conditions and did not change over time. These findings suggest that mental exertion does not affect neuromuscular function during intermittent maximal voluntary contractions of the knee extensors.

  13. Skin sparing mastectomy: Technique and suggested methods of reconstruction

    International Nuclear Information System (INIS)

    Farahat, A.M.; Hashim, T.; Soliman, H.O.; Manie, T.M.; Soliman, O.M.

    2014-01-01

    To demonstrate the feasibility and accessibility of performing adequate mastectomy to extirpate the breast tissue, along with en-bloc formal axillary dissection performed from within the same incision. We also compared different methods of immediate breast reconstruction used to fill the skin envelope to achieve the best aesthetic results. Methods: 38 patients with breast cancer underwent skin-sparing mastectomy with formal axillary clearance, through a circum-areolar incision. Immediate breast reconstruction was performed using different techniques to fill in the skin envelope. Two reconstruction groups were assigned; group 1: autologous tissue transfer only (n = 24), and group 2: implant augmentation (n = 14). Autologous tissue transfer: the techniques used included filling in the skin envelope using an extended latissimus dorsi (LD) flap (18 patients) or a pedicled TRAM flap (6 patients). Augmentation with implants: subpectoral implants (4 patients), a rounded implant placed under the pectoralis major muscle to augment an LD-reconstructed breast; LD pocket (10 patients), an anatomical implant placed over the pectoralis major muscle within a pocket created by the LD flap. No contralateral procedure was performed in any of the cases to achieve symmetry. Results: All cases underwent adequate excision of the breast tissue along with en-bloc complete axillary clearance (when indicated), without the need for an additional axillary incision. Eighteen patients underwent reconstruction using extended LD flaps only, six had TRAM flaps, four had augmentation using implants placed below the pectoralis muscle along with LD flaps, and ten had implants placed within the LD pocket. Breast shape, volume and contour were successfully restored in all patients. An adequate degree of ptosis was achieved to ensure maximal symmetry. Conclusions: Skin-sparing mastectomy through a circum-areolar incision has proven to be a safe and feasible option for the management of breast cancer in Egyptian

  14. Optimization, evaluation, and comparison of standard algorithms for image reconstruction with the VIP-PET.

    Science.gov (United States)

    Mikhaylova, E; Kolstein, M; De Lorenzo, G; Chmeissani, M

    2014-07-01

    A novel positron emission tomography (PET) scanner design based on a room-temperature pixelated CdTe solid-state detector is being developed within the framework of the Voxel Imaging PET (VIP) Pathfinder project [1]. Simulation results show a great potential of the VIP to produce high-resolution images even in extremely challenging conditions such as the screening of a human head [2]. With an unprecedentedly high channel density (450 channels/cm³), image reconstruction is a challenge. Optimization is therefore needed to find the best algorithm in order to correctly exploit the promising detector potential. The following reconstruction algorithms are evaluated: 2-D filtered backprojection (FBP), ordered-subset expectation maximization (OSEM), list-mode OSEM (LM-OSEM), and the origin ensemble (OE) algorithm. The evaluation is based on the comparison of a true image phantom with a set of reconstructed images obtained by each algorithm. This is achieved by calculating image-quality merit parameters such as the bias, the variance and the mean square error (MSE). A systematic optimization of each algorithm is performed by varying the reconstruction parameters, such as the cutoff frequency of the noise filters and the number of iterations. A region-of-interest (ROI) analysis of the reconstructed phantom is also performed for each algorithm and the results are compared. Additionally, the performance of the image reconstruction methods is compared by calculating the modulation transfer function (MTF). The reconstruction time is also taken into account to choose the optimal algorithm. The analysis is based on GAMOS [3] simulations including the expected CdTe and electronics specifics.
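The merit figures used for such a comparison, bias, variance and MSE of a set of reconstructed replicates against the true phantom, can be computed as in the sketch below. The voxel-averaged definitions are our assumption; the paper does not spell out its exact aggregation.

```python
def image_quality(replicates, truth):
    """Voxel-averaged bias, variance and mean square error of a set of
    reconstructed images (flat voxel lists) against the known phantom."""
    n_rep, n_vox = len(replicates), len(truth)
    # ensemble-mean image over the replicate reconstructions
    mean_img = [sum(r[v] for r in replicates) / n_rep for v in range(n_vox)]
    bias = sum(mean_img[v] - truth[v] for v in range(n_vox)) / n_vox
    var = sum(sum((r[v] - mean_img[v]) ** 2 for r in replicates) / n_rep
              for v in range(n_vox)) / n_vox
    mse = sum(sum((r[v] - truth[v]) ** 2 for r in replicates) / n_rep
              for v in range(n_vox)) / n_vox
    return bias, var, mse
```

Sweeping a reconstruction parameter (iterations, filter cutoff) and re-evaluating these figures is the systematic optimization the abstract describes.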

  15. Bias in iterative reconstruction of low-statistics PET data: benefits of a resolution model

    Energy Technology Data Exchange (ETDEWEB)

    Walker, M D; Asselin, M-C; Julyan, P J; Feldmann, M; Matthews, J C [School of Cancer and Enabling Sciences, Wolfson Molecular Imaging Centre, MAHSC, University of Manchester, Manchester M20 3LJ (United Kingdom); Talbot, P S [Mental Health and Neurodegeneration Research Group, Wolfson Molecular Imaging Centre, MAHSC, University of Manchester, Manchester M20 3LJ (United Kingdom); Jones, T, E-mail: matthew.walker@manchester.ac.uk [Academic Department of Radiation Oncology, Christie Hospital, University of Manchester, Manchester M20 4BX (United Kingdom)

    2011-02-21

    Iterative image reconstruction methods such as ordered-subset expectation maximization (OSEM) are widely used in PET. Reconstructions via OSEM are, however, reported to be biased for low-count data. We investigated this and considered the impact for dynamic PET. Patient list-mode data were acquired in [¹¹C]DASB and [¹⁵O]H₂O scans on the HRRT brain PET scanner. These data were subsampled to create many independent, low-count replicates. The data were reconstructed and the images from low-count data were compared to the high-count originals (from the same reconstruction method). This comparison enabled low-statistics bias to be calculated for the given reconstruction, as a function of the noise-equivalent counts (NEC). Two iterative reconstruction methods were tested, one with and one without an image-based resolution model (RM). Significant bias was observed when reconstructing data of low statistical quality, for both subsampled human and simulated data. For human data, this bias was substantially reduced by including a RM. For [¹¹C]DASB the low-statistics bias in the caudate head at 1.7 M NEC (approx. 30 s) was -5.5% and -13% with and without RM, respectively. We predicted biases in the binding potential of -4% and -10%. For quantification of cerebral blood flow for the whole-brain grey or white matter, using [¹⁵O]H₂O and the PET autoradiographic method, a low-statistics bias of <2.5% and <4% was predicted for reconstruction with and without the RM. The use of a resolution model reduces low-statistics bias and can hence be beneficial for quantitative dynamic PET.

  16. Socioeconomic position and breast reconstruction in Danish women

    DEFF Research Database (Denmark)

    Hvilsom, Gitte B; Hölmich, Lisbet R; Frederiksen, Kirsten Skovsgaard

    2011-01-01

    Few studies have been conducted on the socioeconomic position of women undergoing breast reconstruction, and none have been conducted in the Danish population. We investigated the association between educational level and breast reconstruction in a nationwide cohort of Danish women with breast...

  17. Rapid maximum likelihood ancestral state reconstruction of continuous characters: A rerooting-free algorithm.

    Science.gov (United States)

    Goolsby, Eric W

    2017-04-01

    Ancestral state reconstruction is a method used to study the evolutionary trajectories of quantitative characters on phylogenies. Although efficient methods for univariate ancestral state reconstruction under a Brownian motion model have been described for at least 25 years, to date no generalization has been described to allow more complex evolutionary models, such as multivariate trait evolution, non-Brownian models, missing data, and within-species variation. Furthermore, even for simple univariate Brownian motion models, most phylogenetic comparative R packages compute ancestral states via inefficient tree rerooting and full tree traversals at each tree node, making ancestral state reconstruction extremely time-consuming for large phylogenies. Here, a computationally efficient method for fast maximum likelihood ancestral state reconstruction of continuous characters is described. The algorithm has linear complexity relative to the number of species and outperforms the fastest existing R implementations by several orders of magnitude. The described algorithm is capable of performing ancestral state reconstruction on a 1,000,000-species phylogeny in fewer than 2 s using a standard laptop, whereas the next fastest R implementation would take several days to complete. The method is generalizable to more complex evolutionary models, such as phylogenetic regression, within-species variation, non-Brownian evolutionary models, and multivariate trait evolution. Because this method enables fast repeated computations on phylogenies of virtually any size, implementation of the described algorithm can drastically alleviate the computational burden of many otherwise prohibitively time-consuming tasks requiring reconstruction of ancestral states, such as phylogenetic imputation of missing data, bootstrapping procedures, Expectation-Maximization algorithms, and Bayesian estimation. The described ancestral state reconstruction algorithm is implemented in the Rphylopars

  18. Studies for a common selection software environment in ATLAS from the Level-2 Trigger to the offline reconstruction

    CERN Document Server

    Wiedenmann, W; Baines, J T M; Bee, C P; Biglietti, M; Bogaerts, A; Boisvert, V; Bosman, M; Brandt, S; Caron, B; Casado, M P; Cataldi, G; Cavalli, D; Cervetto, M; Comune, G; Corso-Radu, A; Di Mattia, A; Díaz-Gómez, M; Dos Anjos, A; Drohan, J; Ellis, Nick; Elsing, M; Epp, B; Etienne, F; Falciano, S; Farilla, A; George, S; Ghete, V M; González, S; Grothe, M; Kaczmarska, A; Karr, K M; Khomich, A; Konstantinidis, N P; Krasny, W; Li, W; Lowe, A; Luminari, L; Meessen, C; Mello, A G; Merino, G; Morettini, P; Moyse, E; Nairz, A; Negri, A; Nikitin, N V; Nisati, A; Padilla, C; Parodi, F; Pérez-Réale, V; Pinfold, J L; Pinto, P; Polesello, G; Qian, Z; Resconi, S; Rosati, S; Scannicchio, D A; Schiavi, C; Schörner-Sadenius, T; Segura, E; De Seixas, J M; Shears, T G; Sivoklokov, S Yu; Smizanska, M; Soluk, R A; Stanescu, C; Tapprogge, Stefan; Touchard, F; Vercesi, V; Watson, A T; Wengler, T; Werner, P; Wheeler, S; Wickens, F J; Wielers, M; Zobernig, G; NSS-MIC 2003 - IEEE Nuclear Science Symposium and Medical Imaging Conference, Part 1

    2004-01-01

    The Atlas High Level Trigger's primary function of event selection will be accomplished with a Level-2 trigger farm and an Event Filter farm, both running software components developed in the Atlas offline reconstruction framework. While this approach provides a unified software framework for event selection, it poses strict requirements on offline components critical for the Level-2 trigger. A Level-2 decision in Atlas must typically be accomplished within 10 ms and with multiple events processed in concurrent threads. In order to address these constraints, prototypes have been developed that incorporate elements of the Atlas Data Flow, High Level Trigger, and offline framework software. To realize a homogeneous software environment for offline components in the High Level Trigger, the Level-2 Steering Controller was developed. With electron/gamma and muon selection slices it has been shown that the required performance can be reached, if the offline components used are carefully designed and optimized ...

  19. A 1500-year reconstruction of annual mean temperature for temperate North America on decadal-to-multidecadal time scales

    International Nuclear Information System (INIS)

    Trouet, V; Diaz, H F; Wahl, E R; Viau, A E; Graham, R; Graham, N; Cook, E R

    2013-01-01

    We present two reconstructions of annual average temperature over temperate North America: a tree-ring-based reconstruction at decadal resolution (1200–1980 CE) and a pollen-based reconstruction at 30-year resolution that extends back to 480 CE. We maximized reconstruction length by using long but low-resolution pollen records and applied a three-tier calibration scheme for this purpose. The tree-ring-based reconstruction was calibrated against instrumental annual average temperatures on annual and decadal scales; it was then reduced to a lower resolution and used as a calibration target for the pollen-based reconstruction. Before the late 19th to early 21st century, there are three prominent low-frequency periods in our extended reconstruction starting at 480 CE: the Dark Ages cool period (about 500–700 CE), the Little Ice Age (about 1200–1900 CE), and the warmer medieval climate anomaly (MCA; about 750–1100 CE). The 9th and the 11th centuries are the warmest centuries and constitute the core of the MCA in our reconstruction, a period characterized by centennial-scale aridity in the North American West. These two warm peaks are slightly warmer than the baseline period (1904–1980), but nevertheless much cooler than temperate North American temperatures during the early 21st century. (letter)

  20. The slack test does not assess maximal shortening velocity of muscle fascicle in human.

    Science.gov (United States)

    Hager, Robin; Dorel, Sylvain; Nordez, Antoine; Rabita, Giuseppe; Couturier, Antoine; Hauraix, Hugo; Duchateau, Jacques; Guilhem, Gaël

    2018-06-14

    The application of a series of extremely high-acceleration motor-driven quick releases while muscles contract isometrically (i.e. the slack test) has been proposed to assess unloaded velocity in human muscle. This study aimed to measure gastrocnemius medialis fascicle shortening velocity (V_F) and tendinous tissue shortening velocity during motor-driven quick releases performed at various activation levels, to assess the applicability of the slack test method in humans. Maximal fascicle shortening velocity and joint velocity recorded during quick releases and during fast contractions without external load (ballistic condition) were compared. Gastrocnemius medialis fascicle behaviour was investigated in 25 participants using high-frame-rate ultrasound during quick releases performed at various activation levels (from 0% to 60% of maximal voluntary isometric torque) and ballistic contractions. Unloaded joint velocity calculated using the slack test method increased, whereas V_F decreased, with muscle activation level (P≤0.03). Passive and low-level quick releases elicited higher V_F values (≥41.4±9.7 cm·s⁻¹) compared to the ballistic condition (36.3±8.7 cm·s⁻¹), while quick releases applied at 60% of maximal voluntary isometric torque produced the lowest V_F. These findings suggest that initial fascicle length, complex fascicle-tendon interactions, the unloading reflex and the motor-driven movement pattern strongly influence and limit the shortening velocity achieved during the slack test. Furthermore, V_F elicited by quick releases is likely to reflect substantial contributions of passive processes. Therefore, the slack test is not appropriate to assess maximal muscle shortening velocity in vivo. © 2018. Published by The Company of Biologists Ltd.

  1. Accelerated 3D-OSEM image reconstruction using a Beowulf PC cluster for pinhole SPECT

    International Nuclear Information System (INIS)

    Zeniya, Tsutomu; Watabe, Hiroshi; Sohlberg, Antti; Iida, Hidehiro

    2007-01-01

    A conventional pinhole single-photon emission computed tomography (SPECT) system with a single circular orbit has limitations associated with non-uniform spatial resolution or axial blurring. Recently, we demonstrated that three-dimensional (3D) images with uniform spatial resolution and no blurring can be obtained from complete data acquired using two circular orbits, combined with the 3D ordered-subsets expectation maximization (OSEM) reconstruction method. However, a long computation time is required to obtain the reconstructed image, because 3D-OSEM is an iterative method and two-orbit acquisition doubles the size of the projection data. To reduce the long reconstruction time, we parallelized the two-orbit pinhole 3D-OSEM reconstruction process using a Beowulf personal computer (PC) cluster. The Beowulf PC cluster consists of seven PCs connected by Gbit Ethernet switches. The message passing interface (MPI) protocol was utilized for parallelizing the reconstruction process. The projection data in a subset are distributed to each PC. The partial image forward- and back-projected in each PC is transferred to all PCs. The current image estimate on each PC is updated after summing the partial images. The performance of parallelization on the PC cluster was evaluated using two independent projection data sets acquired by a pinhole SPECT system with two different circular orbits. Parallelization using the PC cluster improved the reconstruction time with increasing number of PCs. The reconstruction time of 54 min on a single PC decreased to 10 min when six or seven PCs were used. The speed-up factor was 5.4. The reconstructed image from the PC cluster was virtually identical to that from the single PC. Parallelization of 3D-OSEM reconstruction for pinhole SPECT using a PC cluster can significantly reduce the computation time, while its implementation is simple and inexpensive. (author)
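The subset-by-subset multiplicative update that makes OSEM both fast and easy to distribute (each node holding part of the projection data) looks like this in a serial toy version; the MPI distribution and the pinhole system model are omitted, and the dense-matrix representation is purely illustrative.

```python
def osem(sino, sys_matrix, n_voxels, n_subsets=2, n_iter=10):
    """Ordered-subsets EM sketch. The projection rows are split into
    interleaved subsets and the multiplicative EM update is applied once
    per subset, so the image is refreshed n_subsets times per iteration."""
    img = [1.0] * n_voxels          # uniform initial estimate
    n_rows = len(sino)
    subsets = [list(range(s, n_rows, n_subsets)) for s in range(n_subsets)]
    for _ in range(n_iter):
        for sub in subsets:
            # forward-project the current estimate for this subset only
            ratios = {}
            for i in sub:
                fp = sum(sys_matrix[i][j] * img[j] for j in range(n_voxels))
                ratios[i] = sino[i] / fp if fp > 0 else 0.0
            # back-project measured/estimated ratios and update voxels
            for j in range(n_voxels):
                norm = sum(sys_matrix[i][j] for i in sub)
                if norm > 0:
                    bp = sum(sys_matrix[i][j] * ratios[i] for i in sub)
                    img[j] *= bp / norm
    return img
```

In the cluster setting described above, each PC would compute the forward- and back-projections for its share of the rows in a subset, and the partial back-projections would be summed across nodes before the multiplicative update.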

  2. Iterative reconstruction reduces abdominal CT dose

    International Nuclear Information System (INIS)

    Martinsen, Anne Catrine Trægde; Sæther, Hilde Kjernlie; Hol, Per Kristian; Olsen, Dag Rune; Skaane, Per

    2012-01-01

    Objective: In medical imaging, lowering the radiation dose from computed tomography scanning without reducing diagnostic performance is a desired achievement. Iterative image reconstruction may be one tool to achieve dose reduction. This study reports the diagnostic performance of a blending of 50% adaptive statistical iterative reconstruction (ASIR) and filtered back projection (FBP) compared to standard FBP image reconstruction at different dose levels for liver phantom examinations. Methods: An anthropomorphic liver phantom was scanned at 250, 185, 155, 140, 120 and 100 mAs on a 64-slice GE Lightspeed VCT scanner. All scans were reconstructed with ASIR and FBP. Four readers independently evaluated, on a 5-point scale, 21 images, each containing 32 test sectors. In total 672 areas were assessed. ROC analysis was used to evaluate the differences. Results: There was a difference in AUC between the 250 mAs FBP images and the 120 and 100 mAs FBP images. ASIR reconstruction gave a significantly higher diagnostic performance compared to standard reconstruction at 100 mAs. Conclusion: A blending of 50–90% ASIR and FBP may improve the image quality of low-dose CT examinations of the liver, and thus offers a potential for reducing radiation dose.
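The blending evaluated in the study is a per-voxel weighted average of the two reconstructions of the same scan; the function and parameter names below are ours.

```python
def blend_reconstruction(asir_img, fbp_img, asir_weight=0.5):
    """Per-voxel weighted blend of an iterative (ASIR) and a filtered
    back projection (FBP) reconstruction of the same scan. A weight of
    0.5 gives the 50/50 mix used in the study; 0.9 would give 90% ASIR."""
    if len(asir_img) != len(fbp_img):
        raise ValueError("images must have the same number of voxels")
    return [asir_weight * a + (1.0 - asir_weight) * f
            for a, f in zip(asir_img, fbp_img)]
```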

  3. On maximal surfaces in asymptotically flat space-times

    International Nuclear Information System (INIS)

    Bartnik, R.; Chrusciel, P.T.; O Murchadha, N.

    1990-01-01

    Existence of maximal and 'almost maximal' hypersurfaces in asymptotically flat space-times is established under boundary conditions weaker than those considered previously. We show in particular that every vacuum evolution of asymptotically flat data for the Einstein equations can be foliated by slices that are maximal outside a spatially compact set, and that every (strictly) stationary asymptotically flat space-time can be foliated by maximal hypersurfaces. Amongst other uniqueness results, we show that maximal hypersurfaces can be used to 'partially fix' an asymptotic Poincaré group. (orig.)

  4. Insulin resistance and maximal oxygen uptake

    DEFF Research Database (Denmark)

    Seibaek, Marie; Vestergaard, Henrik; Burchardt, Hans

    2003-01-01

    BACKGROUND: Type 2 diabetes, coronary atherosclerosis, and physical fitness all correlate with insulin resistance, but the relative importance of each component is unknown. HYPOTHESIS: This study was undertaken to determine the relationship between insulin resistance, maximal oxygen uptake, and the presence of either diabetes or ischemic heart disease. METHODS: The study population comprised 33 patients with and without diabetes and ischemic heart disease. Insulin resistance was measured by a hyperinsulinemic euglycemic clamp; maximal oxygen uptake was measured during a bicycle exercise test. RESULTS: There was a strong correlation between maximal oxygen uptake and insulin-stimulated glucose uptake (r = 0.7, p = 0.001), and maximal oxygen uptake was the only factor of importance for determining insulin sensitivity in a model which also included the presence of diabetes and ischemic heart disease. CONCLUSION...

  5. Preoperative estimation of run off in patients with multiple level arterial obstructions as a guide to partial reconstructive surgery

    DEFF Research Database (Denmark)

    Noer, Ivan; Tønnesen, K H; Sager, P

    1978-01-01

    Preoperative measurements of direct femoral artery systolic pressure, indirect ankle systolic pressure and direct brachial artery systolic pressure were carried out in nine patients with severe ischemia and arterial occlusions both proximal and distal to the inguinal ligament. The pressure rise at the ankle was estimated preoperatively by assuming that the ankle pressure would rise in proportion to the rise in femoral artery pressure. Thus it was predicted that reconstruction of the iliac obstruction, with aorto-femoral pressure gradients from 44 to 96 mm Hg, would result in a rise in ankle pressure of 16–54 mm Hg. The actual rise in ankle pressure one month after reconstruction of the iliac arteries ranged from 10 to 46 mm Hg and was well correlated with the preoperative estimates. In conclusion, with proper pressure measurements the run-off problem of multiple-level arterial occlusions can
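The proportionality assumption can be written out explicitly. The formula below is our reading of "rise in proportion to the rise in femoral artery pressure"; it is a hypothetical helper, not the authors' documented calculation.

```python
def predicted_ankle_rise(ankle_pressure, femoral_before, femoral_after):
    """Predicted rise in ankle systolic pressure after proximal (iliac)
    reconstruction, assuming the ankle pressure rises in the same
    proportion as the femoral artery pressure (all values in mm Hg)."""
    relative_rise = (femoral_after - femoral_before) / femoral_before
    return ankle_pressure * relative_rise
```

For example, a preoperative ankle pressure of 40 mm Hg with a femoral pressure expected to rise from 60 to 120 mm Hg would predict a 40 mm Hg rise at the ankle under this assumption.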

  6. Statistical inference approach to structural reconstruction of complex networks from binary time series

    Science.gov (United States)

    Ma, Chuang; Chen, Han-Shuang; Lai, Ying-Cheng; Zhang, Hai-Feng

    2018-02-01

    Complex networks hosting binary-state dynamics arise in a variety of contexts. In spite of previous works, to fully reconstruct the network structure from observed binary data remains challenging. We articulate a statistical inference based approach to this problem. In particular, exploiting the expectation-maximization (EM) algorithm, we develop a method to ascertain the neighbors of any node in the network based solely on binary data, thereby recovering the full topology of the network. A key ingredient of our method is the maximum-likelihood estimation of the probabilities associated with actual or nonexistent links, and we show that the EM algorithm can distinguish the two kinds of probability values without any ambiguity, insofar as the length of the available binary time series is reasonably long. Our method does not require any a priori knowledge of the detailed dynamical processes, is parameter-free, and is capable of accurate reconstruction even in the presence of noise. We demonstrate the method using combinations of distinct types of binary dynamical processes and network topologies, and provide a physical understanding of the underlying reconstruction mechanism. Our statistical inference based reconstruction method contributes an additional piece to the rapidly expanding "toolbox" of data based reverse engineering of complex networked systems.
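The key step described above — using EM to separate the likelihood values associated with actual links from those of nonexistent links — can be illustrated with a minimal sketch. Everything below is hypothetical: the "co-activation scores" stand in for the per-pair estimates in the paper, and a generic two-component 1-D Gaussian-mixture EM (not the authors' exact formulation) separates the two clusters without ambiguity when they are well resolved.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pairwise co-activation scores: true links cluster high,
# non-links cluster low (stand-in for the estimated link probabilities).
true_link = rng.random(200) < 0.3
scores = np.where(true_link,
                  rng.normal(0.8, 0.05, 200),
                  rng.normal(0.1, 0.05, 200))

def em_two_gaussians(x, n_iter=50):
    """EM for a 1-D two-component Gaussian mixture."""
    mu = np.array([x.min(), x.max()])          # spread-out initial means
    sigma = np.array([x.std(), x.std()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibility of each component for each point
        d = np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / sigma
        r = pi * d
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update mixture weights, means and widths
        nk = r.sum(axis=0)
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return pi, mu, sigma, r

pi, mu, sigma, resp = em_two_gaussians(scores)
is_link = resp[:, np.argmax(mu)] > 0.5         # high-mean component = "link"
accuracy = float(np.mean(is_link == true_link))
```

With well-separated clusters the EM classification is essentially unambiguous, mirroring the paper's observation that sufficiently long time series let the two probability values be distinguished cleanly.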

  7. Facade Reconstruction with Generalized 2.5d Grids

    Directory of Open Access Journals (Sweden)

    J. Demantke

    2013-10-01

    Full Text Available Reconstructing fine facade geometry from MMS lidar data remains a challenge: in addition to being inherently sparse, the point cloud provided by a single street point of view is necessarily incomplete. We propose a simple framework to estimate the facade surface with a deformable 2.5d grid. Computations are performed in a "sensor-oriented" coordinate system that maximizes consistency with the data. The algorithm allows the facade geometry to be retrieved without a priori knowledge, and can thus be applied automatically to large amounts of data despite the variability of the architectural forms encountered. The 2.5d image structure of the output makes it compatible with the storage and real-time constraints of immersive navigation.
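A minimal sketch of the 2.5d-grid idea (not the authors' deformable-grid algorithm): lidar points are binned into a regular (x, z) grid in a facade-aligned frame, and each cell keeps a single robust depth value. All names, dimensions and parameters below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical facade point cloud: x along the street, z up, y = depth
pts = rng.random((5000, 3)) * [10.0, 0.2, 6.0]   # columns: x, depth, z

def grid_25d(points, cell=0.5, nx=20, nz=12):
    """Collapse points onto a 2.5d grid: one depth value per (x, z) cell."""
    depth = np.full((nz, nx), np.nan)
    ix = np.clip((points[:, 0] / cell).astype(int), 0, nx - 1)
    iz = np.clip((points[:, 2] / cell).astype(int), 0, nz - 1)
    for r in range(nz):
        for c in range(nx):
            sel = (iz == r) & (ix == c)
            if sel.any():
                depth[r, c] = np.median(points[sel, 1])  # robust per-cell depth
    return depth

g = grid_25d(pts)
```

The median makes each cell robust to outliers; the resulting 2.5d image is exactly the kind of compact, raster-like output the abstract credits with meeting storage and real-time constraints.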

  8. POLITENESS MAXIM OF MAIN CHARACTER IN SECRET FORGIVEN

    Directory of Open Access Journals (Sweden)

    Sang Ayu Isnu Maharani

    2017-06-01

    Full Text Available The politeness maxim is an interesting subject to discuss, since politeness is instilled in us from childhood: we are obliged to be polite to everyone, in both speech and action. We often manage to show politeness in our spoken expressions even when our intentions are not so polite; for example, we must show appreciation of another's opinion even when we object to it. In this article the analysis of politeness is based on the six politeness maxims proposed by Leech. The discussion shows that the main characters (Kristen and Kami) use all types of maxim in their conversations, the most common being the approbation maxim and the agreement maxim.

  9. Prostate implant reconstruction from C-arm images with motion-compensated tomosynthesis

    International Nuclear Information System (INIS)

    Dehghan, Ehsan; Moradi, Mehdi; Wen, Xu; French, Danny; Lobo, Julio; Morris, W. James; Salcudean, Septimiu E.; Fichtinger, Gabor

    2011-01-01

    Purpose: Accurate localization of prostate implants from several C-arm images is necessary for ultrasound-fluoroscopy fusion and intraoperative dosimetry. The authors propose a computational motion compensation method for tomosynthesis-based reconstruction that enables 3D localization of prostate implants from C-arm images despite C-arm oscillation and sagging. Methods: Five C-arm images are captured by rotating the C-arm around its primary axis, while measuring its rotation angle using a protractor or the C-arm joint encoder. The C-arm images are processed to obtain binary seed-only images from which a volume of interest is reconstructed. The motion compensation algorithm iteratively compensates for 2D translational motion of the C-arm by maximizing the number of voxels that project on a seed projection in all of the images. This obviates the need for full C-arm pose tracking, traditionally implemented using radio-opaque fiducials or external trackers. The proposed reconstruction method is tested in simulations, in a phantom study and on ten patient data sets. Results: In a phantom implanted with 136 dummy seeds, the seed detection rate was 100% with a localization error of 0.86 ± 0.44 mm (Mean ± STD) compared to CT. For patient data sets, a detection rate of 99.5% was achieved in approximately 1 min per patient. The reconstruction results for patient data sets were compared against an available matching-based reconstruction method and showed a relative localization difference of 0.5 ± 0.4 mm. Conclusions: The motion compensation method can successfully compensate for large C-arm motion without using radio-opaque fiducials or external trackers. Considering the efficacy of the algorithm, its successful reconstruction rate and low computational burden, the algorithm is feasible for clinical use.
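The translational compensation step can be illustrated with a toy 2D version: an exhaustive search over integer shifts that maximizes the overlap between a predicted binary seed image and the observed one. This is only a sketch of the principle, not the authors' 3D voxel-projection implementation; the image size, seed count and search window are invented.

```python
import numpy as np

rng = np.random.default_rng(2)
img = np.zeros((64, 64), dtype=bool)
seeds = rng.integers(5, 59, size=(30, 2))
img[seeds[:, 0], seeds[:, 1]] = True          # binary seed-only image

true_shift = (3, -2)                          # unknown C-arm translation
observed = np.roll(img, true_shift, axis=(0, 1))

def best_shift(pred, obs, search=5):
    """Exhaustive 2-D translation search maximizing seed overlap."""
    best, arg = -1, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            score = int(np.sum(np.roll(pred, (dy, dx), axis=(0, 1)) & obs))
            if score > best:
                best, arg = score, (dy, dx)
    return arg, best

shift, score = best_shift(img, observed)
```

The recovered shift matches the simulated C-arm motion; in the paper the analogous maximization runs over all five projections simultaneously, which is what removes the need for external trackers.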

  10. Prostate implant reconstruction from C-arm images with motion-compensated tomosynthesis

    Energy Technology Data Exchange (ETDEWEB)

    Dehghan, Ehsan; Moradi, Mehdi; Wen, Xu; French, Danny; Lobo, Julio; Morris, W. James; Salcudean, Septimiu E.; Fichtinger, Gabor [School of Computing, Queen's University, Kingston, Ontario K7L-3N6 (Canada); Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, British Columbia V6T-1Z4 (Canada); Vancouver Cancer Centre, Vancouver, British Columbia V5Z-1E6 (Canada); Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, British Columbia V6T-1Z4 (Canada); School of Computing, Queen's University, Kingston, Ontario K7L-3N6 (Canada)

    2011-10-15

    Purpose: Accurate localization of prostate implants from several C-arm images is necessary for ultrasound-fluoroscopy fusion and intraoperative dosimetry. The authors propose a computational motion compensation method for tomosynthesis-based reconstruction that enables 3D localization of prostate implants from C-arm images despite C-arm oscillation and sagging. Methods: Five C-arm images are captured by rotating the C-arm around its primary axis, while measuring its rotation angle using a protractor or the C-arm joint encoder. The C-arm images are processed to obtain binary seed-only images from which a volume of interest is reconstructed. The motion compensation algorithm iteratively compensates for 2D translational motion of the C-arm by maximizing the number of voxels that project on a seed projection in all of the images. This obviates the need for full C-arm pose tracking, traditionally implemented using radio-opaque fiducials or external trackers. The proposed reconstruction method is tested in simulations, in a phantom study and on ten patient data sets. Results: In a phantom implanted with 136 dummy seeds, the seed detection rate was 100% with a localization error of 0.86 ± 0.44 mm (Mean ± STD) compared to CT. For patient data sets, a detection rate of 99.5% was achieved in approximately 1 min per patient. The reconstruction results for patient data sets were compared against an available matching-based reconstruction method and showed a relative localization difference of 0.5 ± 0.4 mm. Conclusions: The motion compensation method can successfully compensate for large C-arm motion without using radio-opaque fiducials or external trackers. Considering the efficacy of the algorithm, its successful reconstruction rate and low computational burden, the algorithm is feasible for clinical use.

  11. Calsequestrin content and SERCA determine normal and maximal Ca2+ storage levels in sarcoplasmic reticulum of fast- and slow-twitch fibres of rat.

    Science.gov (United States)

    Murphy, Robyn M; Larkins, Noni T; Mollica, Janelle P; Beard, Nicole A; Lamb, Graham D

    2009-01-15

    Whilst calsequestrin (CSQ) is widely recognized as the primary Ca2+ buffer in the sarcoplasmic reticulum (SR) in skeletal muscle fibres, its total buffering capacity and importance have come into question. This study quantified the absolute amount of CSQ isoform 1 (CSQ1, the primary isoform) present in rat extensor digitorum longus (EDL) and soleus fibres, and related this to their endogenous and maximal SR Ca2+ content. Using Western blotting, the entire constituents of minute samples of muscle homogenates or segments of individual muscle fibres were compared with known amounts of purified CSQ1. The fidelity of the analysis was proven by examining the relative signal intensity when mixing muscle samples and purified CSQ1. The CSQ1 contents of EDL fibres, almost exclusively type II fibres, and soleus type I fibres [SOL (I)] were, respectively, 36 +/- 2 and 10 +/- 1 micromol (l fibre volume)(-1), quantitatively accounting for the maximal SR Ca2+ content of each. Soleus type II [SOL (II)] fibres (approximately 20% of soleus fibres) had an intermediate amount of CSQ1. Every SOL (I) fibre examined also contained some CSQ isoform 2 (CSQ2), which was absent in every EDL and other type II fibre except for trace amounts in one case. Every EDL and other type II fibre had a high density of SERCA1, the fast-twitch muscle sarco(endo)plasmic reticulum Ca2+-ATPase isoform, whereas there was virtually no SERCA1 in any SOL (I) fibre. Maximal SR Ca2+ content measured in skinned fibres increased with CSQ1 content, and the ratio of endogenous to maximal Ca2+ content was inversely correlated with CSQ1 content. The relative SR Ca2+ content that could be maintained in resting cytoplasmic conditions was found to be much lower in EDL fibres than in SOL (I) fibres (approximately 20 versus >60%). Leakage of Ca2+ from the SR in EDL fibres could be substantially reduced with a SR Ca2+ pump blocker and increased by adding creatine to buffer cytoplasmic [ADP] at a higher level, both results

  12. The importance of ship log data: reconstructing North Atlantic, European and Mediterranean sea level pressure fields back to 1750

    Energy Technology Data Exchange (ETDEWEB)

    Kuettel, M.; Wanner, H. [University of Bern, Oeschger Centre for Climate Change Research (OCCR), and Institute of Geography, Climatology and Meteorology, Bern (Switzerland); Xoplaki, E. [University of Bern, Oeschger Centre for Climate Change Research (OCCR), and Institute of Geography, Climatology and Meteorology, Bern (Switzerland); EEWRC, The Cyprus Institute, Nicosia (Cyprus); Gallego, D. [Universidad Pablo de Olavide de Sevilla, Departamento de Sistemas Fisicos, Quimicos y Naturales, Sevilla (Spain); Luterbacher, J. [University of Bern, Oeschger Centre for Climate Change Research (OCCR), and Institute of Geography, Climatology and Meteorology, Bern (Switzerland); Justus-Liebig University of Giessen, Department of Geography, Climatology, Climate Dynamics and Climate Change, Giessen (Germany); Garcia-Herrera, R. [Universidad Complutense de Madrid, Departamento de Fisica de la Tierra II, Facultad de CC Fisicas, Madrid (Spain); Allan, R. [Met Office Hadley Centre, Exeter (United Kingdom); Barriendos, M. [University of Barcelona, Department of Modern History, Barcelona (Spain); Jones, P.D. [University of East Anglia, Climatic Research Unit, School of Environmental Sciences, Norwich (United Kingdom); Wheeler, D. [University of Sunderland, Faculty of Applied Sciences, Sunderland (United Kingdom)

    2010-06-15

    Local to regional climate anomalies are to a large extent determined by the state of the atmospheric circulation. The knowledge of large-scale sea level pressure (SLP) variations in former times is therefore crucial when addressing past climate changes across Europe and the Mediterranean. However, currently available SLP reconstructions lack data from the ocean, particularly in the pre-1850 period. Here we present a new statistically-derived 5 x 5 resolved gridded seasonal SLP dataset covering the eastern North Atlantic, Europe and the Mediterranean area (40 W-50 E; 20 N-70 N) back to 1750 using terrestrial instrumental pressure series and marine wind information from ship logbooks. For the period 1750-1850, the new SLP reconstruction provides a more accurate representation of the strength of the winter westerlies as well as the location and variability of the Azores High than currently available multiproxy pressure field reconstructions. These findings strongly support the potential of ship logbooks as an important source to determine past circulation variations especially for the pre-1850 period. This new dataset can be further used for dynamical studies relating large-scale atmospheric circulation to temperature and precipitation variability over the Mediterranean and Eurasia, for the comparison with outputs from GCMs as well as for detection and attribution studies. (orig.)

  13. Software Architecture Reconstruction Method, a Survey

    OpenAIRE

    Zainab Nayyar; Nazish Rafique

    2014-01-01

    Architecture reconstruction is a reverse engineering process in which we move from code back to the architecture level. Software architectures are the blueprints of projects, depicting the external overview of the software system. Maintenance and testing often cause the software to deviate from its original architecture, because sometimes, to enhance the functionality of a system, the software departs from its documented specifications; some new modules a...

  14. Natural maximal νμ-ντ mixing

    International Nuclear Information System (INIS)

    Wetterich, C.

    1999-01-01

    The naturalness of maximal mixing between muon and tau neutrinos is investigated. A spontaneously broken nonabelian generation symmetry can explain a small parameter which governs the deviation from maximal mixing. In many cases all three neutrino masses are almost degenerate. Maximal νμ-ντ mixing suggests that the leading contribution to the light neutrino masses arises from the expectation value of a heavy weak triplet rather than from the seesaw mechanism. In this scenario the deviation from maximal mixing is predicted to be less than about 1%. (Copyright (c) 1999 Elsevier Science B.V., Amsterdam. All rights reserved.)

  15. Optimal Energy Management for a Smart Grid using Resource-Aware Utility Maximization

    Science.gov (United States)

    Abegaz, Brook W.; Mahajan, Satish M.; Negeri, Ebisa O.

    2016-06-01

    Heterogeneous energy prosumers are aggregated to form a smart grid based energy community managed by a central controller which could maximize their collective energy resource utilization. Using the central controller and distributed energy management systems, various mechanisms that harness the power profile of the energy community are developed for optimal, multi-objective energy management. The proposed mechanisms include resource-aware, multi-variable energy utility maximization objectives, namely: (1) maximizing the net green energy utilization, (2) maximizing the prosumers' level of comfortable, high quality power usage, and (3) maximizing the economic dispatch of energy storage units that minimize the net energy cost of the energy community. Moreover, an optimal energy management solution that combines the three objectives has been implemented by developing novel techniques of optimally flexible (un)certainty projection and appliance based pricing decomposition in an IBM ILOG CPLEX studio. A real-world, per-minute data from an energy community consisting of forty prosumers in Amsterdam, Netherlands is used. Results show that each of the proposed mechanisms yields significant increases in the aggregate energy resource utilization and welfare of prosumers as compared to traditional peak-power reduction methods. Furthermore, the multi-objective, resource-aware utility maximization approach leads to an optimal energy equilibrium and provides a sustainable energy management solution as verified by the Lagrangian method. The proposed resource-aware mechanisms could directly benefit emerging energy communities in the world to attain their energy resource utilization targets.
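As a toy illustration of the economic-dispatch objective (3), the sketch below greedily schedules a storage unit to charge in the cheapest hours and discharge in the priciest ones. It ignores efficiency losses, state-of-charge ordering constraints and the paper's CPLEX formulation; all prices and ratings are invented.

```python
import numpy as np

# Hypothetical hourly prices and a storage unit dispatched greedily:
# buy energy in the cheapest hours, sell it back in the priciest ones.
prices = np.array([30, 20, 15, 25, 60, 80, 70, 40.0])   # $/MWh
capacity_mwh, power_mw = 2.0, 1.0

order = np.argsort(prices)
n_slots = int(capacity_mwh / power_mw)                   # hours needed to cycle
charge_hours = order[:n_slots]                           # 2 cheapest hours
discharge_hours = order[::-1][:n_slots]                  # 2 priciest hours

dispatch = np.zeros_like(prices)
dispatch[charge_hours] = -power_mw      # negative = buying (charging)
dispatch[discharge_hours] = power_mw    # positive = selling (discharging)

profit = float(np.sum(dispatch * prices))   # arbitrage value of the storage
```

Even this greedy heuristic shows why storage lowers the community's net energy cost; the paper's multi-objective optimization additionally trades this off against green-energy utilization and power quality.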

  16. Gaussian maximally multipartite-entangled states

    Science.gov (United States)

    Facchi, Paolo; Florio, Giuseppe; Lupo, Cosmo; Mancini, Stefano; Pascazio, Saverio

    2009-12-01

    We study maximally multipartite-entangled states in the context of Gaussian continuous variable quantum systems. By considering multimode Gaussian states with constrained energy, we show that perfect maximally multipartite-entangled states, which exhibit the maximum amount of bipartite entanglement for all bipartitions, only exist for systems containing n=2 or 3 modes. We further numerically investigate the structure of these states and their frustration for n ≤ 7.

  17. Gaussian maximally multipartite-entangled states

    International Nuclear Information System (INIS)

    Facchi, Paolo; Florio, Giuseppe; Pascazio, Saverio; Lupo, Cosmo; Mancini, Stefano

    2009-01-01

    We study maximally multipartite-entangled states in the context of Gaussian continuous variable quantum systems. By considering multimode Gaussian states with constrained energy, we show that perfect maximally multipartite-entangled states, which exhibit the maximum amount of bipartite entanglement for all bipartitions, only exist for systems containing n=2 or 3 modes. We further numerically investigate the structure of these states and their frustration for n≤7.

  18. Utility maximization and mode of payment

    NARCIS (Netherlands)

    Koning, R.H.; Ridder, G.; Heijmans, R.D.H.; Pollock, D.S.G.; Satorra, A.

    2000-01-01

    The implications of stochastic utility maximization in a model of choice of payment are examined. Three types of compatibility with utility maximization are distinguished: global compatibility, local compatibility on an interval, and local compatibility on a finite set of points. Keywords:

  19. Optimization of the dose level for a given treatment plan to maximize the complication-free tumor cure

    International Nuclear Information System (INIS)

    Lind, B.K.; Mavroidis, P.; Hyoedynmaa, S.; Kappas, C.

    1999-01-01

    During the past decade, tumor and normal tissue reactions after radiotherapy have been increasingly quantified in radiobiological terms. For this purpose, response models describing the dependence of tumor and normal tissue reactions on the irradiated volume, heterogeneity of the delivered dose distribution and cell sensitivity variations can be taken into account. The probability of achieving a good treatment outcome can be increased by using an objective function such as P+, the probability of complication-free tumor control. A new procedure is presented, which quantifies P+ from the dose delivery on 2D surfaces and 3D volumes and helps the user of any treatment planning system (TPS) to select the best beam orientations, the best beam modalities and the most suitable beam energies. The final step of selecting the prescribed dose level is made by a renormalization of the entire dose plan until the value of P+ is maximized. The index P+ makes use of clinically established dose-response parameters, for tumors and normal tissues of interest, in order to improve its clinical relevance. The results, using P+, are compared against the assessments of experienced medical physicists and radiation oncologists for two clinical cases. It is observed that when the absorbed dose level for a given treatment plan is increased, the treatment outcome first improves rapidly. As the dose approaches the tolerance of normal tissues the complication-free curve begins to drop. The optimal dose level is often just below this point and it depends on the geometry of each patient and target volume. Furthermore, a more conformal dose delivery to the target results in a higher control rate for the same complication level. This effect can be quantified by the increased value of the P+ parameter. (orig.)
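The dose-renormalization step can be sketched numerically. Assuming Poisson-model sigmoidal dose-response curves and the simple definition P+ = P_B − P_I (tumor control probability minus complication probability), scanning a dose scale factor reproduces the behaviour described above: outcome first improves, then drops as normal-tissue tolerance is approached, with the optimum just below that point. All response parameters below are invented, not clinical values.

```python
import numpy as np

def sigmoid_response(d, d50, gamma):
    """Poisson-model dose-response curve: P(D) = 2^(-exp[e*gamma*(1 - D/D50)])."""
    return 0.5 ** np.exp(np.e * gamma * (1.0 - d / d50))

doses = np.linspace(0.5, 1.5, 201)        # renormalization factors of the plan

# Hypothetical parameters: tumour responds at a lower dose (d50=50 Gy)
# than the dose-limiting normal tissue (d50=75 Gy), for a 60 Gy prescription.
pb = sigmoid_response(doses * 60, d50=50, gamma=2.0)   # tumour control
pi = sigmoid_response(doses * 60, d50=75, gamma=3.0)   # complications

p_plus = pb - pi                          # complication-free tumour control
best = float(doses[np.argmax(p_plus)])    # optimal renormalization factor
```

With these numbers the optimum lands slightly above the nominal prescription, just before the complication curve starts to climb, which is exactly the qualitative shape the abstract reports.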

  20. Incorporating HYPR de-noising within iterative PET reconstruction (HYPR-OSEM)

    Science.gov (United States)

    Cheng, Ju-Chieh (Kevin); Matthews, Julian; Sossi, Vesna; Anton-Rodriguez, Jose; Salomon, André; Boellaard, Ronald

    2017-08-01

    HighlY constrained back-PRojection (HYPR) is a post-processing de-noising technique originally developed for time-resolved magnetic resonance imaging. It has been recently applied to dynamic imaging for positron emission tomography and shown promising results. In this work, we have developed an iterative reconstruction algorithm (HYPR-OSEM) which improves the signal-to-noise ratio (SNR) in static imaging (i.e. single frame reconstruction) by incorporating HYPR de-noising directly within the ordered subsets expectation maximization (OSEM) algorithm. The proposed HYPR operator in this work operates on the target image(s) from each subset of OSEM and uses the sum of the preceding subset images as the composite which is updated every iteration. Three strategies were used to apply the HYPR operator in OSEM: (i) within the image space modeling component of the system matrix in forward-projection only, (ii) within the image space modeling component in both forward-projection and back-projection, and (iii) on the image estimate after the OSEM update for each subset thus generating three forms: (i) HYPR-F-OSEM, (ii) HYPR-FB-OSEM, and (iii) HYPR-AU-OSEM. Resolution and contrast phantom simulations with various sizes of hot and cold regions as well as experimental phantom and patient data were used to evaluate the performance of the three forms of HYPR-OSEM, and the results were compared to OSEM with and without a post reconstruction filter. It was observed that the convergence in contrast recovery coefficients (CRC) obtained from all forms of HYPR-OSEM was slower than that obtained from OSEM. Nevertheless, HYPR-OSEM improved SNR without degrading accuracy in terms of resolution and contrast. It achieved better accuracy in CRC at equivalent noise level and better precision than OSEM and better accuracy than filtered OSEM in general. 
In addition, HYPR-AU-OSEM has been determined to be the more effective form of HYPR-OSEM in terms of accuracy and precision based on the studies
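The HYPR operator itself is compact: the composite image is modulated by the ratio of low-pass-filtered target and composite images. The sketch below uses a simple box filter and synthetic images to show the de-noising principle; it is not the specific HYPR-F/FB/AU-OSEM variants above, and all image parameters are invented.

```python
import numpy as np

def box_filter(img, k=5):
    """Separable box low-pass filter (stand-in for the HYPR filter F)."""
    out = img.astype(float)
    kernel = np.ones(k) / k
    for axis in (0, 1):
        out = np.apply_along_axis(
            lambda v: np.convolve(v, kernel, mode='same'), axis, out)
    return out

def hypr(noisy, composite, k=5, eps=1e-9):
    """HYPR de-noising: I = I_C * F(I) / F(I_C)."""
    return composite * box_filter(noisy, k) / (box_filter(composite, k) + eps)

rng = np.random.default_rng(3)
truth = np.ones((64, 64))
truth[20:40, 20:40] = 10.0                               # hot region
composite = truth + rng.normal(0, 0.2, truth.shape)      # low-noise composite
noisy = truth + rng.normal(0, 2.0, truth.shape)          # noisy target frame

denoised = hypr(noisy, composite)
```

The ratio of low-pass images carries the target's quantitative information while the composite supplies the low-noise spatial detail, which is why SNR improves without the resolution loss of plain post-filtering.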

  1. Non-perturbative renormalization in coordinate space for Nf=2 maximally twisted mass fermions with tree-level Symanzik improved gauge action

    International Nuclear Information System (INIS)

    Cichy, Krzysztof; Adam Mickiewicz Univ., Poznan; Jansen, Karl; Korcyl, Piotr; Jagiellonian Univ., Krakow

    2012-07-01

    We present results of a lattice QCD application of a coordinate space renormalization scheme for the extraction of renormalization constants for flavour non-singlet bilinear quark operators. The method consists in the analysis of the small-distance behaviour of correlation functions in Euclidean space and has several theoretical and practical advantages, in particular: it is gauge invariant, easy to implement and has relatively low computational cost. The values of renormalization constants in the X-space scheme can be converted to the MS scheme via 4-loop continuum perturbative formulae. Our results for Nf=2 maximally twisted mass fermions with tree-level Symanzik improved gauge action are compared to the ones from the RI-MOM scheme and show full agreement with this method. (orig.)

  2. Mastectomy Skin Necrosis After Breast Reconstruction: A Comparative Analysis Between Autologous Reconstruction and Implant-Based Reconstruction.

    Science.gov (United States)

    Sue, Gloria R; Lee, Gordon K

    2018-05-01

    Mastectomy skin necrosis is a significant problem after breast reconstruction. We sought to perform a comparative analysis on this complication between patients undergoing autologous breast reconstruction and patients undergoing 2-stage expander implant breast reconstruction. A retrospective review was performed on consecutive patients undergoing autologous breast reconstruction or 2-stage expander implant breast reconstruction by the senior author from 2006 through 2015. Patient demographic factors including age, body mass index, history of diabetes, history of smoking, and history of radiation to the breast were collected. Our primary outcome measure was mastectomy skin necrosis. Fisher exact test was used for statistical analysis between the 2 patient cohorts. The treatment patterns of mastectomy skin necrosis were then analyzed. We identified 204 patients who underwent autologous breast reconstruction and 293 patients who underwent 2-stage expander implant breast reconstruction. Patients undergoing autologous breast reconstruction were older, heavier, more likely to have diabetes, and more likely to have had prior radiation to the breast compared with patients undergoing implant-based reconstruction. The incidence of mastectomy skin necrosis was 30.4% of patients in the autologous group compared with only 10.6% of patients in the tissue expander group (P care in the autologous group, only 3.2% were treated with local wound care in the tissue expander group (P skin necrosis is significantly more likely to occur after autologous breast reconstruction compared with 2-stage expander implant-based breast reconstruction. Patients with autologous reconstructions are more readily treated with local wound care compared with patients with tissue expanders, who tended to require operative treatment of this complication. Patients considering breast reconstruction should be counseled appropriately regarding the differences in incidence and management of mastectomy skin

  3. Revision allograft reconstruction of the lateral collateral ligament complex in elbows with previous failed reconstruction and persistent posterolateral rotatory instability.

    Science.gov (United States)

    Baghdadi, Yaser M K; Morrey, Bernard F; O'Driscoll, Shawn W; Steinmann, Scott P; Sanchez-Sotelo, Joaquin

    2014-07-01

    six elbows were rated with a good or excellent result. All patients with persistent instability had some degree of preoperative bone loss. Revision allograft reconstruction of the LCLC is an option for treating recurrent PLRI, although this is a complex and resistant problem, and nearly ½ of the patients in this cohort either had persistent instability and/or had a fair or poor elbow score. Level IV, therapeutic study. See Instructions for Authors for a complete description of levels of evidence.

  4. ACTS: from ATLAS software towards a common track reconstruction software

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00349786; The ATLAS collaboration; Salzburger, Andreas; Kiehn, Moritz; Hrdinka, Julia; Calace, Noemi

    2017-01-01

    Reconstruction of charged particles' trajectories is a crucial task for most particle physics experiments. The high instantaneous luminosity achieved at the LHC leads to a high number of proton-proton collisions per bunch crossing, which has put the track reconstruction software of the LHC experiments through a thorough test. Preserving track reconstruction performance under increasingly difficult experimental conditions, while keeping the usage of computational resources at a reasonable level, is an inherent problem for many HEP experiments. Exploiting concurrent algorithms and using multivariate techniques for track identification are the primary strategies to achieve that goal. Starting from current ATLAS software, the ACTS project aims to encapsulate track reconstruction software into a generic, framework- and experiment-independent software package. It provides a set of high-level algorithms and data structures for performing track reconstruction tasks as well as fast track simulation. The software is de...

  5. Isotope specific resolution recovery image reconstruction in high resolution PET imaging

    Energy Technology Data Exchange (ETDEWEB)

    Kotasidis, Fotis A. [Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva, Switzerland and Wolfson Molecular Imaging Centre, MAHSC, University of Manchester, M20 3LJ, Manchester (United Kingdom); Angelis, Georgios I. [Faculty of Health Sciences, Brain and Mind Research Institute, University of Sydney, NSW 2006, Sydney (Australia); Anton-Rodriguez, Jose; Matthews, Julian C. [Wolfson Molecular Imaging Centre, MAHSC, University of Manchester, Manchester M20 3LJ (United Kingdom); Reader, Andrew J. [Montreal Neurological Institute, McGill University, Montreal QC H3A 2B4, Canada and Department of Biomedical Engineering, Division of Imaging Sciences and Biomedical Engineering, King's College London, St. Thomas’ Hospital, London SE1 7EH (United Kingdom); Zaidi, Habib [Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva (Switzerland); Geneva Neuroscience Centre, Geneva University, CH-1205 Geneva (Switzerland); Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, PO Box 30 001, Groningen 9700 RB (Netherlands)

    2014-05-15

    Purpose: Measuring and incorporating a scanner-specific point spread function (PSF) within image reconstruction has been shown to improve spatial resolution in PET. However, due to the short half-life of clinically used isotopes, other long-lived isotopes not used in clinical practice are used to perform the PSF measurements. As such, non-optimal PSF models that do not correspond to those needed for the data to be reconstructed are used within resolution modeling (RM) image reconstruction, usually underestimating the true PSF owing to the difference in positron range. In high resolution brain and preclinical imaging, this effect is of particular importance since the PSFs become more positron range limited and isotope-specific PSFs can help maximize the performance benefit from using resolution recovery image reconstruction algorithms. Methods: In this work, the authors used a printing technique to simultaneously measure multiple point sources on the High Resolution Research Tomograph (HRRT), and the authors demonstrated the feasibility of deriving isotope-dependent system matrices from fluorine-18 and carbon-11 point sources. Furthermore, the authors evaluated the impact of incorporating them within RM image reconstruction, using carbon-11 phantom and clinical datasets on the HRRT. Results: The results obtained using these two isotopes illustrate that even small differences in positron range can result in different PSF maps, leading to further improvements in contrast recovery when used in image reconstruction. The difference is more pronounced in the centre of the field-of-view where the full width at half maximum (FWHM) from the positron range has a larger contribution to the overall FWHM compared to the edge where the parallax error dominates the overall FWHM. 
Conclusions: Based on the proposed methodology, measured isotope-specific and spatially variant PSFs can be reliably derived and used for improved spatial resolution and variance performance in resolution
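The benefit of an isotope-matched PSF can be sketched in 1D: MLEM with resolution modeling recovers a sharp object better when the modeled Gaussian PSF matches the one that generated the data than when it underestimates the positron range. The sigma values below are invented stand-ins, not measured HRRT kernels, and the image-space blur is a simplification of the full system matrix.

```python
import numpy as np

def gauss_blur(x, sigma):
    """Image-space PSF model: 1-D Gaussian convolution."""
    r = np.arange(-10, 11)
    k = np.exp(-0.5 * (r / sigma) ** 2)
    k /= k.sum()
    return np.convolve(x, k, mode='same')

def mlem_rm(y, sigma, n_iter=300):
    """1-D MLEM with resolution modeling: forward model = Gaussian PSF blur."""
    x = np.ones_like(y)
    sens = gauss_blur(np.ones_like(y), sigma)      # sensitivity image
    for _ in range(n_iter):
        proj = gauss_blur(x, sigma)                # forward-project estimate
        ratio = y / np.maximum(proj, 1e-12)
        x *= gauss_blur(ratio, sigma) / sens       # back-project and update
    return x

truth = np.zeros(64)
truth[28:36] = 100.0                 # sharp hot object
sigma_f18, sigma_c11 = 1.5, 2.0      # hypothetical positron-range-dependent PSFs
y = gauss_blur(truth, sigma_c11)     # noiseless data generated with the C-11 PSF

x_match = mlem_rm(y, sigma_c11)      # matched (isotope-specific) PSF
x_wrong = mlem_rm(y, sigma_f18)      # mismatched PSF underestimates the blur

err_match = float(np.linalg.norm(x_match - truth))
err_wrong = float(np.linalg.norm(x_wrong - truth))
```

The mismatched model can only deconvolve part of the true blur, leaving a residual that the matched, isotope-specific PSF removes, mirroring the contrast-recovery improvement reported above.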

  6. Isotope specific resolution recovery image reconstruction in high resolution PET imaging

    International Nuclear Information System (INIS)

    Kotasidis, Fotis A.; Angelis, Georgios I.; Anton-Rodriguez, Jose; Matthews, Julian C.; Reader, Andrew J.; Zaidi, Habib

    2014-01-01

    Purpose: Measuring and incorporating a scanner-specific point spread function (PSF) within image reconstruction has been shown to improve spatial resolution in PET. However, due to the short half-life of clinically used isotopes, other long-lived isotopes not used in clinical practice are used to perform the PSF measurements. As such, non-optimal PSF models that do not correspond to those needed for the data to be reconstructed are used within resolution modeling (RM) image reconstruction, usually underestimating the true PSF owing to the difference in positron range. In high resolution brain and preclinical imaging, this effect is of particular importance since the PSFs become more positron range limited and isotope-specific PSFs can help maximize the performance benefit from using resolution recovery image reconstruction algorithms. Methods: In this work, the authors used a printing technique to simultaneously measure multiple point sources on the High Resolution Research Tomograph (HRRT), and the authors demonstrated the feasibility of deriving isotope-dependent system matrices from fluorine-18 and carbon-11 point sources. Furthermore, the authors evaluated the impact of incorporating them within RM image reconstruction, using carbon-11 phantom and clinical datasets on the HRRT. Results: The results obtained using these two isotopes illustrate that even small differences in positron range can result in different PSF maps, leading to further improvements in contrast recovery when used in image reconstruction. The difference is more pronounced in the centre of the field-of-view where the full width at half maximum (FWHM) from the positron range has a larger contribution to the overall FWHM compared to the edge where the parallax error dominates the overall FWHM. 
Conclusions: Based on the proposed methodology, measured isotope-specific and spatially variant PSFs can be reliably derived and used for improved spatial resolution and variance performance in resolution recovery image reconstruction.

  7. Isotope specific resolution recovery image reconstruction in high resolution PET imaging.

    Science.gov (United States)

    Kotasidis, Fotis A; Angelis, Georgios I; Anton-Rodriguez, Jose; Matthews, Julian C; Reader, Andrew J; Zaidi, Habib

    2014-05-01

Measuring and incorporating a scanner-specific point spread function (PSF) within image reconstruction has been shown to improve spatial resolution in PET. However, due to the short half-life of clinically used isotopes, other long-lived isotopes not used in clinical practice are used to perform the PSF measurements. As such, non-optimal PSF models that do not correspond to those needed for the data to be reconstructed are used within resolution modeling (RM) image reconstruction, usually underestimating the true PSF owing to the difference in positron range. In high resolution brain and preclinical imaging, this effect is of particular importance since the PSFs become more positron range limited and isotope-specific PSFs can help maximize the performance benefit from using resolution recovery image reconstruction algorithms. In this work, the authors used a printing technique to simultaneously measure multiple point sources on the High Resolution Research Tomograph (HRRT), and the authors demonstrated the feasibility of deriving isotope-dependent system matrices from fluorine-18 and carbon-11 point sources. Furthermore, the authors evaluated the impact of incorporating them within RM image reconstruction, using carbon-11 phantom and clinical datasets on the HRRT. The results obtained using these two isotopes illustrate that even small differences in positron range can result in different PSF maps, leading to further improvements in contrast recovery when used in image reconstruction. The difference is more pronounced in the centre of the field-of-view where the full width at half maximum (FWHM) from the positron range has a larger contribution to the overall FWHM compared to the edge where the parallax error dominates the overall FWHM. Based on the proposed methodology, measured isotope-specific and spatially variant PSFs can be reliably derived and used for improved spatial resolution and variance performance in resolution recovery image reconstruction.
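The resolution model described in this record can be illustrated with a toy forward model: resolution-modeling reconstruction convolves the image with a PSF whose width depends, among other factors, on the isotope's positron range, so a longer-range isotope yields a wider kernel and a lower point-source peak. A minimal 1-D sketch (the FWHM values below are illustrative assumptions, not measured HRRT figures):

```python
import math

def gaussian_kernel(fwhm_mm, pixel_mm=1.0, radius=6):
    """Discrete, normalized Gaussian PSF for a given FWHM (illustrative model)."""
    sigma = fwhm_mm / (2.0 * math.sqrt(2.0 * math.log(2.0))) / pixel_mm
    k = [math.exp(-0.5 * (i / sigma) ** 2) for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def blur(image, kernel):
    """1-D convolution: the PSF applied within the forward model (zero-padded)."""
    r = len(kernel) // 2
    out = []
    for i in range(len(image)):
        acc = 0.0
        for j, kv in enumerate(kernel):
            idx = i + j - r
            if 0 <= idx < len(image):
                acc += kv * image[idx]
        out.append(acc)
    return out

# Illustrative isotope-dependent PSFs: a wider (longer positron range) kernel
# spreads a point source more, lowering its peak.
point = [0.0] * 21
point[10] = 1.0
short_range = blur(point, gaussian_kernel(2.5))  # hypothetical F-18-like FWHM
long_range = blur(point, gaussian_kernel(4.0))   # hypothetical C-11-like FWHM
```

Using the isotope-matched kernel inside the reconstruction's forward projector is what lets the algorithm deconvolve the correct amount of blur.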

  8. Activity versus outcome maximization in time management.

    Science.gov (United States)

    Malkoc, Selin A; Tonietto, Gabriela N

    2018-04-30

Feeling time-pressed has become ubiquitous. Time management strategies have emerged to help individuals fit in more of their desired and necessary activities. We provide a review of these strategies. In doing so, we distinguish between two, often competing, motives people have in managing their time: activity maximization and outcome maximization. The emerging literature points to an important dilemma: a given strategy that maximizes the number of activities might be detrimental to outcome maximization. We discuss factors that might hinder performance in work tasks and enjoyment in leisure tasks. Finally, we provide theoretically grounded recommendations that can help balance these two important goals in time management. Published by Elsevier Ltd.

  9. Restoration of the analytically reconstructed OpenPET images by the method of convex projections

    Energy Technology Data Exchange (ETDEWEB)

    Tashima, Hideaki; Murayama, Hideo; Yamaya, Taiga [National Institute of Radiological Sciences, Chiba (Japan); Katsunuma, Takayuki; Suga, Mikio [Chiba Univ. (Japan). Graduate School of Engineering; Kinouchi, Shoko [National Institute of Radiological Sciences, Chiba (Japan); Chiba Univ. (Japan). Graduate School of Engineering; Obi, Takashi [Tokyo Institute of Technology (Japan). Interdisciplinary Graduate School of Science and Engineering; Kudo, Hiroyuki [Tsukuba Univ. (Japan). Graduate School of Systems and Information Engineering

    2011-07-01

We have proposed the OpenPET geometry, which has gaps between detector rings and a physically open field-of-view. Image reconstruction for the OpenPET is an incomplete problem because the geometry does not satisfy Orlov's condition. Even so, simulation and experimental studies have shown that iterative methods such as the maximum likelihood expectation maximization (ML-EM) algorithm successfully reconstruct images in the gap area. However, the imaging process of the iterative methods in OpenPET imaging is not well understood. Therefore, the aim of this study is to analyze OpenPET imaging analytically and to estimate the implicit constraints involved in the iterative methods. To apply explicit constraints in OpenPET imaging, we used the method of convex projections to restore images reconstructed analytically, in which low-frequency components are lost. Numerical simulations showed that similar restoration effects are involved in both ML-EM and the method of convex projections. Therefore, the iterative methods have the advantageous effect of restoring lost frequency components in OpenPET imaging. (orig.)
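The ML-EM algorithm referenced above admits a compact statement: each iteration forward-projects the current image estimate, compares it with the measured sinogram, and applies a multiplicative, back-projected correction normalized by the detector sensitivity. A minimal dense-matrix sketch (an illustration of the generic algorithm, not the OpenPET implementation):

```python
def mlem(A, y, n_iter=50):
    """ML-EM for y ~ Poisson(A @ x).

    A: system matrix as a list of rows (non-negative), y: measured counts.
    Returns a non-negative maximum-likelihood estimate of x.
    """
    n_bins, n_pix = len(A), len(A[0])
    # sensitivity image: column sums of the system matrix
    sens = [sum(A[i][j] for i in range(n_bins)) for j in range(n_pix)]
    x = [1.0] * n_pix  # uniform, strictly positive initial image
    for _ in range(n_iter):
        # forward-project the current estimate
        proj = [sum(A[i][j] * x[j] for j in range(n_pix)) for i in range(n_bins)]
        # back-project the measured/estimated ratio; multiplicative update
        for j in range(n_pix):
            ratio = sum(A[i][j] * y[i] / proj[i]
                        for i in range(n_bins) if proj[i] > 0)
            x[j] *= ratio / sens[j]
    return x
```

For consistent, noise-free data the iterates converge to the exact image; with gaps in the system matrix (as in OpenPET), the same update implicitly fills in unmeasured regions, which is the behaviour the record above analyzes.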

  10. Electron Reconstruction in the CMS Electromagnetic Calorimeter

    CERN Document Server

    Meschi, Emilio; Seez, Christopher; Vikas, Pratibha

    2001-01-01

    This note describes the reconstruction of electrons using the electromagnetic calorimeter (ECAL) alone. This represents the first step in the High Level Trigger reconstruction and selection chain. By making "super-clusters" (i.e. clusters of clusters) much of the energy radiated by bremsstrahlung in the tracker material can be recovered. Representative performance figures for energy and position resolution in the barrel are given.

  11. LOAD THAT MAXIMIZES POWER OUTPUT IN COUNTERMOVEMENT JUMP

    Directory of Open Access Journals (Sweden)

    Pedro Jimenez-Reyes

    2016-02-01

Full Text Available ABSTRACT Introduction: One of the main problems faced by strength and conditioning coaches is how to objectively quantify and monitor the actual training load undertaken by athletes in order to maximize performance. It is well known that performance in explosive sports activities is largely determined by mechanical power. Objective: This study analysed the height at which maximal power output is generated, and the corresponding load with which it is achieved, in a group of trained male track and field athletes in the countermovement jump (CMJ) test with extra loads (CMJEL). Methods: Fifty national-level male athletes in sprinting and jumping performed a CMJ test with increasing loads down to a jump height of 16 cm. The relative load that maximized the mechanical power output (Pmax) was determined using a force platform synchronized with a linear encoder, estimating power by peak power, average power and flight time in the CMJ. Results: The load that maximized power output corresponded to a jump height of 19.9 ± 2.35 cm, representing 99.1 ± 1% of the maximum power output. In all cases, the load that maximizes power output was the load with which an athlete jumps a height of approximately 20 cm. Conclusion: These results highlight the importance of considering the height achieved in the CMJ with extra load rather than power itself, because maximum power is always attained at the same height. We advise preferential use of the height achieved in the CMJEL test, since it appears to be a valid indicator of an individual's actual neuromuscular potential, providing useful information for coaches and trainers when assessing the performance status of athletes and when quantifying and monitoring training loads, measuring only the jump height in the CMJEL exercise.

  12. Reconstruction of signals with unknown spectra in information field theory with parameter uncertainty

    International Nuclear Information System (INIS)

    Ensslin, Torsten A.; Frommert, Mona

    2011-01-01

The optimal reconstruction of cosmic metric perturbations and other signals requires knowledge of their power spectra and other parameters. If these are not known a priori, they have to be measured simultaneously from the same data used for the signal reconstruction. We formulate the general problem of signal inference in the presence of unknown parameters within the framework of information field theory. To solve this, we develop a generic parameter-uncertainty renormalized estimation (PURE) technique. As a concrete application, we address the problem of reconstructing Gaussian signals with unknown power spectrum with five different approaches: (i) separate maximum-a-posteriori power-spectrum measurement and subsequent reconstruction, (ii) maximum-a-posteriori reconstruction with marginalized power spectrum, (iii) maximizing the joint posterior of signal and spectrum, (iv) guessing the spectrum from the variance in the Wiener-filter map, and (v) renormalization flow analysis of the field-theoretical problem providing the PURE filter. In all cases, the reconstruction can be described or approximated as Wiener-filter operations with assumed signal spectra derived from the data according to the same recipe, but with differing coefficients. All of these filters, except the renormalized one, exhibit a perception threshold in the case of a Jeffreys prior for the unknown spectrum. Data modes with variance below this threshold do not affect the signal reconstruction at all. Filter (iv) seems to be similar to the so-called Karhunen-Loève and Feldman-Kaiser-Peacock estimators for galaxy power spectra used in cosmology, which therefore should also exhibit a marginal perception threshold if correctly implemented. We present statistical performance tests and show that the PURE filter is superior to the others, especially if the post-Wiener-filter corrections are included or in case an additional scale-independent spectral smoothness prior can be adopted.
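The Wiener-filter operations that all five approaches reduce to can be sketched per data mode: for data d = s + n with signal power S and noise power N in a given mode, the reconstruction shrinks that mode by S/(S+N); approach (iv) replaces the unknown S by a guess from the data variance. A schematic illustration (a toy per-mode version, not the PURE implementation):

```python
def wiener_filter(data, signal_power, noise_power):
    """Per-mode Wiener filter: m_k = S_k / (S_k + N_k) * d_k for d = s + n."""
    return [S / (S + N) * d
            for d, S, N in zip(data, signal_power, noise_power)]

def spectrum_from_variance(data, noise_power):
    """Approach (iv) in spirit: guess the signal power from the data variance,
    subtracting the known noise power (floored at zero)."""
    var = sum(d * d for d in data) / len(data)
    return [max(var - N, 0.0) for N in noise_power]

# High signal-to-noise modes pass through almost unchanged; noise-dominated
# modes are shrunk toward zero -- the "perception threshold" behaviour.
d = [2.0, -1.0, 0.5]
clean = wiener_filter(d, [100.0] * 3, [0.01] * 3)
suppressed = wiener_filter(d, [0.01] * 3, [100.0] * 3)
```

The different methods in the record correspond to different recipes for deriving `signal_power` from the same data, which changes the coefficients but not the filter's form.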

  13. Maximizing Entropy over Markov Processes

    DEFF Research Database (Denmark)

    Biondi, Fabrizio; Legay, Axel; Nielsen, Bo Friis

    2013-01-01

The channel capacity of a deterministic system with confidential data is an upper bound on the amount of bits of data an attacker can learn from the system. We encode all possible attacks to a system using a probabilistic specification, an Interval Markov Chain. Then the channel capacity...... as a reward function, a polynomial algorithm to verify the existence of a system maximizing entropy among those respecting a specification, a procedure for the maximization of reward functions over Interval Markov Chains and its application to synthesize an implementation maximizing entropy. We show how...... to use Interval Markov Chains to model abstractions of deterministic systems with confidential data, and use the above results to compute their channel capacity. These results are a foundation for ongoing work on computing channel capacity for abstractions of programs derived from code....

  14. Maximizing entropy over Markov processes

    DEFF Research Database (Denmark)

    Biondi, Fabrizio; Legay, Axel; Nielsen, Bo Friis

    2014-01-01

    The channel capacity of a deterministic system with confidential data is an upper bound on the amount of bits of data an attacker can learn from the system. We encode all possible attacks to a system using a probabilistic specification, an Interval Markov Chain. Then the channel capacity...... as a reward function, a polynomial algorithm to verify the existence of a system maximizing entropy among those respecting a specification, a procedure for the maximization of reward functions over Interval Markov Chains and its application to synthesize an implementation maximizing entropy. We show how...... to use Interval Markov Chains to model abstractions of deterministic systems with confidential data, and use the above results to compute their channel capacity. These results are a foundation for ongoing work on computing channel capacity for abstractions of programs derived from code. © 2014 Elsevier...
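The quantity maximized in the two records above is the entropy of a Markov chain. For a fixed transition matrix it is the entropy rate: each state's transition entropy weighted by the stationary distribution. A small sketch of that base quantity (the papers go further and maximize it over all implementations of an Interval Markov Chain specification):

```python
import math

def stationary(P, n_iter=200):
    """Approximate stationary distribution by power iteration from uniform
    (assumes the chain converges from this start; adequate for a sketch)."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(n_iter):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

def entropy_rate(P):
    """Entropy rate in bits: H = sum_i pi_i * sum_j -P_ij * log2(P_ij)."""
    pi = stationary(P)
    return sum(pi[i] * sum(-p * math.log2(p) for p in P[i] if p > 0)
               for i in range(len(P)))
```

A fully random walk on two states attains the maximal rate of 1 bit per step, while a deterministic alternation leaks nothing; the synthesis procedure in the papers searches the interval constraints for the transition probabilities that maximize exactly this function.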

  15. HEALTH INSURANCE: CONTRIBUTIONS AND REIMBURSEMENT MAXIMAL

    CERN Document Server

    HR Division

    2000-01-01

Affected by both the salary adjustment index on 1.1.2000 and the evolution of the staff members and fellows population, the average reference salary, which is used as an index for fixed contributions and reimbursement maximal, has changed significantly. An adjustment of the amounts of the reimbursement maximal and the fixed contributions is therefore necessary, as from 1 January 2000. Reimbursement maximal: The revised reimbursement maximal will appear on the leaflet summarising the benefits for the year 2000, which will soon be available from the divisional secretariats and from the AUSTRIA office at CERN. Fixed contributions: The fixed contributions, applicable to some categories of voluntarily insured persons, are set as follows (amounts in CHF for monthly contributions): voluntarily insured member of the personnel, with complete coverage: 815,- (was 803,- in 1999); voluntarily insured member of the personnel, with reduced coverage: 407,- (was 402,- in 1999); voluntarily insured no longer dependent child: 326,- (was 321...

  16. On the maximal diphoton width

    CERN Document Server

    Salvio, Alberto; Strumia, Alessandro; Urbano, Alfredo

    2016-01-01

Motivated by the 750 GeV diphoton excess found at LHC, we compute the maximal width into $\gamma\gamma$ that a neutral scalar can acquire through a loop of charged fermions or scalars as a function of the maximal scale at which the theory holds, taking into account vacuum (meta)stability bounds. We show how an extra gauge symmetry can qualitatively weaken such bounds, and explore collider probes and connections with Dark Matter.

  17. HeinzelCluster: accelerated reconstruction for FORE and OSEM3D.

    Science.gov (United States)

    Vollmar, S; Michel, C; Treffert, J T; Newport, D F; Casey, M; Knöss, C; Wienhard, K; Liu, X; Defrise, M; Heiss, W D

    2002-08-07

Using iterative three-dimensional (3D) reconstruction techniques for reconstruction of positron emission tomography (PET) is not feasible on most single-processor machines due to the excessive computing time needed, especially so for the large sinogram sizes of our high-resolution research tomograph (HRRT). In our first approach to speed up reconstruction time we transform the 3D scan into the format of a two-dimensional (2D) scan with sinograms that can be reconstructed independently using Fourier rebinning (FORE) and a fast 2D reconstruction method. On our dedicated reconstruction cluster (seven four-processor systems, Intel PIII@700 MHz, switched fast ethernet and Myrinet, Windows NT Server), we process these 2D sinograms in parallel. We have achieved a speedup > 23 using 26 processors and also compared results for different communication methods (RPC, Syngo, Myrinet GM). The other approach is to parallelize OSEM3D (implementation by C. Michel), which has produced the best results for HRRT data so far and is more suitable for an adequate treatment of the sinogram gaps that result from the detector geometry of the HRRT. We have implemented two levels of parallelization for our dedicated cluster (a shared-memory fine-grain level on each node utilizing all four processors and a coarse-grain level allowing for 15 nodes), reducing the time for one core iteration from over 7 h to about 35 min.
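The first approach above, rebin to independent 2-D sinograms and reconstruct them in parallel, maps directly onto a worker pool. A schematic sketch with placeholder kernels (a thread pool stands in for the cluster's nodes here; the real FORE rebinning and 2-D reconstruction algorithms are not reproduced):

```python
from multiprocessing.dummy import Pool  # thread pool stands in for cluster nodes

def rebin_3d_to_2d(scan3d):
    """Stand-in for Fourier rebinning (FORE): collapse the oblique 3-D data
    into a stack of independent 2-D sinograms (identity passthrough here)."""
    return scan3d

def reconstruct_2d(sinogram):
    """Stand-in for a fast 2-D reconstruction of a single sinogram slice."""
    return [2 * v for v in sinogram]  # placeholder for FBP / 2-D OSEM

def reconstruct_parallel(scan3d, workers=4):
    """Coarse-grain parallelism: every 2-D sinogram is independent after
    rebinning, so slices can be farmed out with a plain map."""
    slices = rebin_3d_to_2d(scan3d)
    with Pool(workers) as pool:
        return pool.map(reconstruct_2d, slices)
```

Because the slices share no state after rebinning, the speedup is limited mainly by communication cost, which is why the record compares RPC, Syngo and Myrinet GM transports.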

  18. Design and Application of the Reconstruction Software for the BaBar Calorimeter

    Energy Technology Data Exchange (ETDEWEB)

    Strother, Philip David; /Imperial Coll., London

    2006-07-07

    The BaBar high energy physics experiment will be in operation at the PEP-II asymmetric e{sup +}e{sup -} collider in Spring 1999. The primary purpose of the experiment is the investigation of CP violation in the neutral B meson system. The electromagnetic calorimeter forms a central part of the experiment and new techniques are employed in data acquisition and reconstruction software to maximize the capability of this device. The use of a matched digital filter in the feature extraction in the front end electronics is presented. The performance of the filter in the presence of the expected high levels of soft photon background from the machine is evaluated. The high luminosity of the PEP-II machine and the demands on the precision of the calorimeter require reliable software that allows for increased physics capability. BaBar has selected C++ as its primary programming language and object oriented analysis and design as its coding paradigm. The application of this technology to the reconstruction software for the calorimeter is presented. The design of the systems for clustering, cluster division, track matching, particle identification and global calibration is discussed with emphasis on the provisions in the design for increased physics capability as levels of understanding of the detector increase. The CP violating channel B{sup 0} {yields} J/{Psi}K{sub S}{sup 0} has been studied in the two lepton, two {pi}{sup 0} final state. The contribution of this channel to the evaluation of the angle sin 2{beta} of the unitarity triangle is compared to that from the charged pion final state. An error of 0.34 on this quantity is expected after 1 year of running at design luminosity.

  19. Joint reconstruction of activity and attenuation in Time-of-Flight PET: A Quantitative Analysis.

    Science.gov (United States)

    Rezaei, Ahmadreza; Deroose, Christophe M; Vahle, Thomas; Boada, Fernando; Nuyts, Johan

    2018-03-01

Joint activity and attenuation reconstruction methods from time of flight (TOF) positron emission tomography (PET) data provide an effective solution to attenuation correction when no (or incomplete/inaccurate) information on the attenuation is available. One of the main barriers limiting their use in clinical practice is the lack of validation of these methods on a relatively large patient database. In this contribution, we aim at validating the activity reconstructions of the maximum likelihood activity reconstruction and attenuation registration (MLRR) algorithm on a whole-body patient data set. Furthermore, a partial validation (since the scale problem of the algorithm is avoided for now) of the maximum likelihood activity and attenuation reconstruction (MLAA) algorithm is also provided. We present a quantitative comparison of the joint reconstructions to the current clinical gold-standard maximum likelihood expectation maximization (MLEM) reconstruction with CT-based attenuation correction. Methods: The whole-body TOF-PET emission data of each patient data set is processed as a whole to reconstruct an activity volume covering all the acquired bed positions, which helps to reduce the problem of a scale per bed position in MLAA to a global scale for the entire activity volume. Three reconstruction algorithms are used: MLEM, MLRR and MLAA. A maximum likelihood (ML) scaling of the single scatter simulation (SSS) estimate to the emission data is used for scatter correction. The reconstruction results are then analyzed in different regions of interest. Results: The joint reconstructions of the whole-body patient data set provide better quantification in cases of PET and CT misalignment caused by patient and organ motion. Our quantitative analysis shows differences of -4.2% (±2.3%) and -7.5% (±4.6%) between the joint reconstructions of MLRR and MLAA, respectively, and MLEM, averaged over all regions of interest. Conclusion: Joint activity and attenuation

  20. Reconstruction of Consistent 3d CAD Models from Point Cloud Data Using a Priori CAD Models

    Science.gov (United States)

    Bey, A.; Chaine, R.; Marc, R.; Thibault, G.; Akkouche, S.

    2011-09-01

We address the reconstruction of 3D CAD models from point cloud data acquired in industrial environments, using a pre-existing 3D model as an initial estimate of the scene to be processed. Indeed, this prior knowledge can be used to drive the reconstruction so as to generate an accurate 3D model matching the point cloud. In particular, we focus on the cylindrical parts of the 3D models. We propose to state the problem in a probabilistic framework: we search for the 3D model that maximizes a probability taking several constraints into account, such as the relevancy with respect to the point cloud and the a priori 3D model, and the consistency of the reconstructed model. The resulting optimization problem can then be handled using a stochastic exploration of the solution space, based on the random insertion of elements in the configuration under construction, coupled with a greedy management of the conflicts which efficiently improves the configuration at each step. We show that this approach provides reliable reconstructed 3D models by presenting some results on industrial data sets.

  1. Multiproxy summer and winter surface air temperature field reconstructions for southern South America covering the past centuries

    Energy Technology Data Exchange (ETDEWEB)

    Neukom, R.; Grosjean, M.; Wanner, H. [University of Bern, Oeschger Centre for Climate Change Research (OCCR), Bern (Switzerland); University of Bern, Institute of Geography, Climatology and Meteorology, Bern (Switzerland); Luterbacher, J. [Justus Liebig University of Giessen, Department of Geography, Climatology, Climate Dynamics and Climate Change, Giessen (Germany); Villalba, R.; Morales, M.; Srur, A. [CONICET, Instituto Argentino de Nivologia, Glaciologia y Ciencias Ambientales (IANIGLA), Mendoza (Argentina); Kuettel, M. [University of Bern, Oeschger Centre for Climate Change Research (OCCR), Bern (Switzerland); University of Bern, Institute of Geography, Climatology and Meteorology, Bern (Switzerland); University of Washington, Department of Earth and Space Sciences, Seattle (United States); Frank, D. [Swiss Federal Research Institute WSL, Birmensdorf (Switzerland); Jones, P.D. [University of East Anglia, Climatic Research Unit, School of Environmental Sciences, Norwich (United Kingdom); Aravena, J.-C. [Centro de Estudios Cuaternarios de Fuego Patagonia y Antartica (CEQUA), Punta Arenas (Chile); Black, D.E. [Stony Brook University, School of Marine and Atmospheric Sciences, Stony Brook (United States); Christie, D.A.; Urrutia, R. [Universidad Austral de Chile Valdivia, Laboratorio de Dendrocronologia, Facultad de Ciencias Forestales y Recursos Naturales, Valdivia (Chile); D' Arrigo, R. [Earth Institute at Columbia University, Tree-Ring Laboratory, Lamont-Doherty Earth Observatory, Palisades, NY (United States); Lara, A. [Universidad Austral de Chile Valdivia, Laboratorio de Dendrocronologia, Facultad de Ciencias Forestales y Recursos Naturales, Valdivia (Chile); Nucleo Cientifico Milenio FORECOS, Fundacion FORECOS, Valdivia (Chile); Soliz-Gamboa, C. [Utrecht Univ., Inst. of Environmental Biology, Utrecht (Netherlands); Gunten, L. von [Univ. of Bern (Switzerland); Univ. of Massachusetts, Climate System Research Center, Amherst (United States)

    2011-07-15

We statistically reconstruct austral summer (winter) surface air temperature fields back to AD 900 (1706) using 22 (20) annually resolved predictors from natural and human archives from southern South America (SSA). This represents the first regional-scale climate field reconstruction for parts of the Southern Hemisphere at this high temporal resolution. We apply three different reconstruction techniques: multivariate principal component regression, composite plus scaling, and regularized expectation maximization. There is generally good agreement between the results of the three methods on interannual and decadal timescales. The field reconstructions allow us to describe differences and similarities in the temperature evolution of different sub-regions of SSA. The reconstructed SSA mean summer temperatures between 900 and 1350 are mostly above the 1901-1995 climatology. After 1350, we reconstruct a sharp transition to colder conditions, which last until approximately 1700. The summers in the eighteenth century are relatively warm with a subsequent cold relapse peaking around 1850. In the twentieth century, summer temperatures reach conditions similar to earlier warm periods. The winter temperatures in the eighteenth and nineteenth centuries were mostly below the twentieth century average. The uncertainties of our reconstructions are generally largest in the eastern lowlands of SSA, where the coverage with proxy data is poorest. Verifications with independent summer temperature proxies and instrumental measurements suggest that the interannual and multi-decadal variations of SSA temperatures are well captured by our reconstructions. This new dataset can be used for data/model comparison and data assimilation as well as for detection and attribution studies at sub-continental scales. (orig.)
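Of the three techniques named above, composite-plus-scaling is the simplest to sketch: standardize each proxy series, average them into a composite, then rescale the composite to the mean and variance of the instrumental target over the calibration period. An illustrative sketch (a generic version of the method, not the study's calibration code):

```python
import math

def zscores(xs):
    """Standardize a series to zero mean and unit (population) variance."""
    m = sum(xs) / len(xs)
    sd = math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))
    return [(x - m) / sd for x in xs]

def composite_plus_scale(proxies, target_mean, target_sd):
    """Composite-plus-scaling: average standardized proxies, then rescale the
    composite to the instrumental target's mean and standard deviation."""
    z = [zscores(p) for p in proxies]
    composite = [sum(col) / len(col) for col in zip(*z)]
    cz = zscores(composite)
    return [target_mean + target_sd * c for c in cz]
```

Because every proxy enters with equal weight after standardization, CPS preserves the common low-frequency signal well, which is one reason field reconstructions cross-check it against regression-based methods such as regularized expectation maximization.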

  2. Multiresolution 3-D reconstruction from side-scan sonar images.

    Science.gov (United States)

    Coiras, Enrique; Petillot, Yvan; Lane, David M

    2007-02-01

    In this paper, a new method for the estimation of seabed elevation maps from side-scan sonar images is presented. The side-scan image formation process is represented by a Lambertian diffuse model, which is then inverted by a multiresolution optimization procedure inspired by expectation-maximization to account for the characteristics of the imaged seafloor region. On convergence of the model, approximations for seabed reflectivity, side-scan beam pattern, and seabed altitude are obtained. The performance of the system is evaluated against a real structure of known dimensions. Reconstruction results for images acquired by different sonar sensors are presented. Applications to augmented reality for the simulation of targets in sonar imagery are also discussed.
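The Lambertian diffuse model at the core of the inversion above predicts the backscattered intensity from the local surface normal and the beam direction; the optimization adjusts the elevation map (and hence the normals) until these predictions match the observed side-scan image. A minimal forward-model sketch (schematic only; the beam-pattern and slant-range terms of the full model are omitted):

```python
import math

def normal_from_slope(dz_dx):
    """Unit surface normal of a 1-D elevation profile with slope dz/dx."""
    norm = math.sqrt(1.0 + dz_dx ** 2)
    return (-dz_dx / norm, 1.0 / norm)

def lambertian_intensity(reflectivity, normal, beam):
    """Lambertian diffuse model: I = R * max(0, n . b) for a unit beam vector b."""
    dot = sum(n * b for n, b in zip(normal, beam))
    return reflectivity * max(0.0, dot)
```

Facets tilted toward the sonar return more energy than facets tilted away, so the observed intensity constrains the local slope; integrating the recovered slopes yields the seabed elevation map, with reflectivity and beam pattern estimated jointly in the EM-style loop.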

  3. The Optimization of Capital Structure in Maximizing Profit and Corporate Value

    Directory of Open Access Journals (Sweden)

    Kharisya Ayu Effendi

    2017-05-01

Full Text Available The purpose of this research was to determine the optimal capital structure that maximizes profit and corporate value. The benefit of this research is that companies know clearly that an optimal capital structure can maximize profit and corporate value. The method used was quantitative descriptive analysis, using secondary data from the Jakarta Islamic Index (JII) from 2011 to 2015. The results show that companies with an optimal capital structure are in line with trade-off theory models: the capital structure is optimal if debt is kept to a certain level, so that corporate value increases; however, if debt exceeds that level, profit and corporate value decrease. Meanwhile, pecking order theory does not conform in this research and cannot be said to be optimal, because the low debt levels observed were accompanied by low profits, the opposite of what the theory predicts.

  4. Functional associations at global brain level during perception of an auditory illusion by applying maximal information coefficient

    Science.gov (United States)

    Bhattacharya, Joydeep; Pereda, Ernesto; Ioannou, Christos

    2018-02-01

Maximal information coefficient (MIC) is a recently introduced information-theoretic measure of functional association with a promising potential of application to high dimensional complex data sets. Here, we applied MIC to reveal the nature of the functional associations between different brain regions during the perception of binaural beat (BB); BB is an auditory illusion occurring when two sinusoidal tones of slightly different frequency are presented separately to each ear and an illusory beat at the difference frequency is perceived. We recorded sixty-four-channel EEG from two groups of participants, musicians and non-musicians, during the presentation of BB, and systematically varied the frequency difference from 1 Hz to 48 Hz. Participants were also presented non-binaural beat (NBB) stimuli, in which the same frequency was presented to both ears. Across groups, as compared to NBB, (i) BB conditions produced the most robust changes in the MIC values at the whole brain level when the frequency differences were in the classical alpha range (8-12 Hz), and (ii) the number of electrode pairs showing nonlinear associations decreased gradually with increasing frequency difference. Between groups, significant effects were found for BBs in the broad gamma frequency range (34-48 Hz), but such effects were not observed during NBB. Altogether, these results revealed the nature of functional associations at the whole brain level during binaural beat perception and demonstrated the usefulness of MIC in characterizing interregional neural dependencies.
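The published MIC estimator searches over all grid resolutions up to a sample-size-dependent bound; a simplified flavor of the statistic, grid-based mutual information normalized by the log of the smaller grid dimension and maximized over a few small grids, can be sketched as follows (an illustration of the idea only, not the MIC algorithm of Reshef et al.):

```python
import math
from collections import Counter

def mutual_information(xs, ys, bx, by):
    """Mutual information in bits of an equal-width bx-by-by binning."""
    n = len(xs)
    def binned(v, vs, b):
        lo, hi = min(vs), max(vs)
        return min(int((v - lo) / (hi - lo + 1e-12) * b), b - 1)
    pairs = [(binned(x, xs, bx), binned(y, ys, by)) for x, y in zip(xs, ys)]
    pxy = Counter(pairs)
    px = Counter(i for i, _ in pairs)
    py = Counter(j for _, j in pairs)
    return sum(c / n * math.log2((c / n) / ((px[i] / n) * (py[j] / n)))
               for (i, j), c in pxy.items())

def mic_like(xs, ys, max_bins=4):
    """Simplified MIC-style score: best normalized MI over small grids,
    so any noiseless functional relationship approaches 1."""
    best = 0.0
    for bx in range(2, max_bins + 1):
        for by in range(2, max_bins + 1):
            mi = mutual_information(xs, ys, bx, by)
            best = max(best, mi / math.log2(min(bx, by)))
    return best
```

The normalization is what makes the score comparable across relationship shapes, which is the property that motivates applying MIC to channel-pair dependencies in the EEG study above.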

  5. Willingness to pay for anterior cruciate ligament reconstruction.

    Science.gov (United States)

    Hall, Michael P; Chiang-Colvin, Alexis S; Bosco, Joseph A

    2013-01-01

    The outcomes of ACL reconstructions in terms of patient satisfaction and function are well known. Most orthopaedic surgeons feel that Medicare and other payors do not reimburse enough for this surgery. The purpose of this study is to determine how much patients are willing to pay for this surgery and compare it to reimbursement rates. We constructed a survey which described the function and limitations of an ACL deficient knee and the expected function of that knee after an ACL reconstruction. We then asked the volunteers how much they would be willing to pay for an ACL reconstruction if it were their knee. We also gathered data on the yearly earnings and Tegner activity level of the volunteers. In all, 143 volunteers completed the survey. We computed correlation coefficients between willingness to pay and both yearly earnings and Tegner activity level. The average amount that the volunteers were willing to pay for an ACL reconstruction was $4,867.00. There was no correlation between yearly earnings and willingness to pay. The correlation coefficient was 0.34. There was a weak correlation between Tegner activity level and willingness to pay. This correlation coefficient was 0.81. The Medicare allowable rate for ACL reconstruction (CPT 29888) in the geographic area of the study was $1,132.00. The data demonstrates that patients are willing to pay much more than traditional payors for ACL reconstruction. These payors undervalue the benefit of this surgery to the patient. There is increasing pressure on orthopaedic surgeons to not participate in insurance plans that reimburse poorly. This places an increasing financial burden on the patient. This study suggests that patients may be willing to pay more for their surgery than their insurance plan and accept more of this burden.

  6. Online track reconstruction at hadron colliders

    International Nuclear Information System (INIS)

Amerio, Silvia; Bettini, Marco; Nicoletto, Marino; Crescioli, Francesco; Bucciantonio, Martina; Dell'Orso, Mauro; Piendibene, Marco; Volpi, Guido; Annovi, Alberto; Catastini, Pierluigi; Giannetti, Paola; Lucchesi, Donatella

    2010-01-01

Real time event reconstruction plays a fundamental role in High Energy Physics experiments. Reducing the rate of data to be saved on tape from millions to hundreds per second is critical. In order to increase the purity of the collected samples, rate reduction has to be coupled with the capability to simultaneously perform a first selection of the most interesting events. A fast and efficient online track reconstruction is important to effectively trigger on leptons and/or displaced tracks from b-quark decays. This talk will be an overview of online tracking techniques in different HEP environments: we will show how the H1 experiment at HERA faced the challenges of online track reconstruction, implementing pattern matching and track linking algorithms on CAMs and FPGAs in the Fast Track Processor (FTT). The pattern recognition technique is also at the basis of the Silicon Vertex Trigger (SVT) at the CDF experiment at the Tevatron: coupled to a very fast fitting phase, SVT allows triggering on displaced tracks, thus greatly increasing the efficiency for hadronic B decay modes. A recent upgrade of the SVT track fitter, the Giga-fitter, can perform more than 1 fit/ns and further improves the CDF online trigger capabilities at high luminosity. At SLHC, where luminosities will be 2 orders of magnitude greater than at the Tevatron, online tracking will be much more challenging: we will describe CMS future plans for a Level-1 track trigger and the Fast Tracker (FTK) processor at the ATLAS experiment, based on the Giga-fitter architecture and designed to provide high quality tracks reconstructed over the entire detector in time for a Level-2 trigger decision.
At SLHC, where luminosities will be 2 orders of magnitude greater than Tevatron, online tracking will be much more challenging: we will describe CMS future plans for a Level-1 track trigger and the Fast Tracker (FTK) processor at the Atlas experiment, based on the Giga-fitter architecture and designed to provide high
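
The very fast fitting stage mentioned for SVT and its Giga-fitter upgrade relies on linearized track fits, in which each fitted parameter is a fixed linear combination of the hit coordinates, with coefficients precomputed per pattern. A minimal sketch for straight-line tracks crossing hypothetical detector layers (the layer positions and helper names are illustrative, not the CDF implementation):

```python
# Linearized track fit: precompute per-layer coefficients once, then each
# fit is just two scalar products over the hit coordinates.
Z = [1.0, 2.0, 3.0, 4.0]  # hypothetical layer positions along the beam axis

def linear_fit_constants(z):
    # Least-squares constants for the model x = x0 + slope * z,
    # derived from the 2x2 normal equations.
    n = len(z)
    sz = sum(z)
    szz = sum(v * v for v in z)
    det = n * szz - sz * sz
    a = [(szz - sz * zi) / det for zi in z]  # coefficients giving x0
    b = [(n * zi - sz) / det for zi in z]    # coefficients giving slope
    return a, b

def fit_track(hits, a, b):
    # One "fit" is only 2*N multiply-adds -- the trick behind 1 fit/ns.
    x0 = sum(ai * xi for ai, xi in zip(a, hits))
    slope = sum(bi * xi for bi, xi in zip(b, hits))
    return x0, slope
```

For noiseless hits generated with intercept 0.5 and slope 2 on the layers above, the fit recovers both parameters exactly.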

  7. The optimal dose reduction level using iterative reconstruction with prospective ECG-triggered coronary CTA using 256-slice MDCT

    International Nuclear Information System (INIS)

    Hou, Yang; Xu, Shu; Guo, Wenli; Vembar, Mani; Guo, Qiyong

    2012-01-01

    Aim: To assess the image quality (IQ) of an iterative reconstruction (IR) technique (iDose 4 ) from prospective electrocardiography (ECG)-triggered coronary computed tomography angiography (coronary CTA) on a 256-slice multi-detector CT (MDCT) scanner and determine the optimal dose reduction using IR that can provide IQ comparable to filtered back projection (FBP). Method and materials: 110 consecutive patients (69 men, 41 women; age: 54 ± 10 years) underwent coronary CTA on a 256-slice MDCT (Brilliance iCT, Philips Healthcare). The control group (Group A, n = 21) was scanned using the conventional tube output (120 kVp, 210 mAs) and reconstructed using FBP. The other four groups were scanned with the same kVp but successively reduced tube output (B [n = 15]: 125 mAs; C [n = 22]: 105 mAs; D [n = 36]: 84 mAs; E [n = 16]: 65 mAs) and reconstructed using IR levels of L3 (Group B), L4 (Group C) and L5 (Groups D and E) to compensate for the noise increase. All images were reconstructed using the same kernel (XCB). Two radiologists graded IQ in a blinded fashion on a 4-point scale (4 – excellent, 3 – good, 2 – fair and 1 – poor). Quantitative measurements of CT values, image noise and contrast-to-noise ratio (CNR) were made in each group. A receiver-operating characteristic (ROC) analysis was performed to determine a radiation reduction threshold up to which excellent IQ was maintained. Results: There were no significant differences in objective noise, SNR and CNR values among Groups A, B, C, D, and E (P = 0.14, 0.09, 0.17, respectively). There were no significant differences in the scores of the subjective IQ between Group A and Groups B, C, D, E (P = 0.23–0.97). Significant differences in image sharpness and study acceptability were observed between Groups A and E (P < 0.05). Using the criterion of excellent IQ (score 4), the ROC curve of dose levels and IQ acceptability established a reduction of 60% of tube output (Group D) as the optimum cutoff point (AUC
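
The cutoff determination above rests on the ROC area under the curve. As a reminder of the underlying quantity, here is a minimal rank-based AUC estimator (a generic sketch of the statistic, not the study's analysis software):

```python
def roc_auc(scores_pos, scores_neg):
    # Rank-based AUC: the probability that a randomly chosen positive case
    # scores higher than a randomly chosen negative one (ties count 1/2).
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))
```

Perfectly separated scores give AUC = 1.0; indistinguishable scores give 0.5.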

  8. Feasibility of megavoltage portal CT using an electronic portal imaging device (EPID) and a multi-level scheme algebraic reconstruction technique (MLS-ART)

    International Nuclear Information System (INIS)

    Guan, Huaiqun; Zhu, Yunping

    1998-01-01

    Although electronic portal imaging devices (EPIDs) are efficient tools for radiation therapy verification, they only provide images of overlapped anatomic structures. We investigated using a fluorescent screen/CCD-based EPID, coupled with a novel multi-level scheme algebraic reconstruction technique (MLS-ART), for a feasibility study of portal computed tomography (CT) reconstruction. The CT images might be useful for radiation treatment planning and verification. We used an EPID, set to operate in its linear dynamic range, and collimated 6 MV photons from a linear accelerator to a slit beam 1 cm wide and 25 cm long. We performed scans under a total of ∼200 monitor units (MUs) for several phantoms, varying the number of projections and the MUs per projection. The reconstructed images demonstrated that using the new MLS-ART technique, megavoltage portal CT with a total of 200 MUs can achieve a contrast detectability of ∼2.5% (object size 5 mm × 5 mm) and a spatial resolution of 2.5 mm. (author)
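
MLS-ART is a multi-level ordering of the classical algebraic reconstruction technique. A minimal sketch of the underlying ART (Kaczmarz) row-by-row update, without the multi-level projection ordering that gives the method its name:

```python
def art_iteration(A, p, x, lam=1.0):
    # One sweep of the classical ART (Kaczmarz) update: for each ray,
    # project the current image estimate x onto the hyperplane a_i . x = p_i.
    for row, meas in zip(A, p):
        norm2 = sum(a * a for a in row)
        if norm2 == 0.0:
            continue
        resid = (meas - sum(a * xi for a, xi in zip(row, x))) / norm2
        x = [xi + lam * resid * a for xi, a in zip(x, row)]
    return x
```

For a consistent system with orthogonal rays the sweep converges in a single pass; in general it is repeated, and relaxation (lam < 1) damps noise.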

  9. [Application of N-isopropyl-p-[123I] iodoamphetamine quantification of regional cerebral blood flow using iterative reconstruction methods: selection of the optimal reconstruction method and optimization of the cutoff frequency of the preprocessing filter].

    Science.gov (United States)

    Asazu, Akira; Hayashi, Masuo; Arai, Mami; Kumai, Yoshiaki; Akagi, Hiroyuki; Okayama, Katsuyoshi; Narumi, Yoshifumi

    2013-05-01

    In cerebral blood flow tests using N-isopropyl-p-[123I] iodoamphetamine (123I-IMP), quantitative results of greater accuracy than possible using the autoradiography (ARG) method can be obtained with attenuation and scatter correction and image reconstruction by filtered back projection (FBP). However, the cutoff frequency of the preprocessing Butterworth filter affects the quantitative value; hence, we sought an optimal cutoff frequency, derived from the correlation between the FBP method and Xenon-enhanced computed tomography (XeCT)/cerebral blood flow (CBF). In this study, we reconstructed images using ordered subsets expectation maximization (OSEM), a method of successive approximation which has recently come into wide use, and also three-dimensional (3D)-OSEM, a method by which the resolution can be corrected with the addition of collimator broadening correction, to examine the effects on the regional cerebral blood flow (rCBF) quantitative value of changing the cutoff frequency, and to determine whether successive approximation is applicable to cerebral blood flow quantification. Our results showed that quantification of greater accuracy was obtained with reconstruction employing the 3D-OSEM method and using a cutoff frequency set near 0.75-0.85 cycles/cm, which is higher than the frequency used in image reconstruction by the ordinary FBP method.
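
The Butterworth pre-filter whose cutoff is tuned here has the standard magnitude response 1/sqrt(1 + (f/fc)^(2n)); a minimal sketch of that gain curve (order and cutoff values below are illustrative, not the study's settings):

```python
import math

def butterworth_gain(f, cutoff, order=8):
    # Magnitude response of a Butterworth low-pass filter: flat near DC,
    # -3 dB at the cutoff frequency, then a fast monotone roll-off.
    return 1.0 / math.sqrt(1.0 + (f / cutoff) ** (2 * order))
```

Raising the cutoff frequency (as the study does for 3D-OSEM, to 0.75-0.85 cycles/cm) keeps more high-frequency content at the price of more retained noise.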

  10. Application of N-isopropyl-p-[123I] iodoamphetamine quantification of regional cerebral blood flow using iterative reconstruction methods. Selection of the optimal reconstruction method and optimization of the cutoff frequency of the preprocessing filter

    International Nuclear Information System (INIS)

    Asazu, Akira; Hayashi, Masuo; Arai, Mami; Kumai, Yoshiaki; Akagi, Hiroyuki; Okayama, Katsuyoshi; Narumi, Yoshifumi

    2013-01-01

    In cerebral blood flow tests using N-isopropyl-p-[123I] iodoamphetamine (123I-IMP), quantitative results of greater accuracy than possible using the autoradiography (ARG) method can be obtained with attenuation and scatter correction and image reconstruction by filtered back projection (FBP). However, the cutoff frequency of the preprocessing Butterworth filter affects the quantitative value; hence, we sought an optimal cutoff frequency, derived from the correlation between the FBP method and Xenon-enhanced computed tomography (XeCT)/cerebral blood flow (CBF). In this study, we reconstructed images using ordered subsets expectation maximization (OSEM), a method of successive approximation which has recently come into wide use, and also three-dimensional (3D)-OSEM, a method by which the resolution can be corrected with the addition of collimator broadening correction, to examine the effects on the regional cerebral blood flow (rCBF) quantitative value of changing the cutoff frequency, and to determine whether successive approximation is applicable to cerebral blood flow quantification. Our results showed that quantification of greater accuracy was obtained with reconstruction employing the 3D-OSEM method and using a cutoff frequency set near 0.75-0.85 cycles/cm, which is higher than the frequency used in image reconstruction by the ordinary FBP method. (author)

  11. Assessment of clinical image quality in paediatric abdominal CT examinations: dependency on the level of adaptive statistical iterative reconstruction (ASiR) and the type of convolution kernel

    International Nuclear Information System (INIS)

    Larsson, Joel; Baath, Magnus; Thilander-Klang, Anne; Ledenius, Kerstin; Caisander, Haakan

    2016-01-01

    The purpose of this study was to investigate the effect of different combinations of convolution kernel and level of Adaptive Statistical Iterative Reconstruction (ASiR™) on diagnostic image quality, as well as on the visualisation of anatomical structures, in paediatric abdominal computed tomography (CT) examinations. Thirty-five paediatric patients with abdominal pain of non-specified pathology undergoing abdominal CT were included in the study. Transaxial stacks of 5-mm-thick images were retrospectively reconstructed at various ASiR levels, in combination with three convolution kernels. Four paediatric radiologists rated the diagnostic image quality and the delineation of six anatomical structures in a blinded randomised visual grading study. Image quality at a given ASiR level was found to be dependent on the kernel, and a more edge-enhancing kernel benefited from a higher ASiR level. An ASiR level of 70 % together with the Soft™ or Standard™ kernel was suggested to be the optimal combination for paediatric abdominal CT examinations. (authors)

  12. Successful fifth metatarsal bulk autograft reconstruction of thermal necrosis post intramedullary fixation.

    Science.gov (United States)

    Veljkovic, Andrea; Le, Vu; Escudero, Mario; Salat, Peter; Wing, Kevin; Penner, Murray; Younger, Alastair

    2018-03-21

    Reamed intramedullary (IM) screw fixation for proximal fifth metatarsal fractures is technically challenging, with potentially devastating complications if basic principles are not followed. A case of an iatrogenic fourth-degree burn after elective reamed IM screw fixation of a proximal fifth metatarsal fracture in a high-level athlete is reported. The case was complicated by postoperative osteomyelitis with a third-degree soft-tissue defect. This was successfully treated with staged autologous bone graft reconstruction, tendon reconstruction, and local bi-pedicle flap coverage. The patient returned to competitive-level sports, avoiding the need for fifth ray amputation. Critical points of the IM screw technique and definitive reconstruction are discussed. Bulk autograft reconstruction is a safe and effective alternative to ray amputation in segmental defects of the fifth metatarsal. Level of evidence: V.

  13. Enhanced capital-asset pricing model for the reconstruction of bipartite financial networks

    Science.gov (United States)

    Squartini, Tiziano; Almog, Assaf; Caldarelli, Guido; van Lelyveld, Iman; Garlaschelli, Diego; Cimini, Giulio

    2017-09-01

    Reconstructing patterns of interconnections from partial information is one of the most important issues in the statistical physics of complex networks. A paramount example is provided by financial networks. In fact, the spreading and amplification of financial distress in capital markets are strongly affected by the interconnections among financial institutions. Yet, while the aggregate balance sheets of institutions are publicly disclosed, information on single positions is mostly confidential and, as such, unavailable. Standard approaches to reconstruct the network of financial interconnection produce unrealistically dense topologies, leading to a biased estimation of systemic risk. Moreover, reconstruction techniques are generally designed for monopartite networks of bilateral exposures between financial institutions, thus failing in reproducing bipartite networks of security holdings (e.g., investment portfolios). Here we propose a reconstruction method based on constrained entropy maximization, tailored for bipartite financial networks. Such a procedure enhances the traditional capital-asset pricing model (CAPM) and allows us to reproduce the correct topology of the network. We test this enhanced CAPM (ECAPM) method on a dataset, collected by the European Central Bank, of detailed security holdings of European institutional sectors over a period of six years (2009-2015). Our approach outperforms the traditional CAPM and the recently proposed maximum-entropy CAPM both in reproducing the network topology and in estimating systemic risk due to fire sales spillovers. In general, ECAPM can be applied to the whole class of weighted bipartite networks described by the fitness model.
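
The constrained entropy maximization behind this family of methods can be illustrated with a generic fitness-model sketch: each link of the bipartite network is an independent Bernoulli variable whose probability is driven by the (publicly known) aggregate balance-sheet values of the two endpoints, with one global parameter z calibrated so that the expected number of links matches a target density. This is a simplified stand-in, not the authors' exact ECAPM:

```python
import math

def link_probability(x_i, y_j, z):
    # Entropy-maximized Bernoulli link probability driven by node "fitnesses"
    # (e.g. aggregate exposures of institution i and security j).
    w = z * x_i * y_j
    return w / (1.0 + w)

def calibrate_z(row_fitness, col_fitness, target_links):
    # Geometric bisection on z: the expected link count is monotone in z,
    # so we shrink the bracket until it matches the target.
    lo, hi = 1e-12, 1e12
    for _ in range(200):
        mid = math.sqrt(lo * hi)
        expected = sum(link_probability(x, y, mid)
                       for x in row_fitness for y in col_fitness)
        if expected > target_links:
            hi = mid
        else:
            lo = mid
    return math.sqrt(lo * hi)
```

Because links form with probability strictly between 0 and 1, sampled networks reproduce a realistic (sparse) topology instead of the unrealistically dense one produced by naive maximum-entropy reconstruction.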

  14. Relationship between pre-reconstruction filter and accuracy of registration software based on mutual-information maximization. A study of SPECT-MR brain phantom images

    International Nuclear Information System (INIS)

    Mito, Suzuko; Magota, Keiichi; Arai, Hiroshi; Omote, Hidehiko; Katsuura, Hidenori; Suzuki, Kotaro; Kubo, Naoki

    2005-01-01

    Image registration is becoming an increasingly important tool in SPECT. Recently, software based on mutual-information maximization has been developed for automatic multimodality image registration. The accuracy of the software is important for its application to image registration. During SPECT reconstruction, the projection data are pre-filtered in order to reduce Poisson noise, commonly using a Butterworth filter. We have investigated the dependence of the absolute accuracy of MRI-SPECT registration on the cut-off frequencies of a range of Butterworth filters. This study used a 3D Hoffman phantom (Model No. 9000, Data Spectrum Co.). For the reference volume, a magnetization-prepared rapid gradient-echo (MPRAGE) sequence was performed on a Vision MRI system (Siemens, 1.5 T). For the floating volumes, SPECT data of a phantom containing 99mTc at 85 kBq/mL were acquired with a GCA-9300 (Toshiba Medical Systems Co.). During SPECT, the orbito-meatal (OM) line of the phantom was tilted by 5 deg and 15 deg to mimic the incline of a patient's head. The projection data were pre-filtered with Butterworth filters (cut-off frequency varying between 0.24 and 0.94 cycles/cm in 0.02 steps, order 8). The automated registrations were performed using iNRT β version software (Nihon Medi. Co.) and the rotation angles of SPECT for registration were noted. In this study, the registrations of all SPECT data were successful. Graphs of registration rotation angle against cut-off frequency were scattered and showed no correlation between the two. With changing cut-off frequency, the registration rotation angles ranged from -0.4 deg to +3.8 deg at a 5 deg tilt and from +12.7 deg to +19.6 deg at a 15 deg tilt. The registration rotation angles showed variation even for slight differences in cut-off frequency. The absolute errors were a few degrees for any cut-off frequency. Regardless of the cut-off frequency, automatic registration using this software provides similar results. (author)
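
Registration by mutual-information maximization scores a candidate alignment by the MI of the joint intensity histogram of the two images; the optimizer then searches rotations and translations maximizing that score. A toy MI estimator for flattened intensity lists (the bin count and helper names are illustrative, and this is not the iNRT software):

```python
import math
from collections import Counter

def mutual_information(a, b, bins=8):
    # Joint-histogram estimate of mutual information between two
    # equal-length intensity lists (e.g. flattened image voxels).
    def quantise(v, lo, hi):
        return min(bins - 1, int((v - lo) / (hi - lo + 1e-12) * bins))
    qa = [quantise(v, min(a), max(a)) for v in a]
    qb = [quantise(v, min(b), max(b)) for v in b]
    n = len(a)
    joint = Counter(zip(qa, qb))
    pa, pb = Counter(qa), Counter(qb)
    mi = 0.0
    for (i, j), c in joint.items():
        # p(i,j) * log( p(i,j) / (p(i) p(j)) ), with counts substituted.
        mi += (c / n) * math.log(c * n / (pa[i] * pb[j]))
    return mi
```

For two perfectly aligned binary images the MI equals the marginal entropy (log 2 for a 50/50 split); misalignment spreads the joint histogram and lowers the score.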

  15. Optical image reconstruction using DC data: simulations and experiments

    International Nuclear Information System (INIS)

    Huabei Jiang; Paulsen, K.D.; Oesterberg, U.L.

    1996-01-01

    In this paper, we explore optical image formation using a diffusion approximation of light propagation in tissue which is modelled with a finite-element method for optically heterogeneous media. We demonstrate successful image reconstruction based on absolute experimental DC data obtained with a continuous wave 633 nm He-Ne laser system and a 751 nm diode laser system in laboratory phantoms having two optically distinct regions. The experimental systems used exploit a tomographic type of data collection scheme that provides information from which a spatially variable optical property map is deduced. Reconstruction of scattering coefficient only and simultaneous reconstruction of both scattering and absorption profiles in tissue-like phantoms are obtained from measured and simulated data. Images with different contrast levels between the heterogeneity and the background are also reported and the results show that although it is possible to obtain qualitative visual information on the location and size of a heterogeneity, it may not be possible to quantitatively resolve contrast levels or optical properties using reconstructions from DC data only. Sensitivity of image reconstruction to noise in the measurement data is investigated through simulations. The application of boundary constraints has also been addressed. (author)

  16. Renal Cyst Pseudoenhancement: Intraindividual Comparison Between Virtual Monochromatic Spectral Images and Conventional Polychromatic 120-kVp Images Obtained During the Same CT Examination and Comparisons Among Images Reconstructed Using Filtered Back Projection, Adaptive Statistical Iterative Reconstruction, and Model-Based Iterative Reconstruction

    Science.gov (United States)

    Yamada, Yoshitake; Yamada, Minoru; Sugisawa, Koichi; Akita, Hirotaka; Shiomi, Eisuke; Abe, Takayuki; Okuda, Shigeo; Jinzaki, Masahiro

    2015-01-01

    Abstract The purpose of this study was to compare renal cyst pseudoenhancement between virtual monochromatic spectral (VMS) and conventional polychromatic 120-kVp images obtained during the same abdominal computed tomography (CT) examination and among images reconstructed using filtered back projection (FBP), adaptive statistical iterative reconstruction (ASIR), and model-based iterative reconstruction (MBIR). Our institutional review board approved this prospective study; each participant provided written informed consent. Thirty-one patients (19 men, 12 women; age range, 59–85 years; mean age, 73.2 ± 5.5 years) with renal cysts underwent unenhanced 120-kVp CT followed by sequential fast kVp-switching dual-energy (80/140 kVp) and 120-kVp abdominal enhanced CT in the nephrographic phase over a 10-cm scan length with a random acquisition order and 4.5-second intervals. Fifty-one renal cysts (maximal diameter, 18.0 ± 14.7 mm [range, 4–61 mm]) were identified. The CT attenuation values of the cysts as well as of the kidneys were measured on the unenhanced images, enhanced VMS images (at 70 keV) reconstructed using FBP and ASIR from dual-energy data, and enhanced 120-kVp images reconstructed using FBP, ASIR, and MBIR. The results were analyzed using the mixed-effects model and paired t test with Bonferroni correction. The attenuation increases (pseudoenhancement) of the renal cysts on the VMS images reconstructed using FBP/ASIR (least square mean, 5.0/6.0 Hounsfield units [HU]; 95% confidence interval, 2.6–7.4/3.6–8.4 HU) were significantly lower than those on the conventional 120-kVp images reconstructed using FBP/ASIR/MBIR (least square mean, 12.1/12.8/11.8 HU; 95% confidence interval, 9.8–14.5/10.4–15.1/9.4–14.2 HU) (all P < .001); on the other hand, the CT attenuation values of the kidneys on the VMS images were comparable to those on the 120-kVp images. Regardless of the reconstruction algorithm, 70-keV VMS images showed

  17. A Criterion to Identify Maximally Entangled Four-Qubit State

    International Nuclear Information System (INIS)

    Zha Xinwei; Song Haiyang; Feng Feng

    2011-01-01

    Paolo Facchi, et al. [Phys. Rev. A 77 (2008) 060304(R)] presented a maximally multipartite entangled state (MMES). Here, we give a criterion for the identification of maximally entangled four-qubit states. Using this criterion, we not only identify some existing maximally entangled four-qubit states in the literature, but also find several new maximally entangled four-qubit states as well. (general)

  18. A Novel 2D Image Compression Algorithm Based on Two Levels DWT and DCT Transforms with Enhanced Minimize-Matrix-Size Algorithm for High Resolution Structured Light 3D Surface Reconstruction

    Science.gov (United States)

    Siddeq, M. M.; Rodrigues, M. A.

    2015-09-01

    Image compression techniques are widely used on 2D images, 2D video, 3D images and 3D video. There are many types of compression techniques, and among the most popular are JPEG and JPEG2000. In this research, we introduce a new compression method based on applying a two-level discrete cosine transform (DCT) and a two-level discrete wavelet transform (DWT) in connection with novel compression steps for high-resolution images. The proposed image compression algorithm consists of four steps: (1) transform an image by a two-level DWT followed by a DCT to produce two matrices, the DC- and AC-Matrix, or low- and high-frequency matrix, respectively; (2) apply a second-level DCT on the DC-Matrix to generate two arrays, namely a nonzero-array and a zero-array; (3) apply the Minimize-Matrix-Size algorithm to the AC-Matrix and to the other high frequencies generated by the second-level DWT; (4) apply arithmetic coding to the output of the previous steps. A novel decompression algorithm, the Fast-Match-Search (FMS) algorithm, is used to reconstruct all high-frequency matrices. The FMS algorithm computes all compressed data probabilities by using a table of data, and then uses a binary search algorithm to find the decompressed data inside the table. Thereafter, all decoded DC values are combined with the decoded AC coefficients in one matrix, followed by an inverse two-level DCT with a two-level DWT. The technique is tested by compression and reconstruction of 3D surface patches. Additionally, this technique is compared with the JPEG and JPEG2000 algorithms through the 2D and 3D root-mean-square error following reconstruction. The results demonstrate that the proposed compression method has better visual properties than JPEG and JPEG2000 and is able to more accurately reconstruct surface patches in 3D.
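
Step (1) chains a wavelet decomposition with a cosine transform. A minimal 1-D sketch using the Haar wavelet and a naive DCT-II (the abstract does not name the wavelet, so Haar is an illustrative choice; real codecs use 2D transforms):

```python
import math

def haar_dwt(signal):
    # One level of the Haar DWT: scaled pairwise sums form the low-pass
    # (DC) band, scaled pairwise differences the high-pass (AC) band.
    low = [(signal[i] + signal[i + 1]) / math.sqrt(2)
           for i in range(0, len(signal), 2)]
    high = [(signal[i] - signal[i + 1]) / math.sqrt(2)
            for i in range(0, len(signal), 2)]
    return low, high

def dct_ii(x):
    # Naive O(n^2) DCT-II, the transform then applied to the DC band.
    n = len(x)
    return [sum(x[k] * math.cos(math.pi * (k + 0.5) * m / n)
                for k in range(n))
            for m in range(n)]
```

A second call of `haar_dwt` on the low band gives the two-level decomposition; for smooth inputs the high-frequency bands are near zero, which is what makes them cheap to encode.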

  19. Optimal ranking regime analysis of TreeFlow dendrohydrological reconstructions

    Science.gov (United States)

    The Optimal Ranking Regime (ORR) method was used to identify 6-100 year time windows containing significant ranking sequences in 55 western U.S. streamflow reconstructions, and reconstructions of the level of the Great Salt Lake and San Francisco Bay salinity during 1500-2007. The method’s ability t...

  20. Evaluation of imaging protocol for ECT based on CS image reconstruction algorithm

    International Nuclear Information System (INIS)

    Zhou Xiaolin; Yun Mingkai; Cao Xuexiang; Liu Shuangquan; Wang Lu; Huang Xianchao; Wei Long

    2014-01-01

    Single-photon emission computerized tomography and positron emission tomography are essential medical imaging tools, for which the sampling angle number and scan time should be carefully chosen to give a good compromise between image quality and radiopharmaceutical dose. In this study, the image quality of different acquisition protocols was evaluated via varied angle number and count number per angle with Monte Carlo simulation data. It was shown that, when similar imaging counts were used, the factor of acquisition counts was more important than that of the sampling number in emission computerized tomography. To further reduce the activity requirement and the scan duration, an iterative image reconstruction algorithm for limited-view and low-dose tomography based on compressed sensing theory has been developed. A total variation regularization term was added to the reconstruction process to improve the signal-to-noise ratio and reduce artifacts caused by the limited-angle sampling. Maximization of the likelihood of the estimated image given the measured data and minimization of the total variation of the image are implemented alternately. By using this advanced algorithm, the reconstruction process is able to achieve image quality matching or exceeding that of normal scans with only half of the injected radiopharmaceutical dose. (authors)
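
The total-variation half of that alternation can be sketched in 1-D: TV is the sum of absolute neighbour differences, and a smoothed subgradient step reduces it while preserving edges better than quadratic smoothing. A generic sketch, not the authors' implementation:

```python
import math

def total_variation(img):
    # 1-D total variation: sum of absolute neighbour differences.
    return sum(abs(img[i + 1] - img[i]) for i in range(len(img) - 1))

def tv_gradient_step(img, step=0.1, eps=1e-8):
    # One subgradient-descent step on the smoothed TV, sqrt(d^2 + eps)
    # standing in for the non-differentiable |d|.
    out = list(img)
    for i in range(len(img)):
        g = 0.0
        if i > 0:
            d = img[i] - img[i - 1]
            g += d / math.sqrt(d * d + eps)
        if i < len(img) - 1:
            d = img[i] - img[i + 1]
            g += d / math.sqrt(d * d + eps)
        out[i] = img[i] - step * g
    return out
```

In the full algorithm this step is interleaved with a likelihood (data-fidelity) update, so the image stays consistent with the measured counts while the streak artifacts from limited-angle sampling are suppressed.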

  1. Patient-specific reconstruction plates are the missing link in computer-assisted mandibular reconstruction: A showcase for technical description.

    Science.gov (United States)

    Cornelius, Carl-Peter; Smolka, Wenko; Giessler, Goetz A; Wilde, Frank; Probst, Florian A

    2015-06-01

    Preoperative planning of mandibular reconstruction has moved from mechanical simulation by dental model casts or stereolithographic models into an almost completely virtual environment. CAD/CAM applications allow a high level of accuracy by providing a custom template-assisted contouring approach for bone flaps. However, the clinical accuracy of CAD reconstruction is limited by the use of prebent reconstruction plates, an analogue step in an otherwise digital workstream. In this paper the integration of computerized, numerically-controlled (CNC) milled, patient-specific mandibular plates (PSMP) within the virtual workflow of computer-assisted mandibular free fibula flap reconstruction is illustrated in a clinical case. Intraoperatively, the bone segments as well as the plate arms showed a very good fit. Postoperative CT imaging demonstrated close approximation of the PSMP and fibular segments, and good alignment of native mandible and fibular segments and intersegmentally. Over a follow-up period of 12 months, there was an uneventful course of healing with good bony consolidation. The virtual design and automated fabrication of patient-specific mandibular reconstruction plates provide the missing link in the virtual workflow of computer-assisted mandibular free fibula flap reconstruction. Copyright © 2015 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.

  2. Resolution effects in reconstructing ancestral genomes.

    Science.gov (United States)

    Zheng, Chunfang; Jeong, Yuji; Turcotte, Madisyn Gabrielle; Sankoff, David

    2018-05-09

    The reconstruction of ancestral genomes must deal with the problem of resolution, necessarily involving a trade-off between trying to identify genomic details and being overwhelmed by noise at higher resolutions. We use the median reconstruction at the synteny block level, of the ancestral genome of the order Gentianales, based on coffee, Rhazya stricta and grape, to exemplify the effects of resolution (granularity) on comparative genomic analyses. We show how decreased resolution blurs the differences between evolving genomes, with respect to rate, mutational process and other characteristics.

  3. Vacua of maximal gauged D=3 supergravities

    International Nuclear Information System (INIS)

    Fischbacher, T; Nicolai, H; Samtleben, H

    2002-01-01

    We analyse the scalar potentials of maximal gauged three-dimensional supergravities, which reveal a surprisingly rich structure. In contrast to maximal supergravities in dimensions D≥4, all these theories possess a maximally supersymmetric (N=16) ground state with negative cosmological constant Λ < 0, except for the SO(4,4)² gauged theory, whose maximally supersymmetric ground state has Λ = 0. We compute the mass spectra of bosonic and fermionic fluctuations around these vacua and identify the unitary irreducible representations of the relevant background (super)isometry groups to which they belong. In addition, we find several stationary points which are not maximally supersymmetric, and determine their complete mass spectra as well. In particular, we show that there are analogues of all stationary points found in higher dimensions, among them de Sitter (dS) vacua in the theories with noncompact gauge groups SO(5,3)² and SO(4,4)², as well as anti-de Sitter (AdS) vacua in the compact gauged theory preserving 1/4 and 1/8 of the supersymmetries. All the dS vacua have tachyonic instabilities, whereas there do exist nonsupersymmetric AdS vacua which are stable, again in contrast to the D≥4 theories

  4. Denoising multicriterion iterative reconstruction in emission spectral tomography

    Science.gov (United States)

    Wan, Xiong; Yin, Aihan

    2007-03-01

    In the study of optical testing, the computed tomography technique has been widely adopted to reconstruct three-dimensional distributions of physical parameters of various kinds of fluid fields, such as flames and plasmas. In most cases, projection data are contaminated by noise due to environmental disturbance, instrumental inaccuracy, and other random interruptions. To improve the reconstruction performance in noisy cases, an algorithm that combines a self-adaptive prefiltering denoising approach (SPDA) with a multicriterion iterative reconstruction (MCIR) is proposed and studied. First, the level of noise is approximately estimated with a frequency-domain statistical method. Then the cutoff frequency of a Butterworth low-pass filter is established based on the estimated noise energy. After the SPDA processing, the MCIR algorithm is adopted for limited-view optical computed tomography reconstruction. Simulated reconstructions of two test phantoms and a flame emission spectral tomography experiment were employed to evaluate the performance of SPDA-MCIR in noisy cases. Comparison with some traditional methods and the experimental results showed that the SPDA-MCIR combination yielded an obvious improvement for noisy data reconstructions.
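
The first SPDA step, estimating the noise level before choosing the Butterworth cutoff, can be caricatured in 1-D: for white noise, the standard deviation of the first differences of the signal is √2 times the noise level, because slowly varying structure largely cancels in the differences. This is a crude stand-in for the paper's frequency-domain statistic:

```python
import math
import statistics

def estimate_noise_std(samples):
    # High-frequency noise estimate: std of first differences, divided by
    # sqrt(2) since differencing doubles the variance of white noise.
    diffs = [samples[i + 1] - samples[i] for i in range(len(samples) - 1)]
    return statistics.pstdev(diffs) / math.sqrt(2)
```

The estimated level would then set the low-pass cutoff: the noisier the projections, the lower the cutoff frequency chosen for the prefilter.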

  5. Parallelizing ATLAS Reconstruction and Simulation: Issues and Optimization Solutions for Scaling on Multi- and Many-CPU Platforms

    International Nuclear Information System (INIS)

    Leggett, C; Jackson, K; Tatarkhanov, M; Yao, Y; Binet, S; Levinthal, D

    2011-01-01

    Thermal limitations have forced CPU manufacturers to shift from simply increasing clock speeds to improve processor performance, to producing chip designs with multi- and many-core architectures. Further, the cores themselves can run multiple threads with near-zero-overhead context switching, allowing low-level resource sharing (Intel Hyperthreading). To maximize bandwidth and minimize memory latency, memory access has become non-uniform (NUMA). As manufacturers add more cores to each chip, a careful understanding of the underlying architecture is required in order to fully utilize the available resources. We present AthenaMP and the ATLAS event loop manager, the driver of the simulation and reconstruction engines, which have been rewritten to make use of multiple cores by means of event-based parallelism and final-stage I/O synchronization. However, initial studies on 8 and 16 core Intel architectures have shown marked non-linearities as parallel process counts increase, with as much as 30% reductions in event throughput in some scenarios. Since the Intel Nehalem architecture (both Gainestown and Westmere) will be the most common choice for the next round of hardware procurements, an understanding of these scaling issues is essential. Using hardware-based event counters and Intel's Performance Tuning Utility, we have studied the performance bottlenecks at the hardware level and discovered optimization schemes to maximize processor throughput. We have also produced optimization mechanisms, common to all large experiments, that address the extreme nature of today's HEP code, which, due to its size, places huge burdens on the memory infrastructure of today's processors.
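
The event-based parallelism described for AthenaMP, independent events dispatched to workers with results gathered at a final synchronization point, can be sketched generically (a thread pool here stands in for AthenaMP's forked worker processes, and the event payload is hypothetical):

```python
from concurrent.futures import ThreadPoolExecutor

def reconstruct_event(event):
    # Stand-in for the per-event reconstruction work: events are
    # independent, so no shared state is touched between workers.
    return sum(h * h for h in event["hits"])

def process_events(events, workers=4):
    # Farm events out to the pool; pool.map gathers results in event
    # order, playing the role of the final-stage I/O synchronization.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(reconstruct_event, events))
```

The scaling non-linearities reported above arise precisely because such workers, although logically independent, still contend for shared caches and NUMA memory bandwidth.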

  6. Effect of hybrid iterative reconstruction technique on quantitative and qualitative image analysis at 256-slice prospective gating cardiac CT

    International Nuclear Information System (INIS)

    Utsunomiya, Daisuke; Weigold, W. Guy; Weissman, Gaby; Taylor, Allen J.

    2012-01-01

    To evaluate the effect of hybrid iterative reconstruction on qualitative and quantitative parameters at 256-slice cardiac CT. Prospective cardiac CT images from 20 patients were analysed. Paired image sets were created using 3 reconstructions, i.e. filtered back projection (FBP) and moderate- and high-level iterative reconstructions. Quantitative parameters including CT-attenuation, noise, and contrast-to-noise ratio (CNR) were determined in both proximal- and distal coronary segments. Image quality was graded on a 4-point scale. Coronary CT attenuation values were similar for FBP, moderate- and high-level iterative reconstruction at 293 ± 74-, 290 ± 75-, and 283 ± 78 Hounsfield units (HU), respectively. CNR was significantly higher with moderate- and high-level iterative reconstructions (10.9 ± 3.5 and 18.4 ± 6.2, respectively) than FBP (8.2 ± 2.5) as was the visual grading of proximal vessels. Visualisation of distal vessels was better with high-level iterative reconstruction than FBP. The mean number of assessable segments among 289 segments was 245, 260, and 267 for FBP, moderate- and high-level iterative reconstruction, respectively; the difference between FBP and high-level iterative reconstruction was significant. Interobserver agreement was significantly higher for moderate- and high-level iterative reconstruction than FBP. Cardiac CT using hybrid iterative reconstruction yields higher CNR and better image quality than FBP. • Cardiac CT helps clinicians to assess patients with coronary artery disease. • Hybrid iterative reconstruction provides improved cardiac CT image quality. • Hybrid iterative reconstruction improves the number of assessable coronary segments. • Hybrid iterative reconstruction improves interobserver agreement on cardiac CT. (orig.)
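
The CNR figures compared above are the attenuation difference between a vessel ROI and the background, divided by the background noise. A minimal sketch of that computation (the Hounsfield-unit values in the test are illustrative, not the study's measurements):

```python
import statistics

def contrast_to_noise(roi, background):
    # CNR: |mean ROI attenuation - mean background attenuation| over the
    # background noise (standard deviation of background HU values).
    noise = statistics.pstdev(background)
    return abs(statistics.fmean(roi) - statistics.fmean(background)) / noise
```

Iterative reconstruction raises CNR mainly by shrinking the denominator: the mean attenuations stay similar while the noise drops.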

  7. Pelvic reconstruction with allogeneic bone graft after tumor resection

    Science.gov (United States)

    Wang, Wei; Bi, Wen Zhi; Yang, Jing; Han, Gang; Jia, Jin Peng

    2013-01-01

OBJECTIVES: Pelvic reconstruction after tumor resection is challenging. METHODS: A retrospective study was performed to compare outcomes between patients who received pelvic reconstructive surgery with allogeneic bone graft after en bloc resection of pelvic tumors and patients who received en bloc resection only. RESULTS: Patients without reconstruction had significantly lower functional scores at 3 months (10 vs. 15, P = 0.001) and 6 months after surgery (18.5 vs. 22, P = 0.0024) and a shorter duration of hospitalization (16 days vs. 40 days, P < 0.05). CONCLUSIONS: Pelvic reconstruction with allogeneic bone graft after surgical management of pelvic tumors is associated with satisfactory surgical and functional outcomes. Further clinical studies are required to explore how to select the best reconstruction method. Level of Evidence IV, Case Series. PMID:24453659

  8. Selected event reconstruction algorithms for the CBM experiment at FAIR

    International Nuclear Information System (INIS)

    Lebedev, Semen; Höhne, Claudia; Lebedev, Andrey; Ososkov, Gennady

    2014-01-01

Development of fast and efficient event reconstruction algorithms is an important and challenging task in the Compressed Baryonic Matter (CBM) experiment at the future FAIR facility. The event reconstruction algorithms have to process terabytes of input data produced in particle collisions. In this contribution, several event reconstruction algorithms are presented. Optimization of the algorithms in the following CBM detectors is discussed: the Ring Imaging Cherenkov (RICH) detector, the Transition Radiation Detectors (TRD) and the Muon Chamber (MUCH). The ring reconstruction algorithm in the RICH is discussed. In the TRD and MUCH, track reconstruction algorithms are based on track-following and Kalman filter methods. All algorithms were significantly optimized to achieve maximum speed-up and minimum memory consumption. The results show that a significant speed-up factor was achieved for all algorithms while the reconstruction efficiency stays at a high level.

  9. New weighting methods for phylogenetic tree reconstruction using multiple loci.

    Science.gov (United States)

    Misawa, Kazuharu; Tajima, Fumio

    2012-08-01

Efficient determination of evolutionary distances is important for the correct reconstruction of phylogenetic trees. The performance of the pooled distance used for reconstructing a phylogenetic tree can be improved by applying large weights to appropriate distances and small weights to inappropriate ones. We developed two weighting methods, the modified Tajima-Takezaki method and the modified least-squares method, for reconstructing phylogenetic trees from multiple loci. By computer simulations, we found that both new methods were more efficient in reconstructing correct topologies than the unweighted method. We then reconstructed hominoid phylogenetic trees from mitochondrial DNA using the new methods, and found that the levels of bootstrap support were significantly increased by both the modified Tajima-Takezaki and the modified least-squares methods.
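The pooling idea can be illustrated with simple inverse-variance weights; the paper's modified Tajima-Takezaki and modified least-squares weights differ in detail, so this is only a sketch with illustrative numbers:

```python
import numpy as np

def pooled_distance(distances, variances):
    """Pool per-locus distance estimates with inverse-variance weights:
    a noisy locus (large variance) contributes a small weight, so the
    pooled estimate is dominated by the reliable loci."""
    d = np.asarray(distances, float)
    w = 1.0 / np.asarray(variances, float)
    return float(np.sum(w * d) / np.sum(w))

# Three loci; the third is noisy and gets down-weighted:
pooled = pooled_distance([0.10, 0.12, 0.30], [0.01, 0.01, 0.10])
```

With equal weights the pooled distance would be 0.173; down-weighting the noisy locus pulls it toward the two consistent estimates.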

  10. Reconstructing Global-scale Ionospheric Outflow With a Satellite Constellation

    Science.gov (United States)

    Liemohn, M. W.; Welling, D. T.; Jahn, J. M.; Valek, P. W.; Elliott, H. A.; Ilie, R.; Khazanov, G. V.; Glocer, A.; Ganushkina, N. Y.; Zou, S.

    2017-12-01

The question of how many satellites it would take to accurately map the spatial distribution of ionospheric outflow is addressed in this study. Given an outflow spatial map, the image is reconstructed from a limited number of virtual satellite pass extractions from the original values. An assessment is conducted of the goodness of fit as a function of the number of satellites in the reconstruction, the placement of the satellite trajectories relative to the polar cap and auroral oval, season and universal time (i.e., dipole tilt relative to the Sun), geomagnetic activity level, and interpolation technique. It is found that the accuracy of the reconstructions increases sharply from one to a few satellites, but then improves only marginally with additional spacecraft beyond four. Increased dwell time of the satellite trajectories in the auroral zone improves the reconstruction; therefore, a high-but-not-exactly-polar orbit is most effective for this task. Local time coverage is also an important factor, shifting the auroral zone to different locations relative to the virtual satellite orbit paths. The expansion and contraction of the polar cap and auroral zone with geomagnetic activity influence the coverage of the key outflow regions, with different optimal orbit configurations for each level of activity. Finally, it is found that reconstructing each magnetic latitude band individually produces a better fit to the original image than a 2D image reconstruction method (e.g., triangulation). A high-latitude, high-altitude constellation mission concept is presented that achieves acceptably accurate outflow reconstructions.
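The band-by-band approach the study favours can be sketched as 1D periodic interpolation in magnetic local time (MLT) within one latitude band; the function name and sample values below are illustrative assumptions, not the study's data:

```python
import numpy as np

def reconstruct_band(mlt_samples, flux_samples, mlt_grid):
    """Reconstruct one magnetic-latitude band by 1D interpolation in
    magnetic local time (0-24 h), made periodic by wrapping the end
    samples, instead of a 2D triangulation over the whole polar map."""
    order = np.argsort(mlt_samples)
    mlt = np.asarray(mlt_samples, float)[order]
    flux = np.asarray(flux_samples, float)[order]
    # Wrap the first/last samples so interpolation is periodic in MLT
    mlt_ext = np.concatenate(([mlt[-1] - 24.0], mlt, [mlt[0] + 24.0]))
    flux_ext = np.concatenate(([flux[-1]], flux, [flux[0]]))
    return np.interp(mlt_grid, mlt_ext, flux_ext)

# Four virtual satellite crossings of one band (illustrative values):
grid = np.linspace(0.0, 24.0, 97)
band = reconstruct_band([2.0, 8.0, 14.0, 20.0], [1.0, 3.0, 2.0, 1.5], grid)
```

Repeating this per latitude band and stacking the results gives the full polar map, which is the 1D-per-band alternative to 2D triangulation compared in the abstract.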

  11. Application of conifer needles in the reconstruction of Holocene CO2 levels

    NARCIS (Netherlands)

    Kouwenberg, L.L.R.

    1973-01-01

    To clarify the nature of the link between CO2 and climate on relatively short time-scales, precise, high-resolution reconstructions of the pre-industrial evolution of atmospheric CO2 are required. Adjustment of stomatal frequency to changes in atmospheric CO2 allows plants of many species to retain

  12. Reconstruction of incomplete cell paths through a 3D-2D level set segmentation

    Science.gov (United States)

    Hariri, Maia; Wan, Justin W. L.

    2012-02-01

    Segmentation of fluorescent cell images has been a popular technique for tracking live cells. One challenge of segmenting cells from fluorescence microscopy is that cells in fluorescent images frequently disappear. When the images are stacked together to form a 3D image volume, the disappearance of the cells leads to broken cell paths. In this paper, we present a segmentation method that can reconstruct incomplete cell paths. The key idea of this model is to perform 2D segmentation in a 3D framework. The 2D segmentation captures the cells that appear in the image slices while the 3D segmentation connects the broken cell paths. The formulation is similar to the Chan-Vese level set segmentation which detects edges by comparing the intensity value at each voxel with the mean intensity values inside and outside of the level set surface. Our model, however, performs the comparison on each 2D slice with the means calculated by the 2D projected contour. The resulting effect is to segment the cells on each image slice. Unlike segmentation on each image frame individually, these 2D contours together form the 3D level set function. By enforcing minimum mean curvature on the level set surface, our segmentation model is able to extend the cell contours right before (and after) the cell disappears (and reappears) into the gaps, eventually connecting the broken paths. We will present segmentation results of C2C12 cells in fluorescent images to illustrate the effectiveness of our model qualitatively and quantitatively by different numerical examples.
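The Chan-Vese fitting term described above, comparing each pixel to the mean intensities inside and outside the current contour, can be sketched for one 2D slice; the helper name and toy data are illustrative, and the paper's full model adds the 3D level-set surface and curvature terms:

```python
import numpy as np

def chan_vese_energy(slice_img, inside_mask):
    """Chan-Vese fitting term for one image slice: squared deviation of
    each pixel from the mean intensity inside or outside the contour,
    depending on which side of the contour the pixel lies."""
    img = np.asarray(slice_img, float)
    inside = np.asarray(inside_mask, bool)
    c_in = img[inside].mean()     # mean intensity inside the contour
    c_out = img[~inside].mean()   # mean intensity outside the contour
    return float(((img - np.where(inside, c_in, c_out)) ** 2).sum())

# Toy slice: the contour exactly separates bright cells from background,
# so the fitting energy is zero.
energy = chan_vese_energy([[1.0, 1.0], [0.0, 0.0]],
                          [[True, True], [False, False]])
```

Minimising this term per slice, while the level-set surface itself lives in 3D, is what lets the model bridge slices where a cell temporarily disappears.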

  13. Minimal and Maximal Operator Space Structures on Banach Spaces

    OpenAIRE

    P., Vinod Kumar; Balasubramani, M. S.

    2014-01-01

Given a Banach space $X$, there are many operator space structures possible on $X$ which all have $X$ as their first matrix level. Blecher and Paulsen identified two extreme operator space structures on $X$, namely $Min(X)$ and $Max(X)$, which represent, respectively, the smallest and the largest operator space structures admissible on $X$. In this note, we consider the subspace and quotient space structure of minimal and maximal operator spaces.

  14. Tomographic reconstruction with B-splines surfaces

    International Nuclear Information System (INIS)

    Oliveira, Eric F.; Dantas, Carlos C.; Melo, Silvio B.; Mota, Icaro V.; Lira, Mailson

    2011-01-01

Algebraic reconstruction techniques, when applied to a limited number of data, usually suffer from noise caused by the correction process or by inconsistencies in the data arising from the stochastic process of radioactive emission and from equipment oscillation. Post-processing of the reconstructed image with filters can mitigate the noise, but such processes generally also attenuate the discontinuities at edges that distinguish objects or artifacts, causing excessive blurring in the reconstructed image. This paper proposes built-in noise reduction that simultaneously ensures an adequate smoothness level in the reconstructed surface, representing the unknowns as linear combinations of elements of a piecewise polynomial basis, i.e. a B-spline basis. To that end, the algebraic technique ART is modified to accommodate first-, second- and third-degree bases, ensuring C^0, C^1 and C^2 smoothness levels, respectively. For comparison, three methodologies are applied: ART, ART post-processed with regular B-spline filters (ART*), and the proposed method with the built-in B-spline filter (BsART). Simulations with input data produced from common mathematical phantoms were conducted. For the phantoms used, the BsART method consistently presented the smallest errors among the three methods. This study has shown the superiority of embedding the filter in ART over post-filtered ART. (author)

  15. Real-Time 3d Reconstruction from Images Taken from AN Uav

    Science.gov (United States)

    Zingoni, A.; Diani, M.; Corsini, G.; Masini, A.

    2015-08-01

We designed a method for creating 3D models of objects and areas from two aerial images acquired from a UAV. The models are generated automatically and in real time, and consist of dense, true-colour reconstructions of the considered areas, which give the operator the impression of being physically present within the scene. The proposed method only needs a cheap compact camera mounted on a small UAV. No additional instrumentation is necessary, so the costs are very limited. The method consists of two main parts: the design of the acquisition system and the 3D reconstruction algorithm. In the first part, the choices for the acquisition geometry and for the camera parameters are optimized in order to yield the best performance. In the second part, a reconstruction algorithm extracts the 3D model from the two acquired images, maximizing the accuracy under the real-time constraint. A test was performed in monitoring a construction yard, obtaining very promising results. Highly realistic and easy-to-interpret 3D models of objects and areas of interest were produced in less than one second, with an accuracy of about 0.5 m. Given these characteristics, the designed method is suitable for video-surveillance, remote sensing and monitoring, especially in applications that require intuitive and reliable information quickly, such as disaster monitoring, search and rescue, and area surveillance.
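For intuition on why two views suffice for dense 3D, the classic rectified-stereo depth relation Z = f·B/d can be sketched; the numbers here are illustrative assumptions, not the paper's test values:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Two-view stereo relation Z = f * B / d: depth of a point from the
    pixel disparity between the two images, assuming rectified geometry.
    focal_px: focal length in pixels; baseline_m: camera separation in
    metres; disparity_px: horizontal pixel shift of the matched point."""
    return focal_px * baseline_m / disparity_px

# Illustrative: 1000 px focal length, 20 m baseline, 40 px disparity
z = depth_from_disparity(1000.0, 20.0, 40.0)   # depth in metres
```

The relation also shows why the paper optimises the acquisition geometry: for a fixed focal length, a larger baseline yields larger disparities and therefore finer depth resolution at a given distance.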

  16. Evaluation of two methods for using MR information in PET reconstruction

    International Nuclear Information System (INIS)

    Caldeira, L.; Scheins, J.; Almeida, P.; Herzog, H.

    2013-01-01

Using magnetic resonance (MR) information in maximum a posteriori (MAP) algorithms for positron emission tomography (PET) image reconstruction has been investigated in recent years. Recently, three methods to introduce this information were evaluated and the Bowsher prior was considered the best; its main advantage is that it does not require image segmentation. Another method that has been widely used for incorporating MR information is the use of boundaries obtained by segmentation, which has also shown improvements in image quality. In this paper, these two methods for incorporating MR information in PET reconstruction are compared. After a Bayes parameter optimization, the reconstructed images were compared using the mean squared error (MSE) and the coefficient of variation (CV). MSE values are 3% lower with the Bowsher prior than with boundaries; CV values are 10% lower with the Bowsher prior than with boundaries. Both methods performed better in terms of MSE and CV than using no prior, i.e. maximum likelihood expectation maximization (MLEM) or MAP without anatomical information. In conclusion, incorporating MR information using the Bowsher prior gives better results in terms of MSE and CV than using boundaries. MAP algorithms again proved effective in noise reduction and convergence, especially when MR information is incorporated. The robustness of the priors with respect to noise and inhomogeneities in the MR image, however, remains to be assessed.
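The two figures of merit used in the comparison follow directly from their definitions; a minimal sketch with illustrative values (not the study's data):

```python
import numpy as np

def mse(image, reference):
    """Mean squared error of a reconstruction against a reference image
    (e.g. the known phantom truth)."""
    img = np.asarray(image, float)
    ref = np.asarray(reference, float)
    return float(np.mean((img - ref) ** 2))

def cv(roi):
    """Coefficient of variation in a nominally uniform region of
    interest: noise (std. dev.) relative to the mean intensity."""
    roi = np.asarray(roi, float)
    return float(np.std(roi) / np.mean(roi))

# Illustrative reconstruction vs. reference:
ref = np.array([1.0, 1.0, 2.0, 2.0])
img = np.array([1.1, 0.9, 2.1, 1.9])
err = mse(img, ref)
```

A lower MSE indicates better fidelity to the truth; a lower CV in a uniform region indicates lower noise, which is how the 3% and 10% advantages of the Bowsher prior are quantified.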

  17. Solutions for autonomy and reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Wilming, Wilhelm

    2011-07-01

    Stand-alone systems, whether solar home or pico solar systems, have reached a cost level at which they are an increasingly interesting option for wide-area development in grid-remote regions or for reconstruction where the previous grid infrastructure has been destroyed. (orig.)

  18. Sex differences in autonomic function following maximal exercise.

    Science.gov (United States)

    Kappus, Rebecca M; Ranadive, Sushant M; Yan, Huimin; Lane-Cordova, Abbi D; Cook, Marc D; Sun, Peng; Harvey, I Shevon; Wilund, Kenneth R; Woods, Jeffrey A; Fernhall, Bo

    2015-01-01

Heart rate variability (HRV), blood pressure variability (BPV) and heart rate recovery (HRR) are measures that provide insight into autonomic function. Maximal exercise can affect autonomic function, and it is unknown whether there are sex differences in autonomic recovery following exercise. Therefore, the purpose of this study was to determine sex differences in several measures of autonomic function and in the response following maximal exercise. Seventy-one (31 male and 40 female) healthy, nonsmoking, sedentary, normotensive subjects between the ages of 18 and 35 underwent measurements of HRV and BPV at rest and following a maximal exercise bout. HRR was measured at minutes one and two following maximal exercise. Males had significantly greater HRR following maximal exercise at both minute one and minute two; however, the difference between sexes was eliminated when controlling for VO2 peak. Males had significantly higher resting low-frequency BPV (BPV-LF) values compared to females, and these did not change significantly following exercise, whereas females had significantly increased BPV-LF values following acute maximal exercise. Although males and females both exhibited a significant decrease in low-frequency (HRV-LF) and high-frequency (HRV-HF) HRV with exercise, females had significantly higher HRV-HF values following exercise. Males had a significantly higher HRV LF/HF ratio at rest; however, both males and females significantly increased their LF/HF ratio following exercise. Premenopausal females exhibit a cardioprotective autonomic profile compared to age-matched males due to lower resting sympathetic activity and faster vagal reactivation following maximal exercise. Acute maximal exercise is a sufficient autonomic stressor to demonstrate sex differences in the critical post-exercise recovery period.

  19. Complementary frame reconstruction: a low-biased dynamic PET technique for low count density data in projection space

    International Nuclear Information System (INIS)

    Hong, Inki; Cho, Sanghee; Michel, Christian J; Casey, Michael E; Schaefferkoetter, Joshua D

    2014-01-01

A new data handling method is presented for improving the image noise distribution and reducing bias when reconstructing very short frames from low-count dynamic PET acquisitions. The new method, termed 'Complementary Frame Reconstruction' (CFR), involves the indirect formation of a count-limited emission image for a short frame through subtraction of two frames with longer acquisition times, where the short-frame data are excluded from the second long-frame data before reconstruction. This approach can be regarded as an alternative to the AML algorithm recently proposed by Nuyts et al as a method to reduce the bias of maximum likelihood expectation maximization (MLEM) reconstruction of count-limited data. CFR uses long-scan emission data to stabilize the reconstruction and avoids modification of algorithms such as MLEM. The subtraction between two long-frame images naturally allows negative voxel values and significantly reduces the bias introduced in the final image. Simulations based on phantom and clinical data were used to evaluate the accuracy of the reconstructed images in representing the true activity distribution. Applicability to determining the arterial input function in human and small-animal studies is also explored. In situations with limited count rate, e.g. pediatric applications, gated abdominal or cardiac studies, etc., or when using limited doses of short-lived isotopes such as 15O-water, the proposed method will likely be preferred over independent frame reconstruction to address bias and noise issues. (paper)
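The core CFR manipulation, image-space subtraction of a full frame and its complement, can be sketched as follows; the duration handling and toy values are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def cfr_short_frame(recon_full, recon_complement, t_full, t_short):
    """Complementary Frame Reconstruction idea: rather than reconstructing
    a count-starved short frame directly, reconstruct the full frame and
    the complement (full minus short) and subtract. The subtraction allows
    negative voxel values, avoiding the positivity bias MLEM shows on
    sparse data. Inputs are activity-rate images (counts per unit time)."""
    t_comp = t_full - t_short
    # Convert rate images back to counts, subtract, renormalise by duration
    return (recon_full * t_full - recon_complement * t_comp) / t_short

# Illustrative: a uniform rate of 5 over a 300 s full frame and its
# 290 s complement recovers the rate in the 10 s short frame.
full = np.full((4, 4), 5.0)
comp = np.full((4, 4), 5.0)
short = cfr_short_frame(full, comp, 300.0, 10.0)
```

Both inputs to the subtraction are well-conditioned long-frame reconstructions, which is why the short-frame estimate inherits their stability.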

  20. Algebraic reconstruction techniques for spectral reconstruction in diffuse optical tomography

    International Nuclear Information System (INIS)

    Brendel, Bernhard; Ziegler, Ronny; Nielsen, Tim

    2008-01-01

Reconstruction in diffuse optical tomography (DOT) necessitates solving the diffusion equation, which is nonlinear with respect to the parameters that have to be reconstructed. Currently applied solution methods are based on linearization of the equation. For spectral three-dimensional reconstruction, the emerging equation system is too large for direct inversion, but the application of iterative methods is feasible. The computational effort and speed of convergence of these iterative methods are crucial, since they determine the computation time of the reconstruction. In this paper, the iterative methods algebraic reconstruction technique (ART) and conjugate gradients (CG), as well as a new modified ART method, are investigated for spectral DOT reconstruction. The aim of the modified ART scheme is to speed up convergence by considering the specific conditions of spectral reconstruction. As a result, it converges much faster to favorable results than the conventional ART and CG methods.
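The plain ART (Kaczmarz row-action) scheme that the modified method builds on can be sketched as follows; this is the textbook form, not the paper's spectral modification:

```python
import numpy as np

def art(A, b, iterations=50, relaxation=1.0):
    """Basic ART (Kaczmarz) sweeps: project the current estimate onto the
    hyperplane of one measurement equation a_i . x = b_i at a time,
    cycling through all rows each iteration."""
    A = np.asarray(A, float)
    b = np.asarray(b, float)
    x = np.zeros(A.shape[1])
    for _ in range(iterations):
        for i in range(A.shape[0]):
            ai = A[i]
            # Row-action update: move x onto the i-th hyperplane
            x += relaxation * (b[i] - ai @ x) / (ai @ ai) * ai
    return x

# Small consistent system: ART converges to the exact solution.
A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = A @ np.array([1.0, 2.0])
x = art(A, b, iterations=200)
```

Convergence speed depends on the angles between the row hyperplanes; reordering or reweighting the rows for the structure of the spectral system is the kind of adaptation the modified ART scheme exploits.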

  1. Jet Vertex Charge Reconstruction

    CERN Document Server

    Nektarijevic, Snezana; The ATLAS collaboration

    2015-01-01

A newly developed algorithm called the jet vertex charge tagger, aimed at identifying the sign of the charge of jets containing $b$-hadrons, referred to as $b$-jets, is presented. In addition to the well-established track-based jet charge determination, this algorithm introduces the so-called \emph{jet vertex charge} reconstruction, which exploits the charge information associated with the displaced vertices within the jet. Furthermore, the charge of a soft muon contained in the jet is taken into account when available. All available information is combined into a multivariate discriminator. The algorithm has been developed on jets matched to generator-level $b$-hadrons provided by $t\bar{t}$ events simulated at $\sqrt{s} = 13$ TeV using the full ATLAS detector simulation and reconstruction.

  2. Eccentric exercise decreases maximal insulin action in humans

    DEFF Research Database (Denmark)

    Asp, Svend; Daugaard, J R; Kristiansen, S

    1996-01-01

Subjects participated in two euglycaemic clamps, performed in random order. One clamp was preceded 2 days earlier by one-legged eccentric exercise (post-eccentric exercise clamp (PEC)) and one was without the prior exercise (control clamp (CC)). 2. During PEC the maximal insulin-stimulated glucose uptake ... for all three clamp steps used (P ...). The maximal activity of glycogen synthase was identical in the two thighs for all clamp steps. 3. The glucose infusion rate (GIR) necessary to maintain euglycaemia during maximal insulin stimulation was lower during PEC compared with CC (15.7%, 81.3 +/- 3.2 vs. 96.4 +/- 8.8 mumol kg-1 min-1, P ...) ... maximal ...

  3. Maximize x(a - x)

    Science.gov (United States)

    Lange, L. H.

    1974-01-01

    Five different methods for determining the maximizing condition for x(a - x) are presented. Included is the ancient Greek version and a method attributed to Fermat. None of the proofs use calculus. (LS)
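One classical calculus-free route, completing the square, can be stated in a single line; this is an illustration in the spirit of the article, which may or may not coincide with one of its five methods:

```latex
x(a-x) \;=\; \frac{a^{2}}{4} \;-\; \left(x - \frac{a}{2}\right)^{2} \;\le\; \frac{a^{2}}{4},
\qquad \text{with equality exactly when } x = \tfrac{a}{2}.
```

Since the squared term is nonnegative and vanishes only at $x = a/2$, the product $x(a-x)$ is maximized there, with maximum value $a^2/4$.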

  4. List-mode PET image reconstruction for motion correction using the Intel XEON PHI co-processor

    Science.gov (United States)

    Ryder, W. J.; Angelis, G. I.; Bashar, R.; Gillam, J. E.; Fulton, R.; Meikle, S.

    2014-03-01

List-mode image reconstruction with motion correction is computationally expensive, as it requires projection of hundreds of millions of rays through a 3D array. To decrease reconstruction time it is possible to use symmetric multiprocessing computers or graphics processing units. The former can have high financial costs, while the latter can require refactoring of algorithms. The Xeon Phi is a new co-processor card with a Many Integrated Core architecture that can run 4 multiple-instruction, multiple-data threads per core, with each thread having a 512-bit single-instruction, multiple-data vector register. Thus, it is possible to run in the region of 220 threads simultaneously. The aim of this study was to investigate whether the Xeon Phi co-processor card is a viable alternative to an x86 Linux server for accelerating list-mode PET image reconstruction with motion correction. An existing list-mode image reconstruction algorithm with motion correction was ported to run on the Xeon Phi co-processor, with the multi-threading implemented using pthreads. There were no differences between images reconstructed using the Phi co-processor card and images reconstructed using the same algorithm run on a Linux server. However, the reconstruction runtimes were 3 times greater on the Phi than on the server. A new version of the image reconstruction algorithm was developed in C++ using OpenMP for multi-threading, and the Phi runtimes decreased to 1.67 times those of the host Linux server. Data transfer from the host to the co-processor card was found to be a rate-limiting step; this needs to be carefully considered in order to maximize runtime speeds. When considering the purchase price of a Linux workstation with a Xeon Phi co-processor card versus a top-of-the-range Linux server, the former is a cost-effective computation resource for list-mode image reconstruction. A multi-Phi workstation could be a viable alternative to cluster computers, at a lower cost, for medical imaging.

  5. Detector independent cellular automaton algorithm for track reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Kisel, Ivan; Kulakov, Igor; Zyzak, Maksym [Goethe Univ. Frankfurt am Main (Germany); Frankfurt Institute for Advanced Studies, Frankfurt am Main (Germany); GSI Helmholtzzentrum fuer Schwerionenforschung GmbH (Germany); Collaboration: CBM-Collaboration

    2013-07-01

Track reconstruction is one of the most challenging problems of data analysis in modern high energy physics (HEP) experiments, which have to process of the order of 10^7 events per second with high track multiplicity and density, registered by detectors of different types and, in many cases, located in a non-homogeneous magnetic field. Creating a reconstruction package common to all experiments is considered important in order to consolidate efforts. The cellular automaton (CA) track reconstruction approach has been used successfully in many HEP experiments. It is very simple, efficient, local and parallel. Moreover, it is intrinsically independent of the detector geometry and is a good candidate for common track reconstruction. The CA implementation for the CBM experiment has been generalized and applied to the ALICE ITS and STAR HFT detectors. Tests with simulated collisions have been performed. The track reconstruction efficiencies are at the level of 95% for the majority of the signal tracks for all detectors.

  6. Evaluation of the spline reconstruction technique for PET

    Energy Technology Data Exchange (ETDEWEB)

    Kastis, George A., E-mail: gkastis@academyofathens.gr; Kyriakopoulou, Dimitra [Research Center of Mathematics, Academy of Athens, Athens 11527 (Greece); Gaitanis, Anastasios [Biomedical Research Foundation of the Academy of Athens (BRFAA), Athens 11527 (Greece); Fernández, Yolanda [Centre d’Imatge Molecular Experimental (CIME), CETIR-ERESA, Barcelona 08950 (Spain); Hutton, Brian F. [Institute of Nuclear Medicine, University College London, London NW1 2BU (United Kingdom); Fokas, Athanasios S. [Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Cambridge CB30WA (United Kingdom)

    2014-04-15

Purpose: The spline reconstruction technique (SRT), based on the analytic formula for the inverse Radon transform, has been presented earlier in the literature. In this study, the authors present an improved formulation and numerical implementation of this algorithm and evaluate it in comparison to filtered backprojection (FBP). Methods: The SRT is based on the numerical evaluation of the Hilbert transform of the sinogram via an approximation in terms of "custom made" cubic splines. By restricting reconstruction only to object pixels and by utilizing certain mathematical symmetries, the authors achieve a reconstruction time comparable to that of FBP. The authors have implemented SRT in STIR and have evaluated this technique using simulated data from a clinical positron emission tomography (PET) system, as well as real data obtained from clinical and preclinical PET scanners. For the simulation studies, the authors have simulated sinograms of a point source and three digital phantoms. Using these sinograms, the authors have created realizations of Poisson noise at five noise levels. In addition to visual comparisons of the reconstructed images, the authors have determined contrast and bias for different regions of the phantoms as a function of noise level. For the real-data studies, sinograms of an 18F-FDG-injected mouse, a NEMA NU 4-2008 image quality phantom, and a Derenzo phantom have been acquired from a commercial PET system. The authors have determined: (a) coefficients of variation (COV) and contrast from the NEMA phantom, (b) contrast for the various sections of the Derenzo phantom, and (c) line profiles for the Derenzo phantom. Furthermore, the authors have acquired sinograms from a whole-body PET scan of an 18F-FDG-injected cancer patient, using the GE Discovery ST PET/CT system. SRT and FBP reconstructions of the thorax have been visually evaluated. Results: The results indicate an improvement in FWHM and FWTM in both simulated and real

  7. Model-based image reconstruction for four-dimensional PET

    International Nuclear Information System (INIS)

    Li Tianfang; Thorndyke, Brian; Schreibmann, Eduard; Yang Yong; Xing Lei

    2006-01-01

Positron emission tomography (PET) is useful in diagnosis and radiation treatment planning for a variety of cancers. For patients with cancers in the thoracic or upper abdominal region, respiratory motion produces large distortions in the tumor shape and size, affecting the accuracy of both diagnosis and treatment. Four-dimensional (4D, gated) PET aims to reduce the motion artifacts and to provide accurate measurements of the tumor volume and the tracer concentration. A major issue in 4D PET is the lack of statistics. Since the collected photons are divided into several frames in a 4D PET scan, the quality of each reconstructed frame degrades as the number of frames increases. The increased noise in each frame heavily degrades the quantitative accuracy of PET imaging. In this work, we propose a method to enhance the performance of 4D PET by developing a new technique of 4D PET reconstruction that incorporates an organ motion model derived from 4D-CT images. The method is based on the well-known maximum-likelihood expectation-maximization (ML-EM) algorithm. During the forward- and backward-projection processes of the ML-EM iterations, all projection data acquired at different phases are combined to update the emission map with the aid of the deformable model; the statistics are therefore greatly improved. The proposed algorithm was first evaluated with computer simulations using a mathematical dynamic phantom. An experiment with a moving physical phantom was then carried out to demonstrate the accuracy of the proposed method and the increase in signal-to-noise ratio over three-dimensional PET. Finally, the 4D PET reconstruction was applied to a patient case.
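The ML-EM core that the proposed 4D method extends can be sketched in its motion-free form; the tiny system matrix below is an illustrative assumption, and the paper's contribution (warping each gated frame with the 4D-CT motion model inside the projections) is not shown:

```python
import numpy as np

def mlem(A, y, iterations=200):
    """Standard ML-EM update for emission tomography:
        lambda <- lambda / (A^T 1) * A^T (y / (A lambda)).
    A maps the emission image to projection data; y are measured counts."""
    A = np.asarray(A, float)
    y = np.asarray(y, float)
    sens = A.T @ np.ones(A.shape[0])      # sensitivity image A^T 1
    lam = np.ones(A.shape[1])             # uniform initial estimate
    for _ in range(iterations):
        proj = A @ lam                    # forward projection
        lam *= (A.T @ (y / proj)) / sens  # multiplicative EM update
    return lam

# Tiny 2-pixel, 2-ray system with noise-free data:
A = np.array([[1.0, 0.0], [0.5, 0.5]])
truth = np.array([2.0, 4.0])
lam = mlem(A, A @ truth)
```

In the 4D extension, the forward and backward projections additionally compose with a per-phase deformation, so every gate's counts contribute to one emission map; that pooling is what restores the statistics lost by gating.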

  8. ASSESSMENT OF CLINICAL IMAGE QUALITY IN PAEDIATRIC ABDOMINAL CT EXAMINATIONS: DEPENDENCY ON THE LEVEL OF ADAPTIVE STATISTICAL ITERATIVE RECONSTRUCTION (ASiR) AND THE TYPE OF CONVOLUTION KERNEL.

    Science.gov (United States)

    Larsson, Joel; Båth, Magnus; Ledenius, Kerstin; Caisander, Håkan; Thilander-Klang, Anne

    2016-06-01

    The purpose of this study was to investigate the effect of different combinations of convolution kernel and the level of Adaptive Statistical iterative Reconstruction (ASiR™) on diagnostic image quality as well as visualisation of anatomical structures in paediatric abdominal computed tomography (CT) examinations. Thirty-five paediatric patients with abdominal pain with non-specified pathology undergoing abdominal CT were included in the study. Transaxial stacks of 5-mm-thick images were retrospectively reconstructed at various ASiR levels, in combination with three convolution kernels. Four paediatric radiologists rated the diagnostic image quality and the delineation of six anatomical structures in a blinded randomised visual grading study. Image quality at a given ASiR level was found to be dependent on the kernel, and a more edge-enhancing kernel benefitted from a higher ASiR level. An ASiR level of 70 % together with the Soft™ or Standard™ kernel was suggested to be the optimal combination for paediatric abdominal CT examinations. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  9. Pediatric Lower Extremity Lawn Mower Injuries and Reconstruction: Retrospective 10-Year Review at a Level 1 Trauma Center.

    Science.gov (United States)

    Branch, Leslie G; Crantford, John C; Thompson, James T; Tannan, Shruti C

    2017-11-01

    From 2004 to 2013, there were 9341 lawn mower injuries in children under 20 years old. The incidence of lawn mower injuries in children has not decreased since 1990 despite implementation of various different prevention strategies. In this report, the authors review the results of pediatric lawn mower-related lower-extremity injuries treated at a tertiary care referral center as well as review the overall literature. A retrospective review was performed at a level 1 trauma center over a 10-year period (2005-2015). Patients younger than 18 years who presented to the emergency room with lower extremity lawn mower injuries were included. Of the 27 patients with lower-extremity lawn mower injuries during this period, the mean age at injury was 5.5 years and Injury Severity Score was 7.2. Most (85%) patients were boys and the predominant type of mower causing injury was a riding lawn mower (96%). Injury occurred in patients who were bystanders in 78%, passengers in 11%, and operators in 11%. Mean length of stay was 12.2 days, and mean time to reconstruction was 7.9 days. Mean number of surgical procedures per patient was 4.1. Amputations occurred in 15 (56%) cases with the most common level of amputation being distal to the metatarsophalangeal joint (67%). Reconstructive procedures ranged from direct closure (41%) to free tissue transfer (7%). Major complications included infection (7%), wound dehiscence (11%), and delayed wound healing (15%). Mean follow up was 23.6 months and 100% of the patients were ambulatory after injury. The subgroup of patients with the most severe injuries, highest number of amputations, and need for overall surgical procedures were patients aged 2 to 5 years. A review of the literature also showed consistent findings. This study demonstrates the danger and morbidity that lawn mowers present to the pediatric population, particularly children aged 2 to 5 years. Every rung of the so-called reconstructive ladder is used in caring for these

  10. Reconstruction and modernization of Novi Han radioactive waste repository

    International Nuclear Information System (INIS)

    Kolev, I.; Dralchev, D.; Spasov, P.; Jordanov, M.

    2000-01-01

    This report briefly presents the most important issues of the study performed by EQE - Bulgaria. The objectives of the study are the development of conceptual solutions for construction of the following facilities in the Novi Han radioactive waste repository: an operational storage for unconditioned high-level spent sources; new temporary buildings over the existing radioactive waste storage facilities; a rain-water drainage system, etc. The study also includes the engineering solutions for conservation of the existing facilities, currently filled with high-level spent sources. A 'Program for reconstruction and modernization' has been created, including an analysis of some regulatory aspects concerning the program's implementation. In conclusion, the engineering problems of the Novi Han repository are clear and appropriate solutions are available. They can be implemented in both the 'small' and the 'large' reconstruction cases. In either case, the reconstruction project should start with the construction of a new site infrastructure. Reconstruction and modernization of the Novi Han radioactive waste repository is the only way to improve the management and safety of radioactive waste from medicine, industry and scientific research in Bulgaria

  11. Structural biomechanics of the craniomaxillofacial skeleton under maximal masticatory loading: Inferences and critical analysis based on a validated computational model.

    Science.gov (United States)

    Pakdel, Amir R; Whyne, Cari M; Fialkov, Jeffrey A

    2017-06-01

    The trend towards optimizing stabilization of the craniomaxillofacial skeleton (CMFS) with the minimum amount of fixation required to achieve union, and away from maximizing rigidity, requires a quantitative understanding of craniomaxillofacial biomechanics. This study uses computational modeling to quantify the structural biomechanics of the CMFS under maximal physiologic masticatory loading. Using an experimentally validated subject-specific finite element (FE) model of the CMFS, the patterns of stress and strain distribution as a result of physiological masticatory loading were calculated. The trajectories of the stresses were plotted to delineate compressive and tensile regimes over the entire CMFS volume. The lateral maxilla was found to be the primary vertical buttress under maximal bite force loading, with much smaller involvement of the naso-maxillary buttress. There was no evidence that the pterygo-maxillary region is a buttressing structure, counter to classical buttress theory. The stresses at the zygomatic sutures suggest that two-point fixation of zygomatic complex fractures may be sufficient for fixation under bite force loading. The current experimentally validated biomechanical FE model of the CMFS is a practical tool for in silico optimization of current practice techniques and may be used as a foundation for the development of design criteria for future technologies for the treatment of CMFS injury and disease. Copyright © 2017 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.

  12. Bayesian image reconstruction in SPECT using higher order mechanical models as priors

    International Nuclear Information System (INIS)

    Lee, S.J.; Gindi, G.; Rangarajan, A.

    1995-01-01

    While the ML-EM (maximum-likelihood expectation-maximization) algorithm for image reconstruction in emission tomography is unstable due to the ill-posed nature of the problem, Bayesian reconstruction methods overcome this instability by introducing prior information, often in the form of a spatial smoothness regularizer. More elaborate forms of smoothness constraints may be used to extend the role of the prior beyond that of a stabilizer in order to capture actual spatial information about the object. Previously proposed forms of such prior distributions were based on the assumption of a piecewise constant source distribution. Here, the authors propose an extension to a piecewise linear model, the weak plate, which is more expressive than the piecewise constant model. The weak plate prior not only preserves edges but also allows for piecewise ramp-like regions in the reconstruction. Indeed, for the application in SPECT, such ramp-like regions are observed in ground-truth source distributions in the form of primate autoradiographs of rCBF radionuclides. To incorporate the weak plate prior in a MAP approach, the authors model the prior as a Gibbs distribution and use a GEM formulation for the optimization. They compare the quantitative performance of the ML-EM algorithm, a GEM algorithm with a prior favoring piecewise constant regions, and a GEM algorithm with the weak plate prior. Pointwise and regional bias and variance of ensemble image reconstructions are used as indications of image quality. The results show that the weak plate and membrane priors exhibit improved bias and variance relative to ML-EM techniques
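    The conventional ML-EM update that this record (and several others below) builds on can be sketched in a few lines. The following is an illustrative toy implementation, not the authors' GEM/weak-plate code; the 2x2 system matrix and activities are invented for the round-trip check:

    ```python
    import numpy as np

    def ml_em(A, y, n_iter=2000):
        """Conventional ML-EM update for emission tomography (no prior term).

        A : (m, n) system matrix; A[i, j] is the probability that an emission
            in voxel j is detected in projection bin i.
        y : (m,) measured counts.
        """
        x = np.ones(A.shape[1])              # strictly positive initial estimate
        sens = A.sum(axis=0)                 # sensitivity image, sum_i A[i, j]
        for _ in range(n_iter):
            proj = A @ x                     # forward projection
            x *= (A.T @ (y / np.maximum(proj, 1e-12))) / np.maximum(sens, 1e-12)
        return x

    # Toy 2-bin, 2-voxel system with noise-free data.
    A = np.array([[0.9, 0.1],
                  [0.1, 0.9]])
    x_true = np.array([4.0, 1.0])
    x_hat = ml_em(A, A @ x_true)
    ```

    A MAP/GEM variant such as the weak-plate method would add the gradient of the log-prior to this multiplicative update; the plain version above is the unstable baseline the record compares against.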

  13. Reconstruction of financial networks for robust estimation of systemic risk

    Science.gov (United States)

    Mastromatteo, Iacopo; Zarinelli, Elia; Marsili, Matteo

    2012-03-01

    In this paper we estimate the propagation of liquidity shocks through interbank markets when the information about the underlying credit network is incomplete. We show that techniques such as maximum entropy currently used to reconstruct credit networks severely underestimate the risk of contagion by assuming a trivial (fully connected) topology, a type of network structure which can be very different from the one empirically observed. We propose an efficient message-passing algorithm to explore the space of possible network structures and show that a correct estimation of the network degree of connectedness leads to more reliable estimations for systemic risk. Such an algorithm is also able to produce maximally fragile structures, providing a practical upper bound for the risk of contagion when the actual network structure is unknown. We test our algorithm on ensembles of synthetic data encoding some features of real financial networks (sparsity and heterogeneity), finding that more accurate estimations of risk can be achieved. Finally we find that this algorithm can be used to control the amount of information that regulators need to require from banks in order to sufficiently constrain the reconstruction of financial networks.
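    The fully connected maximum-entropy baseline that the paper argues underestimates contagion is commonly computed by iterative proportional fitting (RAS) of an independence estimate to the banks' totals. A minimal sketch, with invented bank totals (this is the baseline being criticized, not the authors' message-passing algorithm):

    ```python
    import numpy as np

    def max_entropy_network(assets, liabilities, n_iter=500):
        """Dense maximum-entropy estimate of an interbank exposure matrix.

        assets[i]      : total interbank assets of bank i (target row sum)
        liabilities[j] : total interbank liabilities of bank j (target column sum)
        The diagonal is forced to zero (a bank does not lend to itself) and the
        remaining entries are rebalanced by iterative proportional fitting (RAS).
        The result is the trivial fully connected topology.
        """
        assets = np.asarray(assets, float)
        liabilities = np.asarray(liabilities, float)
        X = np.outer(assets, liabilities)          # independence (max-entropy) start
        np.fill_diagonal(X, 0.0)                   # zeros are preserved by scaling
        for _ in range(n_iter):
            X *= (assets / np.maximum(X.sum(axis=1), 1e-12))[:, None]       # rows
            X *= (liabilities / np.maximum(X.sum(axis=0), 1e-12))[None, :]  # cols
        return X

    # Invented totals; row and column totals must balance (200 = 200).
    assets = [90.0, 60.0, 50.0]
    liabilities = [70.0, 70.0, 60.0]
    X = max_entropy_network(assets, liabilities)
    ```

    A sparse, heterogeneous real network can satisfy the same margins with far fewer links, which is why this dense reconstruction biases contagion estimates downward.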

  14. Reconstruction of financial networks for robust estimation of systemic risk

    International Nuclear Information System (INIS)

    Mastromatteo, Iacopo; Zarinelli, Elia; Marsili, Matteo

    2012-01-01

    In this paper we estimate the propagation of liquidity shocks through interbank markets when the information about the underlying credit network is incomplete. We show that techniques such as maximum entropy currently used to reconstruct credit networks severely underestimate the risk of contagion by assuming a trivial (fully connected) topology, a type of network structure which can be very different from the one empirically observed. We propose an efficient message-passing algorithm to explore the space of possible network structures and show that a correct estimation of the network degree of connectedness leads to more reliable estimations for systemic risk. Such an algorithm is also able to produce maximally fragile structures, providing a practical upper bound for the risk of contagion when the actual network structure is unknown. We test our algorithm on ensembles of synthetic data encoding some features of real financial networks (sparsity and heterogeneity), finding that more accurate estimations of risk can be achieved. Finally we find that this algorithm can be used to control the amount of information that regulators need to require from banks in order to sufficiently constrain the reconstruction of financial networks

  15. Multi-modality image reconstruction for dual-head small-animal PET

    International Nuclear Information System (INIS)

    Huang, Chang-Han; Chou, Cheng-Ying

    2015-01-01

    Hybrid positron emission tomography/computed tomography (PET/CT) and positron emission tomography/magnetic resonance imaging (PET/MRI) have become routine practice in clinics. The applications of multi-modality imaging can also benefit research advances. Consequently, a dedicated small-animal imaging system like the dual-head small-animal PET (DHAPET), which possesses the advantages of high detection sensitivity and high resolution, can exploit the structural information from CT or MRI. It should be noted that the special detector arrangement in DHAPET leads to severe data truncation, thereby degrading image quality. We propose to take advantage of anatomical priors and total variation (TV) minimization methods to reconstruct the PET activity distribution from incomplete measurement data. The objective is to solve a penalized least-squares function consisting of a data fidelity term, a TV norm and median root priors. In this work, we employed the splitting-based fast iterative shrinkage/thresholding algorithm to split the smooth and non-smooth functions in the convex optimization problem. Our simulation studies validated that the images reconstructed by use of the proposed method can outperform those obtained by use of conventional expectation maximization algorithms or without considering the anatomical prior information. Additionally, the convergence rate is also accelerated.
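    To make the penalized least-squares objective concrete, here is a deliberately simplified sketch: the TV term is smoothed and minimized by plain gradient descent on a toy 1-D operator, rather than by the authors' splitting-based FISTA, and the anatomical/median-root priors are omitted. All operators and parameters are invented:

    ```python
    import numpy as np

    def tv_pls(A, y, lam=0.02, eps=1e-3, step=None, n_iter=5000):
        """Minimize 0.5*||A x - y||^2 + lam * sum_i sqrt((x[i+1]-x[i])^2 + eps)
        by gradient descent on a 1-D signal (smoothed total variation)."""
        if step is None:
            # Conservative step from a Lipschitz bound on the gradient.
            step = 1.0 / (np.linalg.norm(A, 2) ** 2 + 4.0 * lam / np.sqrt(eps))
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            d = np.diff(x)
            w = d / np.sqrt(d * d + eps)      # derivative of the smoothed |d|
            g_tv = np.zeros_like(x)
            g_tv[:-1] -= w                    # each x[i] touches two differences
            g_tv[1:] += w
            x -= step * (A.T @ (A @ x - y) + lam * g_tv)
        return x

    # Piecewise-constant signal seen through a simple mixing operator.
    n = 40
    x_true = np.zeros(n)
    x_true[10:25] = 1.0
    A = np.eye(n) + 0.5 * np.eye(n, k=1)
    x_hat = tv_pls(A, A @ x_true)
    ```

    The TV penalty favors piecewise-constant solutions, which is what makes it attractive for truncated-data geometries such as DHAPET.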

  16. Utility Maximization in Nonconvex Wireless Systems

    CERN Document Server

    Brehmer, Johannes

    2012-01-01

    This monograph formulates a framework for modeling and solving utility maximization problems in nonconvex wireless systems. First, a model for utility optimization in wireless systems is defined. The model is general enough to encompass a wide array of system configurations and performance objectives. Based on the general model, a set of methods for solving utility maximization problems is developed. The development is based on a careful examination of the properties that are required for the application of each method. The focus is on problems whose initial formulation does not allow for a solution by standard convex methods. Solution approaches that take into account the nonconvexities inherent to wireless systems are discussed in detail. The monograph concludes with two case studies that demonstrate the application of the proposed framework to utility maximization in multi-antenna broadcast channels.

  17. Performance comparison between total variation (TV)-based compressed sensing and statistical iterative reconstruction algorithms

    International Nuclear Information System (INIS)

    Tang Jie; Nett, Brian E; Chen Guanghong

    2009-01-01

    Of all available reconstruction methods, statistical iterative reconstruction algorithms appear particularly promising since they enable accurate physical noise modeling. The newly developed compressive sampling/compressed sensing (CS) algorithm has shown the potential to accurately reconstruct images from highly undersampled data. The CS algorithm can be implemented in the statistical reconstruction framework as well. In this study, we compared the performance of two standard statistical reconstruction algorithms (penalized weighted least squares and q-GGMRF) to the CS algorithm. In assessing the image quality using these iterative reconstructions, it is critical to utilize realistic background anatomy as the reconstruction results are object dependent. A cadaver head was scanned on a Varian Trilogy system at different dose levels. Several figures of merit, including the relative root mean square error and a quality factor which accounts for the noise performance and the spatial resolution, were introduced to objectively evaluate reconstruction performance. A comparison is presented between the three algorithms at a constant undersampling factor and several dose levels. To facilitate this comparison, the original CS method was formulated in the framework of the statistical image reconstruction algorithms. Important conclusions of the measurements from our studies are that (1) for realistic neuro-anatomy, over 100 projections are required to avoid streak artifacts in the reconstructed images even with CS reconstruction, (2) regardless of the algorithm employed, it is beneficial to distribute the total dose to more views as long as each view remains quantum noise limited and (3) the total variation-based CS method is not appropriate for very low dose levels because while it can mitigate streaking artifacts, the images exhibit patchy behavior, which is potentially harmful for medical diagnosis.

  18. ACTS: from ATLAS software towards a common track reconstruction software

    Science.gov (United States)

    Gumpert, C.; Salzburger, A.; Kiehn, M.; Hrdinka, J.; Calace, N.; ATLAS Collaboration

    2017-10-01

    Reconstruction of charged particles’ trajectories is a crucial task for most particle physics experiments. The high instantaneous luminosity achieved at the LHC leads to a high number of proton-proton collisions per bunch crossing, which has put the track reconstruction software of the LHC experiments through a thorough test. Preserving track reconstruction performance under increasingly difficult experimental conditions, while keeping the usage of computational resources at a reasonable level, is an inherent problem for many HEP experiments. Exploiting concurrent algorithms and using multivariate techniques for track identification are the primary strategies to achieve that goal. Starting from current ATLAS software, the ACTS project aims to encapsulate track reconstruction software into a generic, framework- and experiment-independent software package. It provides a set of high-level algorithms and data structures for performing track reconstruction tasks as well as fast track simulation. The software is developed with special emphasis on thread-safety to support parallel execution of the code and data structures are optimised for vectorisation to speed up linear algebra operations. The implementation is agnostic to the details of the detection technologies and magnetic field configuration which makes it applicable to many different experiments.

  19. Non-perturbative renormalization in coordinate space for N{sub f}=2 maximally twisted mass fermions with tree-level Symanzik improved gauge action

    Energy Technology Data Exchange (ETDEWEB)

    Cichy, Krzysztof [DESY, Zeuthen (Germany). NIC; Adam Mickiewicz Univ., Poznan (Poland). Faculty of Physics; Jansen, Karl [DESY, Zeuthen (Germany). NIC; Korcyl, Piotr [DESY, Zeuthen (Germany). NIC; Jagiellonian Univ., Krakow (Poland). M. Smoluchowski Inst. of Physics

    2012-07-15

    We present results of a lattice QCD application of a coordinate space renormalization scheme for the extraction of renormalization constants for flavour non-singlet bilinear quark operators. The method consists in the analysis of the small-distance behaviour of correlation functions in Euclidean space and has several theoretical and practical advantages, in particular: it is gauge invariant, easy to implement and has relatively low computational cost. The values of renormalization constants in the X-space scheme can be converted to the MS scheme via 4-loop continuum perturbative formulae. Our results for N{sub f}=2 maximally twisted mass fermions with tree-level Symanzik improved gauge action are compared to the ones from the RI-MOM scheme and show full agreement with this method. (orig.)

  20. Reconstructing the CT number array from gray-level images and its application in PACS

    Science.gov (United States)

    Chen, Xu; Zhuang, Tian-ge; Wu, Wei

    2001-08-01

    Although DICOM-compliant computed tomography now prevails in medical imaging, some non-compliant scanners remain, from which the raw data can hardly be obtained or properly interpreted due to proprietary image formats. Under such conditions, frame grabbers are usually used to capture CT images, but the captured results cannot be freely adjusted by radiologists the way the original CT number array can. To alleviate this inflexibility, a new method is presented in this paper to reconstruct the array of CT numbers from several gray-level images acquired under different window settings. Its feasibility is investigated, and a few tips are put forward to correct the errors caused respectively by the 'Border Effect' and by some hardware problems. The accuracy analysis proves it a good substitute for original CT number array acquisition. This method has already been successfully used in our newly developed PACS and accepted by the radiologists in clinical use.
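    The core idea can be illustrated by inverting the linear window/level display mapping. The sketch below assumes HU values are mapped linearly onto [0, 255] across the window and clipped outside it; the window settings are invented, 8-bit quantization and the record's 'Border Effect' corrections are ignored, and saturated pixels from one window are filled from another:

    ```python
    import numpy as np

    def window_to_hu(gray, level, width, gmax=255.0):
        """Invert the linear window/level mapping for one captured image.
        Saturated pixels (0 or gmax) carry no HU information -> NaN."""
        gray = np.asarray(gray, float)
        hu = gray / gmax * width + (level - width / 2.0)
        hu[(gray <= 0.0) | (gray >= gmax)] = np.nan
        return hu

    def merge_windows(estimates):
        """Combine per-window HU estimates, ignoring saturated (NaN) pixels."""
        return np.nanmean(np.stack(estimates), axis=0)

    def hu_to_window(hu, level, width, gmax=255.0):
        """Forward display mapping, used here only for a round-trip check."""
        g = (np.asarray(hu, float) - (level - width / 2.0)) / width * gmax
        return np.clip(g, 0.0, gmax)

    hu_true = np.array([-400.0, 0.0, 40.0, 80.0, 400.0, 1000.0])
    g_soft = hu_to_window(hu_true, level=40, width=400)     # soft-tissue window
    g_bone = hu_to_window(hu_true, level=300, width=1500)   # bone window
    hu_rec = merge_windows([window_to_hu(g_soft, 40, 400),
                            window_to_hu(g_bone, 300, 1500)])
    ```

    Each window setting recovers HU values only inside its own range, which is why several captures under different settings are needed to rebuild the full CT number array.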

  1. SU-D-18A-02: Towards Real-Time On-Board Volumetric Image Reconstruction for Intrafraction Target Verification in Radiation Therapy

    International Nuclear Information System (INIS)

    Xu, X; Iliopoulos, A; Zhang, Y; Pitsianis, N; Sun, X; Yin, F; Ren, L

    2014-01-01

    Purpose: To expedite on-board volumetric image reconstruction from limited-angle kV-MV projections for intrafraction verification. Methods: A limited-angle intrafraction verification (LIVE) system has recently been developed for real-time volumetric verification of moving targets, using limited-angle kV-MV projections. Currently, it is challenged by the intensive computational load of the prior-knowledge-based reconstruction method. To accelerate LIVE, we restructure the software pipeline to make it adaptable to model and algorithm parameter changes, while enabling efficient utilization of rapidly advancing, modern computer architectures. In particular, an innovative two-level parallelization scheme has been designed: At the macroscopic level, data and operations are adaptively partitioned, taking into account algorithmic parameters and the processing capacity or constraints of underlying hardware. The control and data flows of the pipeline are scheduled in such a way as to maximize operation concurrency and minimize total processing time. At the microscopic level, the partitioned functions act as independent modules, operating on data partitions in parallel. Each module is pre-parallelized and optimized for multi-core processors (CPUs) and graphics processing units (GPUs). Results: We present results from a parallel prototype, where most of the controls and module parallelization are carried out via Matlab and its Parallel Computing Toolbox. The reconstruction is 5 times faster on a data-set of twice the size, compared to recently reported results, without compromising on algorithmic optimization control. Conclusion: The prototype implementation and its results have served to assess the efficacy of our system concept. While a production implementation will yield much higher processing rates by approaching full-capacity utilization of CPUs and GPUs, some mutual constraints between algorithmic flow and architecture specifics remain. Based on a careful analysis
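    The two-level scheme (macroscopic partitioning and scheduling, microscopic per-partition modules) can be sketched generically. This is a hypothetical illustration, not the authors' Matlab/Parallel Computing Toolbox implementation; the partition count and the stand-in module are invented:

    ```python
    import numpy as np
    from concurrent.futures import ThreadPoolExecutor

    def process_partition(block):
        """Microscopic level: a stand-in for one pre-parallelized module that
        operates on its own data partition (real modules would be optimized
        for CPUs/GPUs and release the GIL, as NumPy kernels do)."""
        return block * 2.0

    def run_pipeline(data, n_workers=4):
        """Macroscopic level: partition data according to worker capacity,
        dispatch the partitions concurrently, and reassemble results in order."""
        parts = np.array_split(data, n_workers)
        with ThreadPoolExecutor(max_workers=n_workers) as pool:
            results = list(pool.map(process_partition, parts))
        return np.concatenate(results)

    projections = np.arange(12.0)
    out = run_pipeline(projections)
    ```

    The design point is that scheduling and partitioning live at the outer level, so swapping in a different reconstruction module or hardware profile does not disturb the rest of the pipeline.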

  2. A Late Pleistocene sea level stack

    OpenAIRE

    Spratt Rachel M; Lisiecki Lorraine E

    2016-01-01

    Late Pleistocene sea level has been reconstructed from ocean sediment core data using a wide variety of proxies and models. However, the accuracy of individual reconstructions is limited by measurement error, local variations in salinity and temperature, and assumptions particular to each technique. Here we present a sea level stack (average) which increases the signal-to-noise ratio of individual reconstructions. Specifically, we perform principal componen...

  3. FUSE: a profit maximization approach for functional summarization of biological networks.

    Science.gov (United States)

    Seah, Boon-Siew; Bhowmick, Sourav S; Dewey, C Forbes; Yu, Hanry

    2012-03-21

    The availability of large-scale curated protein interaction datasets has given rise to the opportunity to investigate higher level organization and modularity within the protein interaction network (PPI) using graph theoretic analysis. Despite the recent progress, systems level analysis of PPIs remains a daunting task as it is challenging to make sense out of the deluge of high-dimensional interaction data. Specifically, techniques that automatically abstract and summarize PPIs at multiple resolutions to provide high level views of its functional landscape are still lacking. We present a novel data-driven and generic algorithm called FUSE (Functional Summary Generator) that generates functional maps of a PPI at different levels of organization, from broad process-process level interactions to in-depth complex-complex level interactions, through a profit maximization approach that exploits Minimum Description Length (MDL) principle to maximize information gain of the summary graph while satisfying the level of detail constraint. We evaluate the performance of FUSE on several real-world PPIs. We also compare FUSE to state-of-the-art graph clustering methods with GO term enrichment by constructing the biological process landscape of the PPIs. Using AD network as our case study, we further demonstrate the ability of FUSE to quickly summarize the network and identify many different processes and complexes that regulate it. Finally, we study the higher-order connectivity of the human PPI. By simultaneously evaluating interaction and annotation data, FUSE abstracts higher-order interaction maps by reducing the details of the underlying PPI to form a functional summary graph of interconnected functional clusters. Our results demonstrate its effectiveness and superiority over state-of-the-art graph clustering methods with GO term enrichment.

  4. Application of up-sampling and resolution scaling to Fresnel reconstruction of digital holograms.

    Science.gov (United States)

    Williams, Logan A; Nehmetallah, Georges; Aylo, Rola; Banerjee, Partha P

    2015-02-20

    Fresnel transform implementation methods using numerical preprocessing techniques are investigated in this paper. First, it is shown that up-sampling dramatically reduces the minimum reconstruction distance requirements and allows maximal signal recovery by eliminating aliasing artifacts which typically occur at distances much less than the Rayleigh range of the object. Second, zero-padding is employed to arbitrarily scale numerical resolution for the purpose of resolution matching multiple holograms, where each hologram is recorded using dissimilar geometric or illumination parameters. Such preprocessing yields numerical resolution scaling at any distance. Both techniques are extensively illustrated using experimental results.
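    Both observations follow from the standard sampling relations of the single-FFT discrete Fresnel transform, sketched below. The formulas are the textbook ones (reconstruction pitch lambda*z/(N*dx), aliasing-free minimum distance N*dx^2/lambda); the wavelength, distance and sensor pitch are illustrative, and the paper's exact criteria may differ:

    ```python
    import numpy as np

    def fresnel_pixel_pitch(wavelength, z, n_samples, dx):
        """Reconstruction-plane pixel pitch of the single-FFT discrete Fresnel
        transform: lambda*z/(N*dx). Zero-padding (larger N) refines it, which
        is how resolution matching between dissimilar holograms is achieved."""
        return wavelength * z / (n_samples * dx)

    def min_fresnel_distance(wavelength, n_samples, dx):
        """Aliasing-free minimum distance, z_min = N*dx^2/lambda. Up-sampling
        by a factor u (N -> u*N, dx -> dx/u) lowers z_min by the factor u."""
        return n_samples * dx ** 2 / wavelength

    lam, z, dx = 633e-9, 0.30, 6.45e-6      # HeNe wavelength, 30 cm, CCD pitch
    p_raw = fresnel_pixel_pitch(lam, z, 1024, dx)
    p_pad = fresnel_pixel_pitch(lam, z, 2048, dx)     # 2x zero-padding
    z_raw = min_fresnel_distance(lam, 1024, dx)
    z_up = min_fresnel_distance(lam, 4096, dx / 4)    # 4x up-sampling
    ```

    Zero-padding doubles N and therefore halves the output pitch, while 4x up-sampling cuts the minimum reconstruction distance by a factor of 4, matching the two preprocessing roles described in the abstract.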

  5. Aging and loss decision making: increased risk aversion and decreased use of maximizing information, with correlated rationality and value maximization.

    Science.gov (United States)

    Kurnianingsih, Yoanna A; Sim, Sam K Y; Chee, Michael W L; Mullette-Gillman, O'Dhaniel A

    2015-01-01

    We investigated how adult aging specifically alters economic decision-making, focusing on examining alterations in uncertainty preferences (willingness to gamble) and choice strategies (what gamble information influences choices) within both the gains and losses domains. Within each domain, participants chose between certain monetary outcomes and gambles with uncertain outcomes. We examined preferences by quantifying how uncertainty modulates choice behavior as if altering the subjective valuation of gambles. We explored age-related preferences for two types of uncertainty, risk, and ambiguity. Additionally, we explored how aging may alter what information participants utilize to make their choices by comparing the relative utilization of maximizing and satisficing information types through a choice strategy metric. Maximizing information was the ratio of the expected value of the two options, while satisficing information was the probability of winning. We found age-related alterations of economic preferences within the losses domain, but no alterations within the gains domain. Older adults (OA; 61-80 years old) were significantly more uncertainty averse for both risky and ambiguous choices. OA also exhibited choice strategies with decreased use of maximizing information. Within OA, we found a significant correlation between risk preferences and choice strategy. This linkage between preferences and strategy appears to derive from a convergence to risk neutrality driven by greater use of the effortful maximizing strategy. As utility maximization and value maximization intersect at risk neutrality, this result suggests that OA are exhibiting a relationship between enhanced rationality and enhanced value maximization. While there was variability in economic decision-making measures within OA, these individual differences were unrelated to variability within examined measures of cognitive ability. Our results demonstrate that aging alters economic decision-making for

  6. Aging and loss decision making: increased risk aversion and decreased use of maximizing information, with correlated rationality and value maximization

    Directory of Open Access Journals (Sweden)

    Yoanna Arlina Kurnianingsih

    2015-05-01

    Full Text Available We investigated how adult aging specifically alters economic decision-making, focusing on examining alterations in uncertainty preferences (willingness to gamble) and choice strategies (what gamble information influences choices) within both the gains and losses domains. Within each domain, participants chose between certain monetary outcomes and gambles with uncertain outcomes. We examined preferences by quantifying how uncertainty modulates choice behavior as if altering the subjective valuation of gambles. We explored age-related preferences for two types of uncertainty, risk and ambiguity. Additionally, we explored how aging may alter what information participants utilize to make their choices by comparing the relative utilization of maximizing and satisficing information types through a choice strategy metric. Maximizing information was the ratio of the expected value of the two options, while satisficing information was the probability of winning. We found age-related alterations of economic preferences within the losses domain, but no alterations within the gains domain. Older adults (OA; 61 to 80 years old) were significantly more uncertainty averse for both risky and ambiguous choices. OA also exhibited choice strategies with decreased use of maximizing information. Within OA, we found a significant correlation between risk preferences and choice strategy. This linkage between preferences and strategy appears to derive from a convergence to risk neutrality driven by greater use of the effortful maximizing strategy. As utility maximization and value maximization intersect at risk neutrality, this result suggests that OA are exhibiting a relationship between enhanced rationality and enhanced value maximization. While there was variability in economic decision-making measures within OA, these individual differences were unrelated to variability within examined measures of cognitive ability. Our results demonstrate that aging alters economic

  7. Knowledge-based iterative model reconstruction: comparative image quality and radiation dose with a pediatric computed tomography phantom

    International Nuclear Information System (INIS)

    Ryu, Young Jin; Choi, Young Hun; Cheon, Jung-Eun; Kim, Woo Sun; Kim, In-One; Ha, Seongmin

    2016-01-01

    CT of pediatric phantoms can provide useful guidance to the optimization of knowledge-based iterative reconstruction CT. To compare radiation dose and image quality of CT images obtained at different radiation doses reconstructed with knowledge-based iterative reconstruction, hybrid iterative reconstruction and filtered back-projection. We scanned a 5-year anthropomorphic phantom at seven levels of radiation. We then reconstructed CT data with knowledge-based iterative reconstruction (iterative model reconstruction [IMR] levels 1, 2 and 3; Philips Healthcare, Andover, MA), hybrid iterative reconstruction (iDose(4), levels 3 and 7; Philips Healthcare, Andover, MA) and filtered back-projection. The noise, signal-to-noise ratio and contrast-to-noise ratio were calculated. We evaluated low-contrast resolutions and detectability by low-contrast targets and subjective and objective spatial resolutions by the line pairs and wire. With radiation at 100 peak kVp and 100 mAs (3.64 mSv), the relative doses ranged from 5% (0.19 mSv) to 150% (5.46 mSv). Lower noise and higher signal-to-noise, contrast-to-noise and objective spatial resolution were generally achieved in ascending order of filtered back-projection, iDose(4) levels 3 and 7, and IMR levels 1, 2 and 3, at all radiation dose levels. Compared with filtered back-projection at 100% dose, similar noise levels were obtained on IMR level 2 images at 24% dose and iDose(4) level 3 images at 50% dose, respectively. Regarding low-contrast resolution, low-contrast detectability and objective spatial resolution, IMR level 2 images at 24% dose showed comparable image quality with filtered back-projection at 100% dose. Subjective spatial resolution was not greatly affected by reconstruction algorithm. Reduced-dose IMR obtained at 0.92 mSv (24%) showed similar image quality to routine-dose filtered back-projection obtained at 3.64 mSv (100%), and half-dose iDose(4) obtained at 1.81 mSv. (orig.)
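    The noise, SNR and CNR figures of merit used in such phantom comparisons are typically computed from two regions of interest, as in the sketch below. The image, insert contrast and ROI placements are invented for illustration:

    ```python
    import numpy as np

    def roi_metrics(img, roi, bg):
        """Noise, SNR and CNR from two boolean ROI masks.

        noise : standard deviation in a uniform background region
        SNR   : mean of the target ROI divided by the noise
        CNR   : absolute ROI/background mean difference divided by the noise
        """
        img = np.asarray(img, float)
        noise = img[bg].std()
        snr = img[roi].mean() / noise
        cnr = abs(img[roi].mean() - img[bg].mean()) / noise
        return noise, snr, cnr

    rng = np.random.default_rng(0)
    img = rng.normal(10.0, 2.0, (64, 64))    # background: mean 10, sigma 2
    img[20:30, 20:30] += 6.0                 # low-contrast insert

    roi = np.zeros((64, 64), bool)
    roi[22:28, 22:28] = True                 # inside the insert
    bg = np.zeros((64, 64), bool)
    bg[40:60, 40:60] = True                  # uniform background patch

    noise, snr, cnr = roi_metrics(img, roi, bg)
    ```

    Comparing algorithms then amounts to computing these metrics on the same ROIs across reconstructions at each dose level, as the record does for FBP, iDose(4) and IMR.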

  8. Knowledge-based iterative model reconstruction: comparative image quality and radiation dose with a pediatric computed tomography phantom.

    Science.gov (United States)

    Ryu, Young Jin; Choi, Young Hun; Cheon, Jung-Eun; Ha, Seongmin; Kim, Woo Sun; Kim, In-One

    2016-03-01

    CT of pediatric phantoms can provide useful guidance to the optimization of knowledge-based iterative reconstruction CT. To compare radiation dose and image quality of CT images obtained at different radiation doses reconstructed with knowledge-based iterative reconstruction, hybrid iterative reconstruction and filtered back-projection. We scanned a 5-year anthropomorphic phantom at seven levels of radiation. We then reconstructed CT data with knowledge-based iterative reconstruction (iterative model reconstruction [IMR] levels 1, 2 and 3; Philips Healthcare, Andover, MA), hybrid iterative reconstruction (iDose(4), levels 3 and 7; Philips Healthcare, Andover, MA) and filtered back-projection. The noise, signal-to-noise ratio and contrast-to-noise ratio were calculated. We evaluated low-contrast resolutions and detectability by low-contrast targets and subjective and objective spatial resolutions by the line pairs and wire. With radiation at 100 peak kVp and 100 mAs (3.64 mSv), the relative doses ranged from 5% (0.19 mSv) to 150% (5.46 mSv). Lower noise and higher signal-to-noise, contrast-to-noise and objective spatial resolution were generally achieved in ascending order of filtered back-projection, iDose(4) levels 3 and 7, and IMR levels 1, 2 and 3, at all radiation dose levels. Compared with filtered back-projection at 100% dose, similar noise levels were obtained on IMR level 2 images at 24% dose and iDose(4) level 3 images at 50% dose, respectively. Regarding low-contrast resolution, low-contrast detectability and objective spatial resolution, IMR level 2 images at 24% dose showed comparable image quality with filtered back-projection at 100% dose. Subjective spatial resolution was not greatly affected by reconstruction algorithm. Reduced-dose IMR obtained at 0.92 mSv (24%) showed similar image quality to routine-dose filtered back-projection obtained at 3.64 mSv (100%), and half-dose iDose(4) obtained at 1.81 mSv.

  9. Conditional probability distribution associated to the E-M image reconstruction algorithm for neutron stimulated emission tomography

    International Nuclear Information System (INIS)

    Viana, R.S.; Yoriyaz, H.; Santos, A.

    2011-01-01

    The Expectation-Maximization (E-M) algorithm is an iterative computational method for maximum-likelihood (M-L) estimation, useful in a variety of incomplete-data problems. Due to its statistical nature, one of the most relevant applications of the E-M algorithm is the reconstruction of emission tomography images. In this paper, the statistical formulation of the E-M algorithm is applied to the in vivo spectrographic imaging of stable isotopes known as Neutron Stimulated Emission Computed Tomography (NSECT). In the E-M iteration, the conditional probability distribution plays a very important role in achieving a high-quality image. The present work proposes an alternative methodology for generating the conditional probability distribution associated with the E-M reconstruction algorithm, using the Monte Carlo code MCNP5 together with the reciprocity theorem. (author)

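    The E-M (MLEM) iteration for emission tomography multiplies the current image estimate by the back-projected ratio of measured to predicted counts; the conditional probabilities discussed above enter as the system matrix p(i, j). A minimal sketch on a toy system (the matrix and activities are invented for illustration; the paper's distributions come from MCNP5, which is not modeled here):

```python
import numpy as np

def mlem(system_matrix, counts, n_iter=200):
    """Maximum-likelihood E-M for emission tomography. system_matrix[i, j]
    is the conditional probability that an emission in voxel j is detected
    in bin i."""
    sensitivity = system_matrix.sum(axis=0)            # sum_i p(i, j)
    image = np.ones(system_matrix.shape[1])            # uniform initial estimate
    for _ in range(n_iter):
        expected = system_matrix @ image               # forward projection
        ratio = counts / np.maximum(expected, 1e-12)   # measured / predicted
        image *= (system_matrix.T @ ratio) / sensitivity
    return image

# Toy 3-bin, 2-voxel problem with noiseless data.
P = np.array([[0.8, 0.1],
              [0.1, 0.8],
              [0.1, 0.1]])
truth = np.array([4.0, 2.0])
counts = P @ truth
recon = mlem(P, counts)    # converges to the true activities [4, 2]
```

A useful property visible in this sketch: each multiplicative update exactly preserves the total expected counts, so the reconstruction is count-conserving from the first iteration onward.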
  11. Qualitative and quantitative analysis of reconstructed images using projections with noises

    International Nuclear Information System (INIS)

    Lopes, R.T.; Assis, J.T. de

    1988-01-01

    The reconstruction of a two-dimensional image from one-dimensional projections by an analytic algorithm (the convolution method) is simulated on a microcomputer. This work analyses the effects on the reconstructed image of the number of projections and of the noise level added to the projection data. Qualitative and quantitative (distortion and image noise) comparisons were made between the original image and the reconstructed images. (author) [pt

  12. Distortion of maximal elevator activity by unilateral premature tooth contact

    DEFF Research Database (Denmark)

    Bakke, Merete; Møller, Eigild

    1980-01-01

    In four subjects the electrical activity in the anterior and posterior temporal and masseter muscles during maximal bite was recorded bilaterally with and without premature unilateral contact. Muscle activity was measured as the average level and the peak of the mean voltage with layers of strips...... of 0.05, 0.10, 0.15 and 2.0 mm, placed between the first molars either on the left or the right side, and compared with the level of activity with undisturbed occlusion. Unilateral premature contact caused a significant asymmetry of action in all muscles under study, with stronger activity ipsilaterally...

  13. Maximally Informative Observables and Categorical Perception

    OpenAIRE

    Tsiang, Elaine

    2012-01-01

    We formulate the problem of perception in the framework of information theory, and prove that categorical perception is equivalent to the existence of an observable that has the maximum possible information on the target of perception. We call such an observable maximally informative. Regardless of whether categorical perception is real, maximally informative observables can form the basis of a theory of perception. We conclude with the implications of such a theory for the problem of speech per...

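    The claim that categorical perception corresponds to an observable attaining the maximum possible information can be illustrated with a small mutual-information computation (the joint distributions below are invented for illustration):

```python
import numpy as np

def mutual_information(joint):
    """I(T; O) in bits from a joint probability table p(t, o)."""
    pt = joint.sum(axis=1, keepdims=True)    # marginal of the target
    po = joint.sum(axis=0, keepdims=True)    # marginal of the observable
    nz = joint > 0                           # skip zero cells (0 log 0 = 0)
    return float((joint[nz] * np.log2(joint[nz] / (pt @ po)[nz])).sum())

# Two hypothetical observables of a binary target: a noisy one (20% flips)
# and a categorical one that determines the target exactly.
noisy = np.array([[0.4, 0.1],
                  [0.1, 0.4]])
categorical = np.array([[0.5, 0.0],
                        [0.0, 0.5]])
print(mutual_information(categorical))   # 1.0 bit, the maximum H(T)
```

The noisy observable carries only about 0.28 bits, while the categorical one reaches the entropy of the target, i.e. it is maximally informative in the sense above.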
  14. Charge reconstruction in large-area photomultipliers

    Science.gov (United States)

    Grassi, M.; Montuschi, M.; Baldoncini, M.; Mantovani, F.; Ricci, B.; Andronico, G.; Antonelli, V.; Bellato, M.; Bernieri, E.; Brigatti, A.; Brugnera, R.; Budano, A.; Buscemi, M.; Bussino, S.; Caruso, R.; Chiesa, D.; Corti, D.; Dal Corso, F.; Ding, X. F.; Dusini, S.; Fabbri, A.; Fiorentini, G.; Ford, R.; Formozov, A.; Galet, G.; Garfagnini, A.; Giammarchi, M.; Giaz, A.; Insolia, A.; Isocrate, R.; Lippi, I.; Longhitano, F.; Lo Presti, D.; Lombardi, P.; Marini, F.; Mari, S. M.; Martellini, C.; Meroni, E.; Mezzetto, M.; Miramonti, L.; Monforte, S.; Nastasi, M.; Ortica, F.; Paoloni, A.; Parmeggiano, S.; Pedretti, D.; Pelliccia, N.; Pompilio, R.; Previtali, E.; Ranucci, G.; Re, A. C.; Romani, A.; Saggese, P.; Salamanna, G.; Sawy, F. H.; Settanta, G.; Sisti, M.; Sirignano, C.; Spinetti, M.; Stanco, L.; Strati, V.; Verde, G.; Votano, L.

    2018-02-01

    Large-area PhotoMultiplier Tubes (PMTs) allow Liquid Scintillator (LS) neutrino detectors to be instrumented efficiently, where large target masses are pivotal to compensate for neutrinos' extremely elusive nature. Depending on the detector light yield, several scintillation photons stemming from the same neutrino interaction are likely to hit a single PMT within a few tens/hundreds of nanoseconds, resulting in several photoelectrons (PEs) piling up at the PMT anode. In such a scenario, the signal generated by each PE is entangled with the others, and accurate PMT charge reconstruction becomes challenging. This manuscript describes an experimental method able to address PMT charge reconstruction in the case of large PE pile-up, providing an unbiased charge estimator at the permille level up to 15 detected PEs. The method is based on a signal filtering technique (Wiener filter) which suppresses the noise due to both the PMT and the readout electronics, and on a Fourier-based deconvolution able to minimize the influence of signal distortions, such as an overshoot. The analysis of simulated PMT waveforms shows that the slope of a linear regression modeling the relation between reconstructed and true charge values improves from 0.769 ± 0.001 (without deconvolution) to 0.989 ± 0.001 (with deconvolution), where unitary slope implies perfect reconstruction. A C++ implementation of the charge reconstruction algorithm is available online at [1].

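    The Wiener-filter deconvolution idea can be sketched in a few lines: divide the waveform spectrum by the single-PE template spectrum, regularized so that frequencies dominated by noise are suppressed. The template shape, noise level and regularization constant below are invented for illustration and are unrelated to the published C++ implementation:

```python
import numpy as np

def wiener_deconvolve(waveform, template, noise_power):
    """Recover the underlying PE impulse train from a PMT waveform via
    Fourier deconvolution with a Wiener-style regularizer."""
    n = len(waveform)
    W = np.fft.rfft(waveform, n)
    H = np.fft.rfft(template, n)
    # H* / (|H|^2 + noise_power): attenuates frequencies where the
    # template response is weak relative to the noise.
    G = np.conj(H) / (np.abs(H) ** 2 + noise_power)
    return np.fft.irfft(W * G, n)

# Two overlapping single-PE pulses with charges 1.0 and 0.6.
n = 256
t = np.arange(n)
template = np.exp(-t / 8.0) - np.exp(-t / 2.0)    # invented SPE pulse shape
truth = np.zeros(n)
truth[40], truth[55] = 1.0, 0.6
waveform = np.convolve(truth, template)[:n]
waveform += np.random.default_rng(0).normal(0.0, 1e-3, n)   # white noise

recovered = wiener_deconvolve(waveform, template, noise_power=1e-4)
```

After deconvolution the two pile-up pulses separate into near-delta peaks whose amplitudes approximate the individual charges, and the integrated charge (here about 1.6) is recovered without the bias that direct integration of the entangled waveform would introduce.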
  15. Probabilistic Controlled Teleportation of a Triplet W State with Combined Channel of Non-Maximally Entangled Einstein–Podolsky–Rosen and Greenberger–Horne–Zeilinger States

    International Nuclear Information System (INIS)

    Jian, Dong; Jian-Fu, Teng

    2009-01-01

    A scheme for probabilistic controlled teleportation of a triplet W state using a combined non-maximally entangled channel of two Einstein–Podolsky–Rosen (EPR) states and one Greenberger–Horne–Zeilinger (GHZ) state is proposed. In this scheme, an (m + 2)-qubit GHZ state serves not only as the control parameter but also as the quantum channel. The m control qubits are shared by m supervisors. With the aid of local operations and individual measurements, including Bell-state measurement, von Neumann measurement and mutual classical communication, Bob can faithfully reconstruct the original state by performing the relevant unitary transformations. The total probability of successful teleportation depends only on the channel coefficients of the EPR states and the GHZ state, independent of the number of supervisors m. This protocol can also be extended to probabilistic controlled teleportation of an arbitrary N-qubit state using a combined non-maximally entangled channel of N – 1 EPR states and one (m + 2)-qubit GHZ state. (general)

  16. Statistical model based iterative reconstruction (MBIR) in clinical CT systems: Experimental assessment of noise performance

    Energy Technology Data Exchange (ETDEWEB)

    Li, Ke; Tang, Jie [Department of Medical Physics, University of Wisconsin-Madison, 1111 Highland Avenue, Madison, Wisconsin 53705 (United States); Chen, Guang-Hong, E-mail: gchen7@wisc.edu [Department of Medical Physics, University of Wisconsin-Madison, 1111 Highland Avenue, Madison, Wisconsin 53705 and Department of Radiology, University of Wisconsin-Madison, 600 Highland Avenue, Madison, Wisconsin 53792 (United States)

    2014-04-15

    Purpose: To reduce radiation dose in CT imaging, the statistical model based iterative reconstruction (MBIR) method has been introduced for clinical use. Based on the principle of MBIR and its nonlinear nature, the noise performance of MBIR is expected to differ from that of the well-understood filtered backprojection (FBP) reconstruction method. The purpose of this work is to experimentally assess the unique noise characteristics of MBIR using a state-of-the-art clinical CT system. Methods: Three physical phantoms, including a water cylinder and two pediatric head phantoms, were scanned in axial scanning mode using a 64-slice CT scanner (Discovery CT750 HD, GE Healthcare, Waukesha, WI) at seven different mAs levels (5, 12.5, 25, 50, 100, 200, 300). At each mAs level, each phantom was repeatedly scanned 50 times to generate an image ensemble for noise analysis. Both the FBP method with a standard kernel and the MBIR method (Veo®, GE Healthcare, Waukesha, WI) were used for CT image reconstruction. Three-dimensional (3D) noise power spectrum (NPS), two-dimensional (2D) NPS, and zero-dimensional NPS (noise variance) were assessed both globally and locally. Noise magnitude, noise spatial correlation, noise spatial uniformity and their dose dependence were examined for the two reconstruction methods. Results: (1) At each dose level and at each frequency, the magnitude of the NPS of MBIR was smaller than that of FBP. (2) While the shape of the NPS of FBP was dose-independent, the shape of the NPS of MBIR was strongly dose-dependent; lower dose led to a “redder” NPS with a lower mean frequency value. (3) The noise standard deviation (σ) of MBIR and dose were found to be related through a power law of σ ∝ (dose)^−β with the exponent β ≈ 0.25, which violated the classical σ ∝ (dose)^−0.5 power law of FBP. (4) With MBIR, noise reduction was most prominent for thin image slices. (5) MBIR led to better noise spatial

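    An ensemble NPS estimate of the kind used above (repeated scans, mean-subtracted ROIs, averaged periodograms) can be sketched as follows; the normalization convention and the white-noise stand-in for real CT noise are illustrative assumptions:

```python
import numpy as np

def nps_2d(roi_stack, pixel_size):
    """Estimate the 2-D noise power spectrum from an ensemble of repeated
    ROI images (shape: n_realizations x ny x nx)."""
    # Subtract the ensemble mean to isolate the noise component.
    noise = roi_stack - roi_stack.mean(axis=0, keepdims=True)
    _, ny, nx = noise.shape
    dft = np.fft.fft2(noise)                     # per-realization 2-D DFT
    nps = (np.abs(dft) ** 2).mean(axis=0)        # ensemble-averaged periodogram
    return nps * pixel_size ** 2 / (ny * nx)     # normalize to power density

rng = np.random.default_rng(1)
stack = rng.normal(0.0, 10.0, size=(50, 64, 64))   # 50 scans, sigma = 10 HU
nps = nps_2d(stack, pixel_size=0.5)
# Parseval check: integrating the NPS over frequency recovers the variance.
var = nps.sum() * (1.0 / (0.5 * 64)) ** 2
```

For white noise the recovered variance is close to the input 10² = 100 HU² (slightly lower because the ensemble mean is estimated from the 50 realizations); a real FBP or MBIR ensemble would instead show the colored, dose-dependent spectra the paper reports.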
  17. Maximal muscular vascular conductances during whole body upright exercise in humans

    Science.gov (United States)

    Calbet, J A L; Jensen-Urstad, M; van Hall, G; Holmberg, H -C; Rosdahl, H; Saltin, B

    2004-01-01

    That muscular blood flow may reach 2.5 l kg−1 min−1 in the quadriceps muscle has led to the suggestion that muscular vascular conductance must be restrained during whole body exercise to avoid hypotension. The main aim of this study was to determine the maximal arm and leg muscle vascular conductances (VC) during leg and arm exercise, to find out if the maximal muscular vasodilatory response is restrained during maximal combined arm and leg exercise. Six Swedish elite cross-country skiers, age (mean ± s.e.m.) 24 ± 2 years, height 180 ± 2 cm, weight 74 ± 2 kg, and maximal oxygen uptake (V̇O2,max) 5.1 ± 0.1 l min−1 participated in the study. Femoral and subclavian vein blood flows, intra-arterial blood pressure, cardiac output, as well as blood gases in the femoral and subclavian veins, right atrium and femoral artery were determined during skiing (roller skis) at ∼76% of V̇O2,max and at V̇O2,max with different techniques: diagonal stride (combined arm and leg exercise), double poling (predominantly arm exercise) and leg skiing (predominantly leg exercise). During submaximal exercise, cardiac output (26–27 l min−1), mean blood pressure (MAP) (∼87 mmHg), systemic VC, systemic oxygen delivery and pulmonary V̇O2 (∼4 l min−1) attained similar values regardless of exercise mode. The distribution of cardiac output was modified depending on the musculature engaged in the exercise. There was a close relationship between VC and V̇O2 in the arms (r = 0.99); peak arm VC (63.7 ± 5.6 ml min−1 mmHg−1) was attained during double poling, while peak leg VC was reached at maximal exercise with the diagonal technique (109.8 ± 11.5 ml min−1 mmHg−1), when arm VC was 38.8 ± 5.7 ml min−1 mmHg−1. If during maximal exercise the arms and legs had been vasodilated to the observed maximal levels, then mean arterial pressure would have dropped at least to 75–77 mmHg in our experimental conditions. It is concluded that skeletal muscle vascular conductance is

  18. CT colonography at low tube potential: using iterative reconstruction to decrease noise

    International Nuclear Information System (INIS)

    Chang, K.J.; Heisler, M.A.; Mahesh, M.; Baird, G.L.; Mayo-Smith, W.W.

    2015-01-01

    Aim: To determine the level of iterative reconstruction required to reduce the increased image noise associated with low tube potential computed tomography (CT). Materials and methods: Fifty patients underwent CT colonography with a supine scan at 120 kVp and a prone scan at 100 kVp, with other scan parameters unchanged. Both scans were reconstructed with filtered back projection (FBP) and increasing levels of adaptive statistical iterative reconstruction (ASiR) at 30%, 60%, and 90%. Mean noise, soft-tissue and tagged-fluid attenuation, contrast, and contrast-to-noise ratio (CNR) were collected from reconstructions at both 120 and 100 kVp and compared using a generalised linear mixed model. Results: Decreasing tube potential from 120 to 100 kVp significantly increased image noise by 30–34% and tagged fluid attenuation by 120 HU at all ASiR levels (p<0.0001, all measures). Increasing ASiR from 0% (FBP) to 30%, 60%, and 90% resulted in significant decreases in noise and increases in CNR at both tube potentials (p<0.001, all comparisons). Compared to 120 kVp FBP, ASiR greater than 30% at 100 kVp yielded similar or lower image noise. Conclusions: Iterative reconstruction adequately compensates for the increased image noise associated with low tube potential imaging while improving CNR. An ASiR level of approximately 50% at 100 kVp yields noise similar to 120 kVp without ASiR. -- Highlights: •Peak kilovoltage (kVp) can be reduced to decrease radiation dose and increase contrast attenuation, at a cost of increased image noise. •Utilizing iterative reconstruction can decrease image noise and increase contrast-to-noise ratio (CNR) independent of kVp. •Iterative reconstruction adequately compensates for the increased image noise associated with low-dose, low-kVp imaging while improving CNR. •An ASiR level of approximately 50% at 100 kVp yields similar noise to 120 kVp without ASiR.

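    The "≈50% ASiR at 100 kVp matches 120 kVp FBP" conclusion is essentially an interpolation over measured noise values, which can be sketched as follows (the noise numbers below are made up for illustration and are not the study's measurements):

```python
import numpy as np

# Hypothetical noise (HU) measured at 100 kVp for increasing ASiR levels.
asir_levels = np.array([0.0, 30.0, 60.0, 90.0])
noise_100kvp = np.array([34.0, 31.0, 22.0, 14.0])
noise_120kvp_fbp = 25.0   # reference noise at 120 kVp with FBP (illustrative)

# np.interp needs ascending x values, so reverse the descending noise curve.
level = np.interp(noise_120kvp_fbp, noise_100kvp[::-1], asir_levels[::-1])
print(round(float(level), 1))   # 50.0
```

With these invented measurements, an ASiR level of about 50% at 100 kVp matches the reference noise, mirroring the structure of the study's conclusion.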
  19. The influence of single whole body cryostimulation treatment on the dynamics and the level of maximal anaerobic power.

    Science.gov (United States)

    Klimek, Andrzej T; Lubkowska, Anna; Szyguła, Zbigniew; Frączek, Barbara; Chudecka, Monika

    2011-06-01

    The objective of this work was to determine the dynamics of maximal anaerobic power (MAP) of the lower limbs, following a single whole body cryostimulation treatment (WBC), in relation to the temperature of the thigh muscles. The subjects included 15 men and 15 women with an average age (± SD) of 21.6 ± 1.2 years. To evaluate the level of anaerobic power, the Wingate test was applied. The subjects underwent 6 WBC treatments at -130°C, once a day. After each session they performed a single Wingate test in the 15th, 30th, 45th, 60th, 75th or 90th min after leaving the cryogenic chamber. The order of the tests was randomized. All Wingate tests were preceded by an evaluation of thigh surface temperature with the use of a thermovisual camera. The average thigh surface temperature (T(av)) in both men and women dropped significantly after the whole body cryostimulation treatment and then increased gradually. In women T(av) remained decreased for 75 min, whereas in men it did not return to the basal level until the 90th min. A statistically insignificant decrease in MAP was observed in women after WBC; on the contrary, a non-significant increase in MAP was observed in men. The course of changes in MAP following the treatment was similar in both sexes to the changes in thigh surface temperature, with the exception of the period between the 15th and 30th min. A shorter time to reach MAP was observed in women up to the 90th min and in men up to the 45th min after WBC, compared with the initial level. A single whole body cryostimulation treatment may have a minor influence on short-term physical performance of supramaximal intensity, but it leads to an improvement of starting velocity, as evidenced by the shorter time required to reach MAP.

  20. Maximally Entangled Multipartite States: A Brief Survey

    International Nuclear Information System (INIS)

    Enríquez, M; Wintrowicz, I; Życzkowski, K

    2016-01-01

    The problem of identifying maximally entangled quantum states of composite quantum systems is analyzed. We review some states of multipartite systems distinguished with respect to certain measures of quantum entanglement. Numerical results obtained for 4-qubit pure states illustrate the fact that the notion of a maximally entangled state depends on the measure used. (paper)

  1. Corporate Social Responsibility and Profit Maximizing Behaviour

    OpenAIRE

    Becchetti, Leonardo; Giallonardo, Luisa; Tessitore, Maria Elisabetta

    2005-01-01

    We examine the behavior of a profit-maximizing monopolist in a horizontal differentiation model in which consumers differ in their degree of social responsibility (SR) and consumers' SR is dynamically influenced by habit persistence. The model outlines parametric conditions under which (consumer-driven) corporate social responsibility is an optimal choice compatible with profit-maximizing behavior.

  2. Geometry reconstruction method for patient-specific finite element models for the assessment of tibia fracture risk in osteogenesis imperfecta.

    Science.gov (United States)

    Caouette, Christiane; Ikin, Nicole; Villemure, Isabelle; Arnoux, Pierre-Jean; Rauch, Frank; Aubin, Carl-Éric

    2017-04-01

    Lower limb deformation in children with osteogenesis imperfecta (OI) impairs ambulation and may lead to fracture. Corrective surgery is based on empirical assessment criteria. The objective was to develop a reconstruction method of the tibia for OI patients that could be used as input to a comprehensive finite element model to assess fracture risk. Data were obtained from three children with OI and tibia deformities. Four pQCT scans were registered to biplanar radiographs, and a template mesh was deformed to fit the bone outline. Cortical bone thickness was computed. Sensitivity of the model to missing pQCT slices was assessed by calculating the maximal von Mises stress for a vertical hopping load case. Sensitivity of the model to ±5% variation in the cortical thickness measurements was assessed by calculating the loads at fracture. The difference between the mesh contour and the bone outline on the radiographs was below 1 mm. Removal of one pQCT slice increased the maximal von Mises stress by up to 10%. A simulated ±5% variation of cortical bone thickness led to variations of up to 4.1% in the predicted fracture loads. Using clinically available tibia imaging from children with OI, the developed reconstruction method allowed the building of patient-specific finite element models.

  3. Iterative reconstruction or filtered backprojection for semi-quantitative assessment of dopamine D2 receptor SPECT studies?

    International Nuclear Information System (INIS)

    Koch, Walter; Suessmair, Christine; Tatsch, Klaus; Poepperl, Gabriele

    2011-01-01

    In routine clinical practice striatal dopamine D2 receptor binding is generally assessed using data reconstructed by filtered backprojection (FBP). The aim of this study was to investigate the use of an iterative reconstruction algorithm (ordered subset expectation maximization, OSEM) and to assess whether it may provide comparable or even better results than those obtained by standard FBP. In 56 patients with parkinsonian syndromes, single photon emission computed tomography (SPECT) scans were acquired 2 h after i.v. application of 185 MBq [123I]iodobenzamide (IBZM) using a triple-head gamma camera (Siemens MS 3). The scans were reconstructed both by FBP and by OSEM (3 iterations, 8 subsets) and filtered using a Butterworth filter. After attenuation correction the studies were automatically fitted to a mean template with a corresponding 3-D volume of interest (VOI) map covering the striatum (S), caudate (C), putamen (P) and several reference VOIs using BRASS software. Visual assessment of the fitted studies suggests a better separation between C and P in studies reconstructed by OSEM than by FBP. Unspecific background activity appears more homogeneous after iterative reconstruction. The correlation shows good accordance of dopamine receptor binding between FBP and OSEM (intra-class correlation coefficients S: 0.87; C: 0.88; P: 0.84). Receiver-operating characteristic (ROC) analyses show comparable diagnostic power of OSEM and FBP in the differentiation between idiopathic parkinsonian syndrome (IPS) and non-IPS. Iterative reconstruction of IBZM SPECT studies for assessment of the D2 receptors is feasible in routine clinical practice. Close correlations between FBP and OSEM data suggest that iteratively reconstructed IBZM studies allow reliable quantification of dopamine receptor binding, even though a gain in diagnostic power could not be demonstrated. (orig.)

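    The ordered-subset EM variant used above (here 3 iterations with 8 subsets) replaces each full MLEM update with one update per subset of projection bins, which accelerates convergence roughly in proportion to the number of subsets. A minimal sketch on an invented toy system (4 voxels, 16 bins, 4 balanced subsets, not the SPECT geometry of the study):

```python
import numpy as np

def osem(system_matrix, counts, n_subsets, n_iter):
    """Ordered-subset expectation maximization: cycle through subsets of
    projection bins, applying an MLEM-style multiplicative update per subset."""
    n_bins, n_vox = system_matrix.shape
    subsets = [np.arange(s, n_bins, n_subsets) for s in range(n_subsets)]
    image = np.ones(n_vox)                                # uniform start
    for _ in range(n_iter):
        for idx in subsets:
            P = system_matrix[idx]
            expected = P @ image                          # forward projection
            ratio = counts[idx] / np.maximum(expected, 1e-12)
            image *= (P.T @ ratio) / P.sum(axis=0)        # subset sensitivity
    return image

# Toy system in which every subset sees every voxel (a balanced design).
A = 0.2 * np.kron(np.eye(4), np.ones((4, 1))) + 0.05
truth = np.array([3.0, 1.0, 2.0, 4.0])
counts = A @ truth                   # noiseless, consistent data
recon = osem(A, counts, n_subsets=4, n_iter=5)
```

With consistent data and balanced subsets, 5 outer iterations (20 subset updates) already bring the estimate very close to the true activities, illustrating why a 3-iteration, 8-subset protocol can be clinically practical.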
  4. The PRISM3D paleoenvironmental reconstruction

    Science.gov (United States)

    Dowsett, H.; Robinson, M.; Haywood, A.M.; Salzmann, U.; Hill, Daniel; Sohl, L.E.; Chandler, M.; Williams, Mark; Foley, K.; Stoll, D.K.

    2010-01-01

    The Pliocene Research, Interpretation and Synoptic Mapping (PRISM) paleoenvironmental reconstruction is an internally consistent and comprehensive global synthesis of a past interval of relatively warm and stable climate. It is regularly used in model studies that aim to better understand Pliocene climate, to improve model performance in future climate scenarios, and to distinguish model-dependent climate effects. The PRISM reconstruction is constantly evolving in order to incorporate additional geographic sites and environmental parameters, and is continuously refined by independent research findings. The new PRISM three-dimensional (3D) reconstruction differs from previous PRISM reconstructions in that it includes a subsurface ocean temperature reconstruction, integrates geochemical sea surface temperature proxies to supplement the faunal-based temperature estimates, and uses numerical models for the first time to augment fossil data. Here we describe the components of PRISM3D and present new findings specific to the new reconstruction. Highlights of the new PRISM3D reconstruction include the removal of Hudson Bay and the Great Lakes and the creation of open waterways in locations where the current bedrock elevation is less than 25 m above modern sea level, due to the removal of the West Antarctic Ice Sheet and the reduction of the East Antarctic Ice Sheet. The mid-Piacenzian oceans were characterized by a reduced east-west temperature gradient in the equatorial Pacific, but PRISM3D data do not imply permanent El Niño conditions. The reduced equator-to-pole temperature gradient that characterized previous PRISM reconstructions is supported by significant displacement of vegetation belts toward the poles, is extended into the Arctic Ocean, and is confirmed by multiple proxies in PRISM3D. Arctic warmth coupled with increased dryness suggests the formation of warm and salty paleo North Atlantic Deep Water (NADW) and a more vigorous thermohaline circulation system that may

  5. A termite symbiotic mushroom maximizing sexual activity at growing tips of vegetative hyphae.

    Science.gov (United States)

    Hsieh, Huei-Mei; Chung, Mei-Chu; Chen, Pao-Yang; Hsu, Fei-Man; Liao, Wen-Wei; Sung, Ai-Ning; Lin, Chun-Ru; Wang, Chung-Ju Rachel; Kao, Yu-Hsin; Fang, Mei-Jane; Lai, Chi-Yung; Huang, Chieh-Chen; Chou, Jyh-Ching; Chou, Wen-Neng; Chang, Bill Chia-Han; Ju, Yu-Ming

    2017-09-19

    Termitomyces mushrooms are mutualistically associated with fungus-growing termites, which are widely considered to cultivate a monogenotypic Termitomyces symbiont within a colony. Termitomyces cultures isolated directly from termite colonies are heterokaryotic, likely through mating between compatible homokaryons. After pairing homokaryons carrying different haplotypes at the marker gene loci MIP and RCB from a Termitomyces fruiting body associated with Odontotermes formosanus, we observed nuclear fusion and division, which greatly resembled meiosis, during each hyphal cell division and during conidial formation in the resulting heterokaryons. Surprisingly, nuclei in homokaryons also behaved similarly. To confirm whether meiotic-like recombination occurred within mycelia, we constructed whole-genome sequencing libraries from the mycelia of two homokaryons and of a heterokaryon resulting from mating of the two homokaryons. The obtained reads were aligned to the reference genome of Termitomyces sp. J132 for haplotype reconstruction. After removal of the recombinant haplotypes shared between the heterokaryon and either homokaryon, we inferred that 5.04% of the haplotypes from the heterokaryon were recombinants resulting from homologous recombination distributed genome-wide. With RNA transcripts of four meiosis-specific genes, including SPO11, DMC1, MSH4 and MLH1, detected in a mycelial sample by real-time quantitative PCR, the meiotic-like character of the nuclear behavior in mycelia was confirmed. Unlike other basidiomycetes, where sex is largely restricted to basidia, Termitomyces maximizes sexuality at the somatic stage, resulting in an ever-changing genotype composed of a myriad of coexisting heterogeneous nuclei in a heterokaryon. Somatic meiotic-like recombination may endow Termitomyces with the agility to cope with termite consumption through maximized genetic variability.

  6. Guinea pig maximization test

    DEFF Research Database (Denmark)

    Andersen, Klaus Ejner

    1985-01-01

    Guinea pig maximization tests (GPMT) with chlorocresol were performed to ascertain whether the sensitization rate was affected by minor changes in the Freund's complete adjuvant (FCA) emulsion used. Three types of emulsion were evaluated: the oil phase was mixed with propylene glycol, saline...

  7. How data-ink maximization can motivate learners – Persuasion in data visualization

    OpenAIRE

    Gottschalk, Judith

    2017-01-01

    This paper discusses both the macro- and the micro-level of persuasion in data visualizations in persuasive tools for language learning. The hypothesis of this paper is that persuasive data visualizations decrease reading time and increase reading accuracy of graph charts. Based on Tufte’s (1983) data-ink maximization principle, the report introduces a framework for persuasive data visualizations at the persuasive micro level, which employs Few’s (2013) conception of de-emphasizing non-data and...

  8. Three-dimensional reconstruction of the giant mimivirus particle with an x-ray free-electron laser.

    Science.gov (United States)

    Ekeberg, Tomas; Svenda, Martin; Abergel, Chantal; Maia, Filipe R N C; Seltzer, Virginie; Claverie, Jean-Michel; Hantke, Max; Jönsson, Olof; Nettelblad, Carl; van der Schot, Gijs; Liang, Mengning; DePonte, Daniel P; Barty, Anton; Seibert, M Marvin; Iwan, Bianca; Andersson, Inger; Loh, N Duane; Martin, Andrew V; Chapman, Henry; Bostedt, Christoph; Bozek, John D; Ferguson, Ken R; Krzywinski, Jacek; Epp, Sascha W; Rolles, Daniel; Rudenko, Artem; Hartmann, Robert; Kimmel, Nils; Hajdu, Janos

    2015-03-06

    We present a proof-of-concept three-dimensional reconstruction of the giant mimivirus particle from experimentally measured diffraction patterns from an x-ray free-electron laser. Three-dimensional imaging requires the assembly of many two-dimensional patterns into an internally consistent Fourier volume. Since each particle is randomly oriented when exposed to the x-ray pulse, relative orientations have to be retrieved from the diffraction data alone. We achieve this with a modified version of the expand, maximize and compress algorithm and validate our result using new methods.

  9. Optimization-based reconstruction for reduction of CBCT artifact in IGRT

    Science.gov (United States)

    Xia, Dan; Zhang, Zheng; Paysan, Pascal; Seghers, Dieter; Brehm, Marcus; Munro, Peter; Sidky, Emil Y.; Pelizzari, Charles; Pan, Xiaochuan

    2016-04-01

    Kilo-voltage cone-beam computed tomography (CBCT) plays an important role in image-guided radiation therapy (IGRT) by providing 3D spatial information about the tumor that is potentially useful for optimizing treatment planning. In current IGRT CBCT systems, reconstructed images obtained with analytic algorithms, such as the FDK algorithm and its variants, may contain artifacts. In an attempt to compensate for the artifacts, we investigate optimization-based reconstruction algorithms such as the ASD-POCS algorithm for potentially reducing artifacts in IGRT CBCT images. In this study, using data acquired with a physical phantom and a patient subject, we demonstrate that the ASD-POCS reconstruction can significantly reduce artifacts observed in clinical reconstructions. Moreover, patient images reconstructed by use of the ASD-POCS algorithm indicate a soft-tissue contrast level improved over that of the clinical reconstruction. We have also performed reconstructions from sparse-view data, and observe that, for current clinical imaging conditions, ASD-POCS reconstructions from data collected at one half of the current clinical projection views appear to show image quality, in terms of spatial and soft-tissue-contrast resolution, higher than that of the corresponding clinical reconstructions.

  10. Volumetric quantification of lung nodules in CT with iterative reconstruction (ASiR and MBIR).

    Science.gov (United States)

    Chen, Baiyu; Barnhart, Huiman; Richard, Samuel; Robins, Marthony; Colsher, James; Samei, Ehsan

    2013-11-01

    Volume quantification of lung nodules with multidetector computed tomography (CT) images provides useful information for monitoring nodule development. The accuracy and precision of the volume quantification, however, can be impacted by imaging and reconstruction parameters. This study aimed to investigate the impact of iterative reconstruction algorithms on the accuracy and precision of volume quantification, with dose and slice thickness as additional variables. Repeated CT images were acquired from an anthropomorphic chest phantom with synthetic nodules (9.5 and 4.8 mm) at six dose levels, and reconstructed with three reconstruction algorithms [filtered backprojection (FBP), adaptive statistical iterative reconstruction (ASiR), and model based iterative reconstruction (MBIR)] into three slice thicknesses. The nodule volumes were measured with two clinical software packages (A: Lung VCAR, B: iNtuition) and analyzed for accuracy and precision. Precision was found to be generally comparable between FBP and iterative reconstruction, with no statistically significant difference noted for different dose levels, slice thicknesses, and segmentation software. Accuracy was found to be more variable. For large nodules, the accuracy was significantly different between ASiR and FBP for all slice thicknesses with both software packages, and significantly different between MBIR and FBP for 0.625 mm slice thickness with Software A and for all slice thicknesses with Software B. For small nodules, the accuracy was more similar between FBP and iterative reconstruction, with the exception of ASiR vs FBP at 1.25 mm with Software A and MBIR vs FBP at 0.625 mm with Software A. The systematic difference between the accuracy of FBP and iterative reconstructions highlights the importance of extending current segmentation software to accommodate the image characteristics of iterative reconstructions. In addition, a calibration process may help reduce the dependency of accuracy on reconstruction algorithms.

  11. Breast reconstruction after mastectomy

    Directory of Open Access Journals (Sweden)

    Daniel eSchmauss

    2016-01-01

    Full Text Available Breast cancer is the leading cause of cancer death in women worldwide. Its surgical approach has become less and less mutilating in the last decades. However, the overall number of breast reconstructions has significantly increased lately. Nowadays breast reconstruction should be individualized as far as possible, first of all taking into consideration the oncological aspects of the tumor, neo-/adjuvant treatment and genetic predisposition, but also its timing (immediate versus delayed breast reconstruction), as well as the patient's condition and wishes. This article gives an overview of the various possibilities of breast reconstruction, including implant- and expander-based reconstruction, flap-based reconstruction (vascularized autologous tissue), the combination of implant and flap, reconstruction using non-vascularized autologous fat, as well as refinement surgery after breast reconstruction.

  12. Breast reconstruction - implants

    Science.gov (United States)

    Breast implants surgery; Mastectomy - breast reconstruction with implants; Breast cancer - breast reconstruction with implants ... harder to find a tumor if your breast cancer comes back. Getting breast implants does not take as long as breast reconstruction ...

  13. Anthropometric body measurements based on multi-view stereo image reconstruction.

    Science.gov (United States)

    Li, Zhaoxin; Jia, Wenyan; Mao, Zhi-Hong; Li, Jie; Chen, Hsin-Chen; Zuo, Wangmeng; Wang, Kuanquan; Sun, Mingui

    2013-01-01

    Anthropometric measurements, such as the circumferences of the hip, arm, leg and waist, waist-to-hip ratio, and body mass index, are of high significance in obesity and fitness evaluation. In this paper, we present a home based imaging system capable of conducting anthropometric measurements. Body images are acquired at different angles using a home camera and a simple rotating disk. Advanced image processing algorithms are utilized for 3D body surface reconstruction. A coarse body shape model is first established from segmented body silhouettes. Then, this model is refined through an inter-image consistency maximization process based on an energy function. Our experimental results using both a mannequin surrogate and a real human body validate the feasibility of the proposed system.
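
    Once a 3D surface is reconstructed, a circumference measurement reduces to the perimeter of a closed polygon through one horizontal slice of the surface (a toy sketch of that final measurement step, not the authors' pipeline):

```python
import numpy as np

def circumference(points):
    """Perimeter of the closed polygon through ordered (x, y) cross-section
    points -- a toy stand-in for measuring waist or hip circumference on one
    horizontal slice of a reconstructed body surface."""
    d = np.diff(np.vstack([points, points[:1]]), axis=0)
    return float(np.hypot(d[:, 0], d[:, 1]).sum())

# a 360-gon approximating a circular waist of radius 0.4 m
t = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
slice_pts = np.column_stack([0.4 * np.cos(t), 0.4 * np.sin(t)])
c = circumference(slice_pts)   # close to 2*pi*0.4 m
```
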

  14. Gradient Dynamics and Entropy Production Maximization

    Science.gov (United States)

    Janečka, Adam; Pavelka, Michal

    2018-01-01

    We compare two methods for modeling dissipative processes, namely gradient dynamics and entropy production maximization. Both methods require similar physical inputs: how energy (or entropy) is stored and how it is dissipated. Gradient dynamics describes irreversible evolution by means of a dissipation potential and entropy; it automatically satisfies the Onsager reciprocal relations as well as their nonlinear generalization (Maxwell-Onsager relations), and it has a statistical interpretation. Entropy production maximization is based on knowledge of the free energy (or another thermodynamic potential) and the entropy production. It also leads to the linear Onsager reciprocal relations, and it has proven successful in the thermodynamics of complex materials. Both methods are thermodynamically sound, as they ensure the approach to equilibrium, and we compare them and discuss their advantages and shortcomings. In particular, conditions under which the two approaches coincide and are capable of providing the same constitutive relations are identified. In addition, a commonly used but rarely mentioned step in entropy production maximization is pinpointed, and the condition of incompressibility is incorporated into gradient dynamics.
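
    In generic notation (a sketch of the two formalisms being compared, not the authors' exact symbols), the two prescriptions read:

```latex
% Gradient dynamics: evolution generated by a dissipation potential \Xi,
% evaluated at the conjugate variable given by the entropy gradient.
\dot{x} \;=\; \left.\frac{\partial \Xi(x,x^{*})}{\partial x^{*}}\right|_{x^{*}=\frac{\partial S}{\partial x}}

% Entropy production maximization: extremize the production \sigma over
% fluxes J subject to a constraint C(x,J)=0, with a Lagrange multiplier:
\frac{\partial}{\partial J}\left[\sigma(x,J)-\lambda\, C(x,J)\right] \;=\; 0
```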

  15. [Application of Fourier transform profilometry in 3D-surface reconstruction].

    Science.gov (United States)

    Shi, Bi'er; Lu, Kuan; Wang, Yingting; Li, Zhen'an; Bai, Jing

    2011-08-01

    With the improvement of system frames and reconstruction methods in fluorescence molecular tomography (FMT), the FMT technology has been widely used as an important experimental tool in biomedical research. It is necessary to obtain the 3D-surface profile of the experimental object as the boundary constraint of FMT reconstruction algorithms. We propose a new 3D-surface reconstruction method based on the Fourier transform profilometry (FTP) method under blue-purple light illumination. The slice images were reconstructed using proper image processing methods, frequency spectrum analysis and filtering. The experimental results showed that the method properly reconstructs the 3D surface of objects with mm-level accuracy. Compared to other methods, this one is simple and fast. In addition to its good reconstruction quality, the proposed method can help monitor the behavior of the object during the experiment to ensure the correspondence of the imaging process. Furthermore, the method uses blue-purple light as its light source to avoid interference with fluorescence imaging.
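
    The core of FTP is spectral demodulation of a fringe pattern: keep the carrier lobe of the Fourier spectrum, shift it to baseband, and read the phase off the inverse transform. A minimal 1-D sketch (synthetic fringe, not the paper's setup):

```python
import numpy as np

def ftp_phase(fringe, carrier_bin, halfwidth):
    """1-D Fourier transform profilometry: isolate the +carrier lobe of the
    spectrum, shift it to baseband, and take the (unwrapped) angle of the
    inverse transform as the recovered phase."""
    F = np.fft.fft(fringe)
    G = np.zeros_like(F)
    G[carrier_bin - halfwidth:carrier_bin + halfwidth + 1] = \
        F[carrier_bin - halfwidth:carrier_bin + halfwidth + 1]
    g = np.fft.ifft(np.roll(G, -carrier_bin))   # demodulate carrier to DC
    return np.unwrap(np.angle(g))

# synthetic fringe pattern: carrier at bin 32, slow height-induced phase
n = 256
x = np.arange(n)
phi = 0.5 * np.sin(2 * np.pi * x / n)
fringe = 1.0 + np.cos(2 * np.pi * 32 * x / n + phi)
rec = ftp_phase(fringe, carrier_bin=32, halfwidth=16)
```
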

  16. Appropriate slice location to assess maximal cross-sectional area of individual rotator cuff muscles in normal adults and athletes

    International Nuclear Information System (INIS)

    Yanagisawa, Osamu; Dohi, Michiko; Okuwaki, Toru; Tawara, Noriyuki; Takahashi, Hideyuki; Niitsu, Mamoru

    2009-01-01

    We investigated appropriate slice locations for magnetic resonance (MR) imaging evaluation of the maximal cross-sectional area (CSA) of individual rotator cuff (RC) muscles in normal adults and athletes. We used a 1.5-tesla MR system with body-array and spine coils to obtain oblique sagittal T1-weighted shoulder images of 29 normal adults (16 men, 13 women); 6 national-level competitive swimmers (4 men, 2 women); 10 collegiate-level female badminton players; and 7 collegiate-level male rowers. We calculated the supraspinatus, infraspinatus, teres minor, and subscapularis CSAs at the 0-1 locations on the scapula (dividing scapula width into 11 locations), 0 representing the medial border of the scapula and 1, the glenoid fossa surface. We evaluated the differences in CSAs at relative locations on the scapula for each muscle in normal adults, swimmers, badminton players, and rowers using a one-way analysis of variance followed by the Tukey test (P<0.05). The supraspinatus CSAs were maximal at 0.7 for all groups. The infraspinatus CSAs were maximal at 0.5 for normal men and women and badminton players, at 0.4 and 0.5 for swimmers, and at 0.4 for rowers. The teres minor CSAs were maximal at 0.9 for all groups except the swimmers (1.0). The subscapularis CSAs were maximal at 0.7 in men, swimmers, and badminton players and 0.6 in women and rowers. The appropriate slice locations for evaluating maximal CSAs are slightly lateral to the center of the scapula for the supraspinatus and subscapularis, at approximately the center of the scapula for the infraspinatus, and near the glenoid fossa for the teres minor. These slice locations should be clinically useful for morphological and/or function-related assessments of shoulder RC muscles. (author)

  17. Smoothing expansion rate data to reconstruct cosmological matter perturbations

    Energy Technology Data Exchange (ETDEWEB)

    Gonzalez, J.E.; Alcaniz, J.S.; Carvalho, J.C., E-mail: javierernesto@on.br, E-mail: alcaniz@on.br, E-mail: jcarvalho@on.br [Departamento de Astronomia, Observatório Nacional, Rua Gal. José Cristino, 77, Rio de Janeiro, RJ 20921-400 (Brazil)

    2017-08-01

    The existing degeneracy between different dark energy and modified gravity cosmologies at the background level may be broken by analyzing quantities at the perturbative level. In this work, we apply a non-parametric smoothing (NPS) method to reconstruct the expansion history of the Universe H(z) from model-independent cosmic chronometers and high-z quasar data. Assuming a homogeneous and isotropic flat universe and general relativity (GR) as the gravity theory, we calculate the non-relativistic matter perturbations in the linear regime using the H(z) reconstruction and realistic values of Ω_m0 and σ_8 from the Planck and WMAP-9 collaborations. We find a good agreement between the measurements of the growth rate and fσ_8(z) from current large-scale structure observations and the estimates obtained from the reconstruction of the cosmic expansion history. Considering a recently proposed null test for GR using matter perturbations, we also apply the NPS method to reconstruct fσ_8(z). For this case, we find a ∼3σ tension (good agreement) with the standard relativistic cosmology when the Planck (WMAP-9) priors are used.
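
    The linear matter perturbations computed here follow from the growth equation driven by the expansion history E(a) = H/H0. A minimal RK4 sketch for flat ΛCDM (an illustration of the equation being solved, not the paper's NPS reconstruction):

```python
import numpy as np

def growth_factor(om, a0=1e-3, a1=1.0, steps=5000):
    """RK4 integration of the linear matter growth equation for flat LCDM,
        D'' + (3/a + dlnE/da) D' - 1.5*om/(a^5 E^2) D = 0,
    with E^2(a) = om*a^-3 + (1 - om), started deep in matter domination
    where D = a. A toy stand-in for using a reconstructed H(z)."""
    def rhs(a, y):
        D, Dp = y
        E2 = om / a**3 + 1.0 - om
        dlnE = -1.5 * om / (a**4 * E2)
        return np.array([Dp, -(3.0 / a + dlnE) * Dp + 1.5 * om / (a**5 * E2) * D])
    h = (a1 - a0) / steps
    a, y = a0, np.array([a0, 1.0])
    for _ in range(steps):
        k1 = rhs(a, y)
        k2 = rhs(a + h / 2, y + h / 2 * k1)
        k3 = rhs(a + h / 2, y + h / 2 * k2)
        k4 = rhs(a + h, y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        a += h
    return y[0]

D_eds = growth_factor(1.0)   # Einstein-de Sitter: D = a exactly, so D(1) = 1
D_lcdm = growth_factor(0.3)  # growth suppressed once dark energy dominates
```
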

  18. Smoothing expansion rate data to reconstruct cosmological matter perturbations

    International Nuclear Information System (INIS)

    Gonzalez, J.E.; Alcaniz, J.S.; Carvalho, J.C.

    2017-01-01

    The existing degeneracy between different dark energy and modified gravity cosmologies at the background level may be broken by analyzing quantities at the perturbative level. In this work, we apply a non-parametric smoothing (NPS) method to reconstruct the expansion history of the Universe H(z) from model-independent cosmic chronometers and high-z quasar data. Assuming a homogeneous and isotropic flat universe and general relativity (GR) as the gravity theory, we calculate the non-relativistic matter perturbations in the linear regime using the H(z) reconstruction and realistic values of Ω_m0 and σ_8 from the Planck and WMAP-9 collaborations. We find a good agreement between the measurements of the growth rate and fσ_8(z) from current large-scale structure observations and the estimates obtained from the reconstruction of the cosmic expansion history. Considering a recently proposed null test for GR using matter perturbations, we also apply the NPS method to reconstruct fσ_8(z). For this case, we find a ∼3σ tension (good agreement) with the standard relativistic cosmology when the Planck (WMAP-9) priors are used.

  19. On Maximal Non-Disjoint Families of Subsets

    Directory of Open Access Journals (Sweden)

    Yu. A. Zuev

    2017-01-01

    Full Text Available The paper studies maximal non-disjoint families of subsets of a finite set. Non-disjointness means that any two subsets in the family have a nonempty intersection; maximality means that no new subset can be added to the family without violating the non-disjointness condition. The study of the properties of such families is an important branch of extremal set theory. Along with their purely combinatorial interest, the problems considered here play an important role in computer science, noise-resistant coding, and cryptography. The problem originates in the 1961 paper of Erdos, Ko and Rado, which established the maximum size of a non-disjoint family of subsets of equal size. A 1974 publication of Erdos and Kleitman estimated the number of maximal non-disjoint families of subsets without the restriction to equal size. Those authors did not establish the asymptotics of the logarithm of the number of such families as the size of the base finite set tends to infinity, but they proposed such an asymptotics as a conjecture. A.D. Korshunov, in two publications in 2003 and 2005, established the asymptotics of the number of non-disjoint families of subsets of arbitrary sizes without the maximality condition on these families. The approach used in the paper to study families of subsets is to describe them in the language of Boolean functions: a one-to-one correspondence between a family of subsets and a Boolean function is established by taking the characteristic vectors of the subsets in the family as the true points of the function. The main theoretical result of the paper is that the maximal non-disjoint families are in one-to-one correspondence with the monotone self-dual Boolean functions. When estimating the number of maximal non-disjoint families, this allowed us to use the result of A.A. Sapozhenko, who established the asymptotics of the number of the
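
    The stated correspondence can be checked exhaustively for a tiny ground set (a brute-force sketch, not the paper's method): on a 3-element set there are exactly four maximal intersecting families (the three "stars" and the majority family), matching the four monotone self-dual Boolean functions of three variables.

```python
from itertools import combinations

N = 3  # ground set {0,1,2}; subsets encoded as 3-bit masks 0..7

def intersecting(family):
    """Every two members share an element; the empty set is excluded."""
    return all(family) and all(a & b for a, b in combinations(family, 2))

# count maximal non-disjoint (intersecting) families of subsets
fam_count = 0
for mask in range(1, 1 << (1 << N)):
    fam = [s for s in range(1 << N) if mask >> s & 1]
    if intersecting(fam) and all(
        not intersecting(fam + [s]) for s in range(1 << N) if s not in fam
    ):
        fam_count += 1

# count monotone self-dual Boolean functions of N variables
fun_count = 0
for tt in range(1 << (1 << N)):
    f = lambda x: tt >> x & 1
    monotone = all(f(x) <= f(x | 1 << i) for x in range(1 << N) for i in range(N))
    self_dual = all(f(x) != f(x ^ ((1 << N) - 1)) for x in range(1 << N))
    if monotone and self_dual:
        fun_count += 1
```
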

  20. Few-view image reconstruction with dual dictionaries

    International Nuclear Information System (INIS)

    Lu Yang; Zhao Jun; Wang Ge

    2012-01-01

    In this paper, we formulate the problem of computed tomography (CT) under sparsity and few-view constraints, and propose a novel algorithm for image reconstruction from few-view data utilizing the simultaneous algebraic reconstruction technique (SART) coupled with dictionary learning, sparse representation and total variation (TV) minimization on two interconnected levels. The main feature of our algorithm is the use of two dictionaries: a transitional dictionary for atom matching and a global dictionary for image updating. The atoms in the global and transitional dictionaries represent the image patches from high-quality and low-quality CT images, respectively. Experiments with simulated and real projections were performed to evaluate and validate the proposed algorithm. The results reconstructed using the proposed approach are significantly better than those using either SART or SART–TV. (paper)
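
    The SART step that anchors the dictionary-learning loop can be sketched on a toy linear system (classical SART with row/column-sum weighting; made-up data, not the paper's implementation):

```python
import numpy as np

def sart(A, b, iters=500, lam=1.0):
    """Basic SART iteration: simultaneous correction weighted by the row
    and column sums of the (nonnegative) system matrix A."""
    row = A.sum(axis=1)
    col = A.sum(axis=0)
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x + lam * (A.T @ ((b - A @ x) / row)) / col
    return x

# consistent toy system with known solution [1, 2]
A = np.array([[1.0, 1.0], [1.0, 2.0], [2.0, 1.0]])
x_hat = sart(A, A @ np.array([1.0, 2.0]))
```
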

  1. An inquiry into the bibliographic sources of some of the Bustan's maxims

    Directory of Open Access Journals (Sweden)

    sajjad rahmatian

    2016-12-01

    Full Text Available Sa`di is one of those poets who has given a special place to preaching and guiding people, and among his works the entire text of the Bustan is devoted to advice and maxims on various legal and ethical subjects. In composing this work and expressing its moral points, Sa`di was surely influenced, directly or indirectly, by earlier sources, possibly making use of their content. The main purpose of this article is to review the basis and sources of the Bustan's maxims and to show which texts and works influenced Sa`di when he expressed the maxims of this work. To this end, sources devoted wholly or partly to aphorisms were searched in order to discover and extract traces of the influence of their moral and didactic content on Sa`di. Among the most important findings of this study are the indirect influence of some Pahlavi books of maxims (such as the maxims of Azarbad Marespandan and the book of maxims of Bozorgmehr), and also Sa`di's direct debt to the moral and ethical works of poets and writers before him; of these, his debt to the maxims of Abo-Shakur Balkhi, Ferdowsi and Keikavus is remarkable and noteworthy.

  2. Can monkeys make investments based on maximized pay-off?

    Directory of Open Access Journals (Sweden)

    Sophie Steelandt

    2011-03-01

    Full Text Available Animals can maximize benefits, but it is not known whether they adjust their investment according to expected pay-offs. We investigated whether monkeys can use different investment strategies in an exchange task. We tested eight capuchin monkeys (Cebus apella) and thirteen macaques (Macaca fascicularis, Macaca tonkeana) in an experiment where they could adapt their investment to the food amounts proposed by two different experimenters. One, the doubling partner, returned a reward that was twice the amount given by the subject, whereas the other, the fixed partner, always returned a constant amount regardless of the amount given. To maximize pay-offs, subjects should invest a maximal amount with the first partner and a minimal amount with the second. When tested with the fixed partner only, one third of the monkeys learned to remove a maximal amount of food for immediate consumption before investing a minimal one. With both partners, most subjects failed to maximize pay-offs by adapting their decision rules to each partner's quality. A single Tonkean macaque succeeded in investing a maximal amount with one experimenter and a minimal amount with the other. The fact that only one of the 21 subjects learned to maximize benefits by adapting investment according to the experimenters' quality indicates that such a task is difficult for monkeys, albeit not impossible.

  3. A Nonparametric Bayesian Approach For Emission Tomography Reconstruction

    International Nuclear Information System (INIS)

    Barat, Eric; Dautremer, Thomas

    2007-01-01

    We introduce a PET reconstruction algorithm following a nonparametric Bayesian (NPB) approach. In contrast with Expectation Maximization (EM), the proposed technique does not rely on any space discretization. Namely, the activity distribution--the normalized emission intensity of the spatial Poisson process--is considered as a spatial probability density, and the observations are the projections of random emissions whose distribution has to be estimated. This approach is nonparametric in the sense that the quantity of interest belongs to the set of probability measures on R^k (for reconstruction in k dimensions), and it is Bayesian in the sense that we define a prior directly on this spatial measure. In this context, we propose to model the nonparametric probability density as an infinite mixture of multivariate normal distributions. As a prior for this mixture we consider a Dirichlet Process Mixture (DPM) with a Normal-Inverse Wishart (NIW) model as the base distribution of the Dirichlet Process. As in EM-family reconstruction, we use a data augmentation scheme where the set of hidden variables are the emission locations for each observed line of response in the continuous object space. Thanks to the data augmentation, we propose a Markov Chain Monte Carlo (MCMC) algorithm (Gibbs sampler) which is able to generate draws from the posterior distribution of the spatial intensity. A difference with EM is that one step of the Gibbs sampler corresponds to the generation of emission locations, while only the expected number of emissions per pixel/voxel is used in EM. Another key difference is that the estimated spatial intensity is a continuous function, such that there is no need to compute a projection matrix. Finally, draws from the intensity posterior distribution allow the estimation of posterior functionals like the variance or confidence intervals. Results are presented for simulated data based on a 2D brain phantom and compared to Bayesian MAP-EM.
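
    The Dirichlet process underlying the DPM prior can be illustrated with its truncated stick-breaking construction (a 1-D toy with a normal base measure standing in for the paper's Normal-Inverse-Wishart base; not the authors' sampler):

```python
import numpy as np

rng = np.random.default_rng(0)

def stick_breaking(alpha, k):
    """Truncated stick-breaking construction of Dirichlet process weights:
    break off a Beta(1, alpha) fraction of the remaining stick each step."""
    v = rng.beta(1.0, alpha, size=k)
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))
    return v * remaining

k = 50
w = stick_breaking(alpha=2.0, k=k)
# component means drawn from a simple 1-D normal base measure
mu = rng.normal(0.0, 5.0, size=k)
z = rng.choice(k, size=1000, p=w / w.sum())
samples = rng.normal(mu[z], 1.0)   # draws from one DP mixture realization
```
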

  4. Effects of Piecewise Spatial Smoothing in 4-D SPECT Reconstruction

    Science.gov (United States)

    Qi, Wenyuan; Yang, Yongyi; King, Michael A.

    2014-02-01

    In nuclear medicine, cardiac gated SPECT images are known to suffer from significantly increased noise owing to limited data counts. Consequently, spatial (and temporal) smoothing has been indispensable for suppressing the noise artifacts in SPECT reconstruction. However, recently we demonstrated that the benefit of spatial processing in motion-compensated reconstruction of gated SPECT (aka 4-D) could be outweighed by its adverse effects on the myocardium, which included degraded wall motion and perfusion defect detectability. In this work, we investigate whether we can alleviate these adverse effects by exploiting an alternative spatial smoothing prior in 4-D based on image total variation (TV). TV based prior is known to induce piecewise smoothing which can preserve edge features (such as boundaries of the heart wall) in reconstruction. However, it is not clear whether such a property would necessarily be beneficial for improving the accuracy of the myocardium in 4-D reconstruction. In particular, it is unknown whether it would adversely affect the detectability of perfusion defects that are small in size or low in contrast. In our evaluation study, we first use Monte Carlo simulated imaging with 4-D NURBS-based cardiac-torso (NCAT) phantom wherein the ground truth is known for quantitative comparison. We evaluated the accuracy of the reconstructed myocardium using a number of metrics, including regional and overall accuracy of the myocardium, accuracy of the phase activity curve (PAC) of the LV wall for wall motion, uniformity and spatial resolution of the LV wall, and detectability of perfusion defects using a channelized Hotelling observer (CHO). For lesion detection, we simulated perfusion defects with different sizes and contrast levels with the focus being on perfusion defects that are subtle. As a preliminary demonstration, we also tested on three sets of clinical acquisitions. From the quantitative results, it was demonstrated that TV smoothing could
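
    The edge-preserving behavior of a TV prior can be shown on a 1-D toy signal, using gradient descent on a smoothed TV objective (a stand-in illustration, not the authors' 4-D reconstruction):

```python
import numpy as np

def tv_denoise_1d(f, lam=0.5, eps=1e-2, step=0.05, iters=1000):
    """Gradient descent on 0.5*||u - f||^2 + lam * sum_i sqrt(d_i^2 + eps),
    where d_i = u[i+1] - u[i]: a smoothed total-variation objective that
    suppresses noise while keeping jumps (edges) largely intact."""
    u = f.astype(float).copy()
    for _ in range(iters):
        d = np.diff(u)
        w = d / np.sqrt(d * d + eps)
        grad_tv = np.concatenate(([-w[0]], w[:-1] - w[1:], [w[-1]]))
        u -= step * ((u - f) + lam * grad_tv)
    return u

rng = np.random.default_rng(3)
noisy = np.concatenate([np.zeros(32), np.ones(32)]) + 0.1 * rng.standard_normal(64)
clean = tv_denoise_1d(noisy)   # noise reduced, step edge preserved
```
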

  5. Maximal lattice free bodies, test sets and the Frobenius problem

    DEFF Research Database (Denmark)

    Jensen, Anders Nedergaard; Lauritzen, Niels; Roune, Bjarke Hammersholt

    Maximal lattice free bodies are maximal polytopes without interior integral points. Scarf initiated the study of maximal lattice free bodies relative to the facet normals in a fixed matrix. In this paper we give an efficient algorithm for computing the maximal lattice free bodies of an integral m...... method is inspired by the novel algorithm by Einstein, Lichtblau, Strzebonski and Wagon and the Groebner basis approach by Roune....
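
    The Frobenius number that names the problem can be computed directly for small inputs (an illustrative brute force, unrelated to the lattice-body algorithm itself):

```python
def frobenius(coins, bound=200):
    """Largest integer not representable as a nonnegative integer
    combination of `coins` (assumed setwise coprime), found by a
    brute-force reachability table up to `bound`."""
    reachable = [False] * (bound + 1)
    reachable[0] = True
    for v in range(1, bound + 1):
        reachable[v] = any(v >= c and reachable[v - c] for c in coins)
    return max(v for v in range(bound + 1) if not reachable[v])
```

For example, frobenius([3, 5]) gives 7, matching the two-coin formula a*b - a - b, and frobenius([6, 9, 20]) gives the classic value 43.
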

  6. Influence of the Sampling Rate and Noise Characteristics on Prediction of the Maximal Safe Laser Exposure in Human Skin Using Pulsed Photothermal Radiometry

    Science.gov (United States)

    Vidovič, L.; Milanič, M.; Majaron, B.

    2013-09-01

    Pulsed photothermal radiometry (PPTR) allows for noninvasive determination of the laser-induced temperature depth profile in strongly scattering samples, including human skin. In a recent experimental study, we demonstrated that such information can be used to derive rather accurate predictions of the maximal safe radiant exposure on an individual patient basis. This has important implications for the efficacy and safety of several laser applications in dermatology and aesthetic surgery, which are often compromised by the risk of adverse side effects (e.g., scarring and dyspigmentation) resulting from nonselective absorption of strong laser light in epidermal melanin. In this study, we analyze the differences between the individual maximal safe radiant exposure values as predicted from PPTR temperature depth profiling performed using a commercial mid-IR thermal camera (as used to acquire the original patient data) and our customized PPTR setup. To this end, the latter has been used to acquire 17 PPTR records from three healthy volunteers, using 1 ms laser irradiation at 532 nm and a signal sampling rate of 20 000 samples per second. The laser-induced temperature profiles are reconstructed first from the intact PPTR signals, and then by binning the data to imitate the lower sampling rate of the IR camera (1000 fps). Using the initial temperature profile in a dedicated numerical model of heat transfer and protein denaturation dynamics, the predicted levels of epidermal thermal damage and the corresponding maximal safe radiant exposures are compared. A similar analysis is also performed with regard to the differences between the noise characteristics of the two PPTR setups.
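
    The binning step used to imitate the slower camera amounts to averaging consecutive sample groups (a minimal sketch with a toy radiometric decay, not the study's signals):

```python
import numpy as np

def bin_signal(sig, factor):
    """Average consecutive groups of `factor` samples, imitating a lower
    acquisition rate (e.g. 20 000 samples/s binned by 20 -> 1000 fps)."""
    n = len(sig) // factor * factor
    return sig[:n].reshape(-1, factor).mean(axis=1)

t = np.arange(2000) / 20000.0      # 0.1 s of signal at 20 kHz
sig = np.exp(-t / 0.02)            # toy monotone radiometric decay
binned = bin_signal(sig, 20)       # 100 samples, as if sampled at 1 kHz
```
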

  7. Disk Density Tuning of a Maximal Random Packing.

    Science.gov (United States)

    Ebeida, Mohamed S; Rushdi, Ahmad A; Awad, Muhammad A; Mahmoud, Ahmed H; Yan, Dong-Ming; English, Shawn A; Owens, John D; Bajaj, Chandrajit L; Mitchell, Scott A

    2016-08-01

    We introduce an algorithmic framework for tuning the spatial density of disks in a maximal random packing, without changing the sizing function or radii of disks. Starting from any maximal random packing such as a Maximal Poisson-disk Sampling (MPS), we iteratively relocate, inject (add), or eject (remove) disks, using a set of three successively more-aggressive local operations. We may achieve a user-defined density, either more dense or more sparse, almost up to the theoretical structured limits. The tuned samples are conflict-free, retain coverage maximality, and, except in the extremes, retain the blue noise randomness properties of the input. We change the density of the packing one disk at a time, maintaining the minimum disk separation distance and the maximum domain coverage distance required of any maximal packing. These properties are local, and we can handle spatially-varying sizing functions. Using fewer points to satisfy a sizing function improves the efficiency of some applications. We apply the framework to improve the quality of meshes, removing non-obtuse angles; and to more accurately model fiber reinforced polymers for elastic and failure simulations.
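
    The kind of maximal random packing the framework starts from can be produced by plain dart throwing with rejection (a toy input generator, not the paper's tuning algorithm):

```python
import numpy as np

rng = np.random.default_rng(1)

def dart_throwing(r, attempts=5000):
    """Poisson-disk sampling in the unit square by rejection: accept a
    random point only if it keeps all pairwise distances >= r. Enough
    attempts drive the packing toward (but not exactly to) maximality."""
    pts = []
    for _ in range(attempts):
        p = rng.random(2)
        if all((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 >= r * r for q in pts):
            pts.append(p)
    return np.array(pts)

pts = dart_throwing(r=0.1)
```
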

  8. Adaptive algebraic reconstruction technique

    International Nuclear Information System (INIS)

    Lu Wenkai; Yin Fangfang

    2004-01-01

    Algebraic reconstruction techniques (ART) are iterative procedures for reconstructing objects from their projections. It is proven that ART can be computationally efficient by carefully arranging the order in which the collected data are accessed during the reconstruction procedure and adaptively adjusting the relaxation parameters. In this paper, an adaptive algebraic reconstruction technique (AART), which adopts the same projection access scheme in multilevel scheme algebraic reconstruction technique (MLS-ART), is proposed. By introducing adaptive adjustment of the relaxation parameters during the reconstruction procedure, one-iteration AART can produce reconstructions with better quality, in comparison with one-iteration MLS-ART. Furthermore, AART outperforms MLS-ART with improved computational efficiency
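
    The row-by-row ART update with a relaxation parameter can be sketched on a toy system (classical Kaczmarz form with a fixed λ; the paper's contribution is adapting λ during reconstruction, which is not reproduced here):

```python
import numpy as np

def art(A, b, sweeps=200, lam=0.5):
    """Kaczmarz-type ART: sequentially move the estimate toward each row's
    hyperplane, scaled by the relaxation parameter lam."""
    x = np.zeros(A.shape[1])
    for _ in range(sweeps):
        for ai, bi in zip(A, b):
            x = x + lam * (bi - ai @ x) / (ai @ ai) * ai
    return x

# consistent toy system with known solution [1, 2]
A = np.array([[1.0, 0.0], [1.0, 1.0]])
x_hat = art(A, A @ np.array([1.0, 2.0]))
```
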

  9. Fast parallel algorithm for three-dimensional distance-driven model in iterative computed tomography reconstruction

    International Nuclear Information System (INIS)

    Chen Jian-Lin; Li Lei; Wang Lin-Yuan; Cai Ai-Long; Xi Xiao-Qi; Zhang Han-Ming; Li Jian-Xin; Yan Bin

    2015-01-01

    The projection matrix model is used to describe the physical relationship between the reconstructed object and the projection. Such a model has a strong influence on projection and backprojection, two vital operations in iterative computed tomographic reconstruction. The distance-driven model (DDM) is a state-of-the-art technology that simulates forward and back projections. This model has a low computational complexity and a relatively high spatial resolution; however, it includes only a few methods in a parallel operation with a matched model scheme. This study introduces a fast and parallelizable algorithm to improve the traditional DDM for computing the parallel projection and backprojection operations. Our proposed model has been implemented on a GPU (graphics processing unit) platform and has achieved satisfactory computational efficiency with no approximation. The runtime for the projection and backprojection operations with our model is approximately 4.5 s and 10.5 s per loop, respectively, with an image size of 256×256×256 and 360 projections with a size of 512×512. We compare several general algorithms that have been proposed for maximizing GPU efficiency by using the unmatched projection/backprojection models in a parallel computation. The imaging resolution is not sacrificed and remains accurate during computed tomographic reconstruction. (paper)

  10. Region of interest evaluation of SPECT image reconstruction methods using a realistic brain phantom

    International Nuclear Information System (INIS)

    Xia, Weishi; Glick, S.J.; Soares, E.J.

    1996-01-01

    A realistic numerical brain phantom, developed by Zubal et al, was used for a region-of-interest evaluation of the accuracy and noise variance of the following SPECT reconstruction methods: (1) Maximum-Likelihood reconstruction using the Expectation-Maximization (ML-EM) algorithm; (2) an EM algorithm using ordered subsets (OS-EM); (3) a re-scaled block iterative EM algorithm (RBI-EM); and (4) a filtered backprojection algorithm that uses a combination of the Bellini method for attenuation compensation and an iterative spatial blurring correction method using the frequency-distance principle (FDP). The Zubal phantom was made from segmented MRI slices of the brain, so that neuro-anatomical structures are well defined and indexed. Small regions of interest (ROIs) from the white matter, grey matter in the center of the brain, and grey matter from the peripheral area of the brain were selected for the evaluation. Photon attenuation and distance-dependent collimator blurring were modeled. Multiple independent noise realizations were generated for two different count levels. The simulation study showed that the ROI bias measured for the EM-based algorithms decreased as the iteration number increased, and that the OS-EM and RBI-EM algorithms (16 and 64 subsets were used) achieved accuracy equivalent to the ML-EM algorithm at about the same noise variance, with a much smaller number of iterations. The Bellini-FDP restoration algorithm converged fast and required less computation per iteration. The ML-EM algorithm had a slightly better ROI bias vs. variance trade-off than the other algorithms.
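
    The ML-EM update at the heart of methods (1)-(3) is a single multiplicative formula, shown here on a made-up 3x2 system rather than SPECT data (a textbook sketch, not the study's implementation):

```python
import numpy as np

def mlem(A, y, iters=2000):
    """Classical ML-EM update for emission tomography:
        x <- x * [A^T (y / (A x))] / (A^T 1),
    where A^T 1 is the sensitivity image. Nonnegative A, positive Ax."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)
    for _ in range(iters):
        x *= (A.T @ (y / (A @ x))) / sens
    return x

# noiseless toy data generated from a known activity vector [2, 3]
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
x_hat = mlem(A, A @ np.array([2.0, 3.0]))
```

OS-EM applies the same update using only a subset of the rows of A per sub-iteration, which is where its speedup over ML-EM comes from.
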

  11. FUSE: a profit maximization approach for functional summarization of biological networks

    Directory of Open Access Journals (Sweden)

    Seah Boon-Siew

    2012-03-01

    Full Text Available Abstract Background The availability of large-scale curated protein interaction datasets has given rise to the opportunity to investigate higher level organization and modularity within the protein interaction network (PPI) using graph-theoretic analysis. Despite the recent progress, systems-level analysis of PPIs remains a daunting task, as it is challenging to make sense out of the deluge of high-dimensional interaction data. Specifically, techniques that automatically abstract and summarize PPIs at multiple resolutions to provide high-level views of their functional landscape are still lacking. We present a novel data-driven and generic algorithm called FUSE (Functional Summary Generator) that generates functional maps of a PPI at different levels of organization, from broad process-process level interactions to in-depth complex-complex level interactions, through a profit maximization approach that exploits the Minimum Description Length (MDL) principle to maximize the information gain of the summary graph while satisfying the level-of-detail constraint. Results We evaluate the performance of FUSE on several real-world PPIs. We also compare FUSE to state-of-the-art graph clustering methods with GO term enrichment by constructing the biological process landscape of the PPIs. Using the AD network as our case study, we further demonstrate the ability of FUSE to quickly summarize the network and identify many different processes and complexes that regulate it. Finally, we study the higher-order connectivity of the human PPI. Conclusion By simultaneously evaluating interaction and annotation data, FUSE abstracts higher-order interaction maps by reducing the details of the underlying PPI to form a functional summary graph of interconnected functional clusters. Our results demonstrate its effectiveness and superiority over state-of-the-art graph clustering methods with GO term enrichment.

  12. Maximal exercise and muscle oxygen extraction in acclimatizing lowlanders and high altitude natives

    DEFF Research Database (Denmark)

    Lundby, Carsten; Sander, Mikael; van Hall, Gerrit

    2006-01-01

    , and is the focus of the present study. We have studied six lowlanders during maximal exercise at sea level (SL) and with acute (AH) exposure to 4,100 m altitude, and again after 2 (W2) and 8 weeks (W8) of altitude sojourn, where also eight high altitude native (Nat) Aymaras were studied. Fractional arterial muscle...... O(2) extraction at maximal exercise was 90.0+/-1.0% in the Danish lowlanders at sea level, and remained close to this value in all situations. In contrast to this, fractional arterial O(2) extraction was 83.2+/-2.8% in the high altitude natives, and did not change with the induction of normoxia....... The capillary oxygen conductance of the lower extremity, a measure of oxygen diffusing capacity, was decreased in the Danish lowlanders after 8 weeks of acclimatization, but was still higher than the value obtained from the high altitude natives. The values were (in ml min(-1) mmHg(-1)) 55.2+/-3.7 (SL), 48...

  13. Maximum entropy reconstruction of poloidal magnetic field and radial electric field profiles in tokamaks

    Science.gov (United States)

    Chen, Yihang; Xiao, Chijie; Yang, Xiaoyi; Wang, Tianbo; Xu, Tianchao; Yu, Yi; Xu, Min; Wang, Long; Lin, Chen; Wang, Xiaogang

    2017-10-01

    The Laser-driven Ion beam trace probe (LITP) is a new diagnostic method for measuring poloidal magnetic field (Bp) and radial electric field (Er) in tokamaks. LITP injects a laser-driven ion beam into the tokamak, and Bp and Er profiles can be reconstructed using tomography methods. A reconstruction code has been developed to validate the LITP theory, and both 2D reconstruction of Bp and simultaneous reconstruction of Bp and Er have been attained. To reconstruct from experimental data with noise, Maximum Entropy and Gaussian-Bayesian tomography methods were applied and improved according to the characteristics of the LITP problem. With these improved methods, a reconstruction error level below 15% has been attained with a data noise level of 10%. These methods will be further tested and applied in the following LITP experiments. Supported by the ITER-CHINA program 2015GB120001, CHINA MOST under 2012YQ030142 and the National Natural Science Foundation of China under 11575014 and 11375053.
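The Maximum Entropy reconstruction step can be illustrated with a minimal multiplicative ART (MART) sketch, which converges toward the maximum-entropy image consistent with the measured ray sums. The 2x2 phantom, relaxation factor, and iteration count below are illustrative choices only, not the LITP code's actual geometry or method:

```python
import numpy as np

def mart(A, p, n_iter=200, lam=0.5):
    """Multiplicative ART: converges toward the maximum-entropy
    image consistent with the projection data p = A @ x."""
    x = np.ones(A.shape[1])          # flat (maximum-entropy) initial guess
    for _ in range(n_iter):
        for i in range(A.shape[0]):
            est = A[i] @ x
            if est > 0:
                # multiplicative correction along ray i
                x *= (p[i] / est) ** (lam * A[i])
    return x

# 2x2 phantom observed through row and column sums (4 rays, 4 pixels)
A = np.array([[1, 1, 0, 0],   # row 0
              [0, 0, 1, 1],   # row 1
              [1, 0, 1, 0],   # column 0
              [0, 1, 0, 1]], float)  # column 1
true = np.array([1.0, 2.0, 3.0, 4.0])
p = A @ true
x = mart(A, p)
print(np.round(A @ x, 3))   # reprojections match the data
```

Because the system is underdetermined, the recovered image is the maximum-entropy member of the feasible set rather than the phantom itself; only the reprojections are guaranteed to match.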

  14. Quantitative reconstruction of PIXE-tomography data for thin samples using GUPIX X-ray emission yields

    Energy Technology Data Exchange (ETDEWEB)

    Michelet, C., E-mail: michelet@cenbg.in2p3.fr [Univ. Bordeaux, CENBG, UMR 5797, F-33170 Gradignan (France); CNRS, IN2P3, CENBG, UMR 5797, F-33170 Gradignan (France); Barberet, Ph., E-mail: barberet@cenbg.in2p3.fr [Univ. Bordeaux, CENBG, UMR 5797, F-33170 Gradignan (France); CNRS, IN2P3, CENBG, UMR 5797, F-33170 Gradignan (France); Devès, G., E-mail: deves@cenbg.in2p3.fr [Univ. Bordeaux, CENBG, UMR 5797, F-33170 Gradignan (France); CNRS, IN2P3, CENBG, UMR 5797, F-33170 Gradignan (France); Bouguelmouna, B., E-mail: bbouguel@gmail.com [Univ. Bordeaux, CENBG, UMR 5797, F-33170 Gradignan (France); CNRS, IN2P3, CENBG, UMR 5797, F-33170 Gradignan (France); Bourret, S., E-mail: bourret@cenbg.in2p3.fr [Univ. Bordeaux, CENBG, UMR 5797, F-33170 Gradignan (France); CNRS, IN2P3, CENBG, UMR 5797, F-33170 Gradignan (France); Delville, M.-H., E-mail: delville@icmcb-bordeaux.cnrs.fr [Institut de Chimie de la Matière Condensée de Bordeaux (ICMCB, UPR9048) CNRS, Université de Bordeaux, 87 avenue du Dr. A. Schweitzer, Pessac F-33608 (France); Le Trequesser, Q., E-mail: letreque@cenbg.in2p3.fr [Univ. Bordeaux, CENBG, UMR 5797, F-33170 Gradignan (France); CNRS, IN2P3, CENBG, UMR 5797, F-33170 Gradignan (France); Institut de Chimie de la Matière Condensée de Bordeaux (ICMCB, UPR9048) CNRS, Université de Bordeaux, 87 avenue du Dr. A. Schweitzer, Pessac F-33608 (France); Gordillo, N., E-mail: nuri.gordillo@gmail.com [Univ. Bordeaux, CENBG, UMR 5797, F-33170 Gradignan (France); CNRS, IN2P3, CENBG, UMR 5797, F-33170 Gradignan (France); Beasley, D.G., E-mail: d.beasley@ucl.ac.uk [Center of Medical Imaging Computing (CMIC), Department of Medical Physics & Bioengineering, University College London, Gower Street, London WC1E 6BT (United Kingdom); and others

    2015-04-01

    We present here a new development of the TomoRebuild software package, to perform quantitative Particle Induced X-ray Emission Tomography (PIXET) reconstruction. X-ray yields are obtained from the GUPIX code. The GUPIX data base is available for protons up to 5 MeV and also in the 20–100 MeV energy range, deuterons up to 6 MeV, {sup 3}He and alphas up to 12 MeV. In this version, X-ray yields are calculated for thin samples, i.e. without simulating X-ray attenuation. PIXET data reconstruction is kept independent of Scanning Transmission Ion Microscopy Tomography (STIMT) for as long as possible. In this way, the local mass distribution (in g/cm{sup 3}) of each X-ray emitting element is reconstructed in all voxels of the analyzed volume, only from PIXET data, without the need for associated STIMT data. Only the very last step of data analysis requires STIMT data, in order to normalize PIXET data to obtain concentration distributions, in terms of normalized mass fractions (in μg/g). For this, a noise correction procedure has been designed in ImageJ. Moreover, sinogram or image misalignment can be corrected, as well as the difference in beam size between the two experiments. The main features of the TomoRebuild code, user friendly design and modular C++ implementation, were kept. The software package is portable and can run on Windows and Linux operating systems. An optional user-friendly graphic interface was designed in Java, as a plugin for the ImageJ graphic software package. Reconstruction examples are presented from biological specimens of Caenorhabditis elegans – a small nematode constituting a reference model for biology studies. The reconstruction results are compared between the different codes TomoRebuild, DISRA and JPIXET, and different reconstruction methods: Filtered BackProjection (FBP) and Maximum Likelihood Expectation Maximization (MLEM).

  15. A Pseudoproxy-Ensemble Study of Late-Holocene Climate Field Reconstructions Using CCA

    Science.gov (United States)

    Amrhein, D. E.; Smerdon, J. E.

    2009-12-01

    Recent evaluations of late-Holocene multi-proxy reconstruction methods have used pseudoproxy experiments derived from millennial General Circulation Model (GCM) integrations. These experiments assess the performance of a reconstruction technique by comparing pseudoproxy reconstructions, which use restricted subsets of model data, against complete GCM data fields. Most previous studies have tested methodologies using different pseudoproxy noise levels, but only with single realizations for each noise classification. A more robust evaluation of performance is to create an ensemble of pseudoproxy networks with distinct sets of noise realizations and a corresponding reconstruction ensemble that can be evaluated for consistency and sensitivity to random error. This work investigates canonical correlation analysis (CCA) as a late-Holocene climate field reconstruction (CFR) technique using ensembles of pseudoproxy experiments derived from the NCAR CSM 1.4 millennial integration. Three 200-member reconstruction ensembles are computed using pseudoproxies with signal-to-noise ratios (by standard deviation) of 1, 0.5, and 0.25 and locations that approximate the spatial distribution of real-world multiproxy networks. An important component of these ensemble calculations is the independent optimization of the three CCA truncation parameters for each ensemble member. This task is accomplished using an inexpensive discrete optimization algorithm that minimizes both RMS error in the calibration interval and the number of free parameters in the reconstruction model to avoid artificial skill. Within this framework, CCA is investigated for its sensitivity to the level of noise in the pseudoproxy network and the spatial distribution of the network. Warm biases, variance losses, and validation-interval error increase with noise level and vary spatially within the reconstructed fields. Reconstruction skill, measured as grid-point correlations during the validation interval, is lowest in
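Pseudoproxies with a prescribed signal-to-noise ratio by standard deviation can be generated from a model grid-point series as in the following generic sketch; the scaling recipe is the common one, not necessarily the study's exact implementation:

```python
import numpy as np

def make_pseudoproxy(signal, snr, rng):
    """Add white noise scaled so that std(signal) / std(noise) == snr."""
    noise = rng.normal(size=signal.shape)
    noise *= signal.std() / (snr * noise.std())
    return signal + noise

rng = np.random.default_rng(1)
signal = np.sin(np.linspace(0, 20, 1000))      # stand-in for a GCM grid-point series
proxy = make_pseudoproxy(signal, snr=0.5, rng=rng)
print(round(signal.std() / (proxy - signal).std(), 3))  # recovers the target SNR
```

Repeating this with independent noise realizations at SNR 1, 0.5, and 0.25 yields the kind of ensemble the study reconstructs from.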

  16. Kuwaiti reconstruction project unprecedented in size, complexity

    Energy Technology Data Exchange (ETDEWEB)

    Tippee, B.

    1993-03-15

    There had been no challenge like it: a desert emirate ablaze; its main city sacked; the economically crucial oil industry devastated; countryside shrouded in smoke from oil well fires and littered with unexploded ordnance, disabled military equipment, and unignited crude oil. Like the well-documented effort that brought 749 burning wells under control in less than 7 months, Kuwaiti reconstruction had no precedent. Unlike the firefight, reconstruction is nowhere complete. It nevertheless has placed two of three refineries back on stream, restored oil production to preinvasion levels, and repaired or rebuilt 17 of 26 oil field gathering stations. Most of the progress has come since the last well fire went out on Nov. 6, 1991. Expatriates in Kuwait since the days of 'Al-Awda' ('the return,' in Arabic) attribute much of the rapid progress under 'Al-Tameer' ('the reconstruction') to decisions and preparations made while the well fires still raged. The article describes the planning for Al-Awda, reentering the country, drilling plans, facilities reconstruction, and special problems.

  17. On the way towards a generalized entropy maximization procedure

    International Nuclear Information System (INIS)

    Bagci, G. Baris; Tirnakli, Ugur

    2009-01-01

    We propose a generalized entropy maximization procedure, which takes into account the generalized averaging procedures and information gain definitions underlying the generalized entropies. This novel generalized procedure is then applied to Renyi and Tsallis entropies. The generalized entropy maximization procedure for Renyi entropies results in the exponential stationary distribution asymptotically for q ∈ (0,1], in contrast to the stationary distribution of the inverse power law obtained through the ordinary entropy maximization procedure. Another result of the generalized entropy maximization procedure is that one can naturally obtain all the possible stationary distributions associated with the Tsallis entropies by employing either ordinary or q-generalized Fourier transforms in the averaging procedure.
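The stationary distributions referred to here are conveniently written with the q-exponential function of nonextensive statistics. A small numerical sketch, illustrative rather than the paper's derivation, showing the inverse-power-law case and the ordinary exponential limit as q → 1:

```python
import numpy as np

def q_exponential(x, q):
    """Tsallis q-exponential e_q(x) = [1 + (1-q) x]_+^{1/(1-q)};
    reduces to exp(x) as q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return np.exp(x)
    base = 1.0 + (1.0 - q) * x
    # cutoff: the q-exponential vanishes where the base is non-positive
    return np.where(base > 0, np.abs(base) ** (1.0 / (1.0 - q)), 0.0)

x = np.linspace(0.0, 2.0, 5)
print(q_exponential(-x, 2.0))   # q = 2 gives the power law 1/(1+x)
print(q_exponential(-x, 1.0))   # q = 1 recovers exp(-x)
```

The q = 2 case is an inverse power law of the kind the ordinary maximization procedure yields, while q → 1 collapses to the exponential stationary distribution.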

  18. Violating Bell inequalities maximally for two d-dimensional systems

    International Nuclear Information System (INIS)

    Chen Jingling; Wu Chunfeng; Oh, C. H.; Kwek, L. C.; Ge Molin

    2006-01-01

    We show the maximal violation of Bell inequalities for two d-dimensional systems by using the method of the Bell operator. The maximal violation corresponds to the maximal eigenvalue of the Bell operator matrix. The eigenvectors corresponding to these eigenvalues are described by asymmetric entangled states. We estimate the maximum value of the eigenvalue for large dimension. A family of elegant entangled states |Ψ⟩_app that violate the Bell inequality more strongly than the maximally entangled state but are somewhat close to these eigenvectors is presented. These approximate states can potentially be useful for quantum cryptography as well as many other important fields of quantum information.
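For the simplest case d = 2 the Bell-operator method can be checked numerically: the maximal eigenvalue of the CHSH Bell operator at the optimal measurement settings is Tsirelson's bound 2√2, above the classical bound of 2. This sketch covers only d = 2, not the paper's general d-dimensional construction:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)   # Pauli X
Z = np.array([[1, 0], [0, -1]], dtype=complex)  # Pauli Z

# CHSH measurement settings that achieve Tsirelson's bound
A0, A1 = Z, X
B0 = (Z + X) / np.sqrt(2)
B1 = (Z - X) / np.sqrt(2)

# Bell operator B = A0 B0 + A0 B1 + A1 B0 - A1 B1 on the two-qubit space
bell = (np.kron(A0, B0) + np.kron(A0, B1)
        + np.kron(A1, B0) - np.kron(A1, B1))
lam_max = np.max(np.linalg.eigvalsh(bell))
print(lam_max)   # maximal eigenvalue = maximal quantum violation
```

With these settings the operator reduces to √2 (Z⊗Z + X⊗X), whose spectrum is {2√2, 0, 0, -2√2}; the top eigenvector is a maximally entangled Bell state.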

  19. The use of Yo-Yo intermittent recovery level 1 and Andersen testing for fitness and maximal heart rate assessments of 6- to 10-year-old school children

    DEFF Research Database (Denmark)

    Bendiksen, Mads; Ahler, Thomas; Clausen, Helle

    2013-01-01

    ABSTRACT: We evaluated a sub-maximal and maximal version of the Yo-Yo IR1 children's test (YYIR1C) and the Andersen test for fitness and maximal HR assessments of children aged 6-10. Two repetitions of the YYIR1C and Andersen tests were carried out within one week by 6-7 and 8-9 year olds (grade 0...

  20. An evolutionary algorithm for tomographic reconstructions in limited data sets problems

    International Nuclear Information System (INIS)

    Turcanu, Catrinel; Craciunescu, Teddy

    2000-01-01

    The paper proposes a new method for tomographic reconstructions. Unlike nuclear medicine applications, in physical science problems we are often confronted with limited data sets: constraints in the number of projections or limited angle views. The problem of image reconstruction from projections may be considered as a problem of finding an image (solution) having projections that match the experimental ones. In our approach, we choose a statistical correlation coefficient to evaluate the fitness of any potential solution. The optimization process is carried out by an evolutionary algorithm. Our algorithm has some problem-oriented characteristics. One of them is that a chromosome, representing a potential solution, is not linear but coded as a matrix of pixels corresponding to a two-dimensional image. This kind of internal representation reflects the genuine form of the solution, and slight differences between two points in the original problem space give rise to similar differences once they are coded. Another particular feature is a newly built crossover operator: the grid-based crossover, suitable for high dimension two-dimensional chromosomes. Except for the population size and the dimension of the cutting grid for the grid-based crossover, all the other parameters of the algorithm are independent of the geometry of the tomographic reconstruction. The performances of the method are evaluated in comparison with a traditional tomographic method, based on the maximization of the entropy of the image, that proved to work well with limited data sets. The test phantom is typical for an application with limited data sets: the determination of the neutron energy spectra with time resolution in case of short-pulsed neutron emission. Both the qualitative judgement and the quantitative one, based on some figures of merit, point out that the proposed method ensures an improved reconstruction of shapes, sizes and resolution in the image, even in the presence of noise.
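A stripped-down sketch of the approach, using the statistical correlation coefficient between candidate and measured projections as fitness. Elitist mutation stands in for the paper's grid-based crossover on 2D matrix chromosomes, and the 4-pixel phantom and parameters are illustrative:

```python
import numpy as np

def fitness(x, A, p):
    """Correlation between a candidate's projections and the data."""
    return np.corrcoef(A @ x, p)[0, 1]

def evolve(A, p, pop=20, gens=100, sigma=0.05, seed=0):
    """Elitist evolutionary search: keep the best half, mutate copies."""
    rng = np.random.default_rng(seed)
    population = rng.random((pop, A.shape[1]))
    for _ in range(gens):
        scores = np.array([fitness(x, A, p) for x in population])
        elite = population[np.argsort(scores)[::-1][:pop // 2]]
        children = np.clip(elite + sigma * rng.normal(size=elite.shape),
                           0.0, None)                    # mutation, nonneg
        population = np.vstack([elite, children])        # parents survive
    scores = np.array([fitness(x, A, p) for x in population])
    return population[scores.argmax()], scores.max()

A = np.array([[1, 1, 0, 0],
              [0, 0, 1, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1]], float)
p = A @ np.array([0.2, 0.8, 0.5, 0.1])
best, best_fit = evolve(A, p)
print(round(best_fit, 3))
```

Because the elite is carried over unchanged, the best fitness is non-decreasing across generations; the correlation-based fitness makes the search insensitive to an overall intensity scale, as in the paper.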

  1. Real-time topic-aware influence maximization using preprocessing.

    Science.gov (United States)

    Chen, Wei; Lin, Tian; Yang, Cheng

    2016-01-01

    Influence maximization is the task of finding a set of seed nodes in a social network such that the influence spread of these seed nodes based on certain influence diffusion model is maximized. Topic-aware influence diffusion models have been recently proposed to address the issue that influence between a pair of users is often topic-dependent, and that the information, ideas, innovations, etc. being propagated in networks are typically mixtures of topics. In this paper, we focus on the topic-aware influence maximization task. In particular, we study preprocessing methods to avoid redoing influence maximization for each mixture from scratch. We explore two preprocessing algorithms with theoretical justifications. Our empirical results on data obtained in a couple of existing studies demonstrate that one of our algorithms stands out as a strong candidate providing microsecond online response time and competitive influence spread, with reasonable preprocessing effort.
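The underlying influence maximization task, before any topic-aware preprocessing, is usually attacked with the greedy marginal-gain algorithm under the independent cascade (IC) model. The following generic sketch is illustrative and is not the paper's preprocessing algorithm; graph, probability, and run counts are toy choices:

```python
import random

def simulate_ic(graph, seeds, prob, rng):
    """One cascade of the independent cascade (IC) diffusion model."""
    active, frontier = set(seeds), list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in graph.get(u, []):
                if v not in active and rng.random() < prob:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(active)

def greedy_im(graph, k, prob=0.5, runs=300, seed=0):
    """Greedy seed selection: repeatedly add the node with the largest
    Monte Carlo estimate of marginal influence spread."""
    rng = random.Random(seed)
    nodes = set(graph) | {v for vs in graph.values() for v in vs}
    seeds = []
    for _ in range(k):
        best, best_spread = None, -1.0
        for cand in sorted(nodes - set(seeds)):
            spread = sum(simulate_ic(graph, seeds + [cand], prob, rng)
                         for _ in range(runs)) / runs
            if spread > best_spread:
                best, best_spread = cand, spread
        seeds.append(best)
    return seeds

# Two disjoint directed stars: greedy picks the two hubs
graph = {0: [1, 2, 3], 4: [5, 6, 7]}
seeds = greedy_im(graph, k=2)
print(seeds)
```

The Monte Carlo simulation inside the greedy loop is exactly the cost the paper's preprocessing is designed to avoid paying anew for every topic mixture.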

  2. Dopaminergic balance between reward maximization and policy complexity

    Directory of Open Access Journals (Sweden)

    Naama Parush

    2011-05-01

    Full Text Available Previous reinforcement-learning models of the basal ganglia network have highlighted the role of dopamine in encoding the mismatch between prediction and reality. Far less attention has been paid to the computational goals and algorithms of the main axis (the actor). Here, we construct a top-down model of the basal ganglia with emphasis on the role of dopamine as both a reinforcement learning signal and as a pseudo-temperature signal controlling the general level of basal ganglia excitability and the motor vigilance of the acting agent. We argue that the basal ganglia endow the thalamo-cortical networks with the optimal dynamic tradeoff between two constraints: minimizing the policy complexity (cost) and maximizing the expected future reward (gain). We show that this multi-dimensional optimization process results in an experience-modulated version of the softmax behavioral policy. Thus, as in classical softmax behavioral policies, the probabilities of actions are selected according to their estimated values and the pseudo-temperature, but in addition they also vary according to the frequency of previous choices of these actions. We conclude that the computational goal of the basal ganglia is not to maximize cumulative (positive and negative) reward. Rather, the basal ganglia aim at optimization of independent gain and cost functions. Unlike previously suggested single-variable maximization processes, this multi-dimensional optimization process leads naturally to a softmax-like behavioral policy. We suggest that beyond its role in the modulation of the efficacy of the cortico-striatal synapses, dopamine directly affects striatal excitability and thus provides a pseudo-temperature signal that modulates the trade-off between gain and cost. The resulting experience- and dopamine-modulated softmax policy can then serve as a theoretical framework to account for the broad range of behaviors and clinical states governed by the basal ganglia and dopamine systems.
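The classical softmax policy that the model generalizes can be sketched directly, with the pseudo-temperature in the role the authors assign to dopamine; the experience-dependent frequency term of the paper's policy is omitted here:

```python
import numpy as np

def softmax_policy(values, temperature):
    """Action probabilities under a softmax (Boltzmann) policy.
    Low temperature -> greedy exploitation; high -> exploration."""
    z = np.asarray(values, dtype=float) / temperature
    z -= z.max()                      # shift for numerical stability
    p = np.exp(z)
    return p / p.sum()

q = [1.0, 2.0, 3.0]                   # estimated action values
print(softmax_policy(q, temperature=1.0))    # favors the best action
print(softmax_policy(q, temperature=100.0))  # nearly uniform
```

In the paper's framing, a dopamine-driven change in the pseudo-temperature shifts behavior continuously between reward-maximizing (greedy) and low-complexity (uniform) policies.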

  3. Cardiorespiratory Coordination in Repeated Maximal Exercise

    Directory of Open Access Journals (Sweden)

    Sergi Garcia-Retortillo

    2017-06-01

    Full Text Available Increases in cardiorespiratory coordination (CRC) after training with no differences in performance and physiological variables have recently been reported using a principal component analysis approach. However, no research has yet evaluated the short-term effects of exercise on CRC. The aim of this study was to delineate the behavior of CRC under different physiological initial conditions produced by repeated maximal exercises. Fifteen participants performed 2 consecutive graded and maximal cycling tests. Test 1 was performed without any previous exercise, and Test 2 6 min after Test 1. Both tests started at 0 W and the workload was increased by 25 W/min in males and 20 W/min in females, until they were not able to maintain the prescribed cycling frequency of 70 rpm for more than 5 consecutive seconds. A principal component (PC) analysis of selected cardiovascular and cardiorespiratory variables (expired fraction of O2, expired fraction of CO2, ventilation, systolic blood pressure, diastolic blood pressure, and heart rate) was performed to evaluate the CRC, defined by the number of PCs, in both tests. In order to quantify the degree of coordination, the information entropy was calculated and the eigenvalues of the first PC (PC1) were compared between tests. Although no significant differences were found between the tests with respect to the performed maximal workload (Wmax), maximal oxygen consumption (VO2 max), or ventilatory threshold (VT), an increase in the number of PCs and/or a decrease of the eigenvalues of PC1 (t = 2.95; p = 0.01; d = 1.08) was found in Test 2 compared to Test 1. Moreover, entropy was significantly higher (Z = 2.33; p = 0.02; d = 1.43) in the last test. In conclusion, despite the fact that no significant differences were observed in the conventionally explored maximal performance and physiological variables (Wmax, VO2 max, and VT) between tests, a reduction of CRC was observed in Test 2. These results emphasize the interest of CRC
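The CRC machinery, PCA of standardized cardiorespiratory variables with the eigenvalue spectrum's information entropy as a coordination index, can be sketched on synthetic data; the variables and coupling below are stand-ins, not the study's measurements:

```python
import numpy as np

def pc_entropy(data):
    """PCA of standardized variables; returns the eigenvalue spectrum
    and the information entropy of the normalized eigenvalues.
    Strong coordination -> one dominant PC -> low entropy."""
    z = (data - data.mean(0)) / data.std(0)
    cov = np.cov(z, rowvar=False)
    eig = np.clip(np.linalg.eigvalsh(cov)[::-1], 0.0, None)  # descending
    w = eig / eig.sum()
    entropy = -np.sum(w * np.log(w + 1e-12))
    return eig, entropy

rng = np.random.default_rng(0)
t = rng.normal(size=(200, 1))
# four strongly coupled "variables" vs. four independent ones
coupled = np.hstack([t + 0.05 * rng.normal(size=(200, 1)) for _ in range(4)])
uncoupled = rng.normal(size=(200, 4))
print(pc_entropy(coupled)[1] < pc_entropy(uncoupled)[1])  # True
```

A rise in entropy (or in the number of PCs needed) between tests is what the study reads as reduced coordination.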

  4. Image reconstruction in circular cone-beam computed tomography by constrained, total-variation minimization

    International Nuclear Information System (INIS)

    Sidky, Emil Y; Pan Xiaochuan

    2008-01-01

    An iterative algorithm, based on recent work in compressive sensing, is developed for volume image reconstruction from a circular cone-beam scan. The algorithm minimizes the total variation (TV) of the image subject to the constraint that the estimated projection data is within a specified tolerance of the available data and that the values of the volume image are non-negative. The constraints are enforced by the use of projection onto convex sets (POCS) and the TV objective is minimized by steepest descent with an adaptive step-size. The algorithm is referred to as adaptive-steepest-descent-POCS (ASD-POCS). It appears to be robust against cone-beam artifacts, and may be particularly useful when the angular range is limited or when the angular sampling rate is low. The ASD-POCS algorithm is tested with the Defrise disk and jaw computerized phantoms. Some comparisons are performed with the POCS and expectation-maximization (EM) algorithms. Although the algorithm is presented in the context of circular cone-beam image reconstruction, it can also be applied to scanning geometries involving other x-ray source trajectories
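A 1D toy version of the ASD-POCS alternation (POCS projections enforcing data consistency and nonnegativity, steepest descent reducing total variation) conveys the structure of the algorithm. The adaptive step-size logic of the real method is replaced by a fixed step, and the 6-pixel example is illustrative:

```python
import numpy as np

def tv(x):
    """1D total variation."""
    return np.abs(np.diff(x)).sum()

def tv_grad(x, eps=1e-8):
    """Gradient of a smoothed 1D total variation term."""
    d = np.diff(x)
    s = d / np.sqrt(d**2 + eps)
    g = np.zeros_like(x)
    g[:-1] -= s
    g[1:] += s
    return g

def asd_pocs_1d(A, p, iters=300, alpha=0.02):
    """Minimal 1D analogue of ASD-POCS with a fixed TV step size."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x - alpha * tv_grad(x)           # steepest descent on TV
        for i in range(len(p)):              # POCS: ray hyperplanes
            a = A[i]
            x = x + a * (p[i] - a @ x) / (a @ a)
        x = np.clip(x, 0.0, None)            # POCS: nonnegativity
    return x

# Underdetermined data: overlapping partial sums of a boxcar signal
true = np.array([0.0, 0.0, 1.0, 1.0, 0.0, 0.0])
A = np.array([[1, 1, 1, 0, 0, 0],
              [0, 0, 0, 1, 1, 1],
              [0, 1, 1, 1, 0, 0]], float)
p = A @ true
x = asd_pocs_1d(A, p)
print(np.round(x, 2))
```

The TV term steers the solution toward piecewise-constant profiles, which is why the full algorithm tolerates few-view and limited-angle data better than unregularized reconstruction.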

  5. SPECT reconstruction of combined cone beam and parallel hole collimation with experimental data

    International Nuclear Information System (INIS)

    Li, Jianying; Jaszczak, R.J.; Turkington, T.G.; Greer, K.L.; Coleman, R.E.

    1993-01-01

    The authors have developed three methods to combine parallel and cone beam (P and CB) SPECT data using modified Maximum Likelihood-Expectation Maximization (ML-EM) algorithms. The first combination method applies both parallel and cone beam data sets to reconstruct a single intermediate image after each iteration using the ML-EM algorithm. The other two iterative methods combine the intermediate parallel beam (PB) and cone beam (CB) source estimates to enhance the uniformity of images. These two methods are ad hoc methods. In earlier studies using computer Monte Carlo simulation, they suggested that improved images might be obtained by reconstructing combined P and CB SPECT data. These combined collimation methods are qualitatively evaluated using experimental data. An attenuation compensation is performed by including the effects of attenuation in the transition matrix as a multiplicative factor. The combined P and CB images are compared with CB-only images and the results indicate that the combined P and CB approaches suppress artifacts caused by truncated projections and correct for the distortions of the CB-only images
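The ML-EM update at the heart of these combination schemes has a standard multiplicative form, and combining parallel-beam and cone-beam data into a single estimate amounts to stacking the rows of both geometries' system matrices. A toy sketch; the 4-pixel system matrix is illustrative, not a real SPECT geometry:

```python
import numpy as np

def mlem(A, p, n_iter=200):
    """Maximum Likelihood-Expectation Maximization for emission
    tomography: a multiplicative update that preserves nonnegativity.
    For combined geometries, A holds the stacked rows of both systems."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                 # sensitivity image, A^T 1
    for _ in range(n_iter):
        proj = A @ x
        ratio = np.where(proj > 0, p / proj, 0.0)
        x *= (A.T @ ratio) / sens        # EM update
    return x

A = np.array([[1, 1, 0, 0],     # e.g. two "parallel" rays...
              [0, 0, 1, 1],
              [1, 0, 1, 0],     # ...stacked with two "cone" rays
              [0, 1, 0, 1]], float)
true = np.array([2.0, 1.0, 1.0, 4.0])
p = A @ true
x = mlem(A, p)
print(np.round(A @ x, 3))   # reprojection approaches the data
```

Attenuation compensation, as in the paper, would enter as multiplicative factors inside the elements of A rather than as a change to the update itself.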

  6. Maximal sustained levels of energy expenditure in humans during exercise.

    Science.gov (United States)

    Cooper, Jamie A; Nguyen, David D; Ruby, Brent C; Schoeller, Dale A

    2011-12-01

    Migrating birds have been able to sustain an energy expenditure (EE) that is five times their basal metabolic rate. Although humans can readily reach these levels, it is not yet clear what levels can be sustained for several days. The study's purposes were 1) to determine the upper limits of human EE and whether or not those levels can be sustained without inducing catabolism of body tissues and 2) to determine whether initial body weight is related to the levels that can be sustained. We compiled data on documented EE as measured by doubly labeled water during high levels of physical activity (minimum of five consecutive days). We calculated the physical activity level (PAL) of each individual studied (PAL = total EE / basal metabolic rate) from the published data. Correlations were run to examine the relationship between initial body weight and body weight lost with both total EE and PAL. The uppermost limit of EE was a peak PAL of 6.94 that was sustained for 10 consecutive days of a 95-d race. Only two studies reported PALs above 5.0; however, significant decreases in body mass were found in each study (0.45-1.39 kg·wk(-1) of weight loss). To test whether initial weight affects the ability to sustain high PALs, we examined the correlations and found a significant positive correlation between TEE and initial body weight (r = 0.46, P < 0.05), whereas the correlation between PAL and initial body weight was weaker (r = 0.27, not statistically significant). Some elite humans are able to sustain PALs above 5.0 for a minimum of 10 d. Although significant decreases in body weight occur at this level, catabolism of body tissue may be preventable in situations with proper energy intake. Further, initial body weight does not seem to affect the sustainability of PALs.
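The PAL arithmetic above is a one-line ratio; a sketch with hypothetical numbers, not figures taken from the study:

```python
def physical_activity_level(total_ee_kcal, bmr_kcal):
    """PAL = total energy expenditure / basal metabolic rate."""
    return total_ee_kcal / bmr_kcal

# Hypothetical example: a 7000 kcal/day race effort on a 1600 kcal/day BMR
print(physical_activity_level(7000, 1600))  # 4.375
```

By this definition the avian benchmark of five times BMR corresponds to PAL = 5.0, and the record human value cited above (6.94) means total expenditure nearly seven times basal.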

  7. The cult of Maximón in Guatemala

    OpenAIRE

    Pédron‑Colombani, Sylvie

    2009-01-01

    This article focuses on the figure of Maximón, a syncretic deity of Guatemala, in a context in which popular Catholic religion is being displaced by the Protestant churches. This hybrid divinity, onto which Catholic saints such as Judas Iscariot or the Maya god Mam are grafted, allows Maximón to be appropriated by distinct segments of the population (indigenous as well as mestizo). It likewise serves as a symbol of masked social protest when Maximón is associated with figures...

  8. Does posteromedial chondromalacia reduce rate of return to play after ulnar collateral ligament reconstruction?

    Science.gov (United States)

    Osbahr, Daryl C; Dines, Joshua S; Rosenbaum, Andrew J; Nguyen, Joseph T; Altchek, David W

    2012-06-01

    Biomechanical studies suggest ulnohumeral chondral and ligamentous overload (UCLO) explains the development of posteromedial chondromalacia (PMC) in throwing athletes with ulnar collateral ligament (UCL) insufficiency. UCL reconstruction reportedly allows 90% of baseball players to return to prior or a higher level of play; however, players with concomitant posteromedial chondromalacia may experience lower rates of return to play. The purpose of this investigation is to determine: (1) the rates of return to play of baseball players undergoing UCL reconstruction and posteromedial chondromalacia; and (2) the complications occurring after UCL reconstruction in the setting of posteromedial chondromalacia. We retrospectively reviewed 29 of 161 (18%) baseball players who were treated for combined posteromedial chondromalacia and UCL injury. UCL reconstruction was accomplished with the docking technique, and the PMC was addressed with nothing or débridement if Grade 2 or 3 and with débridement or microfracture if Grade 4. The mean age was 19.6 years (range, 16-23 years). Most players were college athletes (76%) and pitchers (93%). We used a modified four-level scale of Conway et al. to assess return to play, with 1 being the highest level (return to preinjury level of competition or performance for at least one season after UCL reconstruction). The minimum followup was 24 months (mean, 37 months; range, 24-52 months). Return to play was Level 1 in 22 patients (76%), Level 2 in four patients (14%), Level 3 in two patients (7%), and Level 4 in one (3%) patient. Our data suggest baseball players with concomitant PMC may have lower rates of return to the same or a higher level of play compared with historical controls. Level IV, case series. See Guidelines for Authors for a complete description of levels of evidence.

  9. Self-consistent collective-coordinate method for ''maximally-decoupled'' collective subspace and its boson mapping: Quantum theory of ''maximally-decoupled'' collective motion

    International Nuclear Information System (INIS)

    Marumori, T.; Sakata, F.; Maskawa, T.; Une, T.; Hashimoto, Y.

    1983-01-01

    The main purpose of this paper is to develop a full quantum theory, which is capable by itself of determining a ''maximally-decoupled'' collective motion. The paper is divided into two parts. In the first part, the motivation and basic idea of the theory are explained, and the ''maximal-decoupling condition'' on the collective motion is formulated within the framework of the time-dependent Hartree-Fock theory, in a general form called the invariance principle of the (time-dependent) Schrodinger equation. In the second part, it is shown that when we positively utilize the invariance principle, we can construct a full quantum theory of the ''maximally-decoupled'' collective motion. This quantum theory is shown to be a generalization of the kinematical boson-mapping theories so far developed, in such a way that the dynamical ''maximal-decoupling condition'' on the collective motion is automatically satisfied.

  10. Optimal Operation of Network-Connected Combined Heat and Powers for Customer Profit Maximization

    Directory of Open Access Journals (Sweden)

    Da Xie

    2016-06-01

    Full Text Available Network-connected combined heat and power units (CHPs), owned by a community, can export surplus heat and electricity to the corresponding heat and electric networks after community loads are satisfied. This paper proposes a new optimization model for network-connected CHP operation. Both the CHPs' overall efficiency and heat-to-electricity ratio (HTER) are assumed to vary with loading level. Based on different energy flow scenarios in which heat and electricity are either exported to the networks from the community or imported, four profit models are established accordingly. They reflect the different relationships between CHP energy supply and community load demand across time. A discrete optimization model is then developed to maximize the profit for the community. The models are derived from the intervals determined by the daily operation modes of the CHP and the real-time buying and selling prices of heat, electricity and natural gas. A demonstration of the proposed models on a 1 MW network-connected CHP shows that community profits are maximized in energy markets. Thus, the proposed optimization approach can help customers devise optimal CHP operating strategies for maximizing benefits.
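The discrete optimization over loading levels can be sketched with a grid search. The efficiency and HTER curves, the prices, and the 1 MW capacity below are illustrative assumptions, not the paper's fitted models or settlement rules:

```python
import numpy as np

def chp_profit(load_frac, price_elec, price_heat, price_gas,
               capacity_kw=1000.0):
    """Hourly profit at one loading level. Efficiency and the
    heat-to-electricity ratio (HTER) vary with load; both curves
    here are illustrative assumptions."""
    if load_frac <= 0.0:
        return 0.0
    eff = 0.55 + 0.25 * load_frac        # overall efficiency rises with load
    hter = 1.6 - 0.4 * load_frac         # HTER falls with load
    elec_kwh = capacity_kw * load_frac   # one hour of operation
    heat_kwh = elec_kwh * hter
    fuel_kwh = (elec_kwh + heat_kwh) / eff
    return (price_elec * elec_kwh + price_heat * heat_kwh
            - price_gas * fuel_kwh)

# Discrete optimization: evaluate a grid of candidate operating levels
levels = np.linspace(0.0, 1.0, 21)
profits = [chp_profit(l, price_elec=0.10, price_heat=0.03, price_gas=0.02)
           for l in levels]
best = levels[int(np.argmax(profits))]
print(best)   # operating level that maximizes hourly profit
```

In the paper the search additionally switches among four profit models depending on whether heat and electricity are being exported or imported in each price interval; the grid-search skeleton stays the same.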

  11. Reconstruction of thermally quenched glow curves in quartz

    International Nuclear Information System (INIS)

    Subedi, Bhagawan; Polymeris, George S.; Tsirliganis, Nestor C.; Pagonis, Vasilis; Kitis, George

    2012-01-01

    The experimentally measured thermoluminescence (TL) glow curves of quartz samples are influenced by the presence of the thermal quenching effect, which involves a variation of the luminescence efficiency as a function of temperature. The real shape of the thermally unquenched TL glow curves is completely unknown. In the present work an attempt is made to reconstruct these unquenched glow curves from the quenched experimental data, for two different types of quartz samples. The reconstruction is based on the values of the thermal quenching parameters W (activation energy) and C (a dimensionless constant), which are known from recent experimental work on these two samples. A computerized glow-curve deconvolution (CGCD) analysis was performed twice, for both the reconstructed and the experimental TL glow curves. Special attention was paid to check for consistency between the results of these two independent CGCD analyses. The investigation showed that the reconstruction attempt was successful, and it is concluded that the analysis of reconstructed TL glow curves can provide improved values of the kinetic parameters E, s for the glow peaks of quartz. This also leads to a better evaluation of the half-lives of electron trapping levels used for dosimetry and luminescence dating.
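Reconstructions of this kind rest on the standard Mott-Seitz form of the luminescence efficiency, η(T) = 1/(1 + C·exp(-W/kT)); dividing the measured (quenched) glow curve by η recovers the unquenched one. A sketch with illustrative values of W and C and a synthetic Gaussian peak, not the paper's measured data:

```python
import numpy as np

K_B = 8.617333262e-5  # Boltzmann constant in eV/K

def quenching_efficiency(T, W, C):
    """Mott-Seitz luminescence efficiency with activation energy W (eV)
    and dimensionless constant C."""
    return 1.0 / (1.0 + C * np.exp(-W / (K_B * T)))

def reconstruct_unquenched(T, intensity, W, C):
    """Divide the measured (quenched) glow curve by the efficiency."""
    return intensity / quenching_efficiency(T, W, C)

# Illustrative quartz-like parameters and a synthetic quenched glow peak
T = np.linspace(300.0, 700.0, 5)           # temperature in K
W, C = 0.64, 2.8e7                         # example values only
unquenched = np.exp(-((T - 500.0) / 60.0) ** 2)
quenched = unquenched * quenching_efficiency(T, W, C)
print(np.round(reconstruct_unquenched(T, quenched, W, C), 3))
```

Because η falls steeply at high temperature, the correction shifts apparent peak positions and intensities, which is why kinetic parameters fitted to reconstructed curves differ from those fitted to raw data.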

  12. The adaptive statistical iterative reconstruction-V technique for radiation dose reduction in abdominal CT: comparison with the adaptive statistical iterative reconstruction technique.

    Science.gov (United States)

    Kwon, Heejin; Cho, Jinhan; Oh, Jongyeong; Kim, Dongwon; Cho, Junghyun; Kim, Sanghyun; Lee, Sangyun; Lee, Jihyun

    2015-10-01

    To investigate whether reduced radiation dose abdominal CT images reconstructed with adaptive statistical iterative reconstruction V (ASIR-V) compromise the depiction of clinically relevant features when compared with the currently used routine radiation dose CT images reconstructed with ASIR. 27 consecutive patients (mean body mass index: 23.55 kg m(-2)) underwent CT of the abdomen at two time points. At the first time point, abdominal CT was scanned with a noise index of 21.45 using automatic tube current modulation at 120 kV. Images were reconstructed with 40% ASIR, the routine protocol of Dong-A University Hospital. At the second time point, follow-up scans were performed with a noise index of 30. Images were reconstructed with filtered back projection (FBP), 40% ASIR, 30% ASIR-V, 50% ASIR-V and 70% ASIR-V for the reduced radiation dose. Both quantitative and qualitative analyses of image quality were conducted. The CT dose index was also recorded. At the follow-up study, the mean dose reduction relative to the currently used common radiation dose was 35.37% (range: 19-49%). The overall subjective image quality and diagnostic acceptability scores of 50% ASIR-V at the reduced radiation dose were nearly identical to those recorded when using the initial routine-dose CT with 40% ASIR. Subjective ratings of the qualitative analysis revealed that of all reduced radiation dose CT series reconstructed, 30% ASIR-V and 50% ASIR-V were associated with higher image quality with lower noise and artefacts as well as good sharpness when compared with 40% ASIR and FBP. However, the sharpness score at 70% ASIR-V was considered to be worse than that at 40% ASIR. Objective image noise for 50% ASIR-V was 34.24% and 46.34% lower than with 40% ASIR and FBP, respectively. Abdominal CT images reconstructed with ASIR-V facilitate radiation dose reductions of up to 35% when compared with ASIR. This study represents the first clinical research experiment to use ASIR-V, the newest version of

  13. Volumetric quantification of lung nodules in CT with iterative reconstruction (ASiR and MBIR)

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Baiyu [Medical Physics Graduate Program, Duke University, Durham, North Carolina 27705 and Carl E. Ravin Advanced Imaging Laboratories, Duke University, Durham, North Carolina 27705 (United States); Barnhart, Huiman [Department of Biostatistics and Bioinformatics, Duke University, Durham, North Carolina 27705 (United States); Richard, Samuel [Carl E. Ravin Advanced Imaging Laboratories, Duke University, Durham, North Carolina 27705 and Department of Radiology, Duke University, Durham, North Carolina 27705 (United States); Robins, Marthony [Medical Physics Graduate Program, Duke University, Durham, North Carolina 27705 (United States); Colsher, James [Department of Radiology, Duke University, Durham, North Carolina 27705 (United States); Samei, Ehsan [Medical Physics Graduate Program, Duke University, Durham, North Carolina 27705 (United States); Carl E. Ravin Advanced Imaging Laboratories, Duke University, Durham, North Carolina 27705 (United States); Department of Radiology, Duke University, Durham, North Carolina 27705 (United States); Department of Physics, Department of Biomedical Engineering, and Department of Electronic and Computer Engineering, Duke University, Durham, North Carolina 27705 (United States)

    2013-11-15

    Purpose: Volume quantifications of lung nodules with multidetector computed tomography (CT) images provide useful information for monitoring nodule development. The accuracy and precision of the volume quantification, however, can be impacted by imaging and reconstruction parameters. This study aimed to investigate the impact of iterative reconstruction algorithms on the accuracy and precision of volume quantification, with dose and slice thickness as additional variables. Methods: Repeated CT images were acquired from an anthropomorphic chest phantom with synthetic nodules (9.5 and 4.8 mm) at six dose levels, and reconstructed with three reconstruction algorithms [filtered backprojection (FBP), adaptive statistical iterative reconstruction (ASiR), and model based iterative reconstruction (MBIR)] into three slice thicknesses. The nodule volumes were measured with two clinical software packages (A: Lung VCAR; B: iNtuition) and analyzed for accuracy and precision. Results: Precision was found to be generally comparable between FBP and iterative reconstruction, with no statistically significant difference noted across dose levels, slice thicknesses, and segmentation software. Accuracy was found to be more variable. For large nodules, the accuracy was significantly different between ASiR and FBP for all slice thicknesses with both software packages, and significantly different between MBIR and FBP for 0.625 mm slice thickness with Software A and for all slice thicknesses with Software B. For small nodules, the accuracy was more similar between FBP and iterative reconstruction, with the exception of ASiR vs FBP at 1.25 mm with Software A and MBIR vs FBP at 0.625 mm with Software A. Conclusions: The systematic difference between the accuracy of FBP and iterative reconstructions highlights the importance of extending current segmentation software to accommodate the image characteristics of iterative reconstructions. In addition, a calibration process may help reduce the dependency of
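The accuracy/precision analysis described in this record can be sketched as follows. This is a minimal illustration, not the study's actual code: it assumes accuracy is summarized as signed percent volume error against the known synthetic-nodule volume, and precision as a repeatability coefficient across repeated scans; the measurement values are made up.

```python
import numpy as np

def percent_volume_error(measured, true_volume):
    """Accuracy: signed percent error of each measured volume vs. ground truth."""
    measured = np.asarray(measured, dtype=float)
    return 100.0 * (measured - true_volume) / true_volume

def repeatability_coefficient(measured):
    """Precision: 1.96 * sqrt(2) * SD of repeated measurements of one nodule."""
    return 1.96 * np.sqrt(2.0) * np.std(measured, ddof=1)

# Hypothetical repeat measurements (mm^3) of the 9.5 mm synthetic nodule
true_v = 4.0 / 3.0 * np.pi * (9.5 / 2.0) ** 3   # sphere volume for a 9.5 mm nodule
repeats = [455.1, 441.8, 460.3, 450.2]
errors = percent_volume_error(repeats, true_v)
rc = repeatability_coefficient(repeats)
```

Comparing `errors` across reconstruction algorithms (FBP vs. ASiR vs. MBIR) and `rc` across dose levels would mirror the accuracy/precision split the abstract describes.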

  14. Volumetric quantification of lung nodules in CT with iterative reconstruction (ASiR and MBIR)

    International Nuclear Information System (INIS)

    Chen, Baiyu; Barnhart, Huiman; Richard, Samuel; Robins, Marthony; Colsher, James; Samei, Ehsan

    2013-01-01

    Purpose: Volume quantifications of lung nodules with multidetector computed tomography (CT) images provide useful information for monitoring nodule development. The accuracy and precision of the volume quantification, however, can be impacted by imaging and reconstruction parameters. This study aimed to investigate the impact of iterative reconstruction algorithms on the accuracy and precision of volume quantification, with dose and slice thickness as additional variables. Methods: Repeated CT images were acquired from an anthropomorphic chest phantom with synthetic nodules (9.5 and 4.8 mm) at six dose levels, and reconstructed with three reconstruction algorithms [filtered backprojection (FBP), adaptive statistical iterative reconstruction (ASiR), and model based iterative reconstruction (MBIR)] into three slice thicknesses. The nodule volumes were measured with two clinical software packages (A: Lung VCAR; B: iNtuition) and analyzed for accuracy and precision. Results: Precision was found to be generally comparable between FBP and iterative reconstruction, with no statistically significant difference noted across dose levels, slice thicknesses, and segmentation software. Accuracy was found to be more variable. For large nodules, the accuracy was significantly different between ASiR and FBP for all slice thicknesses with both software packages, and significantly different between MBIR and FBP for 0.625 mm slice thickness with Software A and for all slice thicknesses with Software B. For small nodules, the accuracy was more similar between FBP and iterative reconstruction, with the exception of ASiR vs FBP at 1.25 mm with Software A and MBIR vs FBP at 0.625 mm with Software A. Conclusions: The systematic difference between the accuracy of FBP and iterative reconstructions highlights the importance of extending current segmentation software to accommodate the image characteristics of iterative reconstructions. In addition, a calibration process may help reduce the dependency of

  15. Athletic Performance at the National Basketball Association Combine After Anterior Cruciate Ligament Reconstruction.

    Science.gov (United States)

    Mehran, Nima; Williams, Phillip N; Keller, Robert A; Khalil, Lafi S; Lombardo, Stephen J; Kharrazi, F Daniel

    2016-05-01

    Anterior cruciate ligament (ACL) injuries are significant injuries in elite-level basketball players. In-game statistical performance after ACL reconstruction has been studied; however, few studies have reviewed functional performance in National Basketball Association (NBA)-caliber athletes after ACL reconstruction. To compare NBA Combine performance of athletes after ACL reconstruction with an age-, size-, and position-matched control group of players with no previous reported knee injury requiring surgery. We hypothesized that there is no difference between the 2 groups in functional performance. Cross-sectional study; Level of evidence, 3. A total of 1092 NBA-caliber players who participated in the NBA Combine between 2000 and 2015 were reviewed. Twenty-one athletes were identified as having undergone primary ACL reconstruction prior to participation in the combine. This study group was compared with an age-, size-, and position-matched control group in objective functional performance testing, including the shuttle run test, lane agility test, three-quarter court sprint, vertical jump (no step), and maximum vertical jump (running start). With regard to quickness and agility, both ACL-reconstructed athletes and controls scored an average of 11.5 seconds in the lane agility test and 3.1 seconds in the shuttle run test (P = .745 and .346, respectively). Speed and acceleration were measured by the three-quarter court sprint, in which both the study group and the control group averaged 3.3 seconds (P = .516). In the maximum vertical jump, which demonstrates an athlete's jumping ability with a running start, the ACL reconstruction group had an average height of 33.6 inches while the controls averaged 33.9 inches (P = .548). In the standing vertical jump, the ACL reconstruction group averaged 28.2 inches while the control group averaged 29.2 inches (P = .067). In athletes who are able to return to sport and compete at a high level such as the NBA Combine, there is no
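Group comparisons like those reported above (mean combine metrics with unpaired P values) can be sketched with Welch's two-sample t statistic. The samples below are illustrative numbers, not the study's data, and the abstract does not state which exact test was used:

```python
import numpy as np

def welch_t(a, b):
    """Welch's two-sample t statistic and approximate degrees of freedom
    (unequal variances, appropriate for two independent groups)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    va, vb = a.var(ddof=1) / a.size, b.var(ddof=1) / b.size
    t = (a.mean() - b.mean()) / np.sqrt(va + vb)
    # Welch-Satterthwaite approximation for the degrees of freedom
    df = (va + vb) ** 2 / (va ** 2 / (a.size - 1) + vb ** 2 / (b.size - 1))
    return t, df

# Illustrative standing-vertical-jump samples (inches) for two matched groups
t, df = welch_t([27.5, 28.9, 28.2, 27.8, 28.6], [29.0, 29.5, 28.8, 29.4, 29.3])
```

The P values quoted in the abstract would then come from evaluating `t` against a t-distribution with `df` degrees of freedom.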

  16. On distributed wavefront reconstruction for large-scale adaptive optics systems.

    Science.gov (United States)

    de Visser, Cornelis C; Brunner, Elisabeth; Verhaegen, Michel

    2016-05-01

    The distributed-spline-based aberration reconstruction (D-SABRE) method is proposed for distributed wavefront reconstruction with applications to large-scale adaptive optics systems. D-SABRE decomposes the wavefront sensor domain into any number of partitions and solves a local wavefront reconstruction problem on each partition using multivariate splines. D-SABRE accuracy is within 1% of that of a global approach, with a speedup that scales quadratically with the number of partitions. D-SABRE is compared with the distributed cumulative reconstruction (CuRe-D) method in open-loop and closed-loop simulations using the YAO adaptive optics simulation tool. D-SABRE accuracy exceeds that of CuRe-D for low levels of decomposition, and D-SABRE proved to be more robust to variations in the loop gain.
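The partition-and-solve idea can be sketched in miniature. This is not the D-SABRE algorithm itself (which uses multivariate B-splines and a second stage that stitches partitions together); it only shows the decomposition step: split the slope grid into partitions and fit a local quadratic phase surface to each partition's slopes by least squares, leaving per-partition piston unresolved.

```python
import numpy as np

def local_reconstruct(gx, gy, x, y):
    """Least-squares fit of a quadratic phase surface to slope measurements on
    one partition: phi = c0 + c1*x + c2*y + c3*x^2 + c4*x*y + c5*y^2."""
    ones, zeros = np.ones_like(x), np.zeros_like(x)
    # Rows for d(phi)/dx and d(phi)/dy in the design matrix
    Ax = np.stack([zeros, ones, zeros, 2 * x, y, zeros], axis=1)
    Ay = np.stack([zeros, zeros, ones, zeros, x, 2 * y], axis=1)
    A = np.vstack([Ax, Ay])
    b = np.concatenate([gx, gy])
    c, *_ = np.linalg.lstsq(A, b, rcond=None)  # piston c0 is unobservable
    return c

def partitioned_reconstruct(gx2d, gy2d, n_part=2):
    """Split an n x n slope grid into n_part x n_part partitions and solve a
    local reconstruction on each, as in the first stage of a distributed method."""
    n = gx2d.shape[0]
    step = n // n_part
    coeffs = {}
    for i in range(n_part):
        for j in range(n_part):
            sl = (slice(i * step, (i + 1) * step), slice(j * step, (j + 1) * step))
            yy, xx = np.mgrid[sl]  # global coordinates of this partition
            coeffs[(i, j)] = local_reconstruct(
                gx2d[sl].ravel(), gy2d[sl].ravel(),
                xx.ravel().astype(float), yy.ravel().astype(float))
    return coeffs

# Demo: slopes of phi = 0.5*x^2 (gx = x, gy = 0) on an 8x8 sensor grid
yy, xx = np.mgrid[0:8, 0:8]
coeffs = partitioned_reconstruct(xx.astype(float), np.zeros((8, 8)), n_part=2)
```

Each partition recovers the x^2 coefficient (0.5) independently; a real distributed method would then resolve the inter-partition piston and tilt offsets.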

  17. Task-driven image acquisition and reconstruction in cone-beam CT

    International Nuclear Information System (INIS)

    Gang, Grace J; Stayman, J Webster; Siewerdsen, Jeffrey H; Ehtiati, Tina

    2015-01-01

    This work introduces a task-driven imaging framework that incorporates a mathematical definition of the imaging task, a model of the imaging system, and a patient-specific anatomical model to prospectively design image acquisition and reconstruction techniques to optimize task performance. The framework is applied to joint optimization of tube current modulation, view-dependent reconstruction kernel, and orbital tilt in cone-beam CT. The system model considers a cone-beam CT system incorporating a flat-panel detector and 3D filtered backprojection and accurately describes the spatially varying noise and resolution over a wide range of imaging parameters in the presence of a realistic anatomical model. Task-based detectability index (d′) is incorporated as the objective function in a task-driven optimization of image acquisition and reconstruction techniques. The orbital tilt was optimized through an exhaustive search across tilt angles over a ±30° range. For each tilt angle, the view-dependent tube current and reconstruction kernel (i.e. the modulation profiles) that maximized detectability were identified via an alternating optimization. The task-driven approach was compared with conventional unmodulated and automatic exposure control (AEC) strategies for a variety of imaging tasks and anthropomorphic phantoms. The task-driven strategy outperformed the unmodulated and AEC cases for all tasks. For example, d′ for a sphere detection task in a head phantom was improved by 30% compared to the unmodulated case by using smoother kernels for noisy views and distributing mAs across less noisy views (at fixed total mAs) in a manner that was beneficial to task performance. Similarly, for detection of a line-pair pattern, the task-driven approach increased d′ by 80% compared to no modulation by means of view-dependent mA and kernel selection that yields a modulation transfer function and noise-power spectrum optimal to the task. Optimization of orbital tilt identified the
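A common form of the task-based detectability index used as an objective in work like this is the non-prewhitening (NPW) model observer, d′² = (Σ|W|²·MTF²·Δf)² / (Σ|W|²·MTF²·NPS·Δf), where W is the task function. The sketch below shows only the shape of that formula on a 1D frequency grid; the framework in the abstract uses full spatially varying 3D MTF/NPS models, and the task, MTF, and NPS curves here are hypothetical.

```python
import numpy as np

def npw_dprime(task_w, mtf, nps, df):
    """Non-prewhitening (NPW) detectability index on a discrete frequency grid:
    d'^2 = (sum |W|^2 MTF^2 df)^2 / (sum |W|^2 MTF^2 NPS df)."""
    s = np.abs(task_w) ** 2 * mtf ** 2
    return np.sqrt((np.sum(s) * df) ** 2 / (np.sum(s * nps) * df))

# Illustrative 1D radial frequency axis (cycles/mm)
f = np.linspace(0.01, 1.0, 100)
df = f[1] - f[0]
task = np.exp(-(f / 0.3) ** 2)   # low-frequency (sphere-detection-like) task
mtf = np.exp(-f / 0.5)           # hypothetical system MTF
nps = 1e-3 * (0.5 + f)           # hypothetical quantum noise-power spectrum
d = npw_dprime(task, mtf, nps, df)
```

Maximizing `d` over acquisition parameters (here, whatever changes `mtf` and `nps`) is the optimization loop the abstract describes.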

  18. Maximizing the lightshelf performance by interaction between lightshelf geometries and a curved ceiling

    Energy Technology Data Exchange (ETDEWEB)

    Freewan, Ahmed A. [Jordan University of Science and Technology, Irbid 22110 (Jordan)

    2010-08-15

    The interaction between different lightshelf geometries combined with a curved ceiling was investigated using Radiance to maximize the daylight performance of a lightshelf. Two main performance parameters were investigated: illuminance level and distribution uniformity in a large space located in a sub-tropical climate region such as Jordan. It was found that a curved lightshelf could improve the daylight level by 10% compared to a horizontal lightshelf. A curved lightshelf helps to bounce more daylight deep into a space, thus improving the illuminance level and uniformity. The best-performing lightshelf shapes were found to be the curved and chamfered designs, compared with horizontal lightshelves. (author)

  19. Computed Tomography Image Quality Evaluation of a New Iterative Reconstruction Algorithm in the Abdomen (Adaptive Statistical Iterative Reconstruction-V) a Comparison With Model-Based Iterative Reconstruction, Adaptive Statistical Iterative Reconstruction, and Filtered Back Projection Reconstructions.

    Science.gov (United States)

    Goodenberger, Martin H; Wagner-Bartak, Nicolaus A; Gupta, Shiva; Liu, Xinming; Yap, Ramon Q; Sun, Jia; Tamm, Eric P; Jensen, Corey T

    The purpose of this study was to compare abdominopelvic computed tomography images reconstructed with adaptive statistical iterative reconstruction-V (ASIR-V) with model-based iterative reconstruction (Veo 3.0), ASIR, and filtered back projection (FBP). Abdominopelvic computed tomography scans for 36 patients (26 males and 10 females) were reconstructed using FBP, ASIR (80%), Veo 3.0, and ASIR-V (30%, 60%, 90%). Mean ± SD patient age was 32 ± 10 years with mean ± SD body mass index of 26.9 ± 4.4 kg/m². Images were reviewed by 2 independent readers in a blinded, randomized fashion. Hounsfield unit, noise, and contrast-to-noise ratio (CNR) values were calculated for each reconstruction algorithm for further comparison. Phantom evaluation of low-contrast detectability (LCD) and high-contrast resolution was performed. Adaptive statistical iterative reconstruction-V 30%, ASIR-V 60%, and ASIR 80% were generally superior qualitatively compared with ASIR-V 90%, Veo 3.0, and FBP (P ASIR-V 60% with respective CNR values of 5.54 ± 2.39, 8.78 ± 3.15, and 3.49 ± 1.77 (P ASIR 80% had the best and worst spatial resolution, respectively. Adaptive statistical iterative reconstruction-V 30% and ASIR-V 60% provided the best combination of qualitative and quantitative performance. Adaptive statistical iterative reconstruction 80% was equivalent qualitatively, but demonstrated inferior spatial resolution and LCD.
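The CNR figures quoted above follow the usual ROI definition, CNR = |mean_lesion − mean_background| / SD_background. A minimal sketch with made-up Hounsfield-unit samples (not the study's measurements):

```python
import numpy as np

def cnr(roi_hu, background_hu):
    """Contrast-to-noise ratio from Hounsfield-unit samples in two ROIs:
    absolute mean difference divided by the background standard deviation."""
    roi = np.asarray(roi_hu, dtype=float)
    bg = np.asarray(background_hu, dtype=float)
    return abs(roi.mean() - bg.mean()) / bg.std(ddof=1)

# Hypothetical HU samples: enhancing lesion ROI vs. adjacent background ROI
print(cnr([95, 102, 99, 104], [48, 50, 52, 50]))
```

Because iterative reconstructions lower the background SD at matched contrast, CNR computed this way rises with increasing ASIR-V strength, as the abstract reports.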

  20. TH-CD-207B-09: Task-Driven Fluence Field Modulation Design for Model-Based Iterative Reconstruction in CT

    International Nuclear Information System (INIS)

    Gang, G; Siewerdsen, J; Stayman, J

    2016-01-01

    Purpose: There has been increasing interest in integrating fluence field modulation (FFM) devices with diagnostic CT scanners for dose reduction purposes. Conventional FFM strategies, however, are often either based on heuristics or on the analysis of filtered-backprojection (FBP) performance. This work investigates a prospective task-driven optimization of FFM for model-based iterative reconstruction (MBIR) in order to improve imaging performance at the same total dose as conventional strategies. Methods: The task-driven optimization framework utilizes an ultra-low-dose 3D scout as a patient-specific anatomical model and a mathematical formulation of the imaging task. The MBIR method investigated is quadratically penalized-likelihood reconstruction. The FFM objective function uses detectability index, d’, computed as a function of the predicted spatial resolution and noise in the image. To optimize performance throughout the object, a maxi-min objective was adopted in which the minimum d’ over multiple locations is maximized. To reduce the dimensionality of the problem, FFM is parameterized as a linear combination of 2D Gaussian basis functions over horizontal detector pixels and projection angles. The coefficients of these bases are found using the covariance matrix adaptation evolution strategy (CMA-ES) algorithm. The task-driven design was compared with three other strategies proposed for FBP reconstruction for a calcification cluster discrimination task in an abdomen phantom. Results: The task-driven optimization yielded FFM that was significantly different from those designed for FBP. Comparing all four strategies, the task-based design achieved the highest minimum d’, with an 8–48% improvement, consistent with the maxi-min objective. In addition, d’ was improved to a greater extent over a larger area within the entire phantom. Conclusion: Results from this investigation suggest the need to re-evaluate conventional FFM strategies for MBIR. The task
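The Gaussian-basis parameterization and maxi-min objective described above can be sketched in miniature. Everything here is a stand-in: the basis placement is arbitrary, the per-location "detectability" is a hypothetical surrogate rather than the predicted noise/resolution d′ of the abstract, and plain random search replaces CMA-ES.

```python
import numpy as np

def gaussian_bases(n_u=32, n_theta=16, n_basis=4, width=0.05):
    """2D Gaussian basis functions over detector pixel u and view angle theta."""
    u = np.linspace(0.0, 1.0, n_u)[:, None]
    th = np.linspace(0.0, 1.0, n_theta)[None, :]
    centers = np.linspace(0.1, 0.9, n_basis)
    return np.stack([np.exp(-((u - c) ** 2 + (th - c) ** 2) / width)
                     for c in centers])          # shape (n_basis, n_u, n_theta)

def maximin_dprime(coeffs, bases, location_weights):
    """Maxi-min objective: minimum surrogate d' over target locations, where
    each location's d' grows with the fluence in the views it relies on."""
    fluence = np.clip(np.tensordot(coeffs, bases, axes=1), 1e-6, None)
    fluence /= fluence.sum()                     # fixed total-dose budget
    return min(np.sqrt(np.sum(w * fluence)) for w in location_weights)

rng = np.random.default_rng(0)
bases = gaussian_bases()
locs = [rng.random((32, 16)) for _ in range(3)]  # 3 hypothetical locations
best_c, best_val = None, -np.inf
for _ in range(200):                             # random search (CMA-ES stand-in)
    c = rng.random(bases.shape[0])
    val = maximin_dprime(c, bases, locs)
    if val > best_val:
        best_c, best_val = c, val
```

The structure matches the abstract's formulation: a low-dimensional coefficient vector over Gaussian bases, a dose-normalized fluence pattern, and a minimum-over-locations objective to be maximized.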